
Part 2: The Building Blocks of a Prompt – What Goes Into a Good AI Instruction

  • Writer: Rajashree Rajadhyax
  • Apr 27, 2025
  • 7 min read

Components of a good prompt. Image by Rajashree Rajadhyax

This is the second article in the three-part series on prompt engineering. In the first part, we explored various prompting techniques—different ways you can instruct AI to generate responses. The article introduced some theoretical approaches to interacting with AI, highlighting how different techniques influence the kind of output you receive. You can read the first part [here].


In this part, we’ll shift our focus to the building blocks of a prompt—the essential elements that make up a clear, effective instruction.


Let’s revisit what a prompt is


At a basic level, a prompt is simply the way you tell AI what you expect from it. It’s your way of saying, “Hey, here’s what I need—please respond accordingly.”

But while it may seem as simple as typing a question, what you say and how you say it can drastically influence the output you get. This is because AI models like LLMs (Large Language Models) understand prompts by recognizing patterns and making educated guesses based on what they have learned. Before diving into the different components of a prompt, it helps to first understand how the large language model behind these responses actually works.


How Do LLMs Actually Work?


To understand what makes a good prompt, it helps to know (in simple terms) how Large Language Models (LLMs) generate their responses. Do you remember the “complete the sentence…” exercises we did as children?

For example:

  • "The cat sat on the ___."

  • You might guess: "mat" or "rug"—words that make sense based on the context.

In a way, LLMs are doing the same thing—just on a much larger scale.


Here’s how it works:


  1. You give the AI a prompt → This is your starting sentence or question.

  2. The AI uses its existing knowledge (from massive datasets like books, articles, websites, etc.) to narrow down the most relevant information related to your prompt.

  3. It predicts the next word—more precisely, the next token, which may be a word or a word fragment—based on the context it has formed.

  4. It adds this predicted word to the sentence and then predicts the next one, repeating the process until the response is complete.

For example:

Prompt: "The capital of France is ___."

  • The AI predicts the next token based on what it has learned from vast amounts of text.

  • It selects "Paris" because, statistically, it is the most probable word to follow that sentence.

  • It continues predicting the next words to expand the response, adding relevant details like "Paris is known for its landmarks, including the Eiffel Tower."
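The four steps above can be sketched in a few lines of code. This is only a toy illustration, not a real model: `NEXT_TOKEN_PROBS` stands in for the probabilities an LLM learns from its training data, and the numbers are made up for the example.

```python
# A toy "language model": for each context word, the probabilities of
# possible next tokens. (Illustrative numbers, not real model weights.)
NEXT_TOKEN_PROBS = {
    "is": {"Paris": 0.92, "beautiful": 0.05, "a": 0.03},
    "Paris": {".": 0.85, ",": 0.15},
}

def predict_next(token):
    """Pick the most probable next token given the last token seen."""
    candidates = NEXT_TOKEN_PROBS.get(token, {})
    if not candidates:
        return None  # nothing learned for this context, so stop
    return max(candidates, key=candidates.get)

def generate(prompt_tokens):
    """Repeat the predict-and-append loop until the sentence ends."""
    tokens = list(prompt_tokens)
    while True:
        nxt = predict_next(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
        if nxt == ".":  # a period ends the sentence in this toy example
            break
    return tokens

print(" ".join(generate(["The", "capital", "of", "France", "is"])))
# The most probable continuation here is "Paris" followed by a period.
```

A real LLM does the same loop, but with a neural network scoring tens of thousands of possible tokens against the entire context, not just the last word.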


How Does an AI Know When to Stop Responding?


Since LLMs generate one word at a time, you might wonder: “If the AI is just guessing the next word, what makes it stop generating?”

There are four main reasons why AI knows when to stop:


1. It Recognizes When the Sentence Feels Complete

Just like when you’re speaking or writing, you naturally know when you’ve finished your thought. LLMs do something similar. Since they’ve been trained on huge amounts of text—books, articles, and conversations—they’ve learned what complete sentences and natural conclusions look like. Under the hood, the model signals this by predicting a special end-of-sequence token, which it has learned to emit at natural stopping points. When the AI detects that the response feels finished, it stops naturally.

Example: Prompt: "Tell me about the Eiffel Tower."

  • The AI might say: "The Eiffel Tower is a famous landmark in Paris, France. It was built in 1889 and attracts millions of visitors every year."

  • Since this answers the question clearly, the AI may stop on its own.


2. It Uses Punctuation as a Signal

Just like how you recognize a period or a line break as the end of a sentence, LLMs treat punctuation as a stopping cue. When the AI generates a period or paragraph break at the end of a complete thought, it may interpret this as a natural point to stop.

Example: Prompt: "Write a short story about a cat and a mouse."

  • The AI might end with: "The cat watched the mouse scurry away, feeling surprisingly relieved. The sun set, and both creatures returned to their homes."

  • The period at the end signals a conclusion, so the AI stops generating.


3. It Follows Pre-Set Limits

Even though the AI could keep generating text forever, it has a built-in limit on how much it can write at once. Think of it like an essay word count—even if you have more to say, you have to stop when you hit the limit.

  • LLMs have a maximum word or token limit for each response.

  • Once the AI hits this limit, it stops automatically, even if the answer feels incomplete.

Example: Prompt: "Explain the history of the Roman Empire in detail."

  • The AI might start describing events but stop mid-sentence when it reaches the limit, leaving the response unfinished.


4. It Loses Confidence in What Comes Next

As the AI generates words, it constantly assesses how confident it is about the next word. If it becomes unsure or runs out of relevant content, it may stop.

  • This is why some AI responses end abruptly—the model feels less certain about what should follow.

Example: Prompt: "Explain how photosynthesis works in detail."

  • The AI might give a detailed answer but stop unexpectedly if it feels it has covered the key points or doesn’t know how to continue meaningfully.
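The four stop conditions can be collected into a single generation loop. The sketch below is a simplification under stated assumptions: `next_token_fn` is a hypothetical stand-in for the model, `END_OF_TEXT` plays the role of the end-of-sequence token from reason 1, `max_tokens` is the pre-set limit from reason 3, and `min_confidence` loosely models reason 4.

```python
END_OF_TEXT = "<eos>"  # special token the model emits when a response is complete

def generate(next_token_fn, prompt_tokens, max_tokens=50, min_confidence=0.1):
    """Keep predicting tokens until one of the stop conditions fires.

    next_token_fn(tokens) -> (token, probability) is a stand-in for the model.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):          # reason 3: pre-set length limit
        token, prob = next_token_fn(tokens)
        if token == END_OF_TEXT:         # reasons 1 and 2: model signals completion
            break
        if prob < min_confidence:        # reason 4: too unsure to continue
            break
        tokens.append(token)
    return tokens

# A scripted stand-in for the model: it "predicts" Paris, then a period,
# then signals that the response is complete.
script = iter([("Paris", 0.9), (".", 0.8), (END_OF_TEXT, 0.99)])
result = generate(lambda tokens: next(script), ["The", "capital", "of", "France", "is"])
print(result)
```

In real systems the length limit and stop sequences are parameters you (or the chat interface) set per request, while the end-of-sequence token comes from the model itself.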


Why Does This Matter for Prompting?


Understanding how AI knows when to stop helps you create better prompts. If your prompt is too vague, the AI might stop early, giving you a short or incomplete answer. If you provide more context or request specific details, the AI is more likely to generate a thorough, complete response.


The Building Blocks of a Good Prompt


Now that we’ve got a sense of how LLMs interpret prompts and generate responses, it’s easy to see why the way you phrase your prompt makes all the difference. Since LLMs generate responses one token at a time based on probabilities, the quality of your input directly shapes the quality of the output. If your prompt is vague or unclear, the AI is more likely to produce generic answers. But when your prompt is clear, specific, and well-structured, the AI has a much better chance of inferring what you expect and giving you a meaningful response. We’ll discuss the best practices of prompt engineering in the next article, but for now, let’s look at the building blocks of a prompt.


1. Instruction: The Core Directive


The instruction is the primary command or request you give to the AI. It tells the model what you want it to do.

The effectiveness of your instruction depends on:

  • Clarity: State your request explicitly.

  • Specificity: The more detailed you are, the better the response.

  • Action-oriented language: Use clear action verbs like "list," "describe," "summarize," or "compare" to make the task clear.

Example:

  • Vague: "Explain photosynthesis."

  • Clear: "Explain the process of photosynthesis in simple terms for a 12-year-old, covering the role of sunlight, water, and carbon dioxide."

The clear instruction gives the AI context and the right level of detail.


2. Role: Assigning a Perspective or Persona


Assigning a role helps the AI adopt a specific voice, tone, level of expertise, or style when generating responses. By specifying a role, you shape the AI’s tone, depth of knowledge, and communication style. This is particularly useful when you want the response to reflect a certain profession, personality, or context.

Example:

  • Professional: "You are a financial analyst. Explain the impact of inflation on household savings."

  • Creative: "You are a children’s storyteller. Write a bedtime story about a brave rabbit."

  • Technical: "You are a software developer. Explain recursion in Python with a simple code example."

The role makes the AI respond as an expert rather than providing a general answer.


3. Context: Give Background Information


Context is the relevant information or scenario that helps the AI understand your request better. Without context, the AI may generate a generic or irrelevant response. Adding context makes the output more accurate and useful.

Examples:

  • Instead of: "Suggest marketing strategies." ➝ Add context: "Suggest marketing strategies for a small, local bakery looking to increase foot traffic."

  • Instead of: "Explain how AI works." ➝ Add context: "Explain how AI works to a 10-year-old child using simple, everyday examples."


4. Examples: Demonstrate What You Expect


Sometimes, explaining what you want isn’t enough. Providing examples gives the AI a clear reference for the tone, format, or style you expect.

Examples help the AI:

  • Match the style or format you want.

  • Understand the level of detail you expect.

  • Generate more accurate or creative responses.

Example: Prompt: "Write a social media post promoting a fitness app. Here’s an example of the style I want: 'Stay fit on your own terms. Our app lets you create personalized workout plans, track progress, and achieve your goals—anytime, anywhere.' "

By including the example, the AI is more likely to replicate the tone and format you’re aiming for.


5. Output Format: Specify How You Want the Response


Specifying the format makes the response more organized and easier to read. You can request responses in different formats, such as:

  • Bullet points: For lists or key takeaways.

  • Tables: For comparisons or structured data.

  • Paragraphs: For detailed explanations or narratives.

  • Code blocks: For programming-related tasks.

Example: Prompt: "List the benefits of regular exercise in bullet points."


6. Constraints or Rules: Set Boundaries or Preferences


Adding constraints helps you control the length, tone, or complexity of the response. Constraints can include:

  • Word or character limits: "Explain in 100 words or less."

  • Complexity preferences: "Explain as if you’re talking to a 10-year-old."

  • Stylistic rules: "Use conversational language, avoid technical jargon."

Example: Prompt: "Describe deep learning in simple terms. Keep it under 150 words and use non-technical language."

These constraints help the AI stay within the desired boundaries.


7. Style and Tone: Guide the Writing Personality


The tone and style you specify shape the personality of the AI’s response. Common tones include:

  • Formal: Suitable for business reports or professional communication.

  • Conversational: Friendly and relaxed, good for blogs or casual content.

  • Humorous: Playful or witty, ideal for lighthearted content.

  • Persuasive: Compelling and convincing, useful for marketing copy.

Example: Prompt: "Write a product description for a smartwatch in a conversational tone. Make it sound friendly and approachable."

The AI will adjust its language to match the requested tone, making the response more relatable.


8. Clarifying Instructions: Add Follow-Up Directions


Sometimes, a prompt can benefit from follow-up instructions that clarify how the AI should respond. These can include:

  • Multiple steps: "First, describe the problem. Then, suggest three solutions."

  • Clarifying questions: "If you are unsure of the answer, say so rather than guessing."

  • Conditional instructions: "If the answer requires complex math, show the steps."

Example: Prompt: "Explain photosynthesis. If you use any scientific term, provide a simple definition right after it."

The AI will automatically add clarifications, making the answer clearer and more informative.
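To see how the eight building blocks fit together, here is a minimal sketch of a helper that assembles them into one prompt string. The function name, parameter names, and section labels are all made up for illustration; only the instruction is required, and every other block is optional.

```python
def build_prompt(instruction, role=None, context=None, examples=None,
                 output_format=None, constraints=None):
    """Assemble the prompt building blocks into a single string.

    `instruction` is the core directive; the other arguments map to the
    optional blocks discussed above (role, context, examples, format, rules).
    """
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(instruction)
    if examples:
        parts.append("Example of the style I want:\n" + "\n".join(examples))
    if output_format:
        parts.append(f"Format the answer as {output_format}.")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Explain the impact of inflation on household savings.",
    role="a financial analyst",
    output_format="bullet points",
    constraints=["under 150 words", "avoid technical jargon"],
)
print(prompt)
```

In practice you would rarely need a helper like this for casual chat, but the same structure—role first, then context, then the instruction, then format and constraints—is a useful mental checklist when writing prompts by hand.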




Mastering the building blocks of a prompt is the first step toward getting better results from AI. By combining clear instructions, relevant context, and the right structure, you can guide the AI more effectively and reduce the chances of vague or irrelevant responses.

But knowing the components alone isn’t enough—you also need to understand how to put them together skillfully. In the next article, we’ll dive into practical tips and strategies for writing effective prompts, helping you fine-tune your approach and get even more accurate and valuable results from AI.

