Every AI prompt has parts. Some matter a lot. Others are filler.
Understanding which is which changes how you write prompts. Instead of guessing why something worked or didn’t, you can see the structure underneath. You can tweak specific pieces instead of starting over.
Think of it like a recipe. Once you know the ingredients and what they do, you stop blindly following instructions. You start making adjustments on purpose.
The Five Parts of Every Prompt
Most effective prompts contain five elements, though not every prompt needs all five. Learn Prompting’s research on prompt structure identifies these as the building blocks that appear across different prompting frameworks.
Here’s what each one does.
1. The Directive (The Task)
This is the core instruction. What do you actually want the AI to do?
“Write a product description.” “Summarize this document.” “Analyze these sales numbers.” “Debug this code.”
Simple, right? But here’s where most prompts go wrong: they’re vague about the task, or they bury it in context. The AI ends up guessing at what you need.
A strong directive is specific and unambiguous.
Weak directive: “Help me with this email.”

Strong directive: “Write a follow-up email to a prospect who hasn’t responded in two weeks.”
The first leaves everything open to interpretation. The second tells the AI exactly what you need.
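In code, the directive is simply the user message. Here’s a minimal sketch using the OpenAI Python SDK (the model name is illustrative; any chat API follows the same shape):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A specific, unambiguous directive as the user message.
directive = "Write a follow-up email to a prospect who hasn't responded in two weeks."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model you have access to
    messages=[{"role": "user", "content": directive}],
)
print(response.choices[0].message.content)
```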
2. The Context (Background Information)
Context is everything the AI needs to know about your situation. Who are you? What are you working on? What’s the background here?
Without context, the AI fills in blanks with generic assumptions. With context, it can tailor its response to your actual situation.
Let’s say you need a presentation outline.
Without context: “Create a presentation outline about AI.”
The AI has no idea who the audience is, what angle to take, or how technical to go. You’ll get something generic.
With context: “I’m presenting to our executive team next week. They approved our AI pilot last quarter and want a 6-month progress update. They care about ROI and timeline, not technical details. Create a presentation outline.”
Now the AI understands the situation. It knows to emphasize business outcomes, avoid jargon, and structure for an executive audience.
Context answers the questions the AI would ask if it could: Who’s this for? What’s the situation? What matters here?
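If you build prompts in code, keeping context and directive as separate strings makes that separation explicit. A minimal sketch, using the example above:

```python
context = (
    "I'm presenting to our executive team next week. They approved our AI "
    "pilot last quarter and want a 6-month progress update. They care about "
    "ROI and timeline, not technical details."
)
directive = "Create a presentation outline."

# Context first, directive last, so the instruction stays prominent.
prompt = f"{context}\n\n{directive}"
```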
3. The Role (Persona)
Assigning a role tells the AI what perspective and expertise to bring. “Act as a marketing director” shapes responses differently than “act as a data analyst.”
This isn’t just roleplay. Research compiled by K2View shows that role-based prompting helps models draw on relevant domain knowledge and adjust tone appropriately. When you tell an AI to “act as a senior financial analyst,” it adjusts its vocabulary, level of detail, and assumptions about what you already know.
Without role: “Explain our Q4 numbers.”

With role: “You’re a CFO explaining Q4 numbers to the board. Focus on the story behind the data, not just the figures.”
The role shapes everything: word choice, depth of explanation, what gets emphasized, what gets skipped.
Use roles when:
- You need domain-specific expertise
- Tone and perspective matter
- You want the AI to make assumptions appropriate to a particular viewpoint
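In chat-style APIs, the role typically goes in the system message rather than the user message, which keeps persona and task cleanly separated. A sketch of the message shape, using the CFO example above:

```python
messages = [
    # The persona lives in the system message.
    {"role": "system", "content": "You're a CFO explaining Q4 numbers to the board."},
    # The directive lives in the user message.
    {
        "role": "user",
        "content": "Explain our Q4 numbers. Focus on the story behind the data, not just the figures.",
    },
]
```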
4. Output Format
How should the response look? Bullet points? Numbered list? Table? Paragraph form? JSON?
IBM’s prompt engineering guide emphasizes that structured inputs and outputs improve reliability. When you specify format, you reduce ambiguity about what “done” looks like.
Without format: “Give me ideas for blog topics.”

With format: “Give me 10 blog topic ideas as a numbered list. For each, include the topic and a one-sentence description of the angle.”
Format specifications prevent a lot of back-and-forth. Instead of getting a rambling paragraph when you wanted bullets, you get exactly the structure you need.
Common format options:
- Bullet points or numbered lists
- Tables (great for comparisons)
- Step-by-step instructions
- Structured templates
- Code blocks (for technical content)
- Specific word or character counts
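Format instructions matter most when another program consumes the output. One common pattern is to request JSON and parse it, so a malformed response fails loudly instead of silently. A sketch (the model reply is stubbed with a placeholder string):

```python
import json

format_spec = (
    "Return exactly 10 blog topic ideas as a JSON array. Each item must be an "
    'object with two keys: "topic" and "angle" (one sentence each).'
)
prompt = f"Give me ideas for blog topics.\n\n{format_spec}"

# Stand-in for the model's reply; in practice this comes from your API call.
raw_output = '[{"topic": "Prompt anatomy", "angle": "Break prompts into their five parts."}]'

ideas = json.loads(raw_output)  # raises if the model ignored the format
for idea in ideas:
    print(f"- {idea['topic']}: {idea['angle']}")
```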
5. Examples (Demonstrations)
Sometimes showing beats telling. Examples demonstrate exactly what you want.
This is particularly useful when format or style is hard to describe but easy to recognize. Instead of explaining your brand voice, show a sample. Instead of describing your email format, include one.
Without example: “Write a product description in our brand voice. We’re casual but professional.”
With example: “Write a product description in our brand voice. Here’s an example of our style:
‘The Horizon Backpack isn’t just storage. It’s your mobile office, gym bag, and weekend escape kit rolled into one. Fits a 15-inch laptop, three days of clothes, and still has room for snacks. Because priorities.’
Now write a similar description for our new water bottle.”
The example communicates more than paragraphs of explanation could. The AI picks up the rhythm, the humor, the sentence structure.
Studies on few-shot prompting consistently show that 3-5 diverse examples improve output quality more reliably than lengthy explanations. The key word is diverse. Show the range of what you want, not just one type.
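A practical way to manage demonstrations is to keep them as data and format them into the prompt, which makes it easy to swap examples in and out. A minimal few-shot sketch (the second and third products are invented placeholders):

```python
# Diverse demonstrations: show the range of what you want, not one type.
examples = [
    ("Horizon Backpack",
     "It's your mobile office, gym bag, and weekend escape kit rolled into one."),
    ("Trail Mug",  # placeholder product, invented for illustration
     "Coffee that survives the commute, the campsite, and your desk."),
    ("Everyday Tote",  # placeholder product, invented for illustration
     "Groceries, gym gear, or a laptop. It doesn't ask questions."),
]

shots = "\n\n".join(f"Product: {name}\nDescription: {desc}" for name, desc in examples)
prompt = (
    f"{shots}\n\n"
    "Now write a similar description for our new water bottle.\n"
    "Product: Water Bottle\nDescription:"
)
```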
How to Order the Parts
Order matters more than you might think.
Language models process text sequentially. They’re trained to predict what comes next based on what came before. This means the last thing in your prompt often gets the most attention.
Learn Prompting recommends placing the directive toward the end of your prompt, after the context and examples. Positioned there, the instruction is what the model acts on, rather than reading as just more context to continue.
A logical sequence:
- Examples (if using them) - Sets the pattern
- Context - Provides necessary background
- Role - Establishes perspective
- Directive - The core task
- Format - How you want the output
That said, this isn’t rigid. Many prompts work fine in different orders. The principle is: make sure your directive is clear and prominent, not buried in a wall of context.
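That ordering is easy to encode once and reuse. A small helper that assembles the parts in the recommended sequence (the names and defaults are my own, not a standard API):

```python
def build_prompt(directive, context="", role="", examples=None, output_format=""):
    """Assemble prompt parts in the recommended order:
    examples -> context -> role -> directive -> format."""
    parts = []
    if examples:
        parts.append("\n\n".join(examples))  # examples set the pattern first
    if context:
        parts.append(context)
    if role:
        parts.append(role)
    parts.append(directive)  # the core task stays near the end, never buried
    if output_format:
        parts.append(output_format)
    return "\n\n".join(parts)
```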
What You Can Skip
Not every prompt needs all five parts.
Skip examples when:
- The task is straightforward
- You don’t care about specific format or style
- You’re asking for analysis or reasoning (examples can constrain the model’s thinking)
Skip role when:
- The task doesn’t require specialized perspective
- You want the AI’s “default” voice
- The context already implies the relevant expertise
Skip extensive context when:
- The task is self-contained
- Background doesn’t change the answer
- You’re doing something generic
Simple questions need simple prompts. “What’s the capital of France?” doesn’t need role, context, or examples. The complexity of your prompt should match the complexity of your task.
The Cost of Complexity
Here’s something worth knowing: more isn’t always better.
Research on prompt engineering costs found that structured, concise prompts often outperform verbose ones. One analysis showed that well-structured short prompts reduced API costs by 76% while maintaining the same quality. That’s $706 per day versus $3,000 per day for 100,000 API calls.
The lesson: add components when they improve results, not by default. Every word in your prompt costs tokens, and longer prompts don’t automatically mean better output.
Test this yourself. Try a detailed prompt and a shorter version that keeps just the essential parts. Often the shorter one works just as well.
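Those headline numbers are easy to sanity-check with the figures quoted above:

```python
calls_per_day = 100_000
verbose_daily, concise_daily = 3_000, 706  # dollars per day, from the analysis cited above

per_call_verbose = verbose_daily / calls_per_day  # $0.030 per call
per_call_concise = concise_daily / calls_per_day  # about $0.007 per call
savings = 1 - concise_daily / verbose_daily       # about 76%
print(f"{savings:.0%} cheaper: ${per_call_verbose:.3f} vs ${per_call_concise:.4f} per call")
```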
Putting the Parts Together
Let’s build a prompt step by step.
Task: You need a sales email for a new product launch.
Start with just the directive:
Write a sales email about our new project management software.
That’ll work. You’ll get a generic sales email. But let’s see what adding parts does.
Add context:
We’re launching TaskFlow, a project management tool built specifically for marketing agencies. Our target customer is agency owners managing 10-50 person teams. They’re frustrated with tools built for software companies, not creative work.
Write a sales email about TaskFlow.
Better. Now the AI understands the product and audience.
Add role:
We’re launching TaskFlow, a project management tool built specifically for marketing agencies. Our target customer is agency owners managing 10-50 person teams. They’re frustrated with tools built for software companies, not creative work.
You’re a B2B SaaS marketer with experience selling to agencies. Write a sales email about TaskFlow.
The role brings relevant expertise and shapes the approach.
Add format:
We’re launching TaskFlow, a project management tool built specifically for marketing agencies. Our target customer is agency owners managing 10-50 person teams. They’re frustrated with tools built for software companies, not creative work.
You’re a B2B SaaS marketer with experience selling to agencies. Write a sales email about TaskFlow. Keep it under 200 words. Include a clear call to action to book a demo.
Now there’s structure around the output.
Add example (optional):
We’re launching TaskFlow, a project management tool built specifically for marketing agencies. Our target customer is agency owners managing 10-50 person teams. They’re frustrated with tools built for software companies, not creative work.
Here’s a sales email that performed well for us:
“Subject: Your project management tool wasn’t built for you
Running a marketing agency means juggling campaigns, clients, and creative chaos. Most PM tools were designed for shipping code, not shipping campaigns.
TaskFlow is different. Built by agency people for agency people. No developer jargon. No features you’ll never use. Just clear project tracking that matches how creative teams actually work.
[Demo link] - See it in action (15 minutes, no pitch).”
Write a similar email for our webinar registration campaign. Same tone, same length. CTA should drive webinar signups.
Each component adds specificity. The final prompt leaves little to chance.
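If you script prompt assembly, the same build maps directly onto the build_prompt helper sketched in the ordering section:

```python
prompt = build_prompt(
    context=(
        "We're launching TaskFlow, a project management tool built specifically "
        "for marketing agencies. Our target customer is agency owners managing "
        "10-50 person teams. They're frustrated with tools built for software "
        "companies, not creative work."
    ),
    role="You're a B2B SaaS marketer with experience selling to agencies.",
    directive="Write a sales email about TaskFlow.",
    output_format="Keep it under 200 words. Include a clear call to action to book a demo.",
)
```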
Diagnosing Prompt Problems
When output isn’t what you wanted, ask which component failed.
Output too generic? Add more context about your specific situation.
Wrong tone or expertise level? Adjust or add a role.
Format is off? Be explicit about structure.
Style doesn’t match what you need? Add an example.
Response is confused or rambling? Your directive might be unclear. Simplify it.
Too verbose or too brief? Add format constraints (word count, number of points).
This diagnostic approach beats starting over every time something goes wrong. Identify the weak part, strengthen it, and try again.
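The mapping from symptom to fix is small enough to keep as a cheat sheet next to your code (a toy lookup table, nothing more):

```python
PROMPT_FIXES = {
    "too generic": "Add more context about your specific situation.",
    "wrong tone or expertise": "Adjust or add a role.",
    "format is off": "Be explicit about the structure you want.",
    "style mismatch": "Add an example.",
    "confused or rambling": "Simplify the directive.",
    "too verbose or too brief": "Add format constraints (word count, number of points).",
}
```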
Beyond the Basics
Understanding prompt anatomy is foundation-level knowledge. It explains why some prompts work and others don’t.
From here, you can explore more advanced techniques. Chain-of-thought prompting uses the directive component differently, asking the AI to reason step by step. Few-shot prompting focuses on the examples component, showing the AI how to handle new situations through demonstration.
Each advanced technique builds on this basic anatomy. Once you see the parts, you can combine and modify them deliberately.
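As a preview, the chain-of-thought tweak can be as small as one sentence appended to the directive (the phrasing here is illustrative):

```python
directive = "Analyze these sales numbers and flag anything unusual."
# Chain-of-thought variant: the directive itself asks for visible reasoning.
cot_directive = directive + " Think through it step by step before giving your conclusion."
```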
Quick Reference
The five parts of a prompt:
- Directive - What task to perform (required)
- Context - Background information about your situation
- Role - Perspective and expertise to adopt
- Format - How the output should be structured
- Examples - Demonstrations of what you want
Order principle: Put the directive last or near the end. Context and examples come first.
Complexity principle: Add parts only when they improve results. Simple tasks need simple prompts.
Debugging principle: When output is wrong, identify which component is weak and fix that specific part.
The best prompts aren’t the longest ones. They’re the ones where every part does necessary work.