Prompt Engineering Examples: 30 Tested Templates
A "good" prompt template is not the one a viral tweet showed off. The viral templates over-promise and break the moment your task drifts five degrees from the example. The templates below are the ones that have survived contact with real work -- thirty of them, organised by the kind of task they solve. Each names the role, the constraints, and the output format, because those three together explain almost all the variance between a prompt that works and one that does not. Copy them, but more importantly, edit them. A template you have used and adapted is worth ten templates you have collected and never tried. For the underlying theory of why these patterns work, see our prompt engineering hub.
Table of contents
- What makes a template "good"
- Templates for writing
- Templates for analysis
- Templates for code
- Templates for structured output
- Templates for judgment
- Templates for multi-step reasoning
- How to adapt to your domain
- Frequently asked questions
- The bottom line
What makes a template "good"
Three properties separate a useful template from a clever one.
First, it specifies the output format. A template that says "write me a marketing email" is barely a template; it transfers no constraint. A template that says "respond as: subject line (5-8 words), opening hook (one sentence ending with a question mark), body (60-80 words), CTA (under 8 words), nothing else" produces consistent output across dozens of inputs.
Second, it includes one concrete example, not three abstract ones. Models pattern-match on example shape. A template with one well-formed example outperforms a template with three half-baked ones, every time.
Third, it leaves the variable parts variable. A great template has [BRACKETS] for the slots where your input goes, and the slots are named clearly: [PRODUCT], [AUDIENCE], [TONE_SAMPLE]. When you copy the template, you fill the brackets. The bracket discipline is what turns a one-time prompt into a tool you can use 50 times.
The templates below follow these three rules. Where you see [BRACKETS] you substitute. Everywhere else, leave the prompt alone -- the words have been load-tested.
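The bracket discipline is also easy to automate once your library grows. Here is a minimal sketch in Python, assuming nothing beyond the standard library; the slot names come from template 2 below, and the filled-in values are invented for illustration.

```python
import re

def fill_template(template: str, slots: dict[str, str]) -> str:
    """Replace [SLOT NAME] markers with values and fail loudly on anything missed."""
    filled = template
    for name, value in slots.items():
        filled = filled.replace(f"[{name}]", value)
    leftover = re.findall(r"\[([A-Z][A-Z_ ]*)\]", filled)
    if leftover:
        raise ValueError(f"unfilled slots: {leftover}")
    return filled

# Hypothetical usage with the slots from template 2 below.
prompt = fill_template(
    "Write a cold email from [SENDER ROLE] to [RECIPIENT ROLE] at [COMPANY TYPE].",
    {
        "SENDER ROLE": "founder",
        "RECIPIENT ROLE": "head of operations",
        "COMPANY TYPE": "a mid-size logistics firm",
    },
)
print(prompt)
```

The leftover check matters: an unfilled slot fails loudly before the prompt ever reaches the model, instead of silently sending a stray [BRACKET].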
Templates for writing
1. Voice-anchored draft. For matching an existing writer's style. Prompt: "Below is a sample of [AUTHOR]'s writing. Read it for voice, sentence rhythm, and vocabulary. Then write [TARGET: e.g. a 400-word essay] on [TOPIC] in the same voice. Do not imitate the topic, only the voice. Sample: [PASTE 200-400 WORDS]."
2. Tight email. For producing usable B2B outreach in one shot. Prompt: "Write a cold email from [SENDER ROLE] to [RECIPIENT ROLE] at [COMPANY TYPE]. Open with a specific observation about their business (supply the observation yourself via a bracketed slot; do not invent one). Body: 60-80 words, one concrete value claim, no adjectives. CTA: book a 15-minute call. Total under 120 words. No greeting line. No sign-off."
3. Outline first, then draft. For long-form articles. Prompt: "Topic: [TOPIC]. Audience: [AUDIENCE]. Word target: [N]. Step 1: produce an outline with 6-9 H2 sections, each with a one-sentence summary of the argument. Stop and wait. Do not draft yet." Then in turn 2: "Outline approved. Now draft section 1 in [N] words, in [VOICE] style, with one concrete example."
4. Headline generator. For producing 10 candidates fast. Prompt: "Generate 10 headline candidates for an article about [TOPIC]. Constraints: under 60 characters, no clickbait, no questions, must contain a specific noun. Format: numbered list. After the list, mark the strongest with a '*' and explain why in one sentence."
5. Edit-pass tightening. For making prose dense. Prompt: "Below is a draft. Cut 30% of the words without losing any claim. Preserve quotes verbatim. Strengthen verbs. Remove hedges ('very', 'really', 'quite'). Output the edited version only. Draft: [PASTE]."
Templates for analysis
6. Document summary with criteria. Prompt: "Read the document below. Produce: (a) a 50-word executive summary; (b) the three most important specific findings, each with a quote and section reference; (c) one question the document does not answer that a [ROLE] would want answered. Document: [PASTE]."
7. Comparison matrix. Prompt: "Compare [OPTION A] and [OPTION B] on these dimensions: cost, speed, accuracy, lock-in, learning curve. Output as a table with [OPTION A] and [OPTION B] as columns. For each dimension, give a one-line verdict and which option wins. End with one paragraph naming when each option is the right choice."
8. Pros, cons, recommendation. Prompt: "I am considering [DECISION]. Generate: (a) the three strongest arguments for; (b) the three strongest against; (c) the single counterargument that would change the recommendation; (d) your recommendation in one sentence. Use 'recommend X because Y, unless Z' format."
9. Risk identification. Prompt: "Below is a [PLAN/CONTRACT/PROPOSAL]. List the five highest-risk elements, ranked by severity. For each: what could go wrong, what evidence is in the document that triggers the risk, what mitigation would lower it. Format as numbered list. Document: [PASTE]."
10. Trend extraction. Prompt: "Below are [N] data points/articles/reviews. Extract: (a) three patterns that appear in 50%+ of items; (b) two outliers worth attention; (c) one prediction this data supports for the next 12 months, plus the assumption on which that prediction depends. Data: [PASTE]."
Templates for code
11. Function from spec. Prompt: "Write a [LANGUAGE] function with signature: [SIGNATURE]. Behaviour: [3-5 BULLET POINTS]. Edge cases to handle: [LIST]. Constraints: no external dependencies; pure function. Output the function body only, no commentary."
12. Test-first code (an illustrative test appears after template 15). Prompt: "Below is one passing test case. Write the implementation that passes this test. Use [LANGUAGE]. Do not add features beyond what the test requires. Test: [PASTE TEST]."
13. Refactor with constraints. Prompt: "Refactor the function below. Constraints: keep the public signature; reduce cyclomatic complexity; add input validation; add one inline comment for any non-obvious branch. Output the refactored function only. Function: [PASTE]."
14. Bug diagnosis. Prompt: "Code below is failing with: [ERROR TEXT]. Identify the root cause in one sentence. List the three most likely contributing factors, ranked by likelihood. Suggest the smallest fix. Code: [PASTE]."
15. Code review. Prompt: "Review the code below as if I am a senior developer asking for honest feedback. Format: (a) what is good (max 3 points); (b) what would block this in code review (max 5 points, ordered by severity); (c) one specific improvement that would matter most. No filler praise. Code: [PASTE]."
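To make template 12 concrete, here is the shape of the exchange in Python. The test is the kind of thing you would paste into the [PASTE TEST] slot, and the function is the kind of minimal implementation the test-as-spec constraint should push the model toward; both are hypothetical illustrations, not outputs from any model.

```python
# Hypothetical example of template 12's test-as-spec pattern (Python assumed).
def slugify(title: str) -> str:
    # Minimal implementation: the test below demands nothing more than this.
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Prompt Engineering Examples") == "prompt-engineering-examples"

test_slugify_lowercases_and_hyphenates()  # passes
```

The asymmetry is the point: the test is small and specific, so the model has nowhere to hide extra features.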
Templates for structured output
16. JSON extraction. Prompt: "Extract from the text below into this JSON schema: {name: string, role: string, company: string, mentioned_in_quote: boolean}. Rules: respond with raw JSON only, no markdown, no prose. If a field is unknown, use null. Text: [PASTE]."
17. Tabular extraction. Prompt: "From the document below, extract every [ENTITY: e.g. financial figure] into a table with columns: value, units, source paragraph, confidence (high/medium/low). Output as pipe-delimited markdown table only. Document: [PASTE]."
18. Classification with confidence (a parsing sketch follows template 20). Prompt: "Classify the input below into exactly one of these categories: [LIST]. Output JSON: {category: string, confidence: 0-100, reasoning: string (max 30 words)}. If none of the categories fit, set category to 'none' and confidence to 0. Input: [PASTE]."
19. Field validation. Prompt: "Below is a JSON object. Validate against this schema: [SCHEMA]. Output JSON: {valid: boolean, errors: [{field, problem}]}. If valid, errors is empty array. Object: [PASTE]."
20. Schema-conformant output. Prompt: "Generate [N] examples of [ENTITY] for testing. Schema: [PASTE SCHEMA]. Constraints: realistic values; vary across all enums; one example must exercise each optional field. Output as JSON array, no other text."
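Templates 16-19 assume the output will be consumed by code, so it is worth sketching the consuming side. A minimal example in Python, assuming model_output holds the raw response text for template 18; the fallback values are a choice, not a standard.

```python
import json

def parse_classification(model_output: str) -> dict:
    """Parse the JSON that template 18 asks for, with a conservative fallback."""
    text = model_output.strip()
    # Strip a markdown code fence if the model wrapped the JSON despite the instruction.
    if text.startswith("`"):
        text = text.strip("`").removeprefix("json").strip()
    try:
        result = json.loads(text)
    except json.JSONDecodeError:
        return {"category": "none", "confidence": 0, "reasoning": "unparseable output"}
    confidence = result.get("confidence", -1)
    if result.get("category") is None or not isinstance(confidence, (int, float)) or not 0 <= confidence <= 100:
        return {"category": "none", "confidence": 0, "reasoning": "schema violation"}
    return result

print(parse_classification('{"category": "billing", "confidence": 82, "reasoning": "mentions an unpaid invoice"}'))
```

For template 19, the model's verdict can also be double-checked locally with a schema validator such as the jsonschema package, whose validate(instance, schema) call raises on a violation -- cheap insurance against a model that says "valid" too generously.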
Templates for judgment
21. Steelman-then-decide. Prompt: "I am leaning toward [POSITION]. First, write the strongest case AGAINST my position in 80 words, as if you genuinely held it. Then write the case FOR in 80 words. Then your honest verdict, with the one piece of evidence that tipped it. Topic: [DESCRIBE]."
22. Devil's advocate review. Prompt: "Below is my reasoning for [DECISION]. Find the three weakest links in this argument. For each: name the assumption that would have to be true for the argument to hold, and the evidence that would falsify it. My reasoning: [PASTE]."
23. Stakeholder POV. Prompt: "I am proposing [PROPOSAL]. Write how the following stakeholders would react, in their voice, in 50 words each: [LIST 3-5 STAKEHOLDERS]. Each response should name the specific concern and what would address it."
24. Confidence-rated answer. Prompt: "Answer the question below. Format: one-sentence answer; confidence (0-100); the single piece of evidence that, if false, would change the answer. If you cannot answer with confidence above 60, say so explicitly. Question: [PASTE]."
25. Decision under uncertainty. Prompt: "I must decide [DECISION] but the data is incomplete. Identify the three pieces of information that would most reduce my uncertainty, ranked by impact. For each, name the cheapest way to obtain it. Then give your provisional recommendation if I cannot get any of them. Context: [PASTE]."
Templates for multi-step reasoning
26. Plan-then-execute (a driver-loop sketch follows template 30). Prompt: "Goal: [GOAL]. Step 1: produce a plan with 5-8 numbered steps. Stop. Wait for me to approve before executing." In turn 2: "Approved. Execute step 1 only. Show the result and stop."
27. Think-act-observe. Prompt: "For each turn, follow this loop: (1) Thought: what to do next, (2) Action: the specific action, (3) Observation: what happened. After 3 cycles, give the final answer. Goal: [GOAL]."
28. Self-critique loop. Prompt: "Step 1: produce a draft answer to: [QUESTION]. Step 2: critique your own answer in 50 words, focusing on weakest claim. Step 3: produce an improved answer addressing the critique. Output all three steps."
29. Multi-perspective synthesis. Prompt: "Question: [QUESTION]. Generate three answers, each from a different perspective: [P1], [P2], [P3]. Then write a synthesis that names where the perspectives agree, where they disagree, and which disagreement is most consequential."
30. Tree of thought. Prompt: "Problem: [PROBLEM]. Generate three different approaches to solving it. For each approach, evaluate: feasibility (1-10), effort (1-10), confidence in outcome (1-10). Recommend the highest expected-value approach with a one-paragraph justification."
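The "stop and wait" gate in templates 26 and 27 maps directly onto a driver loop if you script the conversation rather than typing each turn. A minimal sketch in Python; call_model is a stand-in for whichever chat API you use, not a real library call, and the three-step cap is an arbitrary choice.

```python
def call_model(messages: list[dict]) -> str:
    """Placeholder for your chat-API wrapper; swap in the provider SDK you actually use."""
    raise NotImplementedError

def plan_then_execute(goal: str, approve) -> list[str]:
    """Drive template 26: request a plan, gate on human approval, execute one step per turn."""
    messages = [{
        "role": "user",
        "content": (f"Goal: {goal}. Step 1: produce a plan with 5-8 numbered steps. "
                    "Stop. Wait for me to approve before executing."),
    }]
    plan = call_model(messages)
    messages.append({"role": "assistant", "content": plan})
    if not approve(plan):  # the "stop and wait" gate, kept outside the model
        return []
    results = []
    for step in (1, 2, 3):  # one step per turn, never the whole plan at once
        messages.append({"role": "user",
                         "content": f"Approved. Execute step {step} only. Show the result and stop."})
        result = call_model(messages)
        messages.append({"role": "assistant", "content": result})
        results.append(result)
    return results
```

The approval check lives in your code rather than in the prompt, which is the most reliable counter-move to the premature-execution failure noted in the table below.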
How to adapt to your domain
Templates are starting points, not endpoints. Three moves convert a generic template into one that performs in your specific context.
First, replace the example with one from your domain. Templates 6-10 use generic placeholders; substituting a real document from your work, with the right output style, produces dramatically better behaviour because the model now has a domain-specific shape to imitate.
Second, add domain vocabulary as a glossary. If you work in healthcare, add a one-line glossary at the top of your template: "ICD = International Classification of Diseases; NPI = National Provider Identifier." This is more reliable than relying on the model's general knowledge and reduces hallucination on technical terms.
Third, log the failures. The first 20 times you use a template, save the prompts where the output was unsatisfactory. Look for patterns. The fix is usually one new constraint added to the template, not a new template. Most templates plateau around the third iteration; further tweaking yields diminishing returns.
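Logging failures needs no tooling beyond a text file. A minimal sketch in Python that appends each unsatisfactory run to a JSONL file; the file name and fields are arbitrary choices, not a standard.

```python
import datetime
import json

def log_failure(template_name: str, prompt: str, output: str, note: str,
                path: str = "prompt_failures.jsonl") -> None:
    """Append one unsatisfactory run so patterns become visible after ~20 uses."""
    record = {
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
        "template": template_name,
        "prompt": prompt,
        "output": output,
        "note": note,  # e.g. "ignored the word cap", "invented a statistic"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

Twenty records in a file like this is usually enough to show which constraint the template is missing.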
| Template family | Best for | Common failure | Counter-move |
|---|---|---|---|
| Writing | Drafts, voice match | Generic, hedged | Add length cap + one sample |
| Analysis | Compare, summarize | Surface skim | Force criteria-then-verdict |
| Code | Functions, refactors | Over-engineering | Test as spec |
| Structured | Extraction, classification | Markdown wrapper | "Raw JSON only" reminder at end |
| Judgment | Decisions, reviews | Confirmation bias | Steelman opposite first |
| Multi-step | Plans, agents | Premature execution | "Stop and wait" gate |
Frequently asked questions
Should I memorise these templates?
No. Save them in a personal library and copy-paste. The point of a template is offloading working memory, not adding to it. The 30 above will cover most tasks; over six months you will accumulate another 20 specific to your work, and those personal templates will outperform any public collection.
How do I know if a template is working?
Run it on five inputs and rate the output on a 1-5 scale. A template averaging 4+ across varied inputs is working. Below 3, the template needs revision -- usually adding a constraint, sometimes adding an example. Below 2, the template is wrong for the task; do not try to repair it.
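If you want that scoring to be more than a gut feel, a few lines are enough. A minimal sketch in Python; the "borderline" band for averages between 3 and 4 is an assumption, since only 4+, below 3, and below 2 are defined above.

```python
def template_verdict(scores: list[int]) -> str:
    """Map an average 1-5 rating onto the thresholds above."""
    avg = sum(scores) / len(scores)
    if avg >= 4:
        return f"working (avg {avg:.1f})"
    if avg >= 3:
        return f"borderline (avg {avg:.1f}) -- try five more varied inputs"
    if avg >= 2:
        return f"needs revision (avg {avg:.1f}) -- add a constraint or an example"
    return f"wrong for the task (avg {avg:.1f}) -- do not repair it"

print(template_verdict([4, 5, 3, 4, 4]))  # working (avg 4.0)
```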
Why are some templates so short?
The shortest prompt that produces the right output is the right prompt. Long prompts encode complexity that the model often does not need; short prompts that work indicate the model already had the relevant pattern and only needed direction. Length is not a quality signal.
Do these work on Claude and Gemini, not just GPT?
Yes, with one caveat: structured-output templates (16-20) perform best on Claude and GPT-5; Gemini sometimes adds prose around JSON despite the instruction. If you target Gemini, end the prompt with "raw JSON only, nothing else, do not explain." See our guide's portability section for the full breakdown.
Can I combine templates?
Yes, and it is often the right move. Template 26 (plan-then-execute) combined with template 11 (function from spec) produces strong agentic code generation: plan, then execute one function at a time. Combining template 21 (steelman) with template 8 (pros-cons) over-constrains -- pick one judgment template per prompt.
What if the template produces something close but not quite right?
Iterate, do not start over. Paste the output back with a single sentence: "rewrite paragraph 2 to be more specific" or "convert the bullet list to a comparison table." The model is good at editing; use it.
Where do I store my growing template library?
A markdown file in a notes app is fine for personal use. For team use, evaluate dedicated tools -- our prompt engineering tools roundup compares the major options.
The bottom line
Pick five templates from above that match the work you do most often. Save them in a notes file. Use one tomorrow. Edit it after the third use. After a month you will have ten templates of your own that beat anything you copied. The 30 above are not the answer; they are the starting point. Most professionals settle into a personal toolkit of 12-15 templates that cover 80% of their daily prompting -- the work is finding which ones, for your specific job. Continue with our beginner's guide if you are early in the journey, or the system prompts guide if you are building production prompts.
Last updated: May 2026
