Why Learning Anti-Patterns Matters
You can learn faster by studying what not to do than by studying what to do. Anti-patterns are mistakes that feel intuitively right but do not work in practice. That is what makes them insidious: because they seem reasonable, most people make them repeatedly without ever noticing the pattern.
Learning to recognize anti-patterns is like learning to recognize poor design in architecture. Once you know what to look for, you see it everywhere. And once you see it, you automatically start avoiding it. This chapter teaches you that recognition.
Most anti-patterns feel good when you write them. They seem reasonable, specific, or thorough. The problem is that language is ambiguous, and the model does not have your mental context. What seems crystal clear to you might be confusing or have a different meaning to the model. Learning to spot this gap is the key skill.
Anti-Pattern 1: Vague Instructions
The Mistake
Bad: "Write an article about remote work."
Better: "Write a 1500-word article about remote work best practices for managers who are new to leading remote teams. Focus on communication, accountability, and culture. Assume they are skeptical about remote work's viability. Include practical recommendations they can implement immediately."
The bad version gives the model no guidance about audience, length, angle, perspective, or depth. The model will make default assumptions that probably do not match what you want.
Why It Happens
Vagueness happens because what is in your head feels obvious to you. You know what you want, so you assume the model knows too. But the model has no access to your mental context. It only sees your words.
How to Fix It
Audit your prompts for vague language: "good," "interesting," "relevant," "thorough," "engaging." These words mean different things to different people. Replace them with specific guidance. Instead of "make it engaging," say "use conversational tone, include one surprising statistic, and provide actionable tips."
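This audit can be made systematic with a few lines of code. The sketch below flags vague words in a draft prompt; the word list is illustrative, not exhaustive, and the function name is my own.

```python
import re

# Words that usually signal an under-specified prompt (illustrative list).
VAGUE_WORDS = {"good", "interesting", "relevant", "thorough", "engaging"}

def audit_prompt(prompt: str) -> list[str]:
    """Return vague words found in a prompt, in order of first appearance."""
    words = re.findall(r"[a-z]+", prompt.lower())
    flagged = []
    for word in words:
        if word in VAGUE_WORDS and word not in flagged:
            flagged.append(word)
    return flagged
```

Running `audit_prompt("Write a good, engaging article")` returns `["good", "engaging"]`, telling you exactly which words to replace with specific guidance.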
Anti-Pattern 2: Overloaded Prompts
The Mistake
Bad: "Write an article about remote work while also covering hybrid work, discussing the pros and cons of both, addressing common concerns from managers and employees, providing examples from different industries, including research citations, all in 2000 words with a friendly but professional tone that also addresses productivity concerns, security concerns, and culture building, oh and make it SEO-optimized and include a strong call to action."
This prompt tries to do too much at once. The model does not know which requirements to prioritize. Quality suffers because nothing gets the attention it deserves.
Why It Happens
Overloading happens because you want good output and you assume more requirements mean better output. But there is a balance point. Beyond that point, adding more constraints actually reduces quality. The model gets confused about priorities and ends up doing everything superficially.
How to Fix It
Prioritize ruthlessly. Identify the three most important things the output needs to have. Put those in your prompt. Cut everything else. If you have more requirements, handle them as follow-up prompts or as iteration refinements. It is better to nail three things than to get ten things mediocre.
Anti-Pattern 3: Assuming Shared Context
The Mistake
Bad: "Suggest improvements to our product roadmap."
This assumes the model knows: what your product is, what your market is, what your constraints are, what your competitive position is, what your customer needs are. It knows none of these things.
Better: "We build a CRM for small e-commerce businesses with fewer than 50 employees. Our customers mainly use us for customer segmentation and email campaigns. Our main competitors are larger players like Shopify and HubSpot. Our differentiation is ease of use for non-technical users. What are three high-impact roadmap items that would strengthen our position against these competitors while staying true to our ease-of-use promise?"
Why It Happens
This happens because context that is obvious to you is invisible to the model. You live in your industry, know your product, understand your market. The model has zero knowledge of your specific situation. It only knows generalities.
How to Fix It
Do a context audit. For every prompt, ask: "Would someone who knows nothing about my situation understand this prompt?" If the answer is no, add context. Share key facts about your situation that an outsider would not know.
Anti-Pattern 4: Inconsistent Outputs
The Mistake
You ask the model to do something, and it produces inconsistent results even with the same prompt. Formatting is different. Tone is different. Structure is different.
Root Cause
Without specific examples or constraints, models default to variation. Variation is fine for creative writing but death for anything that requires consistency.
How to Fix It
Provide examples showing the exact format, tone, and structure you want. Use few-shot learning to anchor the model to your specific style. If you need multiple outputs, specify a template that all outputs should follow.
Example: "Generate five product ideas in this format: [TITLE]: [One-sentence description]. Each idea should target a specific customer segment and solve a specific problem. Use a tone that is friendly and direct, not hype-driven."
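If you generate outputs like this repeatedly, it helps to build the prompt from code so the count, format, and tone are pinned down the same way every time. A minimal sketch (the function name and wording are mine, prepend the result to whatever model call you use):

```python
def build_ideas_prompt(count: int, topic: str) -> str:
    """Build a prompt that pins down count, per-item format, and tone."""
    return (
        f"Generate {count} {topic} ideas in this format:\n"
        "[TITLE]: [One-sentence description]\n"
        "Each idea should target a specific customer segment and solve "
        "a specific problem. Use a tone that is friendly and direct, "
        "not hype-driven."
    )
```

`build_ideas_prompt(5, "product")` reproduces the example prompt above, and changing the count or topic never silently drops the format anchor.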
Anti-Pattern 5: Expecting Perfect Output on First Try
The Mistake
Expecting your first prompt to produce perfect output. Writing the perfect prompt on the first try is extremely hard; even experienced prompt engineers iterate. Yet many people treat their first draft as the finished product.
Why It Happens
Unrealistic expectations. People expect AI to work like Google search: you type something and get the exact right answer. But complex outputs rarely work that way on the first try. Writing requires revision. Designing requires iteration. Prompting is the same.
How to Fix It
Expect to iterate. Build in time for it. Your first prompt is a starting point, not the endpoint. The best results come from 2-3 iterations where you analyze output, hypothesize about what is wrong, and refine. Budget time for this in your workflow.
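The analyze-hypothesize-refine step can itself be captured as a prompt pattern: fold your critique of the last output back into the next prompt. A hedged sketch (function and wording are mine, not a standard API):

```python
def refinement_prompt(original_prompt: str, last_output: str, critique: str) -> str:
    """Fold your analysis of the previous output back into the next prompt."""
    return (
        f"{original_prompt}\n\n"
        f"Here is a previous attempt:\n{last_output}\n\n"
        f"It has this problem: {critique}\n"
        "Revise the output to fix that problem while keeping everything "
        "that already works."
    )
```

Each iteration keeps the original goal, the prior attempt, and one specific critique in view, which is what makes the second and third passes converge instead of wander.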
Anti-Pattern 6: Misunderstanding What Models Can Do
The Mistake
Asking a language model to do things it fundamentally cannot do. Examples: "Generate a completely novel idea that has never been thought of" (models synthesize from training data, cannot be truly novel), "Tell me about events after your knowledge cutoff" (models have knowledge cutoffs), "Answer subjective questions objectively" (subjectivity is not a bug, it is fundamental to language).
Why It Happens
Not understanding how models work. People tend to overestimate some capabilities and underestimate others, so their requests drift outside what a model can actually deliver.
How to Fix It
Build mental models of what language models actually are: statistical pattern matchers trained on data. They synthesize from patterns. They do not think. They do not have true understanding. They do not have knowledge of recent events. They sometimes confidently assert false information (hallucinations). Expectations that violate these fundamentals will not work.
Anti-Pattern 7: Unnecessary Complexity
The Mistake
Bad: "Using the following specialized vocabulary and technical jargon specific to the domain of quantum computing, and assuming the reader has advanced knowledge of physics and mathematics, generate an explanation of quantum entanglement that is simultaneously accessible to beginners."
This prompt is contradictory and over-specified. It is also way longer than it needs to be.
Better: "Explain quantum entanglement in simple terms that a high school student could understand. Avoid technical jargon."
Why It Happens
People often think longer, more complex prompts are better. But longer is not better. Clearer is better. Many complex prompts are just verbose versions of simple ones.
How to Fix It
Edit ruthlessly. Remove redundancy. Remove contradiction. Remove over-specification. Your goal is clarity, not comprehensiveness. Ask yourself: "Could I say this in fewer words while keeping the meaning the same?" If yes, do it.
Anti-Pattern 8: Poor Format Specification
The Mistake
Bad: "Give me ideas for improving customer retention."
What format should the ideas be in? How many should there be? How detailed? How organized?
Better: "Generate seven ideas for improving customer retention. Format each idea as: [Title]: [2-3 sentence explanation] | [Effort: Low/Medium/High] | [Expected Impact: Low/Medium/High]. Organize them by effort level, starting with low-effort ideas."
How to Fix It
Always specify format. Use a template. Use bullet points, numbered lists, tables, or structured sections. Show an example if the format is non-standard. The more explicit you are about format, the better the output matches what you need.
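When you reuse a structured format across prompts, encode the template once instead of retyping it. A minimal sketch, assuming you describe each field with a short hint:

```python
def format_template(fields: dict[str, str]) -> str:
    """Turn {field name: description} into an explicit one-line template."""
    return " | ".join(f"[{name}: {desc}]" for name, desc in fields.items())
```

For example, `format_template({"Title": "short name", "Effort": "Low/Medium/High"})` yields `[Title: short name] | [Effort: Low/Medium/High]`, which you can paste directly after "Format each idea as:" in a prompt.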
Anti-Pattern 9: No Guardrails or Constraints
The Mistake
Bad: "Write a summary of this article."
No guardrails. No constraints. The summary could be 100 words or 1000 words. Could be a bullet list or prose. Could be surface-level or deep.
Better: "Write a 3-4 sentence summary that focuses on the main finding and its business implication. Do not include background information or context. Be direct."
How to Fix It
Add guardrails: length, depth, structure, what to include, what to exclude, level of detail. Guardrails prevent the model from wandering into unwanted territory.
Anti-Pattern 10: Assuming the Model Knows Industry Jargon
The Mistake
Bad: "What should we do about our CAC:LTV ratio and our MRR churn?" (Assumes knowledge of Customer Acquisition Cost, Lifetime Value, Monthly Recurring Revenue, churn rate).
Better: "Our customer acquisition cost is $500 per customer. Their lifetime value is $2000. We are losing 8% of customers monthly. What should we focus on to improve unit economics?"
How to Fix It
Do not use unexplained jargon. If you must use specialized terms, define them first. Better yet, explain the actual numbers and situations rather than using jargon.
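A small glossary pass can catch unexplained acronyms before the prompt ever reaches the model. The sketch below uses naive substring replacement and an illustrative glossary; both are assumptions, adapt them to your own vocabulary.

```python
# Illustrative glossary; extend with your own domain's acronyms.
GLOSSARY = {
    "CAC": "customer acquisition cost",
    "LTV": "customer lifetime value",
    "MRR": "monthly recurring revenue",
}

def expand_jargon(prompt: str) -> str:
    """Replace known acronyms with plain-language definitions (naive substring match)."""
    for acronym, meaning in GLOSSARY.items():
        prompt = prompt.replace(acronym, f"{meaning} ({acronym})")
    return prompt
```

Run your draft through this before sending it; a prompt like "What should we do about our CAC and MRR churn?" comes out with every acronym spelled out in plain language.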
Key Takeaway
These anti-patterns account for the large majority of prompting failures: vagueness, overloading, missing context, inconsistency, unrealistic expectations, misunderstanding capabilities, unnecessary complexity, poor format specification, lack of constraints, and unexplained jargon. Learn to recognize these patterns in your own prompts, and your results will improve dramatically.
The common thread through all these anti-patterns is the same: assume nothing about what the model knows, and be relentlessly specific about what you want. Specificity is the antidote to nearly all prompting problems.
Quick Debugging Guide
Output is vague or generic: Add more context about your specific situation.
Output is wrong direction: Clarify your goal. Show an example. Use role-playing to anchor perspective.
Output is inconsistent: Provide specific examples showing desired format and tone.
Output is too long/short: Specify exact length constraints.
Output is wrong level of detail: Clarify depth and what to include/exclude.
Output misunderstands what you meant: You were probably unclear. Rephrase to be less ambiguous.
Output is technically wrong: Provide correct information in your prompt. Ask the model to explain its reasoning.
Output is incomplete: Use step-by-step structure. Provide a checklist. Use few-shot examples.
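A symptom-to-fix guide like this is easy to keep in tooling or notes as a lookup table. A minimal sketch; the keys and the fallback message are my own paraphrases of the guide above:

```python
# Symptom keyword -> first fix to try (paraphrased from the guide above).
DEBUG_GUIDE = {
    "vague or generic": "Add more context about your specific situation.",
    "wrong direction": "Clarify your goal, show an example, or anchor with a role.",
    "inconsistent": "Provide examples of the desired format and tone.",
    "too long": "Specify exact length constraints.",
    "wrong level of detail": "Clarify depth and what to include or exclude.",
    "technically wrong": "Supply correct facts and ask the model to explain its reasoning.",
    "incomplete": "Use step-by-step structure, a checklist, or few-shot examples.",
}

def suggest_fix(symptom: str) -> str:
    """Return the first fix whose keyword appears in the symptom description."""
    for keyword, fix in DEBUG_GUIDE.items():
        if keyword in symptom.lower():
            return fix
    return "No match; re-read the prompt for ambiguity."
```

This is a starting heuristic, not a diagnosis engine: the point is to reach for a named fix instead of rewriting the whole prompt blindly.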