Avoiding Common Pitfalls
Even with good technique, certain mistakes consistently lead to poor AI outputs. Knowing what to avoid saves time and frustration.
Pitfall 1: Being Too Vague
Problem: The model fills in gaps with assumptions that may not match your intent.
- Bad: "Write something about our product"
- Better: "Write a 150-word product description for our project management tool, targeting small business owners who currently use spreadsheets to track tasks"
Pitfall 2: Overloading a Single Prompt
Problem: Asking the model to do too many things at once leads to shallow treatment of each.
- Bad: "Analyze our competitors, write a marketing strategy, create social media posts, and draft an email campaign"
- Better: Break it into separate prompts, using the output of each as input for the next
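The chaining approach above can be sketched in a few lines. This is a minimal illustration, not a real client: `call_model` is a hypothetical placeholder you would replace with your actual API call, and the step prompts are invented examples.

```python
# Hypothetical stand-in for a real model API call.
def call_model(prompt: str) -> str:
    # Placeholder: replace with your provider's API.
    return f"[model output for: {prompt[:40]}...]"

def run_chain(steps, initial_input: str) -> str:
    """Run each prompt in turn, feeding the previous output into the next."""
    result = initial_input
    for step in steps:
        result = call_model(step.format(previous=result))
    return result

# Each step does one job, instead of one overloaded prompt doing four.
steps = [
    "Analyze our competitors. Context: {previous}",
    "Using this competitor analysis, write a marketing strategy: {previous}",
    "Turn this strategy into three social media posts: {previous}",
]
final = run_chain(steps, "We sell project management software.")
```

Because each step receives only the previous step's output, you can also inspect and correct intermediate results before they feed the next prompt.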
Pitfall 3: Not Specifying Format
Problem: Without format guidance, the model picks its own structure, which may not suit your needs.
Always specify: length, structure (bullets, paragraphs, table), level of detail, and tone.
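One way to make format guidance habitual is a small template helper. The function and field names below are hypothetical, just to show the idea of attaching explicit constraints to every task:

```python
def with_format(task: str, length: str, structure: str, tone: str) -> str:
    """Append explicit format requirements to a task prompt."""
    return (
        f"{task}\n\n"
        f"Format requirements:\n"
        f"- Length: {length}\n"
        f"- Structure: {structure}\n"
        f"- Tone: {tone}"
    )

prompt = with_format(
    "Summarize the attached customer feedback.",
    length="about 100 words",
    structure="three bullet points",
    tone="neutral and factual",
)
```

A template like this makes it hard to forget a constraint, since every prompt passes through the same checklist of fields.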
Pitfall 4: Trusting Without Verifying
Problem: Models can generate plausible-sounding but incorrect information.
- Always verify factual claims, especially numbers, dates, and citations
- Ask the model to indicate uncertainty: "If you are not confident in any claim, explicitly state that"
- Use AI output as a draft, not a final product
Pitfall 5: Ignoring Context Window Limits
Problem: If you send too much text, important information may be missed or the model may truncate its response.
- Prioritize the most relevant information
- Summarize long documents before including them
- Focus on the specific sections that matter for your task
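Prioritizing relevant sections can be automated with a simple relevance filter. The sketch below uses naive keyword overlap as the relevance score, which is an assumption for illustration, not a recommendation over proper retrieval methods:

```python
def score(section: str, query: str) -> int:
    """Naive relevance: count words the section shares with the query."""
    query_words = set(query.lower().split())
    return sum(1 for word in section.lower().split() if word in query_words)

def select_context(sections, query: str, max_chars: int = 2000) -> str:
    """Keep the most relevant sections, up to a character budget."""
    ranked = sorted(sections, key=lambda s: score(s, query), reverse=True)
    kept, used = [], 0
    for section in ranked:
        if used + len(section) > max_chars:
            break
        kept.append(section)
        used += len(section)
    return "\n\n".join(kept)

sections = [
    "Pricing plans and billing details.",
    "Company history and founders.",
    "How to reset your password.",
]
context = select_context(sections, "reset password", max_chars=200)
```

Even a crude filter like this keeps the budget on the sections that matter; in practice you would swap in embedding-based similarity for the scoring step.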
Pitfall 6: Not Iterating
Problem: Expecting a perfect result on the first try.
The best outputs usually come from 2-3 rounds of refinement:
- Generate an initial response
- Identify what is missing or wrong
- Provide specific feedback in a follow-up prompt
- Repeat until the output meets your needs
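The refinement loop above can be expressed as a short program. As before, `call_model` is a hypothetical placeholder, and `refine` simply wraps each round of feedback into a follow-up prompt:

```python
# Hypothetical stand-in for a real model API call.
def call_model(prompt: str) -> str:
    return f"draft based on: {prompt}"

def refine(task: str, feedback_rounds) -> str:
    """Generate an initial draft, then apply each round of feedback."""
    draft = call_model(task)
    for feedback in feedback_rounds:
        draft = call_model(
            f"Here is a draft:\n{draft}\n\nRevise it: {feedback}"
        )
    return draft

result = refine(
    "Write a 150-word product description.",
    ["Mention the free tier.", "Shorten the opening sentence."],
)
```

Note that each feedback round is specific ("mention the free tier"), not generic ("make it better"); vague feedback tends to produce vague revisions.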
Pitfall 7: Anthropomorphizing the Model
Problem: Treating the model as if it has opinions, feelings, or genuine understanding.
The model predicts likely text based on patterns. It does not "think," "believe," or "understand." Keeping this in mind helps you:
- Write clearer instructions
- Set realistic expectations
- Avoid frustration when outputs miss the mark
Summary Checklist
Before sending a prompt, verify:
- Is the task clearly defined?
- Did I provide necessary context?
- Is the output format specified?
- Are constraints explicit?
- Am I asking for one thing at a time?
- Will I verify the output before using it?