Learning Guide

Best Practices for Writing AI Prompts

9 min read
Beginner to Intermediate

Topics covered:

Prompt Engineering, AI, Best Practices, Productivity

Prompting is the art of telling an AI exactly what you want, and getting it. If you've experimented with ChatGPT, you've already seen how the phrasing of your request can shape the response. However, there's a world of difference between typing a question and designing a prompt that reliably delivers the results you need. As AI tools become more central to daily work, learning to guide them with precision is quickly becoming a core professional skill. This article will show you how to approach prompt engineering with purpose, clarity, and confidence, starting with the mindset that sets effective practitioners apart.

I. Why Prompt Engineering Matters

Every interaction with a language model is a negotiation. The model brings vast knowledge and linguistic skill, but it relies entirely on your instructions to know what to do. Vague or open-ended prompts often yield generic, unfocused answers. In contrast, a well-crafted prompt can extract nuanced insights, generate structured data, or even automate repetitive tasks.

Prompt engineering matters because it is the lever that turns a general-purpose AI into a tool tailored for your needs. Whether you are summarizing reports, drafting emails, or analyzing customer feedback, the quality of your prompt directly affects the usefulness of the output. As models grow more capable, the difference between a passable result and an excellent one often comes down to how you ask. By investing a little time in learning prompt design, you can unlock more value from the tools you already use.

II. The Iterative Mindset: Start Simple

Effective prompt engineering is not about writing the perfect prompt on your first try. Instead, it is an iterative process, one that rewards experimentation and adjustment. Start with the simplest prompt that can possibly work. Measure its output, then add only as much context as you need.

Suppose you want to extract place names from a paragraph. Begin with a direct instruction:

Extract all place names from the following text.

Test the output. If the model misses some locations or includes irrelevant terms, refine your prompt:

Extract the names of cities and institutions from the text below.
List them after 'Places:'.

Each iteration is a feedback loop. You observe what the model does, adjust your instructions, and try again. This approach keeps your prompts concise and focused, reducing the risk of confusion or unintended results.
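
If you iterate from a script rather than a chat window, the feedback loop can be automated. The sketch below is a minimal example, assuming the OpenAI Python SDK (openai v1.x) with an API key in your environment; the model name is illustrative, and any chat-completion API would work the same way.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

text = "The summit was hosted by MIT in Cambridge before sessions moved to Geneva."

# Iteration 1: the simplest prompt that could possibly work.
v1 = f"Extract all place names from the following text.\n\n{text}"
print(complete(v1))

# Iteration 2: tighten the instruction after inspecting the first output.
v2 = (
    "Extract the names of cities and institutions from the text below.\n"
    "List them after 'Places:'.\n\n" + text
)
print(complete(v2))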

Breaking down complex tasks into smaller steps is another key habit. If your goal involves multiple actions, such as summarizing, then translating, then formatting, tackle each subtask separately before combining them.

### Instruction ###
Summarize the following article in one paragraph.

### Text ###
[Paste article here]

Once you are satisfied with the summary, you can add a translation step or specify a format. This modular approach makes it easier to diagnose issues and fine-tune your prompts.
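
As a rough illustration of the modular approach in code, each subtask can be its own call: the first produces the summary, and a second, smaller prompt translates it. This sketch reuses the same single-turn helper and assumes the OpenAI Python SDK; the model name and the Spanish target language are just examples.

from openai import OpenAI

client = OpenAI()

def complete(prompt: str) -> str:
    # Same single-turn helper as in the earlier sketch.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

article = "Paste the article text here."

# Step 1: summarize only, and check the result before adding more steps.
summary = complete(
    "### Instruction ###\n"
    "Summarize the following article in one paragraph.\n\n"
    "### Text ###\n"
    f"{article}"
)

# Step 2: translate the verified summary in a separate prompt.
translation = complete(f"Translate the text below from English to Spanish:\n\n{summary}")
print(translation)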

Remember, the goal is not to impress the model with clever wording, but to communicate your intent as clearly and simply as possible. Each round of testing brings you closer to a prompt that works reliably for your use case. Over time, you will develop an intuition for how much detail to include and when to stop iterating.

By adopting this iterative mindset, you will not only save time but also build prompts that are robust, reusable, and easy to adapt as your needs evolve.

III. Crafting Clear and Precise Instructions

Effective prompt engineering begins with clarity. When you draft a prompt, your goal is to communicate your intent to the language model as directly as possible. Ambiguous or open-ended instructions often yield unpredictable results. Instead, state exactly what you want the model to do, using active verbs and explicit context.

Start each prompt with a clear command. For example, if you want a summary, instruct the model to:

Summarize the following article in three sentences.

If your task involves translation, specify both the source and target languages:

### Instruction ###
Translate the text below from English to Spanish:
Text: "Welcome to the team!"

Separators such as ### help distinguish instructions from input data, reducing confusion for both you and the model. Place your instruction at the very top, followed by any necessary context or examples. This structure mirrors how you might brief a colleague: lead with the task, then provide supporting details.
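
If you assemble prompts in code, this instruction-first structure translates naturally into a small template. The sketch below is plain Python string formatting; the function name and the ### separators are just one convention.

def build_prompt(instruction: str, text: str) -> str:
    """Place the instruction first, then the input data, with ### separators
    so the two parts are easy to tell apart."""
    return (
        "### Instruction ###\n"
        f"{instruction}\n\n"
        "### Text ###\n"
        f"{text}"
    )

print(build_prompt(
    "Translate the text below from English to Spanish:",
    '"Welcome to the team!"',
))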

When possible, define the desired output format:

Extract all email addresses from the text below.
Output format: comma-separated list.

By specifying the format, you guide the model toward consistent, usable results.
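
A practical payoff of pinning down the output format is that downstream code can parse the reply directly. A minimal sketch, again assuming the OpenAI Python SDK and an illustrative model name:

from openai import OpenAI

client = OpenAI()

def complete(prompt: str) -> str:
    # Single-turn helper; see the earlier sketch for details.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

text = "Contact jane.doe@example.com or support@example.org for details."

prompt = (
    "Extract all email addresses from the text below.\n"
    "Output format: comma-separated list.\n\n"
    f"{text}"
)

raw = complete(prompt)
# Because the format was specified, a simple split is usually enough.
emails = [item.strip() for item in raw.split(",") if item.strip()]
print(emails)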

Imprecision is a common stumbling block; to avoid it, favor direct, unambiguous language. When your instructions are vague or contradictory, the model is forced to guess your intent, often with unsatisfactory results. For example, a prompt instructing the model to "Explain prompt engineering briefly" leaves "briefly" open to interpretation. Instead, set clear boundaries:

In 2-3 sentences, explain the concept of prompt engineering to a high school student.

Avoid relying on negative instructions as your primary guidance. Telling the model what not to do, such as "Do not include technical jargon," is often less effective than specifying what you want:

Explain the following concept using simple language and no technical jargon.

Imprecision can also creep in when you assume the model understands your context or goals. Always state your requirements explicitly, even if they seem obvious. If you want a list, say so. If you need a specific format, define it.

Test your prompt with a variety of inputs, including edge cases and ambiguously worded text. If the output is not what you expect, or if it varies from run to run, revise your instruction for greater precision. Clarity is not about verbosity; it is about making your intent unmistakable. Each iteration should move you closer to a prompt that consistently delivers the results you need.

In summary, eliminate ambiguity by stating your intent directly, specifying the output format, and framing your requests in positive, actionable terms. Precision is the foundation of reliable prompt engineering.

IV. Embracing Specificity

Specificity is the backbone of effective prompt design. The more precisely you describe the task, the more reliably the model will deliver the output you need. Vague prompts invite guesswork, while detailed instructions anchor the model's response.

Suppose you want to extract place names from a paragraph. A generic prompt such as:

Find places in this text.

may yield inconsistent results. Instead, be explicit:

Extract the names of all cities and institutions mentioned in the following text.
Desired format: Place: <comma_separated_list_of_places>
Input: "The conference was held at MIT in Cambridge."

This prompt tells the model exactly what to look for and how to present the answer. If you require a particular style or tone, state it directly:

Rewrite the following paragraph in a formal tone suitable for a business report.

Examples within your prompt can further clarify expectations:

Classify the following customer feedback as Positive, Negative, or Neutral.
Example: "The app is easy to use." → Positive

Balance detail with relevance. Include only information that helps the model perform the task. Adding too much unnecessary context to your prompt can reduce its effectiveness and might even confuse the model. If you are unsure how much detail to provide, start with the essentials and iterate based on the results.

Remember, specificity is not about length; it is about relevance and clarity. Each detail should serve a purpose.

V. Framing Requests Positively: Emphasize 'Do' Instead of 'Don't'

When you draft prompts, focus on what you want the model to do, not what you want it to avoid. Positive framing leads to clearer, more reliable outputs. If you tell a model not to ask for personal information, it may still generate questions about preferences or interests. Instead, specify the desired action:

Recommend a movie from the current list of top global trending films, and base your choice on popularity alone.

This approach reduces ambiguity. The model is more likely to follow direct instructions than to interpret what should be omitted.

Consider this positively framed prompt:

### Instruction ###
Summarize the following article in three sentences, focusing on key findings.

Contrast that with a negatively framed prompt:

### Instruction ###
Summarize the article, but don't include background information or minor details.

The first prompt gives a clear target. The second leaves room for interpretation about what counts as minor details. Whenever possible, state what to include, not what to exclude. This habit will help you get more consistent, actionable results from language models.

VI. Putting It All Together: A Prompt-Design Workflow

Effective prompt engineering is a process, not a one-off task. To get the best results, adopt a workflow that emphasizes iteration, measurement, and refinement.

Prompt-Design Workflow

Define your goal. State exactly what you want the model to produce. For example, your goal might be to extract all company names from a news article.

Draft a simple prompt. Start with the minimum instruction needed.

Extract company names from the following text.

Test and observe. Run the prompt on a few examples. Review the outputs. Are they accurate? Are any companies missed or extra entities included?

Add context and constraints. If results are inconsistent, clarify the format or add examples.

Extract company names from the following text.
Desired format: Company: <comma_separated_list_of_companies>
Text: "Apple and Google announced a new partnership."

Iterate. Tweak your prompt based on what you see. If the model includes non-company entities, specify:

Only list company names. Do not include product names or locations.

Validate with edge cases. Test your prompt on tricky inputs, for example, company names that are also common words like 'Amazon' or 'Shell'.

Document your final prompt. Once you are satisfied, save the prompt and a few sample outputs. This makes it easy to reuse or share with colleagues.
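
For the testing, iteration, and validation steps above, it helps to keep a small harness next to the documented prompt so tricky inputs can be re-run after every tweak. A minimal sketch, again assuming the OpenAI Python SDK and an illustrative model name:

from openai import OpenAI

client = OpenAI()

def complete(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

PROMPT = (
    "Extract company names from the following text.\n"
    "Only list company names. Do not include product names or locations.\n"
    "Desired format: Company: <comma_separated_list_of_companies>\n"
    'Text: "{text}"'
)

# Edge cases: company names that are also common words, plus a negative case.
test_inputs = [
    "Apple and Google announced a new partnership.",
    "Shell reported record profits while Amazon expanded into groceries.",
    "The amazon rainforest spans nine countries.",  # should yield no companies
]

for text in test_inputs:
    print(text)
    print("  ->", complete(PROMPT.format(text=text)))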

By following this workflow, you will develop prompts that are both robust and adaptable. Remember: start simple, measure, and iterate. Each cycle brings you closer to the output you need.

VII. Additional Resources

OpenAI – Best practices for prompt engineering
Cohere – Prompt Engineering Guide
Google – Generative AI Prompt Patterns
Prompt Engineering Glossary
