
Prompt Engineering Explained

  • Writer: Priank Ravichandar
  • Jan 8
  • 7 min read


An overview of prompt engineering for product team members and stakeholders.



Key Takeaways

  • Prompt engineering helps us optimize how we direct AI to complete tasks.

  • Prompts are one of the primary ways of directing AI systems to execute workflows.

  • When crafting prompts, provide structure, specific instructions, context, and examples.

  • When refining prompts, edit the prompt, not the output, and use meta-prompting.

  • To further improve outputs, create projects and manage memory.



What Is Prompt Engineering?

Prompt engineering helps us optimize how we direct AI to complete tasks.


Prompts are how we communicate with AI systems: the more effectively we structure the conversation, the better the outputs we get from AI models. Prompt engineering is the process of crafting effective prompts. It relies heavily on iterative refinement and systematic evaluation to maximize the quality and reliability of outputs. When we use prompt engineering techniques, we help AI models better understand what we are trying to accomplish, so they can respond to our requests more effectively.


Note: The following section is from an article I wrote on building AI workflows.


Why Effective Prompting Is Critical

Prompts are one of the primary ways of directing AI systems to execute workflows.


An AI workflow may rely on one or more AI systems. These systems are powered by AI models (LLMs) capable of understanding natural language, so we can tell them how to complete a series of tasks much as we would instruct a person. To build effective AI workflows, we need to prompt AI tools effectively: the model must understand what we are trying to accomplish, and prompts act as the bridge between intent and execution. Prompt engineering helps us have a well-structured conversation with AI. A single-sentence prompt is unlikely to produce the desired output, while a well-crafted prompt ensures that each AI system produces outputs that are consistent, reliable, and usable. Well-crafted prompts also act as reusable building blocks because they can be tested, iterated on, and refined.


How To Craft Better Prompts


Tips For Designing Better Prompts


Structure Prompts

The AI model must understand what you are trying to accomplish, and a single-sentence prompt is unlikely to produce the desired output. A well-structured prompt includes four basic components (see the sketch after this list):

  1. Task: Tell the model exactly what you need it to do (e.g., summarize, analyze, or create). Give it direct instructions, not questions, in simple language that leaves no room for ambiguity about the desired action.

  2. Persona: Assign a clear identity/role to the model (e.g., “You are a front-end developer” or “You are a content strategist”) to inform its perspective or approach towards the task.

  3. Context: Give the model as much context as possible. Provide information, either directly in the prompt or by uploading relevant files, that will inform the model’s output, improving its quality and relevance.

  4. Format: Define the desired output format (e.g., “Respond with bullet points,” “Create a comparison table,” or “Generate a JSON file”).
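
To make this concrete, here is a minimal Python sketch that assembles the four components into one prompt. The persona, feedback, and other details are hypothetical examples, not a prescribed template:

# A minimal sketch: combining Task, Persona, Context, and Format.
# All details below are hypothetical examples.
persona = "You are a content strategist for a B2B software company."
task = "Summarize the customer feedback below into three key themes."
context = (
    "Customer feedback:\n"
    "- 'Setup took longer than expected.'\n"
    "- 'Support was responsive and helpful.'\n"
    "- 'I wish the dashboard loaded faster.'\n"
)
output_format = "Respond with a bulleted list, one bullet per theme."

# Combine the components into a single, clearly ordered prompt.
prompt = f"{persona}\n\n{task}\n\n{context}\n{output_format}"
print(prompt)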


Provide Specific Instructions

Precise instructions lead to better, more consistent outputs. It’s important to be explicit about what is expected (e.g., goals, requirements, and constraints).

  • Provide clear guidelines to direct the model toward the desired output. Review prompts for contradictory or vague instructions.

  • For AI workflows, it’s also important to outline the exact steps the model must follow to execute the task (e.g., “you must do A, B, C, D”), as shown in the sketch after this list. This ensures that the model correctly understands and executes the intended tasks.

  • Provide step-by-step instructions for complex problems, critical thinking, or decision-making scenarios, ensuring the model thinks systematically before arriving at a final answer.
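
As an illustration, the prompt below spells out explicit steps and constraints for a ticket-classification task. This is a minimal sketch; the workflow, categories, and ticket text are hypothetical:

# A minimal sketch of a prompt with explicit, ordered instructions.
# The support-ticket workflow and categories are hypothetical.
prompt = """Classify the support ticket below. You must follow these steps:
1. Identify the product area mentioned in the ticket.
2. Rate the urgency as low, medium, or high.
3. Explain your urgency rating in one sentence.
4. Output your answer as: product_area | urgency | reason

Constraints:
- Use only these product areas: billing, onboarding, performance, other.
- Do not invent details that are not in the ticket.

Ticket: "I was charged twice this month and can't reach anyone."
"""
print(prompt)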


Provide Examples

Just like a person, the model needs to know what “good” looks like to produce a good output. While describing the attributes of a good output is useful, nothing beats a clear example (see the sketch after this list).

  • Give the AI model examples of good and bad outputs to inform its response generation.

  • Include context on why a certain example output is good or bad. This helps the AI model understand what you are looking for, shaping the outputs it produces.

  • Use examples to capture corner cases, define complex output formats, show the exact style/tone desired, or handle ambiguous inputs (e.g., sarcasm).
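
For instance, a few-shot prompt can pair an example input with labeled good and bad outputs, plus the reason each label applies. A minimal sketch using hypothetical release-note examples:

# A minimal sketch of a few-shot prompt with labeled good and bad examples.
# The release-note examples are hypothetical.
prompt = """Write a one-sentence release note for the change described.

Good example:
Change: Added CSV export to the reports page.
Note: You can now export any report as a CSV file.
(Why it's good: plain language that focuses on the user benefit.)

Bad example:
Change: Added CSV export to the reports page.
Note: Implemented CSVExportHandler in ReportsController.
(Why it's bad: describes internal code, not the user benefit.)

Change: Fixed a bug where saved filters were lost after logout.
Note:"""
print(prompt)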


Provide Sufficient Context

When deciding how to integrate AI tools into workflows, consider what information a person would need to successfully complete each workflow step. While people can complete tasks without sufficient context, AI systems primarily make decisions based on the available context. This influences how they complete a task, which tools they select, and how they use those tools. If no context is provided, they will use their best judgment, which could be sub-optimal or incorrect.

  • For prompts that include large amounts of content, use markdown formatting (e.g., # Requirements and # Process) and/or XML tags (e.g., <my_code> or <docs>) to organize content into distinct sections, helping the model distinguish between instructions and data (see the sketch after this list).

  • For multi-step workflows, provide context on the steps that come before and after, along with context on the overall goal of the workflow. Consider what information a person may require to complete the workflow step, which you are now assigning to AI.

  • When uploading files as context, clearly indicate what each file’s purpose is and how it’s meant to be used. You can also connect the tool directly to data sources (e.g., Google Drive or GitHub) to provide access to relevant files.

  • If you are not sure what context to provide, describe your goal and ask the model, “What context do you need?”
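
To illustrate the formatting tip above, the sketch below uses markdown headers and XML tags to keep instructions separate from the data being processed. The code-review scenario is a hypothetical example:

# A minimal sketch: markdown headers and XML tags separate
# instructions from data. The code-review scenario is hypothetical.
instructions = """# Task
Review the code below for readability issues.

# Requirements
- Point out unclear names and missing comments.
- Do not suggest functional changes.
"""

code_under_review = """<my_code>
def add_tax(price, fee):
    return price * 1.08 + fee
</my_code>"""

# The tagged data follows the instructions in one combined prompt.
prompt = instructions + "\n" + code_under_review
print(prompt)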


Tips For Refining Prompts


Edit The Prompt, Not The Output

Many people fail at automating workflows because they jump to automation before they fully understand the prompting strategies needed to produce good, reliable outputs. Often, they will enter a prompt and then keep asking for tweaks (e.g., “change A, remove B, and add C”). This leads to a long back-and-forth conversation, and while they eventually get the output they want, this approach makes it difficult to connect inputs to outputs precisely. When you edit the initial prompt directly, the impact of changing certain variables (e.g., instructions, requirements, or constraints) is much clearer. This makes it easier to identify effective vs. ineffective prompting strategies, leading to more consistent, high-quality results. Therefore, when refining AI outputs, prioritize editing the underlying prompt rather than asking the model to patch its last response.
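
One way to make this systematic is to keep named prompt versions and run each against the same input, so the effect of every change is visible. A minimal sketch, where call_model() is a hypothetical stand-in for whichever model API or tool you use:

# A minimal sketch of refining the prompt rather than the output.
# call_model() is a hypothetical placeholder, not a real API.
def call_model(prompt: str) -> str:
    return f"(model output for: {prompt[:40]}...)"

# Each edit becomes a new version, so changes are traceable.
prompt_versions = {
    "v1": "Summarize this feedback.",
    "v2": "Summarize this feedback into three themes, one bullet each.",
}

# Run every version against the same input to compare outputs fairly.
feedback = "Setup was slow, but support was helpful."
for name, template in prompt_versions.items():
    output = call_model(f"{template}\n\nFeedback: {feedback}")
    print(name, "->", output)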


Use Meta-prompting

Meta-prompting is a technique where you use an AI model to generate or improve prompts. You use the model to guide, structure, and optimize your prompts, helping ensure they’re more effective in producing high-quality, relevant outputs. There are a few ways to use meta-prompting:

  • Provide the model with an initial prompt (outlining the basic goals, requirements, and constraints) and ask it to optimize the prompt to improve the resulting output (see the sketch after this list). Test and refine the prompt until you get an output that meets your expectations.

  • If the model produces a great output, ask it to provide the instructions to produce that output. You could also ask the model to update or optimize your initial prompt to get this output in one shot. This allows you to recreate similar outputs and build a system for consistent results.
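
For example, a meta-prompt can hand the model your draft prompt and ask for an improved version. A minimal sketch; the draft onboarding prompt is a hypothetical example:

# A minimal sketch of meta-prompting: asking the model to improve a prompt.
# The draft prompt is a hypothetical example.
draft_prompt = "Write an onboarding email for a new customer."

meta_prompt = f"""You are an expert prompt engineer. Improve the prompt below
so that it produces a consistent, high-quality output. Add a persona, explicit
requirements, and a clear output format. Return only the improved prompt.

Draft prompt: {draft_prompt}"""
print(meta_prompt)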


Tips For Improving Output Quality


Create Projects

Often, AI models produce generic responses because they don’t have sufficient information to determine the user’s intent. However, it can be tedious to provide lengthy prompts and context each time you have a question. Projects group together related chats. You can upload custom instructions (outlining goals, requirements, constraints, etc.) and relevant files. These provide vital guidance and context for the AI model, helping it remember what matters and stay on‑topic across multiple chat threads. This leads to more relevant responses. You can also further improve the responses by refining the instructions and adding context files.


Projects are especially helpful for workflows because everything related to a specific workflow stays in the same place. For example, if you have a customer success workflow (e.g., onboarding), create a “Customer Onboarding” project to assist you with all related tasks. You can upload custom instructions (e.g., general onboarding requirements) and relevant documentation (e.g., onboarding guides, customer FAQs, onboarding emails). Anytime you ask the AI to complete a task (e.g., drafting an onboarding email or writing customer-specific instructions), it already has enough context to generate a good response. Both ChatGPT and Claude allow users to create projects.


Manage Memory

AI models can remember information across multiple chat threads and use this as additional context to inform their responses. While a model’s memory can be useful in some cases, it can also be unhelpful in others. For example, if you have multiple chats about a certain coding topic (e.g., Python functions), it’s helpful if the model remembers how you prefer to implement Python functions when generating responses to related questions. However, it’s not helpful if the model keeps mentioning Python functions when you ask it to draft an email response. Therefore, it’s important to be mindful of a model’s memory and actively manage it to ensure responses stay relevant.


  • Delete or archive chats that are no longer critical to keep the model’s primary memory focused.

  • Use temporary sessions for random or one-off queries that you don't want the AI to remember. This can be done via “Turn on temporary chat” in ChatGPT, “Use Incognito” in Claude, and “Temporary Chat” in Gemini.

  • Prune memory: Get rid of unnecessary context in the model’s memory to maintain focus and prevent confusion. In ChatGPT, you can:

    • Delete irrelevant memories (ChatGPT > Settings > Personalisation > Memory > Manage).

    • Disable use of chat history (ChatGPT > Settings > Personalisation > Memory > Reference chat history).

    • Disable use of memories (ChatGPT > Settings > Personalisation > Memory).


