
Prompt Engineering in 2025

Elia Weiss

Is prompt engineering even a thing? If AI is so "smart", why do we even need to engineer prompts? Shouldn't it be able to understand plain language instructions?

The answer is both yes and no. Yes, AI can follow simple instructions, but no, it doesn't interpret them the same way humans do.

When you give a task to a human, for example when a manager assigns a task to an employee, the instructions can be vague and high-level—like "prepare a report on our latest product launch." The employee fills in the gaps, and if they're new, they might get confused, make a draft, ask questions, and improve over time. Eventually, they won't need guidance and may even help train others.

But AI doesn't function this way. While modern AI models are highly advanced, they are still language models with limited reasoning, contextual understanding, and memory. That's why prompts need to be clear, detailed, and as unambiguous as possible.

This is where prompt engineering becomes crucial—humans aren't great at giving clear, detailed, and unambiguous instructions. We're used to high-level direction, and it's either difficult or we're too lazy to spell out every detail and eliminate ambiguity.

The GraphFlow paradigm addresses this exact problem by helping humans structure prompts in a detailed, unambiguous, and clear way for the AI—while also providing a visual flow that lets humans see the big picture.

The GraphFlow paradigm offers three powerful capabilities:

  • Visualizes the flow;
  • Breaks the process into clear steps;
  • Assigns specific instructions and relevant tools to each step.

This not only helps manage the prompt but also ensures the AI isn't overwhelmed with too many instructions or tools, or using them at the wrong time.

The GraphFlow paradigm vs. traditional prompting

Basically, almost everything you can do with the GraphFlow paradigm can also be done with traditional prompt methods. While there are some differences we'll discuss later, for now, let's assume they're functionally equivalent—you can achieve the same outcomes with either.

Now, consider creating a flow where the bot asks the user a question, then branches to different follow-ups based on the user's response, and maybe even loops back to a previous step to re-ask a question. This is a more complex flow than a simple linear, one-question-after-another interaction.

Now imagine trying to explain that branching logic to the AI using words alone in a system prompt. It's difficult to describe branching clearly and unambiguously—especially when looping back to previous nodes. Try it yourself and you'll quickly get tangled: go from node A to B, then branch to C and D, but if you're in D go back to A, and if in C go to E. It becomes confusing and hard to maintain.

Wouldn't it be much easier to just sketch the flow as a diagram? That's exactly what the GraphFlow tool does—using Mermaid charts, which are unambiguous graphs written in Mermaid language and easily rendered into clear visuals for humans to understand.

There's just one challenge—most people don't have the time or energy to learn a new charting tool or language, no matter how polished the UI is. That's why the GraphFlow paradigm leverages AI to handle this. AI is great at turning vague flow descriptions into Mermaid charts, and it supports an iterative process—start with a rough idea, then add or remove nodes and refine the structure. This makes creating flows intuitive, smooth, and accessible to anyone, not just developers. It also makes future edits easy, since you can simply describe changes in natural language.

By using the GraphFlow paradigm and tool, we gain a way to build prompts that are intuitive, organized, unambiguous, and detailed—easy for humans to create and clear for AI to understand.

Chat as a state machine

The GraphFlow paradigm treats a chat like a state machine, where each node represents a state with its own tools and instructions. Transitions between states are handled by the AI using a special internal tool—not visible to the user—but controlled through instructions within each node. For example, you can guide the AI to move to specific nodes based on the user's input.
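To make this concrete, here is a minimal sketch of how such a state machine might be represented. The node structure and the go_to_node transition tool are illustrative assumptions for this post, not GraphFlow's actual internals:

```python
# Illustrative sketch only: a chat flow modeled as a state machine.
# Each node carries its own instructions and tools; a hidden transition
# tool (here called "go_to_node") moves the chat between nodes.

NODES = {
    "welcome": {
        "instructions": "Greet the user, then ask which subscription plan interests them.",
        "tools": ["show_plan_buttons"],
        "allowed_transitions": ["plan_details", "checkout"],
    },
    "plan_details": {
        "instructions": "Explain the selected plan. When the user is ready, move to checkout.",
        "tools": ["fetch_plan_info"],
        "allowed_transitions": ["welcome", "checkout"],
    },
    "checkout": {
        "instructions": "Collect payment details using the payment widget.",
        "tools": ["show_payment_widget"],
        "allowed_transitions": ["welcome"],
    },
}

current_node = "welcome"

def go_to_node(target: str) -> str:
    """Hidden transition tool the AI can call; the user never sees it."""
    global current_node
    if target not in NODES[current_node]["allowed_transitions"]:
        return f"Cannot move from '{current_node}' to '{target}'."
    current_node = target
    # The target node's instructions and tools become the active context.
    return f"Moved to '{target}'. Active instructions: {NODES[target]['instructions']}"
```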

Tools in GraphFlow

In general, GraphFlow manages three types of tools:

  1. Tools for managing the graph state.
  2. Tools for interacting with APIs.
  3. Tools for interacting with the user through dedicated graphical widget UIs (see the sketch after this list).
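As a rough illustration, here is one hypothetical tool of each kind, written as JSON-schema style tool definitions. The names and parameters are invented for the example:

```python
# Hypothetical examples of the three tool categories (names are illustrative only).

TOOLS = [
    {   # 1. Graph-state tool: moves the conversation to another node in the flow.
        "name": "go_to_node",
        "description": "Transition the chat to the named node in the flow graph.",
        "parameters": {
            "type": "object",
            "properties": {"node": {"type": "string"}},
            "required": ["node"],
        },
    },
    {   # 2. API tool: calls an external backend service.
        "name": "lookup_order",
        "description": "Fetch an order's status from the orders API.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
    {   # 3. GUI widget tool: renders an interactive widget for the user.
        "name": "show_plan_buttons",
        "description": "Display subscription plan buttons (basic, pro, enterprise).",
        "parameters": {"type": "object", "properties": {}},
    },
]
```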

The GraphFlow paradigm is fundamentally built around messages and tool calls. So let's quickly recap what messages and tool calls are, and how they work together to shape the chat experience.

Messages: In traditional AI chats like ChatGPT, interactions are handled as text messages—user messages go into the system, and the system responds with messages back to the user (Message-In-Message-Out).

sequenceDiagram
  participant User
  participant AI System
  User->>AI System: Sends text message
  AI System->>AI System: Processes input
  AI System->>User: Returns response message
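In code, that exchange is simply a growing list of messages, sketched here in the role/content format most chat APIs use:

```python
# Message-In-Message-Out: the whole interaction is a list of text messages.

conversation = [
    {"role": "system", "content": "You are a helpful support bot."},
    {"role": "user", "content": "Hi, what plans do you offer?"},
]

# The model reads the history and appends exactly one assistant message.
conversation.append(
    {"role": "assistant", "content": "We offer basic, pro, and enterprise plans."}
)
```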

Tool calls were introduced later and changed how AI interacts by adding a new entity—the API. While the user still sends and receives plain text messages, the AI can now communicate with APIs during processing. It sends requests to APIs, receives their responses, and then uses that information to craft its reply to the user.

sequenceDiagram
  participant User
  participant AI System
  participant API
  User->>AI System: Sends text message
  AI System->>AI System: Processes input
  AI System->>API: Sends API request (Tool call)
  API-->>AI System: Returns API response
  AI System->>User: Returns response message
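Here is a sketch of that loop, assuming the OpenAI Python SDK; the lookup_order function is a stand-in for whatever real API the assistant needs:

```python
# Sketch of the standard tool-call loop (assumes the OpenAI Python SDK;
# "lookup_order" stands in for a real backend API).
import json
from openai import OpenAI

client = OpenAI()

def lookup_order(order_id: str) -> dict:
    # Placeholder for a real API call.
    return {"order_id": order_id, "status": "shipped"}

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Fetch an order's status.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 42?"}]

response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
msg = response.choices[0].message

if msg.tool_calls:                        # the model decided to call the API
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = lookup_order(**args)     # run the real API on the model's behalf
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": json.dumps(result)})
    # Second round-trip: the model turns the API result into a user-facing reply.
    response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

print(response.choices[0].message.content)
```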

The point here isn't just to explain how tools work, but also to highlight that tools were introduced as an afterthought, while maintaining the original Message-In-Message-Out interaction with the user. This can sometimes lead to odd behaviors, because the AI was originally trained to follow a simple pattern: receive a message, call an API, and produce a message. However, when using special GUI widgets, we might need a different kind of interaction that doesn't fit neatly into this flow.

Client-side (GUI) tools

Now let's look at how client-side GUI tools work in the GraphFlow paradigm. Normally, tool calls are meant for interacting with APIs—not the user. But there's nothing stopping us from using a tool to display a widget to the user. For example, if a user uploads an image, the result of that upload can be sent back to the AI as a tool result, allowing the chat to continue based on the user's input—in this case, the uploaded image.

sequenceDiagram
  participant User
  participant AI System
  participant Widget
  User->>AI System: Sends text message
  AI System->>AI System: Processes input
  AI System->>Widget: Triggers GUI tool (e.g. image uploader)
  User->>Widget: Interacts with widget (e.g. uploads image)
  Widget-->>AI System: Sends tool result (e.g. image file)
  AI System->>User: Returns response message
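A rough sketch of how such a widget tool might be wired up on the client; every function name here is a hypothetical stand-in:

```python
# Hypothetical sketch of a blocking GUI tool call: the tool result is only
# produced after the user interacts with the widget, so the chat waits.

def render_widget(widget_name: str) -> None:
    """Stand-in for the client-side code that shows the widget."""
    print(f"[UI] showing widget: {widget_name}")

def wait_for_user_upload() -> dict:
    """Stand-in for the client-side code that blocks until the user uploads a file."""
    return {"filename": "receipt.png"}

def handle_tool_call(name: str, arguments: dict) -> str:
    """Executed when the AI calls a GUI tool; the return value is the tool result."""
    if name == "show_image_uploader":
        render_widget("image_uploader")
        uploaded = wait_for_user_upload()   # the chat is effectively paused here
        # The uploaded file becomes the tool result the AI continues from.
        return f"User uploaded file: {uploaded['filename']}"
    raise ValueError(f"Unknown tool: {name}")
```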

There's a key flaw with this type of interaction: it blocks the user. Take, for example, a tool that displays buttons for choosing a subscription plan—basic, pro, or enterprise. While those buttons are shown, the chat is effectively paused. But what if the user wants to ask a question first, like what each plan includes or what "enterprise" means? That's why we need a way for tool calls to be non-blocking—so the user can still interact freely without being forced to respond immediately to the widget.

GraphFlow enables non-blocking widget calls using a kind of workaround. The tool call returns immediately without actually doing anything. Then, the system uses the tool result to render a component—like buttons—and instructs the AI to wait for the user's input.

When the user clicks a button, the system submits a message on their behalf with their choice. Since the tool already returned, the chat remains unblocked. This means the user can ask questions, switch nodes, or explore further before making a selection. When ready, the user can either click a button or type their choice, and the chat can respond accordingly.

sequenceDiagram
  participant User
  participant AI System
  participant Widget
  User->>AI System: Sends text message
  AI System->>AI System: Processes input
  AI System->>Widget: Triggers widget (e.g. plan buttons)
  Widget-->>AI System: Returns immediate tool call response (placeholder)
  AI System->>User: Displays widget (e.g. subscription options)
  Note over AI System,User: Chat remains active—user can keep typing
  User->>AI System: Asks follow-up question (e.g. "What does Enterprise mean?")
  AI System->>AI System: Responds as usual
  User->>Widget: (Later) Selects a plan
  Widget-->>AI System: Sends tool result (e.g. selected plan)
  AI System->>User: Responds based on selection

So our diagram is now becoming a bit complicated, but it clearly demonstrates how we can have a non-blocking UI tool.
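Sketched in code, the workaround might look like this; again, the names are hypothetical, and the key point is that the tool returns immediately while the click arrives later as an ordinary user message:

```python
# Hypothetical sketch of the non-blocking workaround: the widget tool returns
# a placeholder right away, and the user's eventual click is submitted later
# as a normal user message.

def render_buttons(options: list[str]) -> None:
    """Stand-in for the client-side rendering of the button widget."""
    print(f"[UI] showing buttons: {options}")

def handle_tool_call(name: str, arguments: dict) -> str:
    if name == "show_plan_buttons":
        # Render the buttons from the tool call itself, then return immediately
        # so the chat is not blocked waiting for a click.
        render_buttons(["basic", "pro", "enterprise"])
        return "Buttons displayed. Wait for the user's next message to learn their choice."
    raise ValueError(f"Unknown tool: {name}")

def on_button_click(choice: str, messages: list[dict]) -> None:
    """Called by the UI when the user finally clicks; submits a message on their behalf."""
    messages.append({"role": "user", "content": f"I choose the {choice} plan."})
    # ...after which the normal chat turn runs as usual.
```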

Unfortunately, it's not that simple. As shown in the initial diagram, after a tool call, the AI expects to generate a message based on the tool's result—because tool calls were originally meant for APIs that return meaningful data the AI needs to process and respond to.

But in our case, the tool (like a button selector) returns nothing—it just triggers the UI to display options. There's no new data to react to, so there's no need for the AI to send another message.

However, if your prompt is structured like: "Greet the user and ask them to choose an option, then call the tool to show the options," the AI will first greet and ask the user to choose, then call the tool, which returns an empty result, and then the AI will generate another message—often just repeating the first one. This creates a weird, redundant effect where the chat feels awkward and repetitive.

Since the AI always generates a message after a tool call, we can work around this by first calling the tool, then generating the message. For this specific type of tool, we simply flip the display order in the UI—so even though the tool was technically called before the message, the user sees the message first, followed by the button options.

While this is a simple and effective fix, it can be quite confusing for the prompt engineer. Intuitively, you'd expect to generate the message first and then call the tool—but in this case, you have to do the reverse: call the tool first, then generate the message.
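One way to picture the fix: the model emits the tool call first and the text second, but the renderer flips them. This is a hypothetical sketch, not GraphFlow's actual rendering code:

```python
# Hypothetical sketch of the display-order flip: the model calls the widget tool
# before writing its message, but the UI shows the message above the widget.

WIDGET_TOOLS = {"show_plan_buttons"}

def render_turn(events: list[dict]) -> None:
    """'events' is one assistant turn in model order: [tool_call, assistant_message]."""
    widget_calls = [e for e in events
                    if e["type"] == "tool_call" and e["name"] in WIDGET_TOOLS]
    texts = [e for e in events if e["type"] == "message"]
    for msg in texts:             # show the explanatory message first...
        print(msg["content"])
    for call in widget_calls:     # ...and the widget below it, despite the call order.
        print(f"[UI] widget: {call['name']}")

# The model produced the tool call before the text (as the prompt requires),
# yet the user still reads "Please pick a plan:" above the buttons.
render_turn([
    {"type": "tool_call", "name": "show_plan_buttons"},
    {"type": "message", "content": "Please pick a plan:"},
])
```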

These are the key takeaways here:

  • First, prompt engineering is real—you need to understand how the AI works to get the best results and behavior from your prompts. While AIs are incredibly smart and understand instructions well, they process them very differently from humans. This is because they're fundamentally language models with some reasoning ability—not reasoning models that use language like humans do.

  • Additionally, tools are powerful—they turn a language model from just a talker into a doer. But tools are also basic and opinionated, designed for specific use cases. So, to create advanced, flexible interactions that weren't originally envisioned, we often need to tweak how we use tools and carefully structure our prompts to support these behaviors.

  • Another key point we touched on but should emphasize is the importance of using clear structural cues in your prompts—especially words like ALWAYS and THEN. Capitalizing and surrounding them with asterisks helps the AI recognize their importance and better follow the intended flow. For example, if the AI isn't following an instruction as expected, try adding ALWAYS before it. If you want to ensure the AI performs one step followed by another, use THEN to explicitly link the two. These emphasized cues can significantly improve prompt reliability and clarity.

  • Tip: When giving instructions to the AI, refer to it as "the assistant"—for example, "The assistant should tell the user…". Since the AI doesn't have a real sense of self, calling it "you" or "the AI" might confuse it. It's better to address the simulated assistant directly.