Two months back, a group of my startup friends from Barcelona and Finland and I set out to create something ambitious: AIPathway, a new AI training program. As we worked on the launch, we realized we wanted to spotlight a crucial skill — Prompt Engineering. It’s funny because, despite using ChatGPT regularly for over a year, prompting effectively was still something we hadn’t quite mastered ourselves.
At first, I went all in, crafting prompts so detailed they practically told the AI what to say. The responses were thorough but overloaded, with important insights buried in a sea of words. After a few of these paragraph-length prompts, I swung to the opposite extreme, stripping the prompts down to their bare bones — simple sentences — only to find the answers becoming shallow. Somewhere between these extremes, I discovered that good prompting is an art — a delicate balance of direction and space for the AI to “reason” on its own.
Step by Step
One of the first challenges that tipped me off to the power of effective prompting happened when I asked ChatGPT for some basic math help. I threw a few straightforward problems its way, and while the answers came back right, something didn’t sit well.
“What’s the square root of 3481?” I asked, and it gave me a correct, straightforward answer: “59.” Pleased with this, I tried a more complicated problem: “What’s the area of a triangle with a base of 12 and a height of 9?”
ChatGPT promptly gave me an answer, but something didn’t feel right. The response lacked explanation and felt too quick, like it had skipped important steps. When I asked it to explain how it got the answer, it fumbled a bit, giving me numbers that didn’t quite add up. That’s when I realized that there must be a different approach — a way to guide the AI to think through the problem systematically.
This led me to discover something we now know widely as Chain of Thought Prompting. It felt like a revelation — not just for ChatGPT but for how I approached problem-solving. By breaking down prompts into logical steps, I saw how this approach encouraged the AI to walk through each piece of reasoning, as if it were explaining to someone how to solve a puzzle piece by piece. It wasn’t just a prompt; it was a conversation, almost like coaching the AI.
Now, I want to share what I learned about Chain of Thought Prompting, why it matters, and how you can use it to unlock AI’s full potential.
What is Chain of Thought Prompting?
Chain of Thought Prompting is a method where you ask an AI like ChatGPT to break down its reasoning process step by step rather than jumping straight to an answer. By encouraging the model to think through problems in a logical sequence, you can get more accurate, detailed, and well-structured responses.
This method mirrors how humans often solve problems: instead of jumping to conclusions, we work through individual steps to ensure we’re on the right path. It helps prevent errors, ensures that all parts of the problem are considered, and keeps the AI’s response organized.
Mostly, this approach reminds me of how I was taught to solve problems in school — by breaking down each step to make sure nothing was overlooked (remember: AI doesn’t get gut decisions). Seeing the AI respond to this kind of guidance was a bit surreal, almost like watching it ‘learn’ alongside me. It also made me realize that even as humans, we sometimes rush to conclusions without stepping back to see if we’ve considered all the pieces.
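To make the idea concrete, here is a minimal Python sketch of how you might wrap a bare question into a Chain of Thought prompt before sending it to a model. The helper name `build_cot_prompt` and the step list are illustrative assumptions, not part of any particular API — the point is simply that the structure of the prompt carries the reasoning instructions.

```python
# A minimal sketch of Chain of Thought Prompting: take a plain question
# and wrap it with explicit step-by-step instructions before sending it
# to a model. The helper and steps below are illustrative.

def build_cot_prompt(question: str, steps: list[str]) -> str:
    """Turn a bare question into a step-by-step Chain of Thought prompt."""
    lines = [question, "Reason through this step by step:"]
    for i, step in enumerate(steps, start=1):
        lines.append(f"{i}. {step}")
    lines.append("Finally, state your answer clearly.")
    return "\n".join(lines)

prompt = build_cot_prompt(
    "What is the area of a triangle with a base of 12 and a height of 9?",
    [
        "Recall the formula for the area of a triangle.",
        "Substitute the base and height into the formula.",
        "Compute the result and show the arithmetic.",
    ],
)
print(prompt)
```

The same plain question goes in either way; the difference is that the wrapped version tells the model *how* to get to the answer, not just what to answer.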
Why is Chain of Thought Prompting Effective?
AI models like ChatGPT are designed to predict what should come next based on context, whether that’s generating text, solving math problems, or reasoning through a complex issue. However, without clear guidance, the model might skip steps or make overgeneralized assumptions. This often results in shallow answers or, worse, hallucinations — where the model generates something that sounds plausible but is factually incorrect.
Chain of Thought Prompting helps prevent this by encouraging the AI to reason methodically, ensuring that it doesn’t rush to a conclusion without considering every part of the problem.
The Problem with Poor Prompts
If you’ve ever been dissatisfied with a response from ChatGPT (or other LLMs), chances are the prompt didn’t give the model enough guidance. Poor prompts can lead to several issues:
Incomplete or vague answers: Without explicit instructions, the AI might offer a brief, shallow response that barely scratches the surface of what you were looking for.
Hallucinations: These are responses where the model generates plausible-sounding but false information. This can be especially common with poorly structured prompts.
Overgeneralization: When a prompt is too vague, the AI may offer generic information that doesn’t fully address the specific query.
Logical errors: In more complex tasks, the AI might skip critical reasoning steps, leading to flawed conclusions or incorrect calculations.
Real Examples of Good and Bad Prompts
Let’s dive into some examples of how the quality of a prompt can drastically impact the AI’s response.
Example 1: Scientific Explanation
Bad Prompt: “Why does the sky appear blue?”
Result: “The sky appears blue because of the way light interacts with Earth’s atmosphere.”
While this is correct, it’s a shallow explanation that lacks detail.
Good Prompt: “Explain why the sky appears blue. First, describe how sunlight interacts with the atmosphere, then explain why blue light is scattered more than other wavelengths.”
Result: “The sky appears blue due to Rayleigh scattering. Sunlight, although it looks white, is actually a mix of all colors. When sunlight hits the Earth’s atmosphere, it interacts with small gas molecules. Shorter wavelengths, like blue light, are scattered more easily in all directions compared to longer wavelengths, like red. Because blue light is scattered across the sky, that’s what we see.”
This detailed response shows how a well-structured prompt leads to more informative, step-by-step reasoning.
Example 2: Math Problem
Bad Prompt: “What’s 45 multiplied by 32?”
Result: “1440.”
This response is correct but doesn’t show the work or help you understand how it arrived at that result.
Good Prompt: “Solve 45 multiplied by 32. Break it down into smaller steps: first multiply 45 by 30, then by 2, and finally add the results together.”
Result: “To solve 45 times 32, we can break it down into parts. First, multiply 45 by 30, which gives 1350. Then multiply 45 by 2, which gives 90. Adding 1350 and 90 gives 1440. Therefore, the answer is 1440.”
By breaking the problem into smaller steps, the AI offers a more transparent explanation, which is particularly useful for checking the accuracy of the result.
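The decomposition the prompt asks for is easy to verify yourself, which is exactly why it is useful: each intermediate step can be checked. A quick Python sketch mirrors the same arithmetic:

```python
# Verifying the step-by-step decomposition of 45 x 32 from the example:
# split 32 into 30 + 2, multiply each part, then add the partial products.
partial_1 = 45 * 30   # 1350
partial_2 = 45 * 2    # 90
total = partial_1 + partial_2
print(partial_1, partial_2, total)  # 1350 90 1440
assert total == 45 * 32
```

When the AI shows its partial products the same way, a wrong intermediate value is immediately visible instead of being hidden inside a bare final answer.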
Example 3: Creative Writing
Bad Prompt: “Write a story about a boy who finds a magical sword.”
Result: “A boy named Jake found a magical sword. He took it and went on an adventure to fight monsters.”
This is a simplistic response that lacks any detail or depth.
Good Prompt: “Write a story about a boy who finds a magical sword. First, describe his ordinary life before he finds the sword, then his reaction upon discovering it, and finally, describe the first challenge he faces with the sword.”
Result: “Jake had always been an ordinary boy, living in the quiet village of Almhaven. His days were spent helping his father in the fields, dreaming of adventure but never imagining that his life would be anything but ordinary. That all changed one fateful afternoon when, deep in the woods, he stumbled upon a strange, glowing sword lodged in a stone. His heart pounded as he reached out to touch it. As soon as his fingers brushed the hilt, a rush of power surged through him, and the air seemed to crackle with energy. But before he could celebrate, a deafening roar echoed from the trees — a monster was approaching, drawn to the sword’s magic.”
This version offers a richer narrative, complete with emotions, setting, and an exciting build-up, all guided by the prompt’s structure.
Challenges in Chain of Thought Prompting
While Chain of Thought Prompting can lead to much better results, it comes with its own set of challenges:
Prompt Complexity: Creating detailed prompts that break down complex tasks can be time-consuming and cognitively demanding for the user.
Verbose Responses: Asking for too much detail can sometimes overwhelm the model, leading to long-winded responses that are hard to follow.
Model Limitations: Not all AI models handle complex reasoning equally well. Even with a well-structured prompt, some models might still produce flawed logic or struggle with niche subjects.
Trial and Error: Finding the perfect prompt often requires experimentation. You may need to tweak or refine your prompt several times to get the ideal response.
Best Practices for Chain of Thought Prompting
To make the most of Chain of Thought Prompting, here are a few best practices:
Be Specific: Know what you are looking for — are you after broad, generic responses or specific results from your prompt? Explicitly ask for step-by-step reasoning when dealing with specific topics. Don’t assume the AI will naturally take a methodical approach or that it will understand the context of your query.
Break it Down: Once you’ve nailed down the specific need and context, guide the AI through smaller, manageable steps. This ensures that the model doesn’t skip over important details. For example: if you want to find the salary of a French government employee, don’t forget to include details around “indice majoré” and allowances like “Indemnité de Fonctions, de Sujétions et d’Expertise (IFSE)”, or else you will just get salary ranges for different categories of French government employees.
Iterate: If you don’t get the answer you want, refine your prompt. You might need to rephrase or add more detail to guide the AI’s thought process better.
Ask for Explanations: Don’t hesitate to ask the AI to explain its reasoning. This helps you understand how it arrived at its conclusion and minimizes the risk of hallucinations or logical errors. For example: “Did you consider X or Y in your reasoning while preparing the response?”
Keep Practicing: The more you practice, the better you will get at giving LLMs the right input so they can think and respond better.
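Several of these practices naturally combine in a single conversation: a specific system instruction, a broken-down question, and a follow-up asking the model to explain itself. The sketch below shows one way to structure such an exchange using the role/content message format common to chat-style LLM APIs. This is a hedged sketch, not a definitive recipe — the model call itself is omitted, and the assistant reply here is a hardcoded placeholder.

```python
# A sketch of the iterate-and-ask-for-explanations loop as a message list,
# in the role/content format used by most chat-style LLM APIs. No model is
# called here; `messages` is what you would send on each turn.

messages = [
    {"role": "system",
     "content": "You are a careful assistant. Reason step by step."},
    {"role": "user",
     "content": "Solve 45 multiplied by 32. First multiply 45 by 30, "
                "then by 2, and add the results."},
]

# Suppose the model has answered; to follow the "Ask for Explanations"
# practice, append its reply and a follow-up question before the next call.
assistant_reply = "1350 + 90 = 1440, so the answer is 1440."  # placeholder
messages.append({"role": "assistant", "content": assistant_reply})
messages.append({"role": "user",
                 "content": "Did you consider any edge cases in your reasoning?"})

print(len(messages))  # 4 turns queued for the next request
```

Keeping the whole exchange in the message list is what lets the follow-up question land in context: the model can only explain reasoning it can still see.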
Conclusion
I used to think of ChatGPT as a tool for quick answers, but now I see it as a kind of partner, a reasoning assistant that just needs the right guidance to unlock its potential. And if there’s one thing I’ve learned, it’s that AI, like humans, benefits from a little patience and a lot of clarity.
The next time you work with AI, remember: the quality of the result is directly linked to the quality of the prompt. Guide the AI to think clearly, and it will guide you in return.
Chain of Thought Prompting is a game-changer when interacting with AI models like ChatGPT. By encouraging the model to think through problems step by step, you can achieve more accurate, insightful, and creative responses. Crafting the right prompt requires thought and practice. With the right approach, you can turn an AI into a powerful problem-solving partner, capable of tackling even the most complex tasks with clarity and precision.