Advanced Prompt Engineering Explained
Master four advanced prompt engineering techniques: Zero-Shot and Few-Shot prompting, General Knowledge prompting, Chain of Thought (CoT), and ReAct. Each can dramatically improve the accuracy, reasoning, and performance of AI tools like ChatGPT in digital marketing applications.

Introduction
As Artificial Intelligence rapidly becomes a mainstay for digital marketing professionals, a thorough understanding of prompting techniques is needed to realize its full potential across a broad spectrum of marketing applications. Advanced prompt engineering is an excellent approach for digital agencies eager to redefine the performance of LLMs such as ChatGPT. In this post we provide detailed insight into the benefits of advanced prompting, covering four prominent techniques every innovative agency should be familiar with.
Advanced Prompt Engineering in Essence
At its core, advanced prompt engineering is the process of fine-tuning the input provided to a large language model with the aim of obtaining improved output quality. This distinguishes it from fine-tuning proper, which entails retraining the model on additional data to modify its internal parameters and improve its ability to perform specific tasks.
An early demonstration of the success of the prompt engineering approach was the simple addition of 'Let's think step-by-step' to prompts provided to the GPT-3 model, which resulted in a significant increase in factual accuracy and overall performance.
In this article we will discuss the following prompting techniques:
- Zero-Shot and Few-Shot Prompting
- Chain of Thought (CoT) Prompting
- General Knowledge Prompting
- ReAct
The Zero-Shot Technique
Zero-shot prompting involves giving the LLM a prompt that describes the task in specific language without providing any examples, such as the simple instruction "translate the following English sentence to French." The zero-shot approach therefore relies on the model's extensive pre-training and overall level of understanding to produce a correct response. This is the conventional approach to prompting that most agency teams are already familiar with.
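In practice, a zero-shot prompt is just a single instruction sent to the model with no examples attached. The sketch below is a minimal illustration using the OpenAI Python SDK; the `zero_shot` helper and the model name are our own assumptions, not requirements of the technique.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

def zero_shot(instruction: str) -> str:
    """Send a single instruction with no examples and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": instruction}],
    )
    return response.choices[0].message.content

print(zero_shot("Translate the following English sentence to French: "
                "'Our agency delivers measurable results.'"))
```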
Few Shot Prompting
Few-shot prompting is a related concept that differs from the zero-shot approach by providing the AI model with a number of exemplar prompts and template responses before presenting an additional prompt that the model is then asked to address in line with the exemplars. For instance, if we wanted to come up with a list of new potential features for a SaaS product to address the client retention and satisfaction challenges experienced by digital agencies, we could use this exemplar few-shot sequence to produce the desired model output:
Identify suitable features for a client-agency collaboration software platform based on the following problems.
Problem 1: The agency struggles to understand why clients are dissatisfied.
Feature 1: Integrated client feedback and survey module with automated sentiment analysis and actionable insights.
Problem 2: There is insufficient communication and transparency with clients.
Feature 2: Client portal with real-time performance dashboards, clear reporting features, and direct messaging capabilities.
Problem 3: The agency is unsure how to offer additional value to clients.
Feature 3: Value-added services dashboard that includes options for scheduling strategic consultations, accessing educational content, and generating advanced analytics reports.
Problem 4: Client engagement and retention rates are declining.
Please suggest a feature to address problem 4:
Few-shot prompting can be a valuable approach for tasks that require some reasoning capability, and is often a speedy way of steering the language model towards the output you are seeking.
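To make this concrete, here is one way the exemplar sequence above could be assembled programmatically. This is a minimal sketch assuming the OpenAI Python SDK; the `few_shot` helper and the model name are illustrative assumptions, while the problem/feature pairs are taken directly from the example.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

EXEMPLARS = [
    ("The agency struggles to understand why clients are dissatisfied.",
     "Integrated client feedback and survey module with automated sentiment "
     "analysis and actionable insights."),
    ("There is insufficient communication and transparency with clients.",
     "Client portal with real-time performance dashboards, clear reporting "
     "features, and direct messaging capabilities."),
    ("The agency is unsure how to offer additional value to clients.",
     "Value-added services dashboard that includes options for scheduling "
     "strategic consultations, accessing educational content, and generating "
     "advanced analytics reports."),
]

def few_shot(new_problem: str) -> str:
    """Build the exemplar sequence, append the new problem, and query the model."""
    lines = ["Identify suitable features for a client-agency collaboration "
             "software platform based on the following problems."]
    for i, (problem, feature) in enumerate(EXEMPLARS, start=1):
        lines.append(f"Problem {i}: {problem}")
        lines.append(f"Feature {i}: {feature}")
    n = len(EXEMPLARS) + 1
    lines.append(f"Problem {n}: {new_problem}")
    lines.append(f"Please suggest a feature to address problem {n}:")
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": "\n".join(lines)}],
    )
    return response.choices[0].message.content

print(few_shot("Client engagement and retention rates are declining."))
```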
Chain of Thought Prompting
Another valuable approach to prompt engineering is the chain of thought (CoT) technique, which can be leveraged to increase model performance where sophisticated reasoning is required and to elicit more contextually attuned responses. CoT prompting was developed by studying human thought patterns and applying them in the machine learning context through the use of multiple reasoning stages in the prompting process. To apply CoT effectively, the LLM should be provided with several exemplar thought chains; the model is then asked to reason through several steps in sequence before presenting its final solution. It is this process that enables AI models to conduct sophisticated reasoning operations, and it can significantly improve performance on multi-hop question answering and logical reasoning tasks.
Example CoT Prompting Sequence:
Q: Olivia has 13 apples. She buys two more bags of apples. Each bag contains 7 apples. How many apples does she have now?
A: Olivia started with 13 apples. 2 bags of 7 apples each is 14 apples. 13 + 14 = 27. The answer is 27.
Q: A restaurant has 15 onions. If they used 10 to make lunch and bought 15 more, how many onions do they have?
A: The restaurant had 15 onions originally. They used 10 to make lunch. So they had 15 - 10 = 5. They bought 15 more onions, so they have 5 + 15 = 20. The answer is 20.
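Packaged as a prompt template, the same sequence might look like the following sketch. Again the OpenAI client and model name are assumptions, `chain_of_thought` is a hypothetical helper, and the worked exemplars mirror the ones just shown.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

COT_EXEMPLARS = """\
Q: Olivia has 13 apples. She buys two more bags of apples. Each bag contains 7 apples. How many apples does she have now?
A: Olivia started with 13 apples. 2 bags of 7 apples each is 14 apples. 13 + 14 = 27. The answer is 27.
Q: A restaurant has 15 onions. If they used 10 to make lunch and bought 15 more, how many onions do they have?
A: The restaurant had 15 onions originally. They used 10 to make lunch. So they had 15 - 10 = 5. They bought 15 more onions, so they have 5 + 15 = 20. The answer is 20.
"""

def chain_of_thought(question: str) -> str:
    """Prepend worked reasoning chains so the model shows its steps
    before committing to a final answer."""
    prompt = COT_EXEMPLARS + f"Q: {question}\nA:"
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(chain_of_thought(
    "A campaign generates 40 leads. 25% convert, and each conversion "
    "is worth $120. What revenue does the campaign generate?"
))
```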
General Knowledge Prompting
The general knowledge approach to prompting is similar to the zero-shot approach, but it differs in one key respect: the LLM is first asked to generate knowledge of its own, and that knowledge is then interwoven into the prompt to improve the model's reasoning.
For instance:
Generate 4 facts about Ancient Greece:
1. Birthplace of Democracy: Ancient Greece, particularly Athens, is often credited as the birthplace of democracy. Around 508-507 BCE, the Athenian leader Cleisthenes introduced reforms that laid the foundation for a system of government in which citizens could participate directly in decision-making.
2. The Olympic Games: The ancient Olympic Games were held every four years from 776 BCE to 393 CE in Olympia. These games were a major religious and cultural event, featuring athletic competitions, sacrifices, and feasts in honor of Zeus. The modern Olympic Games were inspired by these ancient traditions.
3. Philosophical Giants: Ancient Greece was home to some of the most influential philosophers in Western history, including Socrates, Plato, and Aristotle. Their ideas on ethics, politics, and metaphysics continue to shape philosophical thought and academic discourse today.
4. The Parthenon: One of the most iconic structures from ancient Greece is the Parthenon, a temple dedicated to the goddess Athena. Built between 447 and 432 BCE on the Acropolis of Athens, it is renowned for its architectural brilliance and sculptures, including those created by the famous sculptor Phidias.
We can then use this information to increase the quality of the model's output:
“Use the above facts to write an informative paragraph about Ancient Greece.”
Ancient Greece, a civilization renowned for its profound impact on the modern world, is often celebrated as the birthplace of democracy, particularly in Athens where the pioneering reforms of Cleisthenes around 508-507 BCE allowed citizens to engage directly in governance. This early form of democratic participation set the stage for contemporary democratic systems. Additionally, Ancient Greece was the cradle of the Olympic Games, first held in Olympia in 776 BCE, which featured not only athletic competitions but also religious ceremonies and feasts in honor of Zeus. The legacy of Ancient Greece extends to its remarkable philosophical contributions, with thinkers like Socrates, Plato, and Aristotle laying foundational ideas that continue to influence philosophical and ethical thought. Architecturally, the Parthenon stands as a symbol of Ancient Greece's grandeur; constructed between 447 and 432 BCE, this temple dedicated to the goddess Athena exemplifies the era's architectural innovation and artistic excellence.
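The two-stage character of the technique is easy to see in code: one call elicits the model's own knowledge, and a second call feeds that knowledge back as context. A minimal sketch, assuming the OpenAI Python SDK and a hypothetical `ask` helper:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

def ask(prompt: str) -> str:
    """One-off chat completion returning the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Stage 1: elicit the model's own knowledge.
facts = ask("Generate 4 facts about Ancient Greece:")

# Stage 2: feed that knowledge back in as context for the real task.
paragraph = ask(facts + "\n\nUse the above facts to write an informative "
                        "paragraph about Ancient Greece.")
print(paragraph)
```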
ReAct
ReAct is a sophisticated approach to prompt engineering that was first proposed in 2022. In a nutshell, the process involves using LLMs to both reason and act in an interlinked fashion, allowing the benefits of both activities to be fully realized. Reasoning assists the AI model with initiating, tracking, and revising action plans, whilst providing valuable assistance with responding to unexpected events. Actions support reasoning by allowing for interaction with external knowledge bases or software platforms.
Addressing the Limitations of Chain-of-Thought Prompting
As we discussed earlier in this post, the chain-of-thought technique is often effective for improving the reasoning performance of AI models. However, applying the CoT approach can still result in some inaccuracies or hallucinations occurring. The aim of ReAct prompting is to address this potential inadequacy by providing AI systems with additional context from external sources. Overall, it is best to think of ReAct as a few-shot prompting technique that builds on CoT by modelling the human interplay of reasoning, strategizing, and taking specific actions to achieve particular objectives.
The Structure of a ReAct Prompt
A typical ReAct prompt sequence includes exemplars accompanied by actions, the insights derived from completing those actions, and the reasoning applied at each stage of the process. The aim is for the language model to simulate this human-like interchange of thinking and acting, giving it enhanced agency.
It is important to remember that the insights are not created by the language model itself but by the external data sources that provide context to it. This means that for ReAct prompting to be fully utilized, the model needs to be complemented by:
(i) An environment that uses a text action to return a text observation.
In the context of ReAct prompting, the environment is essentially a simulated or real-world system that responds to actions taken by the language model. One example is a data analysis and interpretation environment, in which the language model interacts with a database, analyzes it, and provides insights or recommendations. Three elements are required for the model to interact successfully with the environment.
Text Action: This is a command or decision expressed in natural language that the LM generates based on its reasoning. For instance, if the LM is interacting with a sales database, an action might be "Query sales data for Q2 2024."
Environment's Response: Once the LM issues a text action, the environment processes it and generates a response, which is a text observation. This observation provides feedback about the action's outcome, such as "The data reveals an increase in quarterly sales in 3 product categories."
Dynamic Nature: The possible actions available to the LM can vary depending on the environment’s state. For example, if the LM is interacting with a database and it has just retrieved a list of customer records, new actions might become available such as "Filter the records by purchase date" or "Generate a report of high-value customers."
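To make the action/observation contract concrete, here is a toy environment sketch using only the Python standard library. The action strings and canned observations are hypothetical stand-ins for a real analytics backend:

```python
class AnalyticsEnvironment:
    """A toy environment: a text action goes in, a text observation comes out."""

    def step(self, action: str) -> str:
        # A real environment would run queries against live data;
        # here we return canned observations for two recognized actions.
        if action.startswith("Query sales data"):
            return ("The data reveals an increase in quarterly sales "
                    "in 3 product categories.")
        if action.startswith("Filter the records"):
            return "412 records match the requested purchase-date filter."
        return f"Unrecognized action: {action!r}"

env = AnalyticsEnvironment()
print(env.step("Query sales data for Q2 2024"))
```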
(ii) Output Parser Framework
The output parser framework is a crucial part of the ReAct approach and handles the integration between the LM’s output and the environment’s responses:
1. Action Generation. The LM generates text-based actions based on its current reasoning and understanding of the environment.
2. Action Execution. The output parser takes the generated text action, executes it within the environment, and collects the resulting observation.
3. Observation Integration. The parser appends the observation to the text generated so far, providing a continuous context for the LM. This updated context, including previous actions and observations, is then used to prompt the LM for further reasoning or actions.
4. Stopping Condition. The parser ensures that the LM generates only one action at a time, waits for its execution and observation, and then continues based on the updated information.
Several different types of output parser can be used, including JSON parsers, which convert LLM output into JSON for further processing by other systems; table parsers, which extract and organize model output into a tabular format suitable for spreadsheets or databases; and CSV parsers.
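A minimal sketch of the parsing step itself, again using only the standard library: it pulls the first Action line out of the model's output, executes it, and returns the resulting observation, with a missing action serving as the stopping condition. Production frameworks handle the same job with far more robustness.

```python
import re

ACTION_RE = re.compile(r"^Action:\s*(.+)$", re.MULTILINE)

def parse_and_execute(llm_output: str, run_action) -> str | None:
    """Extract the first Action line, run it, and return the observation;
    return None when the model produced no action (the stopping condition)."""
    match = ACTION_RE.search(llm_output)
    if match is None:
        return None
    return run_action(match.group(1).strip())

# Example, with a stand-in for a real environment:
text = "Thought: I need the latest figures.\nAction: Query sales data for Q2 2024"
print(parse_and_execute(text, lambda action: f"(observation for: {action})"))
```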
(iii) Human-Generated Examples for Few-Shot Learning
Few-shot learning involves conditioning the LLM with examples that illustrate how to interact with the environment in terms of reasoning, actions, and observations. Here's how human-generated examples play a role.
Intermixed Examples. These examples combine reasoning (thoughts), actions, and observations in a coherent sequence. For instance, an example might show:
Thought: "I need to find the total number of users who registered in the last month."
Action: "Query the database to count the number of users with a registration date in the last month."
Observation: "The total number of new users registered in the last month is 1,250."
These examples help the LM understand how to structure its reasoning and actions in a way that aligns with the environment’s responses. They provide a template or pattern for the LM to follow, demonstrating how to interact effectively.
By providing a few of these intermixed examples, the LM learns to generalize from the given patterns and apply them to new scenarios it encounters dynamically in the environment.
Example ReAct Prompting Sequence
You’re managing a digital marketing campaign and need to optimize an ongoing social media ad campaign to increase engagement and conversions. The following prompt sequence provides an illustration of how you could use ReAct to prompt an AI language model to produce detailed and coherent output by interacting with an external social media management application through an API.
Question: What can I do to improve the performance of [exemplar social media campaign]?
Thought 1: "I need to determine the current performance metrics of the ad campaign to understand the baseline."
Action: Retrieve and display [all performance metrics from the analytics dashboard].
Observation: The metrics are retrieved, showing a CTR of 3.2%, a conversion rate of 1.8%, an engagement rate of 6.5%, and an average CPC of $0.85.
Thought 2: "I then need to identify which ads are performing well and which are underperforming."
Action: Analyze [performance data to categorize ads into high-performing and underperforming based on performance metrics].
Observation: Two ads stand out with high CTR and conversion rates, while three are underperforming, particularly with low conversion rates below 0.5%.
Thought 3: "I now need to analyze the audience characteristics of high-performing ads to find common traits."
Action: Extract [demographic and behavioral data for users engaging with the high-performing ads].
Observation: The high-performing ads show strong engagement from users aged 25-34, with a slight skew toward females, primarily located in urban areas and interested in tech and lifestyle products.
Thought 4: "I need to apply insights from high-performing ads to improve the underperforming ads."
Action: Suggest [modifications to targeting parameters, ad copy, or visuals for the underperforming ads based on the successful elements of the high-performing ads].
Observation: Recommendations include refining targeting to focus on the 25-34 age group, incorporating more tech-related visuals, and adjusting the ad copy to highlight product features more clearly.
Thought 5: "I need to set up a plan to track the impact of the implemented changes on ad performance."
Action: Establish [tracking mechanisms and KPIs to monitor the results of the changes].
Observation: A tracking plan is created with KPIs including CTR, conversion rate, and overall return on ad spend (ROAS), with weekly performance updates scheduled.
In this example, the ReAct prompt guides the model through various decision-making steps involved in analyzing a social media campaign and improving its overall performance. Each "Thought" represents a logical step in the decision-making process, and the corresponding "Action" instructs the model on how to proceed.
This prompts the model to not only reason about the decision-making process but also take concrete actions, such as retrieval and analysis.
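Putting the pieces together, a ReAct driver is essentially a loop: prompt the model, parse out an action, fetch an observation, append it to the context, and repeat until no further action is emitted. The sketch below stubs out both the model call and the environment so the control flow can be run as-is; in a real setup, `call_llm` would wrap your chat API and `run_action` would wrap the social media platform's API.

```python
import re

ACTION_RE = re.compile(r"^Action:\s*(.+)$", re.MULTILINE)

def react_loop(question, call_llm, run_action, max_steps=5):
    """Alternate model reasoning with environment feedback until the model
    stops emitting actions or the step budget runs out."""
    context = f"Question: {question}\n"
    for _ in range(max_steps):
        output = call_llm(context)   # model adds a Thought and, usually, an Action
        context += output + "\n"
        match = ACTION_RE.search(output)
        if match is None:            # stopping condition: no action means final answer
            break
        observation = run_action(match.group(1).strip())
        context += f"Observation: {observation}\n"
    return context

# Stubbed model and environment so the control flow runs as-is:
def fake_llm(context):
    if "Observation:" not in context:
        return ("Thought: I need the campaign's baseline metrics.\n"
                "Action: Retrieve performance metrics")
    return ("Thought: I have what I need. "
            "Final answer: refine targeting to the 25-34 age group.")

print(react_loop("How can I improve this campaign?", fake_llm,
                 lambda action: "CTR 3.2%, conversion rate 1.8%"))
```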
Summary
As Generative AI continues to reshape many aspects of digital marketing, leveraging advanced prompt engineering techniques is becoming increasingly crucial for optimizing the performance of AI-driven tools like ChatGPT. By mastering approaches such as Zero-Shot and Few-Shot prompting, Chain of Thought (CoT) prompting, General Knowledge prompting, and ReAct, you can maximize the performance of AI models when it comes to many digital marketing tasks.
Each technique offers unique benefits; Zero-Shot and Few-Shot prompting enhance task execution with minimal or guided examples, while CoT prompting improves complex reasoning tasks by mimicking human thought processes. ReAct combines reasoning with actionable steps to dynamically interact with external data sources, providing a comprehensive approach to problem-solving.
Incorporating these advanced prompting techniques enables your agency to push the boundaries of what is possible with Generative AI while retaining an ongoing competitive edge.
Wrap-up
Client acquisition shouldn’t hold you back. Productised helps you turn your expertise into scalable, branded AI products—whether you’re attracting leads, delivering value instantly, or building predictable revenue streams.
If that sounds like the growth engine you’ve been looking for — try Productised or join our Community.