🧭 Topics Covered:

  • 🗑️ GIGO: Garbage In, Garbage Out
  • ✍️ What is Prompt Engineering?
  • 🧠 How Prompts Influence LLM Output
  • 🧰 Types & Styles of Prompts
  • 🧪 Prompting Techniques (Zero-shot, Few-shot, CoT, etc.)
  • 🧙‍♂️ Role, Persona & Contextual Prompting
  • 🧩 Which Prompt Technique to Choose?
  • 🔐 Prompt Templates & Security

🎬 Let’s Begin with a Story…

Imagine you walk into a pizza shop 🍕 and say,

“Give me food.”

The waiter looks puzzled.

“Uhh… what kind? Spicy? Veg? Size? Cheese?”

Now instead, you say,

“Can I get a large margherita pizza with extra cheese and jalapeños?”

Boom 💥—now you're served exactly what you want. AI models work the same way. The better you explain your needs (prompt), the better the result (output)!

And this is where Prompt Engineering comes in.


🗑️ GIGO – Garbage In, Garbage Out

Have you ever typed something into ChatGPT and got a weird or useless response?

Well, that’s GIGO at play:

Garbage Input = Garbage Output 💩

AI is like a mirror—it reflects what you give it. Messy or vague input leads to confusing results. So crafting the right input is everything.


✍️ What is Prompt Engineering?

Prompt Engineering is the skill of writing smart instructions (called prompts) to get smart results from an AI model 🤖.

Later in this blog we will learn about more technical stuff, like different prompt formats and techniques.


🤔 What is a Prompt?

A prompt is the initial instruction or input you give to the AI to perform a task.

But here's a catch…

If you ask AI to generate a prompt, and then feed that prompt back to the AI, the results are often not great 😬. Why?

Probabl because most LLMs (like GPT or Gemini) were trained on human-written content—not AI-generated ones.

🔑 Takeaway: Always prefer writing your own prompts over relying on AI-generated ones.


🧠 System Prompts

System prompts help set the initial context for the conversation.

As developers, we can’t control user queries, but we can control the system prompt to steer the AI’s tone, behavior, or role 🎛️.

“You are a helpful travel assistant.” – That’s a system prompt.

Also, keep in mind:

  • LLMs charge you based on both input and output tokens 💰.Checkout the pricing page of the model that you are using.
  • Tokens should not be considered the same as words
  • Repeating the same system prompt? It might be cached and priced differently

✨ Prompt Templates – Why Bother Using Them?

Imagine sending raw user input straight to the AI. That’s risky!

🔒The Problem: Prompt Injection Attacks

One of the biggest vulnerabilities in LLMs today is prompt injection — where users sneak in inputs that hijack or manipulate the AI’s behavior.
Think of it like someone whispering fake orders to your assistant while you’re not looking 😅

🛡️ The Fix: Prompt Templates

Prompt templates let you structure conversations into clear roles:

  • System – instructions from the developer
  • User – the actual user input
  • Assistant – the AI’s response

This layered approach (like OpenAI’s ChatML format) tells the model who is saying what, and where one speaker stops and another begins. That boundary is key 🔐

This makes it much harder for malicious input to confuse or trick the model.

🧱 Why This Matters

  • Prompt templates reduce ambiguity, helping LLMs interpret input more accurately
  • They separate trusted developer instructions from unpredictable user text
  • Over time, this structure can help fully prevent injection attacks

Even when you give a simple instruction to an LLM, behind the scenes, it’s wrapped in a structured template — marking your role, your intent, and your context.


📐 Prompt Formats (Styles)

Here are a few popular formats used in different LLMs:

🦙 Alpaca Prompt

### Instruction:
Do X

### Input:
With Y

### Response:
Result
Instruction: For the given number by user perform arthematic operation
  Input: what is 2 + 2
  Response:
  ## the LLM will predict the next set of token and return 4.

🦙 LLaMA-2 Format ( used by LLaMA-2) :

[INST] 
          <<SYS>>
            {{ system_prompt }}
          <SYS>>
            {{ user_message_1 }}
        [/INST]
        {{ model_answer_1 }}
      
      
        [INST]
          {{ user_message_2 }} 
        [/INST]

🦙 LLaMA-3 Format ( used by LLaMA-3)

<|begin_of_text|>
        <|start_header_id|>
          system
        <|end_header_id|>

      You are a helpful AI assistant for travel tips and recommendations

      <|eot_id|>
        <|start_header_id|>
          user
        <|end_header_id|>

      What can you help me with?

      <|eot_id|>
        <|start_header_id|>
          assistant
        <|end_header_id|>

💬 ChatML Format (used by OpenAI)

{ "role": "system", "content": "You are a helpful assistant" }
{ "role": "user", "content": "What is LRU cache?" }
{ "role": "assistant", "content": "LRU stands for..." }

🛠️ Prompting Techniques

Let’s explore the ways you can craft prompts:

1. 🕵️ Zero-Shot Prompting

Just ask the question without giving any examples.

“Write a cold email introducing our new app.”

AI uses its existing knowledge. Good for quick tasks. No examples needed.

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

api_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role":"system", "content":"what is 5*45+34%3*2"}
    ]   
)

print("AI response -> ",api_response.choices[0].message.content)

2. ✌️ Few-Shot Prompting

Here, you give a few examples first, then ask for a new answer.

Helps improve accuracy when the task is nuanced or requires understanding a pattern.

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

system_prompt = '''
                    You are an AI assistant which helper user in solving mathematical question.
                    Any question other than the mathematical question should not be answered by you.

                    Example:
                    Input: 2+2
                    Output: 2+2 is 4 which is calculate by adding 2 + 2

                    Input: 3*0+5
                    Output: 3*0+5 is 5. As per rule we first multiply and then add. So 3*0 is 0 and 0+5 is 5 which is calculated by first multiplying 3 with 0 and then adding the result with 5

                    Input: why is sky blue?
                    Output: Is this maths query? I am an mathematic assistant and can help you in mathematics only.

                '''
api_response_1 = client.chat.completions.create(
    model="gpt-4o",
    max_tokens=200, ## adjust to control pricing by limiting the token count
    temperature=0.7, ## adjust temperature to add more creativity/randmoness to output
    messages=[
        {"role":"system","content":system_prompt},
        {"role":"system", "content":"what is 5*45+34%3*2"}
    ]   
)
print("AI response 1 -> ",api_response_1.choices[0].message.content)

api_response_2 = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role":"system","content":system_prompt},
        {"role":"system", "content":"what's the speed at which cheetah can run"}
    ]   
)
print("AI response 2 -> ",api_response_2.choices[0].message.content)

Api response 1

Api response 2


3. 🔗 Chain-of-Thought (CoT)

Here, we ask the model to explain step-by-step before giving the answer.
The model is encroughed to break down reasoning step by step before arriving at an answer.

“Let’s break it down: First we..., then we...”

This improves accuracy and makes AI reasoning more transparent 🧠

import os
import json
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI()

api_response = None # Initialize response variable

system_prompt = '''
    You are an AI assistant who is expert in breaking down complex problems
    and then resolve the user query.

    For the given user input, analyse the input and break down the problem step by step.
    Atleast think 5-6 steps on how to resolve the problem before solving the problem.

    The steps are you get "input", you "analyse", you "think" several times and then return the output with explanation.
    Finally you validate the ouput before giving the final "output".

    Follow these step in sequence "analsyse","think","output","validate" and finally "output".

    Rules:
        1. Follow the strict JSON ouput as per Output schema.
        2. Always perform one step at a time and wait for next input
        3. Carefully analyze the user query.

    Output Fromat:
        {{step :"output",content:"string"}}

    Example:
    Input : what is 2+2?
    Output : {{step:"analyse",content:"The user is intrresetd in basic maths query and he is asking basic arthematice operation"}}   
    Output : {{step:"think",content:"To perform addition one must got from left to rigth and add all the operands"}}
    Output : {{step:"output",content:"4"}}
    Output : {{step:"validate",content:"Seems like 4 is correct as 2+2 adds up to 4"}}
    Output : {{step:"output",content:"2 + 2 = 4 and that is calculated by adding all numbers."}}

'''

api_response= None
messages=[
    {"role":"system", "content":system_prompt},
]

user_query = input("> ")
messages.append({"role":"user", "content":user_query},)

while True:
    api_response = client.chat.completions.create(
        model="gpt-4o", 
        response_format={"type":"json_object"},
        messages=messages
    )

    parsed_response = json.loads(api_response.choices[0].message.content)
    messages.append({"role":"assistant", "content":json.dumps(parsed_response)})

    if parsed_response.get('step') != 'output':
        print("each step -> ",parsed_response.get('content'))
        continue

    print(f'Parsed Respose : {parsed_response.get('content')}')
    break

AI Response 1

AI Response 2


4. 🔁 Self-Consistency Prompting

Run the same prompt multiple times. Pick the most common or logical answer.

Just like asking 5 friends and trusting the one most agree on!


5. 🧑‍🎓 Persona-Based Prompting

Give the AI a personality or an profession.

“You are a doctor giving tips to new parents.”

It shapes how the AI responds!


6. 🎭 Role-Playing Prompt

“You are an expert coding tutor for beginners.”

Let the AI act in character 🎬 and adapt to the role you've assigned.


As we go deeper, we’ll explore advanced prompt engineering strategies like:

1.🔍 Contextual Prompting

2.🖼️ Multimodal Prompting

These techniques go beyond just writing smart instructions — they need an orchestrator behind the scenes.

Think of the orchestrator as a conductor, managing how data flows into and out of the LLM for maximum accuracy and relevance.
These technique are used in apps that require deep context like chatbots, search assistants, etc.

To make this work, we’ll integrate:

  • Vector Databases – to provide semantic context and memory
  • Graph Databases – to model relationships between entities
  • PostgreSQL – for handling structured data
  • Tool / Function Calling – so the model can dynamically execute actions in real-time

We’ll learn how to stitch these together to build powerful, context-aware, multi-modal AI systems in upcoming blog.


🧪 How to Choose the Right Prompt Technique?

Here’s the secret sauce:

👉 Experiment. Track. Improve.

  • Observe how your app responds to real user queries
  • Mix and match techniques like:
    • CoT + Role-Play + Persona 🤯
  • Use observability tools to capture and analyze bad vs. good outputs to tweak your prompt technique accordingly.
  • Keep refining over time

🎯 Final Thoughts

Prompt Engineering isn’t just about giving commands to AI.

It’s about speaking its language clearly and cleverly 💡

If you're building AI tools, learning how to write great prompts will make your results 10x better.

Great prompts = Great products 🚀