Core Prompt Design Principles
Assigning a Persona
Use the system message to define a distinct identity for the model. This sets the tone and perspective for all subsequent responses.
import openai

openai.api_key = "your-api-key"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a wise sage who replies with concise parables rooted in Stoicism."},
        {"role": "user", "content": "How do I deal with constant distractions at work?"}
    ],
    temperature=0.9
)
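The generated text sits inside the first choice of the response object. A minimal access sketch, using a hand-written dict with the same shape the API returns (the content value here is hypothetical, not real model output):

```python
# Abridged ChatCompletion response; values are hypothetical.
mock_response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "The archer cannot command the wind, only his aim."}}
    ]
}

# The reply text is always under choices[0].message.content.
reply = mock_response["choices"][0]["message"]["content"]
```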
Embedding Persistent Instructions
Embed a reusable directive within the system prompt to govern output style across a session.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "For every topic, produce exactly three bullet points using a witty tone and an unexpected metaphor."},
        {"role": "user", "content": "Explain why modern emails are overwhelming."}
    ],
    max_tokens=200
)
Task Decomposition
Break complex instructions into smaller, sequential steps so the model can reason more accurately.
prompt_sequence = [
    "Step 1: Identify the three most critical errors in the following stack trace.",
    "Step 2: Propose probable causes for each error.",
    "Step 3: Suggest a two-sentence fix for the most urgent issue."
]

stack_trace = "Error: timeout while connecting to database at line 42"
full_prompt = "\n".join(prompt_sequence) + "\n\n" + stack_trace

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": full_prompt}
    ]
)
Layered Content Creation
For long-form text, build content in phases: outline, section draft, then detailed expansion.
sections = [
    "Draft a high-level outline for a sci-fi short story about time loops.",
    "Now expand each outline point into a paragraph.",
    "Finally, add one sentence of sensory detail to each paragraph."
]

conversation = []
for step in sections:
    conversation.append({"role": "user", "content": step})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=conversation
    )
    conversation.append(reply["choices"][0]["message"])
Prompt as Code
Treat prompts like programming constructs with placeholders, templates, and explicit output schemas.
eval_template = """
You are reviewing a model's response. Score it from 1-5 on these axes:
- Accuracy
- Clarity
- Conciseness
User Query: {query}
Model Response: {response}
Output the scores in JSON format.
"""
query = "What is baking soda?"
model_response = "Baking soda is sodium bicarbonate, a leavening agent."
prompt = eval_template.format(query=query, response=model_response)
result = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}]
)
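Because the template pins the output to JSON, the scores can be consumed programmatically. A sketch, assuming the model returned the schema it was asked for (the string below is a hypothetical reply, not real output):

```python
import json

# Hypothetical model reply following the requested JSON format.
raw_scores = '{"Accuracy": 5, "Clarity": 4, "Conciseness": 5}'

scores = json.loads(raw_scores)
average = sum(scores.values()) / len(scores)
```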
Few-Shot Patterns
Provide examples that illustrate both the reasoning process and the desired output structure.
few_shot_prompt = """
Convert the given sentence into a logical triple (Subject, Predicate, Object).
Example 1:
Input: "Paris is the capital of France."
Output: ("Paris", "is capital of", "France")
Example 2:
Input: "The cat sat on the mat."
Output: ("The cat", "sat on", "the mat")
Now process: "Shakespeare wrote Hamlet."
"""
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": few_shot_prompt}]
)
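Since the examples fix the output to a Python-style tuple, the reply can be parsed safely with `ast.literal_eval`. A sketch with a hypothetical model reply in that format:

```python
import ast

# Hypothetical model reply matching the few-shot format.
model_output = '("Shakespeare", "wrote", "Hamlet")'

# literal_eval parses the tuple without executing arbitrary code.
subject, predicate, obj = ast.literal_eval(model_output)
```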
Function Calling Mechanism
Function calling allows a model to produce arguments that trigger external tools, bridging language understanding and programmatic execution.
Setting Up Utilities
import json
import requests

import openai
from tenacity import retry, wait_random_exponential, stop_after_attempt

MODEL = "gpt-3.5-turbo"

@retry(wait=wait_random_exponential(min=1, max=40), stop=stop_after_attempt(3))
def request_completion(messages, functions=None, function_call=None, model=MODEL):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {openai.api_key}"
    }
    payload = {"model": model, "messages": messages}
    if functions:
        payload["functions"] = functions
    if function_call is not None:
        payload["function_call"] = function_call
    # Call the HTTP endpoint directly so the functions payload passes through unchanged.
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    return response.json()
Defining Functions
tool_definitions = [
    {
        "name": "fetch_weather",
        "description": "Retrieve current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["city"]
        }
    }
]
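The `parameters` block is plain JSON Schema, so model-generated arguments can be checked before dispatch. A minimal sketch that validates the `required` fields against a hypothetical arguments string (the schema is inlined here from the fetch_weather definition above):

```python
import json

# The parameters block from the fetch_weather definition.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
    },
    "required": ["city"]
}

# Hypothetical arguments string as the model would emit it.
raw_args = '{"city": "Oslo"}'
args = json.loads(raw_args)

# Refuse to dispatch if any required field is missing.
missing = [field for field in schema["required"] if field not in args]
ok = not missing
```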
Requesting Function Execution
messages = [
    {"role": "system", "content": "Ask for missing details before calling any function."},
    {"role": "user", "content": "What is the temperature in Oslo?"}
]
resp = request_completion(messages, functions=tool_definitions)
response_msg = resp["choices"][0]["message"]
messages.append(response_msg)
# Add missing detail
messages.append({"role": "user", "content": "Use Celsius."})
resp = request_completion(messages, functions=tool_definitions)
final_message = resp["choices"][0]["message"]
print(final_message.get("function_call"))
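Once the model emits a `function_call`, the application must run the actual tool and hand the result back as a `function`-role message. A minimal dispatch sketch, using a hypothetical stub for `fetch_weather` and a hand-written assistant message in the shape the API returns:

```python
import json

def fetch_weather(city, unit="celsius"):
    # Stub; a real app would query a weather service here.
    return json.dumps({"city": city, "temperature": 4, "unit": unit})

# Hand-written assistant message mirroring the API's function_call shape.
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "fetch_weather",
        "arguments": '{"city": "Oslo", "unit": "celsius"}',
    },
}

result = None
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    result = fetch_weather(**args)
    # Feed the result back as a function-role message for the next turn.
    follow_up = {"role": "function", "name": "fetch_weather", "content": result}
```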
Controlling Function Invocation
Force a specific function:
resp = request_completion(
    messages,
    functions=tool_definitions,
    function_call={"name": "fetch_weather"}
)
Prevent any function call:
resp = request_completion(
    messages,
    functions=tool_definitions,
    function_call="none"
)
Executing Generated Functions (SQL Agent Example)
import sqlite3
connection = sqlite3.connect("sample.db")
def query_db(sql):
    try:
        return str(connection.execute(sql).fetchall())
    except Exception as e:
        return f"Error: {e}"
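The helper can be exercised directly to see the exact strings the model will receive, including errors. A self-contained sketch against an in-memory database with a hypothetical `tracks` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (name TEXT, milliseconds INTEGER)")
conn.executemany(
    "INSERT INTO tracks VALUES (?, ?)",
    [("Short Song", 120000), ("Epic Jam", 900000), ("Mid Tune", 300000)],
)

def query_db(sql):
    try:
        return str(conn.execute(sql).fetchall())
    except Exception as e:
        return f"Error: {e}"

rows = query_db("SELECT name FROM tracks ORDER BY milliseconds DESC LIMIT 3")
# Failures come back as strings too, so the model can read and correct them.
error = query_db("SELECT * FROM no_such_table")
```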
db_tools = [
    {
        "name": "run_sql",
        "description": "Execute a SQL query against the music database",
        "parameters": {
            "type": "object",
            "properties": {
                "sql": {"type": "string", "description": "The SQL query to run"}
            },
            "required": ["sql"]
        }
    }
]
chat_messages = [
    {"role": "system", "content": "Use SQL to answer questions about the music catalog."},
    {"role": "user", "content": "List the three longest tracks."}
]
comp_resp = request_completion(chat_messages, functions=db_tools)
comp_msg = comp_resp["choices"][0]["message"]
chat_messages.append(comp_msg)
if comp_msg.get("function_call"):
    func_name = comp_msg["function_call"]["name"]
    args = json.loads(comp_msg["function_call"]["arguments"])
    if func_name == "run_sql":
        data = query_db(args["sql"])
        chat_messages.append({"role": "function", "name": func_name, "content": data})
        # One more completion turns the raw query result into a user-facing answer.
        final_resp = request_completion(chat_messages, functions=db_tools)
To build a complete weather application, integrate a real weather API as the backend implementation for fetch_weather, parse the location from user input, and render the returned data.