
You're sitting at your desk, staring at a pile of customer support emails that need responses. Each one is unique, but they follow patterns—questions about billing, product features, or technical issues. What if you could build an AI assistant that could draft intelligent responses for you? Not just any chatbot, but something that understands your company's tone and provides helpful, contextual answers.
This is exactly the kind of problem that Anthropic's Claude API was designed to solve. Unlike building a machine learning model from scratch (which requires massive datasets and specialized knowledge), Claude gives you access to a powerful, pre-trained AI that you can integrate into applications with just a few lines of code.
By the end of this lesson, you'll have built a complete AI-powered application that can analyze customer support emails and suggest personalized responses. More importantly, you'll understand the fundamental concepts and patterns that apply to building any AI application with the Claude API.
What you'll learn: how to make your first Claude API call, how to craft effective prompts, how to structure a complete multi-step AI application, and how to manage costs, errors, and rate limits.
You'll need basic Python knowledge (variables, functions, and imports) and a text editor or IDE. We'll install any required packages together. No prior experience with APIs or AI is assumed—we'll explain everything from the ground up.
Before we start coding, let's understand what we're working with. An API (Application Programming Interface) is like a restaurant menu—it tells you what you can order (what the service can do) and how to place your order (what information you need to provide). The Claude API is Anthropic's menu for accessing their AI model.
Think of Claude as an incredibly well-read assistant who can help with tasks like writing, analysis, coding, and reasoning. When you send a message through the API, you're essentially having a conversation with this assistant, but instead of typing in a chat window, you're sending structured data through your code.
The basic pattern is simple: you send Claude a prompt (your request), and it sends back a completion (its response). But as we'll see, the real power comes from how you craft these prompts and structure your application around them.
Let's start by getting everything installed and configured. We'll use Python because it's straightforward for beginners and has excellent support for working with APIs.
First, create a new directory for your project:
mkdir claude-support-assistant
cd claude-support-assistant
Now install the required packages:
pip install anthropic python-dotenv
The anthropic package is the official Python client for Claude's API, while python-dotenv helps us manage sensitive information like API keys securely.
Next, you'll need to get an API key from Anthropic. Visit console.anthropic.com, create an account, and navigate to the API Keys section. Generate a new key and copy it—you'll need it in a moment.
Create a file called .env in your project directory and add your API key:
ANTHROPIC_API_KEY=your_actual_api_key_here
Security tip: Never commit API keys to version control. The .env file should be added to your .gitignore file if you're using Git.
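If you're using Git, a quick way to do that from a POSIX shell is:

```shell
# Keep the key file out of version control
echo ".env" >> .gitignore
```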
Let's start with the simplest possible example to understand how the API works. Create a file called basic_example.py:
import os
from dotenv import load_dotenv
from anthropic import Anthropic
# Load environment variables from .env file
load_dotenv()
# Initialize the Claude client
client = Anthropic(api_key=os.getenv('ANTHROPIC_API_KEY'))
# Make a simple API call
response = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=100,
    messages=[
        {"role": "user", "content": "Hello, Claude! Can you explain what an API is in simple terms?"}
    ]
)
print(response.content[0].text)
Run this script:
python basic_example.py
You should see Claude's response explaining what an API is. Congratulations—you've just made your first AI API call!
Let's break down what happened:
- We created an Anthropic client object using our API key
- We chose the model claude-3-sonnet-20240229, which is a good balance of capability and cost
- We set max_tokens=100 to limit the response length (tokens are roughly equivalent to words)

The response object contains multiple pieces of information, but response.content[0].text gives us the actual text response from Claude.
The key to building effective AI applications isn't just knowing how to call the API—it's understanding how to communicate clearly with the AI model. This is called "prompt engineering," and it's more art than science.
A prompt is like giving instructions to a very capable but literal-minded assistant. The clearer and more specific your instructions, the better the results you'll get. Let's explore this with some examples.
Here's a vague prompt:
"Help with customer service"
Claude might respond with generic advice about customer service principles. But look at this more specific version:
prompt = """
You are a customer service assistant for TechFlow, a software company.
A customer has sent this email:
"Hi, I've been trying to export my data from your platform for 3 days but the export button just spins forever. I'm getting frustrated because I need this for a client presentation tomorrow."
Please draft a helpful, empathetic response that:
1. Acknowledges their frustration
2. Provides a specific solution or workaround
3. Offers additional assistance
4. Maintains a professional but friendly tone
"""
This prompt gives Claude context (what company, what problem), specific requirements (the numbered list), and tone guidance. Let's see how this works in practice.
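If you reuse this structure often, it helps to assemble such prompts from a small template function rather than pasting strings by hand. A minimal sketch (the helper name build_support_prompt is ours, not part of the anthropic SDK):

```python
def build_support_prompt(company, email, requirements, tone):
    """Assemble a structured customer-service prompt: company context,
    the customer's email, numbered requirements, and tone guidance."""
    numbered = "\n".join(f"{i}. {req}" for i, req in enumerate(requirements, 1))
    return (
        f"You are a customer service assistant for {company}.\n"
        f'A customer has sent this email:\n"{email}"\n\n'
        f"Please draft a helpful, empathetic response that:\n{numbered}\n\n"
        f"Maintain a {tone} tone."
    )

prompt = build_support_prompt(
    company="TechFlow",
    email="My data export has been spinning for 3 days.",
    requirements=[
        "Acknowledges their frustration",
        "Provides a specific solution or workaround",
        "Offers additional assistance",
    ],
    tone="professional but friendly",
)
```

The resulting string can be passed straight into messages.create as the user content, and every email gets the same context, requirements, and tone guidance.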
Now we'll build a complete application that analyzes customer emails and suggests responses. This will demonstrate real-world prompt engineering and application structure.
Create a file called support_assistant.py:
import os
from dotenv import load_dotenv
from anthropic import Anthropic
import json
class SupportAssistant:
    def __init__(self):
        """Initialize the assistant with Claude API client"""
        load_dotenv()
        self.client = Anthropic(api_key=os.getenv('ANTHROPIC_API_KEY'))
        self.company_context = {
            "name": "TechFlow Solutions",
            "product": "CloudDash Analytics Platform",
            "tone": "professional but friendly and empathetic"
        }

    def analyze_email(self, email_content):
        """Analyze customer email and extract key information"""
        analysis_prompt = f"""
Please analyze this customer support email and extract key information.

Email content:
"{email_content}"

Provide your analysis in this exact JSON format:
{{
    "urgency_level": "low|medium|high",
    "category": "technical|billing|feature_request|other",
    "sentiment": "positive|neutral|negative",
    "key_issues": ["list", "of", "main", "issues"],
    "customer_goal": "what the customer ultimately wants to achieve"
}}

Only return the JSON, no additional text.
"""
        try:
            response = self.client.messages.create(
                model="claude-3-sonnet-20240229",
                max_tokens=300,
                messages=[{"role": "user", "content": analysis_prompt}]
            )
            # Parse the JSON response
            analysis = json.loads(response.content[0].text.strip())
            return analysis
        except json.JSONDecodeError:
            return {"error": "Could not parse analysis"}
        except Exception as e:
            return {"error": f"API error: {str(e)}"}
    def generate_response(self, email_content, analysis):
        """Generate a suggested response based on email and analysis"""
        response_prompt = f"""
You are a customer service representative for {self.company_context['name']}.
Our main product is the {self.company_context['product']}.

A customer sent this email:
"{email_content}"

Analysis of the email:
- Urgency: {analysis.get('urgency_level', 'unknown')}
- Category: {analysis.get('category', 'unknown')}
- Sentiment: {analysis.get('sentiment', 'unknown')}
- Key issues: {', '.join(analysis.get('key_issues', []))}
- Customer goal: {analysis.get('customer_goal', 'unknown')}

Please draft a response that:
1. Acknowledges their concern with appropriate empathy
2. Addresses their specific issues directly
3. Provides actionable next steps or solutions
4. Matches their urgency level (more detailed help for urgent issues)
5. Maintains a {self.company_context['tone']} tone
6. Ends with an offer for further assistance

Write only the email response, no additional commentary.
"""
        try:
            response = self.client.messages.create(
                model="claude-3-sonnet-20240229",
                max_tokens=500,
                messages=[{"role": "user", "content": response_prompt}]
            )
            return response.content[0].text.strip()
        except Exception as e:
            return f"Error generating response: {str(e)}"
def main():
    """Main application loop"""
    assistant = SupportAssistant()
    print("=== Customer Support AI Assistant ===")
    print("Paste a customer email below, then press Enter twice to process it.")
    print("Type 'quit' to exit.\n")

    while True:
        print("Customer email:")
        email_lines = []
        # Read multi-line email input
        while True:
            line = input()
            if line.lower() == 'quit':
                print("Goodbye!")
                return
            if line == '':
                break
            email_lines.append(line)

        if not email_lines:
            continue

        email_content = '\n'.join(email_lines)
        print("\n🔍 Analyzing email...")
        analysis = assistant.analyze_email(email_content)

        if 'error' in analysis:
            print(f"❌ Analysis failed: {analysis['error']}")
            continue

        # Display analysis
        print("\n📊 Analysis Results:")
        print(f"   Urgency: {analysis.get('urgency_level', 'unknown')}")
        print(f"   Category: {analysis.get('category', 'unknown')}")
        print(f"   Sentiment: {analysis.get('sentiment', 'unknown')}")
        print(f"   Key Issues: {', '.join(analysis.get('key_issues', []))}")
        print(f"   Customer Goal: {analysis.get('customer_goal', 'unknown')}")

        print("\n✍️ Generating response...")
        response = assistant.generate_response(email_content, analysis)

        print("\n📧 Suggested Response:")
        print("-" * 50)
        print(response)
        print("-" * 50)
        print("\n" + "=" * 60 + "\n")

if __name__ == "__main__":
    main()
This application demonstrates several important concepts:
Structured Prompts: Notice how we provide Claude with specific context, clear instructions, and defined output formats. The analysis prompt even requests JSON output, which we can parse programmatically.
Error Handling: Real applications need to handle API failures, network issues, and unexpected responses. We wrap API calls in try/except blocks and provide meaningful error messages.
Application State: The SupportAssistant class maintains context about your company and product, which gets included in every prompt. This ensures consistent, branded responses.
Multi-step Processing: Instead of doing everything in one API call, we break the work into analysis and response generation. This gives us better control and transparency into what the AI is thinking.
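One practical wrinkle with the "Only return the JSON" instruction: models occasionally wrap the JSON in prose or a Markdown code fence anyway, which would make a bare json.loads fail. A defensive parser can rescue those cases; this is our own helper, not part of the anthropic package:

```python
import json
import re

def extract_json(text):
    """Pull the first {...} object out of a model reply that may include
    surrounding prose or Markdown fences; return None if absent or invalid."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```

You could call extract_json(response.content[0].text) in place of the bare json.loads in analyze_email to make the analysis step more robust.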
Let's test the support assistant with a realistic customer email:
python support_assistant.py
When prompted, try pasting in this sample email:
Subject: Urgent - Data Export Failing
Hi,
I've been trying to export our Q3 analytics data from CloudDash for the past 2 days but every time I click the export button, the page just loads indefinitely. I've tried different browsers and clearing my cache but nothing works.
This is really urgent because I need to present these numbers to our board tomorrow morning. Can someone please help me get this data exported ASAP?
Thanks,
Sarah Johnson
Account Manager, RetailPlus Corp
The application will analyze this email (identifying it as high urgency, technical category, negative sentiment) and generate a personalized response that addresses Sarah's specific needs and timeline.
As you build more sophisticated AI applications, you'll want to master these advanced prompt engineering patterns:
Few-Shot Learning: Show Claude examples of the kind of responses you want:
prompt = f"""
Here are examples of great customer service responses:
Example 1:
Customer issue: "Login not working"
Response: "Hi [Name], I understand how frustrating login issues can be. Let's get this sorted out right away. Please try these steps: [specific steps]. If that doesn't work, I'll personally ensure our tech team prioritizes your case."
Example 2:
Customer issue: "Billing question"
Response: "Hi [Name], I'd be happy to clarify your billing. Looking at your account, I can see [specific details]. Here's exactly what each charge represents: [breakdown]."
Now respond to this customer email in the same style:
"{email_content}"
"""
Chain of Thought: Ask Claude to show its reasoning:
prompt = f"""
Before drafting a response, please think through:
1. What is the customer's emotional state?
2. What specific problem needs solving?
3. What's the appropriate tone to match their urgency?
4. What concrete next steps can we offer?
Customer email: "{email_content}"
Thinking process:
[Let Claude work through each step]
Final response:
[The actual response]
"""
Role-Based Constraints: Give Claude a specific persona:
prompt = f"""
You are Maria, a senior customer success manager with 8 years of experience.
You're known for your ability to de-escalate tense situations and find creative solutions.
You always:
- Use the customer's name
- Reference specific details from their account when relevant
- Provide multiple options when possible
- Follow up with a direct phone number for urgent issues
Respond to this email as Maria would:
"{email_content}"
"""
AI APIs aren't free, and costs can add up quickly if you're not careful. Here are essential strategies for managing your usage:
Token Management: Tokens are your primary cost driver. One token is roughly 0.75 words, so a 100-word response costs about 133 tokens. Set appropriate max_tokens limits:
# For short responses (summaries, classifications)
max_tokens=150
# For detailed responses (customer service, explanations)
max_tokens=400
# For long-form content (articles, reports)
max_tokens=1000
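A quick way to sanity-check prompt sizes before sending them is to apply that rule of thumb in code. This is only a heuristic; the API response reports exact token counts. A sketch:

```python
def estimate_tokens(text):
    """Rough token estimate using the ~0.75 words-per-token rule of thumb."""
    words = len(text.split())
    return int(words / 0.75)

# A 100-word text lands around 133 tokens under this heuristic
print(estimate_tokens(" ".join(["word"] * 100)))  # prints 133
```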
Response Caching: For repeated queries, cache results to avoid redundant API calls:
import hashlib
import json
from pathlib import Path
class CachedAssistant(SupportAssistant):
    def __init__(self, cache_dir="cache"):
        super().__init__()
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(exist_ok=True)

    def _get_cache_key(self, prompt):
        """Generate a unique key for this prompt"""
        return hashlib.md5(prompt.encode()).hexdigest()

    def _get_cached_response(self, prompt):
        """Check if we have a cached response for this prompt"""
        cache_file = self.cache_dir / f"{self._get_cache_key(prompt)}.json"
        if cache_file.exists():
            with open(cache_file, 'r') as f:
                return json.load(f)
        return None

    def _cache_response(self, prompt, response):
        """Cache a response for future use"""
        cache_file = self.cache_dir / f"{self._get_cache_key(prompt)}.json"
        with open(cache_file, 'w') as f:
            json.dump({"prompt": prompt, "response": response}, f)
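The same hash-and-store pattern works with any expensive function, which also makes it easy to verify without hitting the API. A self-contained sketch, where fake_model stands in for a real Claude call:

```python
import hashlib
import json
import tempfile
from pathlib import Path

def cached_call(prompt, compute, cache_dir):
    """Return the stored result for this prompt, computing and caching on a miss."""
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    path = cache / (hashlib.md5(prompt.encode()).hexdigest() + ".json")
    if path.exists():
        return json.loads(path.read_text())["response"]
    response = compute(prompt)  # in production: a client.messages.create call
    path.write_text(json.dumps({"prompt": prompt, "response": response}))
    return response

calls = []
def fake_model(p):
    calls.append(p)  # track how often we "hit the API"
    return p.upper()

cache_dir = tempfile.mkdtemp()
first = cached_call("hello", fake_model, cache_dir)
second = cached_call("hello", fake_model, cache_dir)  # served from disk
```

After the second call, the model function has run only once; everything else came from the cache directory.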
Rate Limiting: The Claude API has rate limits. For production applications, implement exponential backoff:
import time
import random
def call_api_with_retry(self, prompt, max_retries=3):
    """Call API with automatic retry on rate limit errors"""
    for attempt in range(max_retries):
        try:
            response = self.client.messages.create(
                model="claude-3-sonnet-20240229",
                max_tokens=400,
                messages=[{"role": "user", "content": prompt}]
            )
            return response.content[0].text
        except Exception as e:
            if "rate limit" in str(e).lower() and attempt < max_retries - 1:
                # Wait with exponential backoff plus jitter
                wait_time = (2 ** attempt) + random.uniform(0, 1)
                print(f"Rate limited. Waiting {wait_time:.1f} seconds...")
                time.sleep(wait_time)
            else:
                raise e
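To see what this backoff policy does in practice, you can compute the wait schedule directly. With three attempts there are at most two waits, of roughly 1 and 2 seconds plus up to a second of jitter:

```python
import random

def backoff_waits(max_retries=3):
    """Wait time before each retry: 2**attempt seconds plus up to 1s jitter."""
    return [(2 ** attempt) + random.uniform(0, 1)
            for attempt in range(max_retries - 1)]
```

The jitter spreads retries out so that many clients rate-limited at the same moment don't all hammer the API again in lockstep.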
Now it's your turn to extend the support assistant. Here's a challenging but achievable exercise:
Build a Priority Router: Modify the application to automatically route emails based on urgency and category. High-urgency technical issues should generate responses that include an immediate escalation to the technical team, while low-urgency feature requests should acknowledge the request and explain the product roadmap process.
Here's the framework to get you started:
def route_and_respond(self, email_content, analysis):
    """Route email based on analysis and generate appropriate response"""
    urgency = analysis.get('urgency_level', 'medium')
    category = analysis.get('category', 'other')

    # Define routing rules
    if urgency == 'high' and category == 'technical':
        return self.generate_escalated_technical_response(email_content, analysis)
    elif category == 'billing':
        return self.generate_billing_response(email_content, analysis)
    elif category == 'feature_request':
        return self.generate_feature_request_response(email_content, analysis)
    else:
        return self.generate_standard_response(email_content, analysis)

def generate_escalated_technical_response(self, email_content, analysis):
    """Generate response for high-priority technical issues"""
    prompt = f"""
This is a HIGH PRIORITY technical issue that needs immediate escalation.

Customer email: "{email_content}"

Generate a response that:
1. Immediately acknowledges the urgency
2. Explains that you're escalating to the technical team
3. Provides a direct contact method (phone/email) for immediate assistance
4. Gives a specific timeline for follow-up (within 2 hours)
5. Includes any immediate workarounds if possible

Keep the tone urgent but reassuring.
"""
    # Your implementation here
    pass
Try implementing the different response types. Pay attention to how the prompts differ based on the routing logic.
Mistake 1: Vague Prompts
Don't write: "Help me with this email"
Do write: "Act as a customer service representative for [company]. This customer is asking about [specific issue]. Respond with [specific requirements]."
Mistake 2: Ignoring Token Limits
If your responses get cut off mid-sentence, you've hit the token limit. Either increase max_tokens or ask for shorter responses in your prompt.
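You can also detect truncation programmatically: the Messages API sets stop_reason to "max_tokens" when the reply was cut off, versus "end_turn" for a natural finish. A sketch using a stand-in response object, since real ones come from client.messages.create:

```python
class FakeResponse:
    """Stand-in for the SDK response object; only the field we check is modeled."""
    def __init__(self, stop_reason):
        self.stop_reason = stop_reason

def was_truncated(response):
    """True if generation stopped because it hit the max_tokens limit."""
    return response.stop_reason == "max_tokens"
```

In a real application you might log a warning, or retry with a higher max_tokens, whenever was_truncated returns True.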
Mistake 3: Not Handling API Errors
Always wrap API calls in try/except blocks. Network issues, rate limits, and invalid requests can all cause failures.

Mistake 4: Inconsistent Context
If Claude's responses vary wildly in tone or accuracy, you're probably not providing enough consistent context. Include company information, tone guidelines, and specific requirements in every prompt.
Troubleshooting API Issues: If calls fail with authentication errors, double-check that your API key is valid and is actually being loaded from your .env file.

Debugging Prompt Issues: Add debugging output to see what you're actually sending to Claude:
print("DEBUG - Sending prompt:")
print("-" * 30)
print(prompt)
print("-" * 30)
You've built a complete AI-powered application that can analyze customer emails and generate intelligent responses. More importantly, you've learned the fundamental patterns that apply to any AI application: structured prompts with clear context and defined output formats, error handling around every API call, multi-step processing, and cost-aware token management.
The support assistant is just the beginning. These same patterns apply whether you're building content generators, data analyzers, coding assistants, or any other AI-powered tool.
Your next steps:
Experiment with other models: claude-3-haiku for faster, cheaper responses, or claude-3-opus for more complex reasoning.

The Claude API opens up a world of possibilities. You now have the foundation to build sophisticated AI applications that can truly help solve real business problems. Start small, experiment often, and don't be afraid to iterate on your prompts—that's where the magic happens.
Learning Path: Building with LLMs