Master the Art of Communicating with AI
Week 1 establishes the foundation for the entire course by teaching you 6 core techniques for effective LLM prompting. Through hands-on exercises, you'll learn that the quality of AI responses directly depends on how well you craft your prompts.
Before starting, ensure you have the following installed:
pip install poetry
ollama pull mistral-nemo:12b
ollama pull llama3.1:8b
K-shot prompting involves providing K examples to the model to help it understand the task. The examples should be representative of the task and share a consistent format.
Write a prompt that makes the LLM reverse the string "httpstatus" to get "sutatsptth"
YOUR_SYSTEM_PROMPT = """Reverse the letters in the given word. Show your work step by step.
Example 1:
Word: abc
Reversed: cba
Example 2:
Word: https
Reversed: sptth
Example 3:
Word: status
Reversed: sutats
Now reverse the given word. Output ONLY the reversed word."""
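The example block above can also be assembled programmatically, which keeps the format consistent as you add or swap examples. A minimal sketch (the function name `build_kshot_prompt` is illustrative, not part of the course code):

```python
def build_kshot_prompt(examples: list[tuple[str, str]], instruction: str) -> str:
    """Assemble a K-shot system prompt from (input, output) example pairs."""
    parts = []
    for i, (word, reversed_word) in enumerate(examples, start=1):
        parts.append(f"Example {i}:\nWord: {word}\nReversed: {reversed_word}")
    # Examples first, final instruction last
    return "\n\n".join(parts) + f"\n\n{instruction}"

prompt = build_kshot_prompt(
    [("abc", "cba"), ("https", "sptth"), ("status", "sutats")],
    "Now reverse the given word. Output ONLY the reversed word.",
)
```

Keeping the instruction on the last line matters: models weight the end of the prompt heavily when deciding output format.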
Chain-of-Thought (CoT) prompting encourages the model to show its reasoning step-by-step, dramatically improving accuracy on math, logic, and other multi-step reasoning tasks.
Calculate: 3^12345 (mod 100) = ?
Expected Answer: 43
YOUR_SYSTEM_PROMPT = """Solve this problem, then give the final answer on the last line as "Answer: <number>".
Calculate 3^12345 (mod 100)."""
Key Insight: Modern LLMs like Llama 3.1 are already trained to use CoT for math problems. Sometimes a simple prompt is enough!
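The expected answer can be verified without an LLM; Python's built-in three-argument `pow` performs fast modular exponentiation:

```python
# 3^12345 mod 100, computed by fast modular exponentiation
result = pow(3, 12345, 100)
print(result)  # 43

# Why 43: powers of 3 mod 100 cycle with period 20, and 12345 mod 20 = 5,
# so 3^12345 = 3^5 = 243 = 43 (mod 100).
```

This is also a handy pattern for grading CoT outputs: compute the ground truth deterministically, then compare it against the model's final line.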
Tool Calling enables LLMs to interact with external systems by emitting structured requests (typically JSON) that your own code parses and executes.
Create a prompt that makes the LLM call a Python function to analyze file structure.
Available tools:
- output_every_func_return_type: Analyzes a Python file and outputs the return type of every function
YOUR_SYSTEM_PROMPT = """You are an AI assistant that can call tools. When asked to call a tool, respond with a JSON object in this exact format:
{
  "tool": "tool_name",
  "args": {
    "parameter": "value"
  }
}
Available tools:
- output_every_func_return_type: Analyzes a Python file...
Rules:
1. Output ONLY the JSON object
2. No explanations, no markdown code blocks
3. The file_path should be "tool_calling.py" when asked to call the tool"""
Models sometimes wrap JSON in markdown code blocks. You need to handle both:
# Model might output:
```json
{"tool": "output_every_func_return_type", "args": {...}}
```
# Or just:
{"tool": "output_every_func_return_type", "args": {...}}
# Handle both by stripping markdown if present
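A parsing helper along these lines handles both cases (a sketch; the function name `parse_tool_call` is illustrative):

```python
import json
import re

def parse_tool_call(raw: str) -> dict:
    """Parse a tool-call JSON object, tolerating markdown code fences."""
    text = raw.strip()
    # Strip a ```json ... ``` (or bare ```) fence if present
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

# Both forms parse to the same dict:
call = parse_tool_call('```json\n{"tool": "output_every_func_return_type", '
                       '"args": {"file_path": "tool_calling.py"}}\n```')
print(call["tool"])  # output_every_func_return_type
```

Raising on malformed JSON (rather than silently ignoring it) is deliberate: a failed parse is a signal to re-prompt the model.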
Self-Consistency improves reliability by sampling multiple responses at a high temperature and taking a majority vote on the final answer.
This is especially useful for tasks that have some randomness or ambiguity.
A car travels at 60 mph for 30 minutes. How far does it travel?
Expected Answer: 30 miles
import ollama  # assumes the ollama Python client and a running local server

# Configuration
NUM_RUNS = 5
TEMPERATURE = 1.0  # High temperature for diversity

# Run the model NUM_RUNS times and collect answers
# (MODEL, prompt, and extract_answer are defined elsewhere in the exercise)
answers = []
for _ in range(NUM_RUNS):
    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        options={"temperature": TEMPERATURE},
    )
    answers.append(extract_answer(response))

# Take majority vote
final_answer = max(set(answers), key=answers.count)
Result: In 5 runs, 4 returned the correct answer (30 miles) and 1 did not. Majority vote = correct!
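The `max(set(answers), key=answers.count)` idiom works, but `collections.Counter` makes the vote explicit and breaks ties deterministically by the earliest-seen answer (a minor refinement, not part of the course code):

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common answer; ties go to the earliest-seen answer."""
    return Counter(answers).most_common(1)[0][0]

print(majority_vote(["30", "30", "25", "30", "30"]))  # 30
```

Deterministic tie-breaking matters when NUM_RUNS is even or answers are spread thin; `set()` iteration order would make the original idiom's tie-breaking arbitrary.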
RAG (Retrieval-Augmented Generation) combines a retrieval step, which fetches relevant documents, with a generation step, which answers using only the retrieved context.
Generate Python code based on API documentation provided in the context.
The model should use ONLY the information provided in the context, not its training data.
YOUR_SYSTEM_PROMPT = """You are a Python programmer. Write clean, correct code based on the provided context and requirements.
Follow these rules:
1. Use ONLY the information provided in the context
2. Include all necessary imports
3. Write proper function signatures with type hints
4. Handle errors appropriately
5. Return only what is requested"""
from typing import List

def YOUR_CONTEXT_PROVIDER(corpus: List[str]) -> List[str]:
    """
    Return relevant context documents for the query.

    For this exercise, we simply return all available documents.
    In production, you would implement vector similarity search.
    """
    return corpus  # Return all docs for simplicity
Production Note: In real RAG systems, use embeddings + vector search (e.g., FAISS, Pinecone) to retrieve only relevant documents.
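As a middle step between "return everything" and full vector search, a keyword-overlap retriever can illustrate the idea (a toy sketch for intuition only, not production retrieval; the documents here are made up):

```python
from typing import List

def keyword_retrieve(query: str, corpus: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by how many query words they contain."""
    query_words = set(query.lower().split())

    def score(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))

    # sorted() is stable, so ties keep the original corpus order
    return sorted(corpus, key=score, reverse=True)[:top_k]

docs = [
    "The weather API returns temperature in celsius",
    "Use the payments API to create an invoice",
    "Logging is configured via environment variables",
]
print(keyword_retrieve("how do I call the payments API", docs, top_k=1))
```

Embedding-based retrieval replaces the word-overlap `score` with cosine similarity between vectors, but the top-k ranking structure stays the same.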
Reflexion is an iterative process: generate code, run the tests, feed the failures back to the model, and repeat until the tests pass.
Write a password validation function that checks ALL requirements.
The model might miss some checks on the first try. Use Reflexion to iteratively improve!
YOUR_REFLEXION_PROMPT = """You are improving a Python password validation function.
The function must validate ALL of these requirements:
- At least 8 characters long
- Contains at least one UPPERCASE letter (A-Z)
- Contains at least one lowercase letter (a-z)
- Contains at least one digit (0-9)
- Contains at least one special character from: !@#$%^&*()-
Analyze the test failures to see which checks are missing, then fix the implementation.
Output ONLY a fenced Python code block with the corrected function."""
from typing import List

def your_build_reflexion_context(prev_code: str, failures: List[str]) -> str:
    """
    Build context for the reflexion prompt.
    """
    # Extract missing requirements from failures
    missing = []
    for f in failures:
        if "missing uppercase" in f:
            missing.append("uppercase letter (A-Z)")
        elif "missing lowercase" in f:
            missing.append("lowercase letter (a-z)")
        # ... etc for other checks

    return f"""Current implementation:
```python
{prev_code}
```

This implementation is incomplete. It is missing: {', '.join(missing)}

Test failures:
{chr(10).join(f'- {f}' for f in failures)}

Fix the function to include ALL required validations."""
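For reference, a validator satisfying all five requirements from the Reflexion prompt might look like this (one possible target implementation; the function name is illustrative):

```python
def validate_password(password: str) -> bool:
    """Check all five requirements: length, upper, lower, digit, special."""
    specials = set("!@#$%^&*()-")
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in specials for c in password)
    )

print(validate_password("Passw0rd!"))  # True
print(validate_password("password"))   # False: no uppercase, digit, or special
```

Each `any(...)` clause maps one-to-one onto a requirement, which is exactly what makes the test failures in the Reflexion loop easy to attribute to a missing clause.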
| Technique | Best For | Difficulty |
|---|---|---|
| K-shot Prompting | Tasks requiring a specific output format | ⭐⭐ |
| Chain-of-Thought | Math, logic, multi-step reasoning | ⭐⭐ |
| Tool Calling | External API integration | ⭐⭐⭐ |
| Self-Consistency | Tasks with randomness | ⭐⭐ |
| RAG | Knowledge-based QA | ⭐⭐⭐ |
| Reflexion | Code generation, debugging | ⭐⭐⭐⭐ |
| Model | Size | Best For | Temperature |
|---|---|---|---|
| mistral-nemo:12b | 12B parameters | Code generation, text processing | 0.5 |
| llama3.1:8b | 8B parameters | Reasoning, math, tool calling | 0.0-1.0 |
Week 1 has built a solid foundation in prompt engineering. In Week 2, you'll apply these skills to build a complete AI-powered application.