WEEK 1 - Fundamentals

LLM Prompting Techniques
A Complete Tutorial on Large Language Model Prompting Techniques

Master the Art of Communicating with AI / 掌握与AI沟通的艺术

Stack: Python 3.10 · Ollama · Llama 3.1 · Mistral Nemo · Poetry

Course Overview / 课程概述

Week 1 establishes the foundation for the entire course by teaching you 6 core techniques for effective LLM prompting. Through hands-on exercises, you'll learn that the quality of AI responses directly depends on how well you craft your prompts.


⏱️
Time Investment / 时间投入
~4 hours for all 6 exercises / 6个练习约4小时
📊
Difficulty / 难度
Beginner to Intermediate / 初级到中级
🎯
Outcome / 成果
6 working Python scripts / 6个可运行的Python脚本

Prerequisites / 前置要求

Environment Setup / 环境配置

Before starting, ensure you have the following installed / 开始前请确保安装了以下内容:

  1. Python 3.10+ - Download from python.org
  2. Ollama - Download from ollama.ai
  3. Poetry - Run: pip install poetry
  4. Pull required models - Run: ollama pull mistral-nemo:12b and ollama pull llama3.1:8b

Exercise 1: K-shot Prompting / K-shot提示

Concept / 概念

K-shot prompting involves providing K examples to the model to help it understand the task. The examples should be:

  • Related to the target task - Don't use random examples
  • Consistent in format - Match the expected output format
  • High quality - 2-3 good examples > 10 random ones

Your Task / 你的任务

Challenge: Reverse a String / 反转字符串

Write a prompt that makes the LLM reverse the string "httpstatus" to get "sutatsptth"

Requirements / 要求:

  • ✓ Use 2-3 examples in your prompt / 在提示中使用2-3个示例
  • ✓ Examples should be relevant to "httpstatus" / 示例应与"httpstatus"相关
  • ✓ Output format must be ONLY the reversed string / 输出格式必须仅为反转的字符串

Solution / 解决方案

Working Prompt / 有效的提示词
YOUR_SYSTEM_PROMPT = """Reverse the letters in the given word.

Example 1:
Word: abc
Reversed: cba

Example 2:
Word: https
Reversed: sptth

Example 3:
Word: status
Reversed: sutats

Now reverse the given word. Output ONLY the reversed word."""
Why This Works / 为什么有效
  • Relevant examples: "https" and "status" both appear in "httpstatus"
  • Clear format: Each example shows "Word: X\nReversed: Y"
  • Explicit constraint: "Output ONLY the reversed word"
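
Few-shot examples only help if they are themselves correct. A quick sanity check in plain Python (using slice notation `s[::-1]`) confirms the in-prompt examples and the expected answer before you trust them in a prompt:

```python
def reverse_word(word: str) -> str:
    # Slicing with a step of -1 walks the string backwards.
    return word[::-1]

# The three in-prompt examples must be internally consistent...
examples = {"abc": "cba", "https": "sptth", "status": "sutats"}
for word, expected in examples.items():
    assert reverse_word(word) == expected, f"bad example: {word!r}"

# ...and so must the target the exercise grades against.
assert reverse_word("httpstatus") == "sutatsptth"
```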

Exercise 2: Chain-of-Thought / 思维链

Concept / 概念

Chain-of-Thought (CoT) prompting encourages the model to show its reasoning step-by-step, dramatically improving accuracy on:

  • Mathematical problems / 数学问题
  • Logic puzzles / 逻辑谜题
  • Multi-step reasoning / 多步骤推理

Your Task / 你的任务

Challenge: Modular Arithmetic / 模运算

Calculate: 3^12345 (mod 100) = ?

Expected Answer: 43

Solution / 解决方案

Working Prompt / 有效的提示词
YOUR_SYSTEM_PROMPT = """Solve this problem, then give the final answer on the last line as "Answer: <number>".

Calculate 3^12345 (mod 100)."""

Key Insight: Modern LLMs like Llama 3.1 are already trained to use CoT for math problems. Sometimes a simple prompt is enough!

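
You can also verify the expected answer without any LLM: Python's three-argument `pow` performs modular exponentiation directly, which makes the number-theoretic shortcut visible:

```python
# Three-argument pow computes (base ** exp) % mod efficiently.
assert pow(3, 20, 100) == 1        # 3^20 ≡ 1 (mod 100)
reduced = 12345 % 20               # so only the exponent mod 20 matters: 5
assert pow(3, reduced, 100) == 43  # 3^5 = 243 → 43 (mod 100)
print(pow(3, 12345, 100))  # → 43
```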

Exercise 3: Tool Calling / 工具调用

Concept / 概念

Tool Calling enables LLMs to interact with external systems by:

  1. Defining available tools with JSON schemas / 使用JSON模式定义可用工具
  2. Prompting the model to output tool calls in specific format / 提示模型以特定格式输出工具调用
  3. Parsing the model output and executing the tool / 解析模型输出并执行工具
  4. Returning results back to the model / 将结果返回给模型
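
The parse-and-execute half of this loop (steps 3-4) can be sketched as below. The tool registry and the stub tool body are illustrative assumptions, not the course's actual implementation:

```python
import json

def output_every_func_return_type(file_path: str) -> str:
    # Stub for illustration; the real exercise would inspect the file's AST.
    return f"analyzed {file_path}"

# Tool registry: maps the tool names advertised to the model onto callables.
TOOLS = {"output_every_func_return_type": output_every_func_return_type}

# Step 3: parse the model's JSON output; step 4: dispatch and capture the result.
model_output = '{"tool": "output_every_func_return_type", "args": {"file_path": "tool_calling.py"}}'
call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["args"])
print(result)  # → analyzed tool_calling.py
```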

Your Task / 你的任务

Challenge: Function Caller / 函数调用器

Create a prompt that makes the LLM call a Python function to analyze file structure.

Tool Definition / 工具定义:

Available tools:
- output_every_func_return_type: Analyzes a Python file and outputs the return type of every function

Requirements / 要求:

  • ✓ Output must be valid JSON / 输出必须是有效的JSON
  • ✓ Use "tool" and "args" keys / 使用"tool"和"args"键
  • ✓ No markdown code blocks / 不要markdown代码块

Solution / 解决方案

Working Prompt / 有效的提示词
YOUR_SYSTEM_PROMPT = """You are an AI assistant that can call tools. When asked to call a tool, respond with a JSON object in this exact format:

{
  "tool": "tool_name",
  "args": {
    "parameter": "value"
  }
}

Available tools:
- output_every_func_return_type: Analyzes a Python file...

Rules:
1. Output ONLY the JSON object
2. No explanations, no markdown code blocks
3. The file_path should be "tool_calling.py" when asked to call the tool"""
Handling Model Output / 处理模型输出

Models sometimes wrap JSON in markdown code blocks. You need to handle both:

# Model might output:
```json
{"tool": "output_every_func_return_type", "args": {...}}
```

# Or just:
{"tool": "output_every_func_return_type", "args": {...}}

# Handle both by stripping markdown if present
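
One way to handle both shapes is a small wrapper that strips an optional fence before parsing; a minimal sketch:

```python
import json
import re

def parse_tool_call(raw: str) -> dict:
    """Parse a tool-call JSON object, tolerating markdown code fences."""
    text = raw.strip()
    # Remove an optional ```json ... ``` (or bare ``` ... ```) wrapper.
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

fenced = '```json\n{"tool": "output_every_func_return_type", "args": {}}\n```'
bare = '{"tool": "output_every_func_return_type", "args": {}}'
assert parse_tool_call(fenced) == parse_tool_call(bare)
```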

Exercise 4: Self-Consistency / 自洽性

Concept / 概念

Self-Consistency improves reliability by:

  1. Running the model multiple times with high temperature / 使用高温度多次运行模型
  2. Collecting all answers / 收集所有答案
  3. Taking the majority vote / 采取多数投票

This is especially useful for tasks that have some randomness or ambiguity.

Your Task / 你的任务

Challenge: Math Word Problem / 数学应用题

A car travels at 60 mph for 30 minutes. How far does it travel?

Expected Answer: 30 miles

Solution / 解决方案

Implementation / 实现
import ollama  # pip install ollama

# Configuration
NUM_RUNS_TIMES = 5
TEMPERATURE = 1.0  # High temperature for diverse samples

# MODEL, prompt, and extract_answer are defined earlier in the exercise script.
# Run the model 5 times and collect the answers
answers = []
for _ in range(NUM_RUNS_TIMES):
    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        options={"temperature": TEMPERATURE},
    )
    answers.append(extract_answer(response["message"]["content"]))

# Take the majority vote
final_answer = max(set(answers), key=answers.count)

Result: In 5 runs, 4 returned the correct answer (30 miles) and 1 did not. The majority vote is correct!

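
`extract_answer` in the snippet above is left to you. One minimal sketch (assuming the answer is the last number in the response), together with a `Counter`-based majority vote over hypothetical sample replies:

```python
import re
from collections import Counter

def extract_answer(text: str) -> str:
    # Simplifying assumption: the final numeric token in the reply is the answer.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text)
    return numbers[-1] if numbers else ""

# Five hypothetical model replies (one wrong, as in the run described above):
replies = [
    "The car travels 30 miles.",
    "Distance = 60 * 0.5 = 30",
    "Answer: 30",
    "It covers 30 miles in half an hour.",
    "60 mph for 30 minutes gives 25 miles.",  # the one bad sample
]
answers = [extract_answer(r) for r in replies]
final_answer, votes = Counter(answers).most_common(1)[0]
print(final_answer, votes)  # → 30 4
```

`Counter.most_common(1)` is a slightly more explicit alternative to `max(set(answers), key=answers.count)`: it also reports how strong the majority was.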

Exercise 5: RAG (Retrieval-Augmented Generation) / 检索增强生成

Concept / 概念

RAG combines:

  • Retrieval: Fetch relevant documents from a knowledge base / 从知识库检索相关文档
  • Augmentation: Add retrieved context to the prompt / 将检索到的上下文添加到提示中
  • Generation: LLM generates response using context / LLM使用上下文生成响应

Your Task / 你的任务

Challenge: Code Generation from Docs / 从文档生成代码

Generate Python code based on API documentation provided in the context.

The model should use ONLY the information provided in the context, not its training data.


Solution / 解决方案

System Prompt / 系统提示
YOUR_SYSTEM_PROMPT = """You are a Python programmer. Write clean, correct code based on the provided context and requirements.

Follow these rules:
1. Use ONLY the information provided in the context
2. Include all necessary imports
3. Write proper function signatures with type hints
4. Handle errors appropriately
5. Return only what is requested"""
Context Provider / 上下文提供者
from typing import List

def YOUR_CONTEXT_PROVIDER(corpus: List[str]) -> List[str]:
    """
    Return relevant context documents for the query.

    For this exercise, we simply return all available documents.
    In production, you would implement vector similarity search.
    """
    return corpus  # Return all docs for simplicity

Production Note: In real RAG systems, use embeddings + vector search (e.g., FAISS, Pinecone) to retrieve only relevant documents.

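
Between "return everything" and a full vector database there is a useful middle ground for experiments: rank documents by word overlap with the query. The corpus below is made up for illustration:

```python
from typing import List

def keyword_retriever(query: str, corpus: List[str], k: int = 2) -> List[str]:
    # Score each document by how many query words it shares, then keep the top k.
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

corpus = [
    "weather_api.get_forecast(city) returns a 7-day forecast",
    "payments_api.charge(card, amount) charges a credit card",
    "weather_api.get_current(city) returns current conditions",
]
print(keyword_retriever("7-day weather forecast for a city", corpus, k=1))
```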

Exercise 6: Reflexion / 反思

Concept / 概念

Reflexion is an iterative process:

  1. Generate initial solution / 生成初始解决方案
  2. Test and collect failures / 测试并收集失败信息
  3. Reflect on what went wrong / 反思问题所在
  4. Improve based on feedback / 根据反馈改进
  5. Repeat until success / 重复直到成功

Your Task / 你的任务

Challenge: Password Validator / 密码验证器

Write a password validation function that checks ALL requirements:

  • ✓ At least 8 characters long / 至少8个字符
  • ✓ Contains UPPERCASE letter (A-Z) / 包含大写字母
  • ✓ Contains lowercase letter (a-z) / 包含小写字母
  • ✓ Contains digit (0-9) / 包含数字
  • ✓ Contains special character (!@#$%^&*()-) / 包含特殊字符

The model might miss some checks on the first try. Use Reflexion to iteratively improve!


Solution / 解决方案

Reflexion Prompt / 反思提示
YOUR_REFLEXION_PROMPT = """You are improving a Python password validation function.

The function must validate ALL of these requirements:
- At least 8 characters long
- Contains at least one UPPERCASE letter (A-Z)
- Contains at least one lowercase letter (a-z)
- Contains at least one digit (0-9)
- Contains at least one special character from: !@#$%^&*()-

Analyze the test failures to see which checks are missing, then fix the implementation.

Output ONLY a fenced Python code block with the corrected function."""
Context Builder / 上下文构建器
from typing import List

def your_build_reflexion_context(prev_code: str, failures: List[str]) -> str:
    """
    Build context for the reflexion prompt.
    """
    # Extract missing requirements from failures
    missing = []
    for f in failures:
        if "missing uppercase" in f:
            missing.append("uppercase letter (A-Z)")
        elif "missing lowercase" in f:
            missing.append("lowercase letter (a-z)")
        # ... etc for other checks

    return f"""Current implementation:
```python
{prev_code}
```

This implementation is incomplete. It is missing: {', '.join(missing)}

Test failures:
{chr(10).join(f'- {f}' for f in failures)}

Fix the function to include ALL required validations."""
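
For reference, here is one validator that satisfies all five requirements, i.e. the target the reflexion loop should converge to (one possible implementation, not the model's actual output):

```python
SPECIALS = set("!@#$%^&*()-")

def validate_password(password: str) -> bool:
    # All five requirements from the exercise must hold simultaneously.
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in SPECIALS for c in password)
    )

assert validate_password("Abcdef1!")
assert not validate_password("abcdef1!")  # missing uppercase
assert not validate_password("ABCDEF1!")  # missing lowercase
assert not validate_password("Abcdefg!")  # missing digit
assert not validate_password("Abcdefg1")  # missing special character
assert not validate_password("Ab1!")      # too short
```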

Key Learnings / 关键学习

Technique / 技术 | Best For / 最适合 | Difficulty / 难度
---------------- | ----------------- | -----------------
K-shot Prompting | Tasks requiring specific output format / 需要特定输出格式的任务 | ⭐⭐
Chain-of-Thought | Math, logic, multi-step reasoning / 数学、逻辑、多步骤推理 | ⭐⭐
Tool Calling | External API integration / 外部API集成 | ⭐⭐⭐
Self-Consistency | Tasks with randomness / 有随机性的任务 | ⭐⭐
RAG | Knowledge-based QA / 基于知识的问答 | ⭐⭐⭐
Reflexion | Code generation, debugging / 代码生成、调试 | ⭐⭐⭐⭐

Prompt Design Principles / 提示设计原则

Clarity / 清晰性
❌ "Help me write a function"
✅ "Write a Python function `fetch_user_name(user_id: str) -> str`"
🎯
Specificity / 具体性
❌ "Output the result"
✅ "Output ONLY the reversed word, no other text"
📚
Example-Driven / 示例驱动
❌ Pure text description / 纯文字描述
✅ Provide 2-3 relevant examples / 提供2-3个相关示例
🔒
Clear Constraints / 约束明确
❌ "Make it concise"
✅ "Keep minimal. No prose or comments."
📐
Strict Format / 格式严格
❌ "Return JSON"
✅ "Respond with JSON in this exact format: {...}"
🔄
Iterate / 迭代优化
Your first prompt is rarely perfect. Test, analyze failures, and improve.

Model Selection Guide / 模型选择指南

Models Used / 使用的模型
Model / 模型 | Size / 大小 | Best For / 最适合 | Temperature / 温度
------------ | ----------- | ----------------- | ------------------
mistral-nemo:12b | 12B parameters | Code generation, text processing / 代码生成、文本处理 | 0.5
llama3.1:8b | 8B parameters | Reasoning, math, tool calling / 推理、数学、工具调用 | 0.0-1.0
Temperature Guide / 温度指南
  • 0.0-0.3: Precise output needed (code generation, formatting) / 需要精确输出
  • 0.4-0.7: Balanced creativity and accuracy (general tasks) / 平衡创造性和准确性
  • 0.8-1.0: Maximum diversity (Self-Consistency, brainstorming) / 最大多样性

Achievements Unlocked / 成就解锁

🎯
6 Techniques Mastered
K-shot, CoT, Tool Calling, Self-Consistency, RAG, Reflexion
💡
Prompt Design Mindset
Think carefully about prompt structure and constraints
🔧
LLM Integration Skills
Ollama API, structured outputs, error handling
📊
Iterative Improvement
Test, analyze, refine approach

Next Steps: Week 2 / 下一步:Week 2

Week 1 has built a solid foundation in prompt engineering. In Week 2, you'll apply these skills to build a complete AI-powered application:

  • 🚀 Action Item Extractor - Build an app that extracts tasks from notes
  • FastAPI Integration - Learn modern Python web frameworks
  • 🤖 LLM in Production - Real-world AI application patterns
  • Testing & Refactoring - Professional development practices