ANTHROPIC · JULY 2025

Prompting 101: From Naive to Precise

Anthropic's applied AI team builds a prompt live — starting from a bare "analyze this" that confuses a car crash with a skiing accident, and iterating through role, context, XML structure, system prompts, prefill steering, and step-by-step thinking until Claude correctly assesses fault on a Swedish insurance claim.

25 minutes · 8 topics

The Naive Prompt: What Happens With No Context

The scenario: a Swedish car insurance company needs Claude to analyze two pieces of evidence — a car accident report form (in Swedish) and a hand-drawn sketch of the crash. The naive approach: throw both images into the console with a simple instruction to review the accident and determine fault. The result: Claude thinks it's a skiing accident on a street called "Chappangan." An innocent mistake — the prompt gave Claude nothing to work with. No role, no context, no structure. Claude filled in the blanks with its own assumptions, and the assumptions were wrong.

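For a sense of what that naive call looks like in code, here is a minimal sketch using the Anthropic Python SDK; the model ID, file names, and the bare instruction are illustrative stand-ins, not the exact prompt from the session.

    import base64
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def image_block(path, media_type="image/png"):
        # Hypothetical helper: load a local file as a base64-encoded image content block.
        with open(path, "rb") as f:
            data = base64.standard_b64encode(f.read()).decode()
        return {"type": "image",
                "source": {"type": "base64", "media_type": media_type, "data": data}}

    # The naive prompt: two images and a bare instruction. No role, no context, no structure.
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                image_block("accident_form.png"),
                image_block("accident_sketch.png"),
                {"type": "text", "text": "Review this accident report and determine fault."},
            ],
        }],
    )
    print(response.content[0].text)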

Prompt engineering is a very iterative empirical science. You could almost have a test case where Claude is supposed to understand it's a car environment, nothing to do with skiing.

Give Claude a Role & Context

Explain the Task, Not Just Name It

Context goes beyond the role. Explain what the two inputs are (a filled-out form and a hand-drawn sketch), what language they're in (Swedish), what the form contains (checkboxes for accident type, location, vehicles involved), and what the sketch shows (a diagram of the accident scene). Claude doesn't know any of this unless you tell it. The more context you provide, the fewer assumptions Claude has to make — and assumptions are where errors live.

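A rough sketch of what that role and context might look like as the opening of the user prompt follows; the wording is an assumption, not the exact text from the session.

    # Role plus task context, spelled out rather than left for Claude to assume.
    TASK_CONTEXT = """You are an AI assistant helping the claims team at a Swedish
    car insurance company.

    You will receive two pieces of evidence about a single accident:
    1. A filled-out car accident report form, written in Swedish, with checkboxes
       covering the accident type, the location, and the vehicles involved.
    2. A hand-drawn sketch showing a diagram of the accident scene.

    Review both documents carefully and determine which vehicle was at fault."""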

XML Tags: Structure Input & Output

XML tags are Claude's bread and butter for structured prompts. Wrap different sections of your prompt in named tags — <accident_form>, <accident_sketch>, <analysis>, <fault_determination> — and Claude treats each tag as a distinct section. This does two things: it tells Claude where each piece of information starts and ends (so it doesn't blend the form and the sketch into one mush), and it tells you exactly where to look in Claude's response for the output you care about. When Claude returns its analysis inside <fault_determination> tags, you can parse that programmatically. Without structure, you're scraping free text.

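A small self-contained sketch of the pattern: the tag names follow the ones above, and the sample reply is made up for illustration.

    import re

    # Named tags mark where each piece of information starts and ends, both in the
    # prompt we send and in the sections we ask Claude to produce.
    instructions = """<accident_form>
    The first attached image: the filled-out Swedish accident report form.
    </accident_form>

    <accident_sketch>
    The second attached image: the hand-drawn sketch of the accident scene.
    </accident_sketch>

    Review both documents. Write your reasoning inside <analysis> tags, then state
    which vehicle was at fault inside <fault_determination> tags."""

    def extract_tag(text: str, tag: str) -> str | None:
        # Pull the contents of one named tag out of Claude's reply.
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        return match.group(1).strip() if match else None

    # Parse a (shortened, made-up) reply programmatically instead of scraping free text.
    reply = ("<analysis>Vehicle B turned across Vehicle A's lane...</analysis>"
             "<fault_determination>Vehicle B</fault_determination>")
    print(extract_tag(reply, "fault_determination"))  # -> Vehicle B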

System Prompt: Personality Meets Guardrails

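Pairing the role with guardrails might look like the sketch below, with both living in the system prompt; the specific guardrail wording is an assumption rather than the prompt used in the session.

    import anthropic

    client = anthropic.Anthropic()

    # Persona plus guardrails in one system prompt (illustrative wording).
    SYSTEM_PROMPT = """You are an AI claims analyst for a Swedish car insurance company.

    Guardrails:
    - Base your assessment only on the accident report form and the sketch provided.
    - If the evidence is unclear or contradictory, say so rather than guessing.
    - Do not invent details that are not visible in the documents."""

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=1024,
        system=SYSTEM_PROMPT,  # the system prompt travels with every request
        messages=[{"role": "user", "content": "Placeholder: task context and evidence go here."}],
    )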

Prefill: Steering the First Words

Prefill is a technique where you start Claude's response for it. Instead of letting Claude choose how to begin, you write the opening — often just a JSON opening brace { or an XML tag like <analysis> — and Claude continues from there. This is useful when you want structured output: if you prefill with {, Claude is strongly inclined to produce valid JSON. If you prefill with <analysis>, Claude writes its analysis inside that tag. It's a lightweight way to steer format without adding a paragraph of instructions about output structure. You can also parse the result more easily because you know what format it started in.

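In the Messages API, prefill is just a trailing assistant turn; a minimal sketch (model ID and user text are illustrative):

    import anthropic

    client = anthropic.Anthropic()

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Assess the accident evidence and report who was at fault as JSON."},
            # Prefill: the last message is from the assistant, so Claude continues
            # from this opening brace instead of choosing its own first words.
            {"role": "assistant", "content": "{"},
        ],
    )

    # The reply continues after the prefilled "{", so re-attach it before parsing.
    json_text = "{" + response.content[0].text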

Extended Thinking: Let Claude Show Its Work

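One way to get the step-by-step reasoning mentioned in the introduction is an explicit ordered instruction, as in the sketch below; the wording is an assumption, and the Messages API also offers a native extended thinking mode that can be enabled per request.

    # Ask Claude to show its work, in order, before committing to a verdict
    # (illustrative wording).
    THINKING_INSTRUCTIONS = """Think step by step inside <analysis> tags before answering:
    1. List what each checked box on the form says about vehicles A and B.
    2. Describe what the sketch shows about how the vehicles moved.
    3. Note any conflicts between the form and the sketch.
    Only after that, give your conclusion inside <fault_determination> tags."""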

Structured Output: JSON for Machines

If the output needs to go into a database, an API, or any downstream system, it needs to be machine-readable. Anthropic's recommendation: ask for JSON output. Combine this with prefill (start Claude's response with {) and XML structure in the prompt (tell Claude exactly which fields you want: accident_type, at_fault_vehicle, confidence, reasoning). The combination of XML-tagged instructions, system prompt guardrails, and JSON prefill means you get consistent, parseable output that you can trust enough to put into production — not just a one-off demo that works sometimes.

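Putting the pieces together, a sketch of the final call might look like the following. The field names match the ones listed above, while the model ID and prompt wording are illustrative; in the real pipeline the two images and the system prompt guardrails would ride along as in the earlier sketches.

    import json
    import anthropic

    client = anthropic.Anthropic()

    OUTPUT_SPEC = """Respond only with a JSON object containing these fields:
    "accident_type", "at_fault_vehicle", "confidence", "reasoning"."""

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=1024,
        messages=[
            {"role": "user", "content": OUTPUT_SPEC},
            {"role": "assistant", "content": "{"},  # prefill so the reply starts as JSON
        ],
    )

    # Re-attach the prefilled brace and parse into a dict for the downstream system.
    claim = json.loads("{" + response.content[0].text)
    print(claim["at_fault_vehicle"], claim["confidence"])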

Prompting Is Empirical Science

There is no perfect prompt on the first try. You build it one test case at a time.

Prompt engineering is a very iterative empirical science. The best way to learn it is just to practice doing it.
