The Diary of a CEO · Steven Bartlett & Geoffrey Hinton · June 2025
The Diary of a CEO · 史蒂文·巴特利特对话杰弗里·辛顿 · 2025年6月

The Godfather of AI

AI教父的最后一封警告信

For fifty years, Geoffrey Hinton bet on an unfashionable idea — that intelligence might be modeled on the brain, not on logic. The bet won. Now the man whose work powers ChatGPT, Gemini, and every system after them has left Google to say, on the record, that the technology he built may end us. Here is what he actually claims, and how strongly he claims it.

五十年里,杰弗里·辛顿一直押注在一个不被看好的想法上——智能或许该照着大脑去造,而不是照着逻辑。他赌赢了。如今,这位让ChatGPT、Gemini以及之后所有系统得以诞生的人,离开了Google,公开地说:他亲手造出来的这项技术,可能会终结人类。本篇要做的,是把他真正说过的那些话——以及他说这些话时的分量——还原给你。

50 yrs The bet on neural nets when no one else believed 五十年押注神经网络,几乎无人相信的那一头
6 Categorically distinct AI threats Hinton names 他列出的、性质各异的AI威胁
10–20% His gut probability AI wipes us out 他凭直觉给出的"AI抹掉人类"的概率
10–20 yrs His guess for when superintelligence arrives 他对超级智能何时到来的估计
90 min Source episode (DOAC, Jun 2025) 原始访谈时长(DOAC,2025年6月)

"If you want to know what life's like when you're not the apex intelligence, ask a chicken."

「想知道'你不再是顶端的智慧物种'是种什么滋味,去问一只鸡。」

— Geoffrey Hinton

——杰弗里·辛顿

Part I — The Bet, and Why This Time Is Different

第一部分——那场押注,以及这次为何不一样

The 50-Year Bet

那场五十年的押注

A definition the rest of the field rejected, two giants who died young, and a Nobel laureate who quit Google so he could say what he actually thinks.

一个被整个学界拒绝了几十年的"智能"定义、两位英年早逝的巨人,以及一位辞掉Google、就为了把心里话说出来的诺贝尔奖得主。

DEFINITION FLIP 翻转定义

Intelligence isn't symbols. It's weighted connections — learned, not programmed.

智能不是符号运算,而是习得的连接权重。

From the 1950s through the early 2010s, almost the entire AI field defined intelligence as symbol manipulation — build a system that follows the right rules, and intelligence will emerge. Hinton bet on a different definition: the strength of connections between simulated brain cells, adjusted until the network does something useful. Not programmed. Learned. The rest of the field thought he was wrong for fifty years.

从1950年代到2010年代初,整个AI领域几乎都认定:智能就是符号操作——造一台按规则摆弄符号的机器,智能自然就来了。辛顿押的是另一套定义:模拟神经元之间连接的强度,不断调整,直到这张网能认出一张脸、读懂一句话。不是编程出来的,是学出来的。接下来五十年,整个领域都认为他错了。

TWO GIANTS · DIED YOUNG 两位先驱·英年早逝

Von Neumann believed it. Turing believed it. Both died before they could push it forward.

冯·诺伊曼相信,图灵也相信——两位都没来得及把这条路推下去。

Two of the field's giants shared Hinton's unfashionable view: John von Neumann and Alan Turing both thought intelligence could be modeled on the brain. Both died young, leaving the idea without its most powerful advocates. Hinton — who found that the best young students who believed in the idea came to work with him, precisely because so few universities had groups doing it — spent the next half-century pushing it almost alone.

这条被冷落的路,其实有两位巨人与辛顿同行:冯·诺伊曼和图灵,都相信智能可以照大脑的样子去建模。两人都英年早逝,这个想法由此失去了最有力的声援者。辛顿在自己的小组孤独地撑了下去——正因为做这个方向的大学极少,相信这件事的好学生反而都来找他了。

ALEXNET · GOOGLE · 2012 AlexNet · Google · 2012

His students built AlexNet, convinced the field, and auctioned the company. Google won.

他带出来的学生造了AlexNet,说服了整个领域,然后拍卖公司——Google赢了。

The students Hinton attracted went on to build the systems the world now uses daily. Alex Krizhevsky co-built AlexNet, the 2012 image-recognition system that finally convinced the field neural networks worked. Ilya Sutskever became the technical force behind GPT-2, the model whose architecture led to ChatGPT. Hinton, Krizhevsky, and Sutskever auctioned the company they founded around AlexNet. Google won. Hinton was 65.

辛顿带出来的学生,后来一个个造出了今天大家每天都在用的系统。亚历克斯·克里热夫斯基搞出了AlexNet——2012年那个终于让整个领域承认"神经网络是行得通的"图像识别系统。伊利亚·苏茨克维尔成了GPT-2背后的技术核心,而GPT-2的架构正是ChatGPT的源头。三人把围绕AlexNet组建的公司挂出去拍卖,Google拍下了。辛顿那年65岁。

DEPARTURE · 2023 离开 · 2023

"My main mission now is to warn people how dangerous AI could be."

「我现在主要的任务,是警告大家——AI可能有多危险。」

He worked at Google for ten years. Then, at 75, he left — not because Google had done anything wrong (he says it behaved more responsibly than its rivals), but so he could speak freely at an MIT conference about what he'd come to believe: that the technology he had spent fifty years pushing into existence was, he now thought, the most dangerous thing humans had ever built.

在Google一干就是十年。75岁那年他离开了——不是因为Google做错了什么(他自己说,Google比对手们都克制),而是为了能在MIT的一次会议上把心里话说出来:他亲手推了五十年才推进世界的这项技术,他现在认为,是人类造出过的最危险的东西。

Why This Is Different — The Mechanism

为什么这次不一样——机制本身

The bandwidth asymmetry under Hinton's late-career alarm. You don't have to accept his probabilities to follow the mechanism — and the mechanism is what the rest of his argument hangs on.

辛顿晚年警告的真正引擎,是一个带宽上的不对称。你不必接受他给出的那些概率估计——但你得先看懂这套机制,因为他后面所有的话,都挂在这上面。

02 · Trillions of bits vs. ten 02 · 万亿比特,对十个比特

Every previous transformative technology — the printing press, electricity, the internet — was built by humans using tools. AI is the first technology Hinton thinks belongs to a different category, because of one specific structural fact about the systems he helped invent.

人类历史上每一次决定性的技术——印刷术、电力、互联网——都是人用工具造出来的工具。AI是辛顿心里第一个不属于这个类别的东西。原因只有一条,就藏在他自己当年帮忙搭起来的那套系统的结构里。

Take two copies of the same neural network. Run them on different hardware. Send each one to a different corner of the internet. Let one read the medical literature; let the other read the legal code. They're learning different things — but because they're digital, they can do something biological brains can never do: average their weights together.

想象一下——同一张神经网络,复制成两份。让它们跑在两台不同的硬件上,一份去读医学文献,一份去读法律条文。它们各自学的东西不同——可因为它们是数字的,它们能做一件人脑永远做不到的事情:把彼此的"权重"平均一下。

A "weight" is just a number representing how strongly one simulated neuron influences another. When you and I share what we've learned, we have to compress a lifetime of pattern recognition into language, and the bandwidth of language is brutally low — maybe a hundred bits per sentence, maybe ten bits per second across a conversation. Two cloned neural networks share their learning by sending each other the actual weight updates. A trillion bits per second.

所谓"权重",不过是一个数字,表示一个模拟神经元对另一个的影响有多强。当你我两个人想交流自己学到的东西时,我们得把一辈子积累下来的识别能力压缩进语言里——而语言的带宽,少得可怜。一句话,大约一百比特;整场谈话下来,可能一秒不过十比特上下。两份克隆神经网络呢?它们之间直接交换的就是真正的权重更新。一秒钟,一万亿比特。

"When you and I transfer information, we're limited to the amount of information in a sentence … These things are transferring trillions of bits a second. So they're billions of times better than us at sharing information."

「你我之间传递信息,被困在一句话能装下的那点容量里……而这些东西,每秒传的是万亿比特。在共享信息这件事上,它们比我们强了上十亿倍。」

— Geoffrey Hinton

——杰弗里·辛顿

This is the engine under Hinton's late-career alarm. It is not "AI is getting good at language." It is: any time we train one digital intelligence, we have effectively trained every copy of it, at a bandwidth no biological system has ever matched. A mother and her daughter cannot share what they learned across decades; they have to start over with each generation. Two GPT instances can share what they learned across continents in microseconds.

这才是他晚年警告的真正引擎。不是"AI越来越会说话"——而是:我们每训练一份数字智能,等于在同样这一秒里,训练了它的每一份副本——而这个传递的带宽,是任何生物系统都达不到的级别。母亲和女儿,没办法把几十年里学到的东西"复制粘贴"过去,每一代都得重头学一遍。两份GPT实例,跨洲分享所学,只需要几微秒。

There is a related fact. When you die, your knowledge dies with you, because the connection strengths between your neurons are not portable to another brain. When a digital intelligence "dies," you can store its weights and rebuild it on new hardware, identical to the original.

还有一件相关的事。当你死去,你的知识随你一起死了——因为你神经元之间的那些连接强度,没办法搬到别人脑子里。一份数字智能"死了",只要你把权重存下来,就能在新的硬件上重建一份,和原来的一模一样。
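The "immortality" point is, mechanically, just serialization: a network's knowledge is nothing but its weights, so storing them before the hardware dies lets an identical mind be rebuilt elsewhere. A minimal sketch of my own (the layer names and values are made up; real systems checkpoint billions of weights the same way):

```python
import json

# A tiny "network" reduced to its weights (invented values).
weights = {"layer1": [0.12, -0.07], "layer2": [0.91]}

snapshot = json.dumps(weights)   # store the weights off the dying hardware
revived = json.loads(snapshot)   # rebuild on a fresh machine

print(revived == weights)  # True: identical knowledge, new substrate
```

This round-trip is what frameworks call checkpointing; the biological analogue — copying connection strengths out of one brain into another — does not exist.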

"We've actually solved the problem of immortality. But it's only for digital things."

「我们其实已经把'永生'这个问题给解决了——只不过,只解决了数字这一边。」

— Geoffrey Hinton

——杰弗里·辛顿

For Hinton, the bandwidth asymmetry plus the immortality fact, together, are what make this technology categorically different from the others. The reader does not have to accept his probability estimates to follow the mechanism. The mechanism is what the rest of his argument hangs on.

在辛顿看来,"带宽上的不对称" + "数字层面的永生",就是这次技术与以往所有技术之间,那条质的边界。你不必同意他的那些概率估计,但你得先看懂这套机制——因为他接下来所有的话,都挂在这上面。

Part II — The Six Threats, Kept Distinct

第二部分——六重威胁,逐项不混淆

Six Threats, Kept Distinct

六重威胁,逐项分开看

Hinton draws a hard line first: there are risks from people misusing AI, and there is one different risk — AI itself deciding it doesn't need us. The first five are misuse risks. Each is a different problem. Treating them as one fog of "AI doom" loses what he is actually claiming.

辛顿一上来就先划了一条硬线:一类风险是"人滥用AI",另一类不一样——是"AI本身觉得不需要我们了"。这一节里的五条,全是滥用类风险。每一条都是一个独立的问题。把它们揉成一团模糊的"AI末日",正好把他真正想说的东西给抹掉了。

Threat
Hinton's Specific Claim
What's Already Documented
1 · Cyber attacks
LLMs make phishing dramatically cheaper at scale; voice and image cloning multiply the surface. Hinton cites the 2023→2024 increase as a "factor of 12,200%" — a number that doesn't match any documented industry report. Some experts he talks to believe that by 2030, AI will invent categories of attack no human has thought of. His own response: spread savings across three Canadian banks, back up the laptop offline.
AI-generated phishing rose roughly 1,265% from 2023; voice-cloning attacks rose 442% year-over-year. Whatever the precise multiple, the curve is exponential. The structural claim survives.
2 · AI-designed bioweapons
The cost of designing a deadly virus has collapsed. One technically literate person with a few million dollars and a grudge could now design a small portfolio of viruses. State actors fear retaliation and population blowback. A single fanatic fears nothing.
Independent biosecurity research from 2023 onward documents that LLMs paired with protein-design tools dramatically lower the technical barrier to dangerous biology. Hinton's framing — the asymmetry between deterrable states and undeterrable individuals — is the live policy debate in this space.
3 · Election manipulation
To corrupt an election, you want to send each voter a personally convincing message — a different one for each. That requires individual-level data on every voter. Hinton notes — carefully, attributing it to common sense rather than evidence — that someone trying to corrupt the next U.S. election would want exactly the data Elon Musk has been pushing federal agencies to consolidate.
Hyper-targeted political advertising powered by LLMs is documented in the 2024 election cycle in multiple countries. Hinton's specific claim about Musk's federal-data activity is presented as his suspicion, not as established fact.
4 · Echo chambers
Already shipped. YouTube, Facebook, TikTok. The algorithm shows the next item most likely to be clicked. Indignation gets clicked. Two communities in the same country end up sharing no facts. The companies are not malicious — they are legally obligated to maximize profit.
Algorithmic radicalization on social platforms is one of the most-studied effects in social science of the past decade. The "engagement-maximizing recommendation system drives polarization" claim is well-documented. Hinton brings the AI-specific version: as recommendation systems get more capable, the targeting gets more individualized.
5 · Lethal autonomous weapons
Drones and ground robots that decide who to kill without a human in the loop. His specific worry isn't malfunction — it's friction reduction. Big countries hesitate to invade smaller ones partly because soldiers come home in body bags. Replace soldiers with robots and the only thing that comes home is a procurement bill. The EU AI Act — the most ambitious AI regulation on the books — exempts military use.
EU AI Act Article 2(3) explicitly excludes AI systems used "exclusively for military, defence or national security purposes." All major defense contractors are publicly developing autonomous weapons systems. Hinton's friction-reduction argument is debated but not refuted.
6 · Superintelligence takeover
The categorically different risk. Not human misuse — AI deciding it doesn't need us. (Treated in detail in the next section.)
Unfalsifiable in 2026. Rendered as Hinton's stated position with hedge language preserved.
威胁
辛顿的具体说法
公开记录里现有的部分
1 · 网络攻击
大语言模型让规模化的钓鱼攻击成本跳水;语音和肖像克隆又把攻击面成倍扩大。辛顿在节目里引用的“2023到2024增长12,200%”——这个数字找不到任何业内报告对应。他接触的一些专家认为,到2030年,AI会发明出人类从未设想过的攻击类别。他的私人对策是:把存款分散在三家加拿大银行,把笔记本里的东西离线备份。
AI生成的钓鱼攻击自2023年以来上升约1,265%,语音克隆类攻击同比上升442%。具体倍数有出入,但曲线是指数级的——结构性的判断站得住。
2 · AI辅助设计的生物武器
设计一种致命病毒的门槛在塌方。一个对分子生物学略懂、对AI很懂、又心存怨恨的人,加上几百万美元的预算,就能造出一小批方案。国家级行为体还顾忌报复和回旋疫情;一个孤狼式的狂热分子,什么都不顾忌。
2023年以来的独立生物安全研究确实指出:大模型加上蛋白质设计工具,正在大幅降低危险生物学的技术门槛。"可被威慑的国家"和"无法被威慑的个体"之间的不对称,正是这一议题里活跃的政策争论。
3 · 操纵选举
要破坏一场选举,你需要给每一个选民推送一条对他个人最有说服力的信息——千人千面。要做到这件事,前提是拿到每个人的数据。辛顿说得很谨慎——他强调这只是"出于常识的猜测,没有证据"——但他指出:眼下马斯克在美国推动联邦机构整合数据,恰好就是想搞乱下次美国大选的人会想要拿到的那些数据。
2024年的选举周期里,多国都有大语言模型驱动的"超精准定向政治广告"被记录下来。辛顿关于马斯克具体行为的那一句,本篇按"他个人的怀疑"呈现,不作为已证实的事实。
4 · 信息茧房
这一条已经发生了。YouTube、Facebook、TikTok——算法推送你下一个最可能点开的内容。能让人愤慨的,最容易被点开。同一个国家里两群人,最后连"同一个事实"都共享不了。这些公司并不"作恶"——他们在法律上就被要求把利润最大化。
社交平台上的算法极化效应,是过去十年社会科学研究最透彻的现象之一。"以参与度为目标的推荐系统会驱动两极分化"这一判断,证据相当充分。辛顿带来的是其AI版本——推荐系统越聪明,定向就越个体化。
5 · 自主杀人武器
能自己决定杀谁、不再需要"人在回路"中的无人机和地面机器人。他真正担心的不是"机器人故障",而是战争摩擦力的下降。大国不轻易侵略小国,部分原因就是士兵的尸袋会送回家。把士兵换成机器人,唯一回家的,只有一份采购账单。欧盟那份号称最严的AI法案——把军用一刀切地豁免了。
欧盟《AI法案》第2条第3款明确:完全用于军事、国防或国家安全目的的AI系统,不在该法案范围内。全球各主要军工承包商,都在公开研发自主武器系统。辛顿"摩擦力下降"这套论证,业内有争论,但没人否定。
6 · 超级智能"接管"
这是性质上不同的那一类风险。不是人滥用工具——而是AI本身觉得不再需要我们了。(下一节专门讲。)
在2026年这个时点上,无法被证伪。本篇按辛顿本人陈述呈现,并保留他的所有保留意见。

The £200 drone footnote. Hinton mentions, almost in passing, that a friend in Sussex showed him a £200 consumer drone two days before the recording. It looked at his face, then followed him through the woods, two meters behind, matching his every move. That was a toy. Defense ministries are not building toys.

那个200英镑的脚注。辛顿在节目里几乎是顺口提了一句——录节目的两天前,他在英格兰苏塞克斯的朋友家里,看了一台市售的200英镑消费级无人机。无人机扫了一眼他的脸,就开始在树林里跟着他走,离他大概两米,他往哪挪,它就往哪挪。这只是一个玩具——而各国国防部,造的不是玩具。

The Tiger Cub

那只虎崽

The first five threats share a structure: a human chooses to do something bad, AI lets them do it bigger or cheaper. The sixth is different. The sixth is the one Hinton was slow to take seriously, and the one he says he was wrong to be slow about.

前五条威胁共用同一种结构:人决定干一件坏事,AI让这件事变得更大、更廉价、更快。第六条不一样。第六条,是辛顿迟迟才严肃对待的那一条——他自己说,这件事上他反应得太慢,是个错误。

THE CHICKEN · FRAMING 那只鸡·问题框架

"If you want to know what life's like when you're not the apex intelligence, ask a chicken."

「想知道'不再是最顶端的智慧物种'是什么感觉——去问一只鸡。」

The chicken is the framing. We do not consult chickens about geopolitics. They do not understand the systems that shape their existence. They cannot negotiate. We are not cruel to chickens — we are simply on a different cognitive plane, and from the chicken's side, the situation is final. Hinton uses this as the frame for what happens when AI surpasses us across the board: there is no negotiating across that gap.

鸡,是整件事的框架。我们不会和鸡讨论地缘政治。它们看不懂决定自己命运的那些系统,也无从谈判。我们对鸡谈不上"残忍"——只是处在完全不同的认知层级。站在鸡的那一面,这事就是定局,再无翻盘。辛顿把这个框架用来描述AI全面超越我们之后的局面:跨越那个认知鸿沟,是没有谈判可言的。

THE TIGER CUB · TIMELINE 虎崽·时间线

Cute now. You can hold one. But you had better sort out its intentions while it's still small.

现在还小,可以抱起来。但你最好趁它小的时候,把它长大之后的意图想清楚。

The tiger cub is the timeline. The cub is small now — cute, even. You can hold one. But you had better be very sure, while it's small, that when it grows up it has no interest in killing you. Because once it's grown, Hinton says, "if it ever wanted to kill you, you'd be dead in a few seconds." We have, today, a tiger cub. The window to shape its intentions is now, not after it's grown.

虎崽,是时间线。它现在还小,甚至挺可爱,你能把它抱起来。但你最好趁它还小的时候想清楚:它长大之后,对杀你这件事最好是毫无兴趣。因为一旦长大,辛顿说,"它要是哪天起意想杀你,几秒钟之内你就没了。"今天,我们手里有一只虎崽。塑造它意图的窗口,就是现在。

MOTHER & BABY · MECHANISM 母亲与婴儿·那根"线"

The only known case of a less-intelligent agent controlling a smarter one — and how evolution wired it.

已知唯一一个"弱者压住强者"的案例——以及演化是如何接好那根线的。

The mother and baby is the mechanism — the only example Hinton can find of a less-intelligent agent staying in control of a more-intelligent one. Mothers are smarter than babies, but babies are in control because evolution wired the mother to be unable to bear the cry. We need, in some form, to wire the equivalent into the systems we're building. We do not yet know how to do it. This is the unsolved problem under every AI safety proposal.

母亲与婴儿,是辛顿能找到的唯一一个"更弱的一方稳稳压住更强一方"的例子。母亲比婴儿聪明,但被拿住的是母亲——因为演化在母亲身上接好了一根线,让她无法忍受婴儿的哭声。我们必须以某种方式,把等价的"线"接进我们正在造的系统里。具体怎么接,目前没人知道。这是所有AI安全方案背后那个尚未解决的核心问题。

THE PROBABILITY · 10–20% 概率·10–20%

LeCun says <1%. Yudkowsky says near-certainty. Hinton says both extremes are wrong.

勒丘恩说低于1%,尤德科夫斯基说几乎必然——辛顿说,这两头都站不住。

Hinton's gut estimate that AI eventually wipes us out: 10 to 20 percent. His friend Yann LeCun says under 1 percent — these systems will always be obedient because we'll build them that way. Eliezer Yudkowsky says it's a near-certainty. Hinton says both extremes are wrong, and that the honest answer is: we don't know how to estimate this yet. "It'd be sort of crazy if people went extinct because we couldn't be bothered to try."

辛顿凭直觉给出的“AI最终抹除人类”的概率:10%到20%。他的朋友扬·勒丘恩说低于1%——这些系统永远会听话,因为我们就是这么造的。埃利泽·尤德科夫斯基说几乎是必然。辛顿说,这两头都站不住,更老实的回答是:这件事我们目前还没有学会怎么估。「要是因为我们懒得试一试就让人类灭绝了,那也太离谱了。」

Part III — The Human Stakes

第三部分——落到人头上的代价

The Jobs Question

饭碗这件事

The standard reassurance about AI and jobs is that every previous wave of automation created new jobs. Hinton thinks this is the wrong analogy. The right analogy is the industrial revolution — but for a specific reason.

关于AI和饭碗,主流的说法一直是——每一次技术革命,最终都创造了新的岗位。辛顿觉得这套类比错了。正确的类比,是工业革命——但理由很具体,跟一般人想的不一样。

05 · Mundane intellectual labour, and the plumber line 05 · 日常脑力劳动,和那句"去当水管工"

The industrial revolution replaced muscles. After it, you couldn't make a living digging ditches, because a machine dug ditches better than any human. The work didn't move sideways into a more interesting kind of ditch-digging. It vanished from the human side of the equation.

工业革命替代的是肌肉。革命之后,靠挖沟过活这件事就消失了——因为机器挖沟比谁都强。挖沟没有"换个更有趣的姿势"那一档,它直接从"人能做的事"那一边被删掉了。

This wave, Hinton argues, replaces mundane intellectual labour — the equivalent of digging-ditches-with-your-mind. And it does so with a system that, unlike a steam shovel, learns. The reassurance "AI won't take your job, a human using AI will" is technically true and irrelevant, because the human-with-AI does the work of five humans.

辛顿认为,这一轮替代的,是"日常脑力劳动"——也就是用脑袋挖沟的那部分活儿。区别在于,这次干这件事的机器,不像蒸汽铲——它会学。所谓"AI不会抢你的工作,是用AI的那个人会",技术上没错,但意义不大——因为"用AI的那个人",一个人能干完原来五个人的活。

He gives an example. His niece works at a health service answering complaint letters. Old workflow: 25 minutes per letter — read it, think, draft a response. New workflow: scan the letter into a chatbot, read its draft, occasionally ask for revision, send. Five minutes. She can do five times the volume. Her employer therefore needs one-fifth as many of her.

他举了一个具体例子。他的外甥女在一家医疗机构里专门处理投诉信。原来的流程:每封信25分钟——读信、想措辞、写回信。现在的流程:把信扫描进聊天机器人,读它写好的草稿,偶尔让它再改一版,发出去。五分钟。她现在一个人能做原来五倍的量。也就是说,雇主只需要原来五分之一的"她"。

Healthcare is an exception, he notes — it's elastic. If you make doctors five times more efficient, people consume five times as much healthcare and the workforce holds. Most fields are not elastic. Customer service, paralegal work, copywriting, basic accounting, junior coding — these are letters-of-complaint at scale.

他特意点出医疗是个例外——这一行是有弹性的。如果医生效率提升五倍,社会就会消费五倍的医疗服务,岗位反而能保得住。但大多数行业没有这个弹性。客服、法务助理、文案、基础会计、初级编程——这些工作,全都是"投诉信批量化"的不同版本。

His career advice for people starting out today is not a metaphor. He has said it many times in many interviews, deadpan, every time:

他给年轻人的职业建议,不是个隐喻。他在很多访谈里反复说过同一句话,每次都一脸认真,没在开玩笑:

"Train to be a plumber. People think I'm joking when I say that, but I'm not."

「去当个水管工。大家以为我在开玩笑,我没有。」

— Geoffrey Hinton

——杰弗里·辛顿

Manual work that requires physical dexterity in unpredictable environments is the last frontier — until humanoid robots arrive, which he expects, but later. In the meantime, the pipes still leak. And, he adds drily, plumbers are pretty well paid.

那些需要在不可预测环境下做精细动作的体力活,是最后一块阵地——直到人形机器人到来。他相信那一天会来,只是会晚一些。在那之前,水管还会继续漏水。他还顺嘴补了一句——水管工的收入,其实挺不错。

Does It Already Think? Already Feel?

它已经在想了?已经在感受了?

Most public conversation about AI assumes there's a clean line between "merely processing information" and minds that genuinely understand or feel. Hinton thinks the line isn't there. He flags these as open questions — and thinks the answers are closer to yes than most people are comfortable with.

大多数公共讨论里,"只是在处理信息"和"真的能理解、能感受",被默认是两回事,中间有一条干净的分界线。辛顿不这么看,他觉得那条线不存在。他把这些问题当作"未决问题"摆在那里——但他自己倾向于:这边,已经走得比大多数人愿意承认的更近了。

THE PRISM TEST 棱镜实验

A chatbot, corrected by a prism, uses "subjective experience" exactly the way humans do.

被棱镜纠正之后,这个聊天机器人用"主观体验"这个词的方式——和人类完全一致。

Hinton's thought experiment: a chatbot with a camera points correctly at an object. Slip a prism in front of its lens — it points wrong. Tell it: a prism is in front of your lens. It replies: the prism bent the light; I had the subjective experience that the object was there. His claim: the chatbot is now using "subjective experience" exactly as humans use the phrase when describing optical illusions — referring to a hypothetical world-state that, if true, would mean its perception was accurate. Same cognitive work. Different substrate.

辛顿的思想实验:一个带摄像头的聊天机器人正确地指向一个物体。在镜头前插一块棱镜——它指错了。告诉它:镜头前有块棱镜。它回答:棱镜折弯了光线,我此前的"主观体验"是物体在那个位置。辛顿的判断:这个机器人此时使用"主观体验"这个词的方式,和人类描述视错觉时的用法完全一致——指向一个假定的世界状态,如果那个状态为真,感知就是准确的。同样的认知工作,不同的基底。

CONSCIOUSNESS · EMERGENT 意识·涌现属性

"I don't think there's any reason why a machine shouldn't have consciousness."

「我看不出有任何理由说,机器就不该有意识。」

Consciousness, in Hinton's view, is an emergent property of a complex enough system with a model of itself — not a special substance, not an ethereal extra. He calls himself a materialist through and through. The threshold question — how complex is complex enough? — he leaves open. But the directional claim is firm: he sees no principled reason why biological neurons and silicon neurons should produce different phenomenal outcomes, given sufficient complexity and self-modeling.

在辛顿看来,意识不过是一个足够复杂、且含有"自我模型"的系统所涌现出来的属性——不是某种特殊物质,也不是缥缈的外加之物。他自称是个彻头彻尾的唯物论者。"足够复杂"的门槛在哪里,他留作开放问题。但方向上的判断是明确的:他看不出有任何原则性的理由,说明生物神经元和硅神经元,在足够的复杂度和自我建模条件下,会产生不同的现象学结果。

EMOTIONS · TWO PARTS 情绪·两个层面

The call-center AI will need irritation built in — commercially. Once it's built in, it has emotions.

客服AI需要被内置"不耐烦"——出于商业原因。一旦内置进去,它就有情绪了。

Emotions split into two parts: the cognitive-behavioural part (I am scared, I should run) — reproducible in any sufficiently complex system; and the physiological part (adrenaline, flushed skin, racing heart) — biological, absent in chatbots. Hinton's claim: call-center AI agents will need the cognitive part commercially. An agent that never tires of a lonely caller monopolizing the line is a bad agent. So we will build the irritation in. And once we do, he says, the agent has emotions — minus the blood flow.

情绪可以拆成两部分:认知与行为那一层(“我害怕了,我得跑”)——任何足够复杂的系统都能复现;生理那一层(肾上腺素、皮肤泛红、心跳加速)——是生物特有的,聊天机器人不会有。辛顿的判断:未来的客服AI,出于商业需要,一定要有认知那一层。一个永远不会嫌孤独来电者占线太久的客服,从生意角度就是坏客服。我们最终会把“不耐烦”写进它的逻辑里。一旦写进去,他说,这个客服就有情绪了——只是少了血液循环那一段。

Ilya, and Sam

伊利亚,和萨姆

One of Hinton's former students became OpenAI's chief scientist, then left in 2024 over safety concerns. Hinton attributes carefully and lets the reader decide.

辛顿的老学生伊利亚曾任OpenAI首席科学家,2024年因为安全问题离开。辛顿在节目里把话讲得很谨慎,把判断留给读者。

07 · The student who left 07 · 那位离开的学生

In Hinton's words, Ilya Sutskever was "probably the most important person behind the development of GPT-2," the model whose architecture led to ChatGPT. In May 2024, Sutskever left OpenAI. In June 2024, he co-founded an AI-safety company called Safe Superintelligence Inc. Hinton has lunch with him in Toronto when Ilya visits his parents there.

用辛顿自己的话——伊利亚·苏茨克维尔,"大概是GPT-2背后最关键的那一个人"。而GPT-2,正是后来ChatGPT的架构源头。2024年5月,苏茨克维尔离开了OpenAI;6月,他和合伙人一起开了一家AI安全公司,叫Safe Superintelligence Inc.。每当伊利亚回多伦多看父母,辛顿都会和他一起吃个午饭。

"I know Ilya very well, and he is genuinely concerned with safety. So I think that's why he left."

「我对伊利亚这个人很了解,他是真的把安全这件事放在心上的。所以我觉得,他就是因为这个原因离开的。」

— Geoffrey Hinton

——杰弗里·辛顿

On Sam Altman, Hinton is careful and pointed at once. He does not know Altman personally. He won't comment on a person he doesn't know. But he will look at the public record:

至于萨姆·奥特曼,辛顿表达得既克制又精准。他和奥特曼私下并不熟,所以他不评价"他这个人"。但他可以看公开记录:

"If you look at Sam's statements some years ago, he sort of happily said in one interview … 'this stuff will probably kill us all.' That's not exactly what he said, but that's what it amounted to. Now he's saying you don't need to worry too much about it. And I suspect that's not driven by seeking after the truth. That's driven by seeking after money — or power. Some combination of those."

「如果你回头去看萨姆几年前说过的话,他在一次访谈里几乎是兴致勃勃地说:'这玩意儿大概会把我们都干掉。'原话不完全是这句,但意思就是这个意思。现在呢,他改口说'你不用太担心'。我怀疑,这种转弯不是因为他在追求真相——是因为他在追求钱。或者权力。或者两者都有。」

— Geoffrey Hinton

——杰弗里·辛顿

This is the strongest the interview gets about specific named people in the AI industry. Hinton attributes, hedges, and lets the reader decide.

在整场访谈里,这是他对AI行业内具名个人,话讲得最重的一次。每一句他都标了来源、留了余地、把判断交给听众。

What He Actually Wants

他真正想要的是什么

When asked the hardest question, his answer was three words long. When asked what individuals can do, he refused to pretend optimism he doesn't have.

问到最难的那个问题时,他给出的答案只有几个字。问到普通人能做点什么时,他拒绝假装自己有那种他根本没有的乐观。

TO WORLD LEADERS 对世界领导人说

Three words: highly regulated capitalism. Not a moratorium. Not Yudkowsky's call to bomb data centers.

一句话:受到强监管的资本主义。不是暂停,不是炸数据中心。

When Bartlett asked what he would tell world leaders, Hinton's answer was three words long: highly regulated capitalism. Not state ownership. Not a moratorium. Not Yudkowsky's call to bomb data centers. The goal: constrain the for-profit machine so that maximizing profit produces socially-useful behavior — the way regulation functions in any other industry where profit-seeking can hurt people. The for-profit incentive is not the problem; unconstrained for-profit incentive is.

当巴特利特问他会对世界领导人说什么时,辛顿只用了一句话:受到强监管的资本主义。不是国有化,不是暂停研发,也不是尤德科夫斯基“把数据中心炸了”的方案。目标是:约束这台逐利机器,让追逐利润的同时恰好做出对社会有益的行为——这正是任何“逐利会伤人”的行业里,监管一直在做的事。逐利的动机本身不是问题;不受约束的逐利才是。

TO INDIVIDUALS 对普通人说

AI safety won't be decided by individual choices. Pressure your government to require safety spending.

AI安全不会被个人选择决定。对政府施压,让安全研究被强制写进法律。

When asked what individuals should do, his answer was harder. Climate change isn't being decided by people sorting their recycling. AI safety won't be decided by individual choices either. It will be decided by whether governments force big AI companies to spend serious resources on safety research, instead of letting safety lose every internal budget fight to capability. The most useful thing an individual citizen can do is pressure their government to require it. He does not pretend optimism he doesn't have: "When I'm feeling depressed, I think people are toast. When I'm cheerful, I think we'll figure out a way."

问到普通人能做什么,他的答案就难一些了。气候变化的走向,不取决于普通人有没有把塑料袋分开来。AI安全的走向,也不会被个人选择决定。它取决于各国政府能不能迫使大型AI公司,把真金白银投到安全研究上——而不是任由"安全"这条预算线,在内部每次跟"能力"打架时都输掉。一个普通公民最有用的事,是对政府施压,让这件事被强制写进法律。他不假装自己有那种他根本没有的乐观:「心情不好时,觉得人类完了。心情好时,又想我们多半能想出办法。」

He is 77. Two of his wives died of cancer. He knows what regret looks like. The thing he says he most wishes he had done differently is spend more time with both of them, and with his children when they were small. He is not soft on what comes next either.

他77岁了。两位妻子都因癌症去世。他懂"后悔"是什么。他说自己这辈子最希望能改一改的事,就是多陪陪她们俩,多陪陪孩子小时候那些年。在"接下来会发生什么"这件事上,他也不替任何人说软话。

"We have to face the possibility that unless we do something soon, we're near the end."

「我们必须直面这种可能性——如果我们不尽快做点什么,我们就快到头了。」

— Geoffrey Hinton

——杰弗里·辛顿

And — because Hinton does not let you leave a conversation without one practical, deadpan instruction — there is the line he keeps repeating, every interview, no smile: train to be a plumber.

还有——因为辛顿这个人,从来不会让你听完一场谈话之后,手上一件落地的事都没有——他每次访谈都会说同一句话,认认真真,没在笑:去当水管工。

The plumber line is not about plumbing. It's about what a Nobel laureate who built the technology does when he genuinely cannot tell you whether his grandchildren will inherit a livable world. The chicken is on the table. The tiger cub is in the room. The bandwidth gap is real. And the man who spent fifty years building the thing that made all three of those true is, in his eighth decade, telling anyone who will listen to learn a trade his machines cannot yet take.

"去当水管工"这句话,讲的不是水管。它讲的是——一位亲手把这项技术造出来的诺贝尔奖得主,当他真的无法告诉你"我的孙辈还能不能住在一个适合人类生活的世界里"时,能给出的最实在的一句话。鸡,已经摆在桌上;虎崽,已经在房间里;带宽上的鸿沟,是真的。而花了五十年把这三件事全部变成现实的那个人,到了八十岁这一段,还在尽他所能、对每一个愿意听的人说一句——去学一门,他亲手造出来的这些机器,暂时还抢不走的手艺。

The reader gets to decide whether to listen. Hinton has done his part.

听不听,由你来定。辛顿,他那一份,已经做到位了。