
OpenAI 首席技术官:GPT4仅高中生水平,博士级AI拟2025年底发布!GPT5还是GPT6?

风清徐徐来 AI变现研习社
2024-08-22

本视频是 OpenAI 首席技术官 Mira Murati 在其母校达特茅斯工程学院进行的一次对话的记录。

说明:视频中文字幕由万兴喵影自带翻译功能生成,后文文字翻译由 Claude 3.5 Sonnet 和 GPT-4o 完成(见文中标注)。

背景:

图:左一为 Mira Murati


达特茅斯学院(Dartmouth College)是一所位于美国新罕布什尔州汉诺威的私立研究型大学。

达特茅斯工程学院(Thayer School of Engineering)是达特茅斯学院下属的一所工程学研究生院,也是常春藤联盟中唯一一所专注于工程领域的学院。Thayer School 以其创新的教学方法、跨学科研究和紧密的工业联系而闻名。

OpenAI 是全球领先的人工智能公司,其开发的 GPT 系列大模型与谷歌、马斯克旗下的 xAI 等同台竞技,长期被视为行业标杆。关于 OpenAI 和 GPT-4o 的详细介绍和基础操作,笔者专门做了一期视频介绍,链接见文末。

Mira Murati 现任OpenAI 首席技术官,她也是达特茅斯工程学院2012年毕业生。

活动由达特茅斯 Thayer 工程学院院长 Alexis Abramson 开场致辞,对话环节由 Jeff Blackburn 主持,内容涵盖了 Mira 从达特茅斯到特斯拉再到 OpenAI 的职业旅程。Mira 重点介绍了她在 AI 技术(如 ChatGPT 和 DALL-E)方面的工作,强调了 AI 在教育、创意和各个行业中的变革潜力,并分享了 AI 不久后将对工作、生活、学习、社会的影响。

以下是访谈核心内容:

  • 从某种意义上说,(现在AI的繁荣)是在数十年人类努力的基础上继续构建,是三个因素的结合:神经网络、大量的数据,以及大量的计算能力;而且随着数据和算力投入的增加,能力的提升相当线性。

  • 生成式 AI 本质上就是依靠大量的基础资料,从概率角度,由上一个 token 预测下一个 token(见本要点列表后的示意代码)。

  • OpenAI 最开始做的是 API(应用程序接口),交给第三方商业公司去开发产品,结果发现很难商业化落地,于是 OpenAI 亲自下场,开发了 ChatGPT,并延续至今。

  • GPT-3 大致是幼儿水平,GPT-4 是优秀的高中生,几年内(原话是 in the next couple of years)将有在特定任务上达到博士级别的 AI 出现;当主持人追问是否一年半后实现时,Mira Murati 笑着点了头。

  •  ChatGPT 最显著的作用之一是将 AI 带入公众意识,让人们直观地了解这项技术的能力和风险。尝试它、在你的业务中使用它,并看到它可以做什么,不可以做什么,才能真正了解其影响,并为未来做好准备。

    (译者:从身边做起,了解 AI、熟悉 AI。这也是笔者近期上线 36 节视频《AI办公常识课》的初衷,GPT-4o 常识视频地址 https://t.qianliao.tv/SOYX5z11 )

  •  AI 会影响到每个行业,尤其是在知识工作者方面。或许在物理世界的应用会稍微滞后一些,但知识工作者方面几乎没有一个领域不会受到影响。

  • AI 可以显著加快初稿的完成速度,无论是创造新设计、编写代码、写论文还是写电子邮件,AI 都可以大大简化这些过程,使你能够专注于更具创造性和更复杂的任务;尤其是在编程中,你可以将许多繁琐的工作交给 AI。

  • AI会成为创意领域的协作工具,更多的人会变得更加有创造力。

  • 关于AI导致的新工作与失业。新工作将出现,但具体多少、变化的程度和失业的情况,我们还不清楚。我认为没人真正知道,因为这些还没有被系统研究。经济将会转型,这些工具将创造巨大的价值,问题在于如何将这些价值分配到社会中,是通过公共福利、基本收入还是其他新的系统?还有很多问题需要探索和解决。

  • 关于AI在教育领域的应用。AI 在教育领域的应用将是非常强大的,它可以提升我们的创造力和知识水平。我们有机会构建高质量且非常易于获取的教育资源,你可以为世界上的任何人提供个性化的教育。

  • 价值观是通过数据传递给 AI 的,包括互联网数据、许可数据和人工标注数据。不同的组织可以通过数据赋予 AI 不同的价值观。这是一个非常困难的问题,因为人类本身就对很多问题存在分歧,而技术问题又更加复杂。
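
(译者注:为便于理解"从上一个 token 预测下一个 token",下面附上一段极简的 Python 示意代码。它只用二元组词频做概率采样,语料是虚构的玩具例子;真实的大模型是用神经网络在海量数据上估计这一概率分布,远比这复杂。)

import random
from collections import defaultdict

# 玩具语料:真实模型的训练数据是海量文本,这里仅作示意
corpus = "the cat sat on the mat the cat ate the fish".split()

# 统计二元组:词 prev 之后各个词出现的次数
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    """按条件概率 P(下一个词 | 上一个词) 采样下一个词。"""
    cand = counts[prev]
    tokens, weights = list(cand), list(cand.values())
    return random.choices(tokens, weights=weights)[0]

print(next_token("the"))  # 以 1/2、1/4、1/4 的概率输出 cat/mat/fish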

(下文由 Claude 3.5 Sonnet 翻译)

Alexis Abramson: 下午好,各位。很高兴在我们的新大楼里看到座无虚席。我是 Alexis Abramson,达特茅斯工程学院的院长。非常荣幸能邀请大家参加这场特别的活动,与 Mira Murati 对话。她是美国人工智能领域最杰出的领导者之一,同时也是达特茅斯工程学院的校友。

在开始之前,我想特别欢迎一位特殊的嘉宾,Joy Buolamwini,她在 AI、AI 伦理和算法正义方面也颇有建树,明天将获得达特茅斯的荣誉学位。同时热烈欢迎 Mira 现在的家人,以及她在达特茅斯时期的"家人们",包括她的兄弟 Ernel Murati,他也是 2016 届 Thayer 学院的校友。

感谢我们的合作伙伴 Neukom 计算科学研究所和计算机科学系。从 1956 年达特茅斯举办的第一次开创性人工智能会议,到我们目前在大型语言模型和精准健康方面的多学科研究,达特茅斯一直处于 AI 创新的前沿。

因此,我们特别高兴能邀请到 Mira,OpenAI 的首席技术官,同时也是我们工程学院 2012 年的毕业生。她因在当今最受关注的 AI 技术开发方面的开创性工作而闻名。在 OpenAI,她领导了 ChatGPT 和 Dall-E 等变革性模型的开发,为未来的生成式 AI 技术奠定了基础。

在 Thayer 学习期间,她将工程技能应用于与达特茅斯方程式赛车队一起设计和建造混合动力赛车。明天在毕业典礼上,她将获得达特茅斯颁发的荣誉理学博士学位。最后,今天主持我们对话的是 Jeff Blackburn,他是 1991 年达特茅斯毕业生,现任达特茅斯董事。

Jeff 的职业生涯主要集中在全球数字媒体和技术的发展上。他曾担任亚马逊全球媒体和娱乐高级副总裁,直到 2023 年,并在公司担任过多个领导职位。他对技术、媒体和娱乐交叉领域的洞察肯定会让我们今天的对话更加精彩。 那么,不多说了,我把对话交给他们。请大家欢迎 Mira Murati 和 Jeff Blackburn。(观众鼓掌)

Jeff Blackburn: 谢谢你,Alexis。这栋大楼真漂亮,太好了。Mira,非常感谢你能来这里,抽出时间。我只能想象你现在的日程有多忙。

Mira Murati: 能来这里真是太好了。

Jeff: 你能为大家抽出这么多时间真是太好了。

Mira: 我真的很高兴能来这里。

Jeff: 我想直接开始,因为我知道每个人都想听听你生活中发生的事情,以及你正在构建的东西,因为这真的很吸引人。也许我们应该从你开始说起,你离开 Thayer 后,在特斯拉工作了一段时间,然后去了 OpenAI。

你能描述一下那段时期,以及加入 OpenAI 的早期情况吗?

Mira: 是的,所以我...在离开 Thayer 之后,我其实短暂地在航空航天领域工作过,然后我意识到航空航天行业发展有点慢,而我对特斯拉的使命非常感兴趣,当然还有在建立一个可持续的交通未来方面的创新挑战,所以我决定加入他们。

(译者补充:Mira 于 2012 年至 2013 年期间在法国航空航天公司 Zodiac Aerospace 工作,担任高级概念工程师。她随后在 2013 年加入特斯拉,担任 Model X 项目的高级产品经理,负责开发这款豪华电动 SUV,并参与了自动驾驶技术的早期版本)

Mira: 在参与 Model S 和 Model X 项目之后,我想我并不想成为一个汽车人。我想在某种程度上推动社会前进,同时也要面对真正困难的工程挑战。当时在特斯拉工作时,我对自动驾驶汽车产生了浓厚的兴趣,特别是将计算机视觉和 AI 技术应用于自动驾驶汽车的交叉领域。

我想,"好吧,我想在不同领域学习更多关于 AI 的知识。于是我加入了一家创业公司,在那里我领导工程和产品开发,将 AI 和计算机视觉应用于空间计算领域,思考下一代计算接口。当时,我认为未来会是虚拟现实和增强现实。

现在我认为情况有所不同,但我当时在想,如果你可以用手来与复杂信息交互,无论是公式、分子还是拓扑学概念,你就可以以更直观的方式学习这些东西并与之互动,从而扩展你的学习。

结果证明 VR 那时还为时尚早。但这让我有机会在不同领域学习 AI,也让我对 AI 的发展程度和可能的应用有了不同的视角。

Jeff: 所以在特斯拉的自动驾驶项目中,你看到了机器学习、深度学习。你能预见它的发展方向。

Mira: 是的,视觉技术。但当时还看不太清晰。

Jeff: 你当时有和 Elon 合作吗?

Mira: 我确实在最后一年与 Elon 有合作。但当时并不完全清楚它会如何发展。那时,它仍然只是将 AI 应用于狭窄的领域,而不是普遍应用。你只是将它应用于非常具体的问题,在 VR 和 AR 中也是如此。

从那时起,我想我不仅仅想将它应用于具体问题。我想学习研究本身,真正了解正在发生什么,然后从那里出发应用到其他领域。所以这就是我加入 OpenAI 的原因,OpenAI 的使命对我很有吸引力。当时它是一个非营利组织,虽然现在结构已经改变,但使命并没有变。

当我六年前加入时,它是一个致力于构建安全的通用人工智能的非营利组织,除了 DeepMind 之外,它是唯一一家这样做的公司。当然,现在有很多公司在构建某种版本的这种技术。

Jeff: 是的,有一些。

Mira: 是的。这就是我加入 OpenAI 的旅程开始的方式。

Jeff: 明白了。所以你在那里已经建立了很多东西。我的意思是,也许我们可以为大家简单介绍一下机器学习、深度学习,现在的 AI。这些都是相关的,但又有所不同。那么,现在发生了什么,以及它是如何体现在 ChatGPT、Dall-E 或你们的视频产品中的?它是如何工作的?

Mira: 这并不是什么全新的东西。从某种意义上说,我们是在数十年人类努力的基础上继续构建。事实上,它确实始于这里。在过去十年左右发生的事情是这三个因素的结合:你有神经网络,然后是大量的数据,以及大量的计算能力。你将这三者结合起来,就得到了这些真正变革性的 AI 系统或模型,结果发现它们可以做这些令人惊叹的事情,比如完成通用任务,但我们并不真正清楚它是如何做到的。

深度学习就是有效的。当然,我们正在尝试理解并应用工具和研究来了解这些系统实际上是如何工作的,但我们知道它是有效的,因为我们在过去几年里一直在这样做。我们也看到了进步的轨迹,以及这些系统随着时间的推移是如何变得更好的。

以 GPT-3 为例,这是我们在大约三年半前部署的大型语言模型。GPT-3 能够做到……首先,这个模型的目标只是预测下一个标记(token)。

Jeff: 实际上就是预测下一个词。

Mira: 是的,基本上是这样。然后我们发现,如果你给这个模型这个目标来预测下一个标记,并且你用大量的数据训练它,使用大量的计算能力,你得到的也是一个实际上能够理解语言的模型,达到与我们相似的水平。

Jeff: 因为它读了很多书。它读了所有的书。

Mira: 它某种程度上知道下一个词应该是什么。

Jeff: 基本上互联网上的所有内容。但它并不是在记忆下一个词是什么。

Mira: 它实际上是在形成自己对之前见过的数据模式的理解。然后我们发现,好吧,不仅仅是语言。实际上,如果你把不同类型的数据放进去,比如代码,它也可以编程。所以,实际上,它并不在乎你放进去什么类型的数据。可以是图像,可以是视频,可以是声音,它都可以做同样的事情。

Jeff: 哦,我们稍后会谈到图像。

Mira: 是的。但是,文本提示可以给你图像或视频,现在你甚至看到了相反的情况。

Jeff: 是的,没错。

Mira: 所以我们发现这个公式实际上非常有效,数据、计算能力和深度学习,你可以放入不同类型的数据,你可以增加计算量,然后这些 AI 系统的性能会越来越好。

Mira: 这就是我们所说的扩展定律。它们并不是真正的法则。它本质上是一种统计预测,随着你投入更多的数据和更多的计算能力,模型的能力会不断提高。这就是今天推动 AI 进步的动力。
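
(译者注:Mira 说扩展定律"本质上是一种统计预测"。下面用 Python 做一个极简示意:把"损失随算力按幂律下降"拟合成 L = a * C^(-alpha),再外推更大算力下的损失。其中算力与损失的数值均为虚构,仅用来演示拟合与外推的思路,并非 OpenAI 的真实数据或方法。)

import numpy as np

# 虚构数据:训练算力(FLOPs)与对应的验证损失,仅作示意
C = np.array([1e18, 1e19, 1e20, 1e21])
L = np.array([3.2, 2.7, 2.3, 1.95])

# 幂律 L = a * C**(-alpha) 在对数坐标下是一条直线:log L = log a - alpha * log C
slope, log_a = np.polyfit(np.log(C), np.log(L), 1)
alpha = -slope
print(f"拟合得到 alpha ≈ {alpha:.3f}")  # 幂律指数

# "统计预测":外推算力再增加 10 倍(1e22 FLOPs)时的损失
print("预测损失:", np.exp(log_a) * (1e22) ** (-alpha))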

Jeff: 为什么你们从聊天机器人开始?

Mira: 从产品角度来说,实际上我们是从 API 开始的。我们真的不知道如何将 GPT-3 商业化。实际上,商业化 AI 技术是非常困难的。最初,我们认为这是理所当然的,我们非常专注于构建技术和做研究。

我们想,"这里有一个了不起的模型,商业伙伴们,拿去吧,在这个基础上构建令人惊叹的产品。"  然后我们发现,这实际上非常困难。这就是为什么我们开始自己动手。

Jeff: 这就引导你们构建聊天机器人?

Mira: 是的,因为我们试图弄清楚,为什么这些非常成功的公司很难将这项技术转化为有用的产品?

Jeff: 我明白了。

Mira: 这是因为它是一种非常奇怪的构建产品的方式。你是从能力开始的。你是从一项技术开始的。你并不是从"我想要解决世界上的什么问题"开始的。它是一种非常通用的能力。

Jeff: 这很快就导致了你刚才描述的情况,即更多的数据,更多的计算能力,更多的智能。这会变得多聪明? 我的意思是,听起来你的描述是,这种扩展是相当线性的,你增加这些元素,它就会变得更聪明。在过去几年里,ChatGPT 变得更聪明了吗?它会多快达到人类水平的智能?

Mira:  是的,这些系统在特定任务上已经达到了人类水平,当然在很多任务上还没有。如果你看改进的轨迹,像 GPT-3 这样的系统可能是幼儿级别的智能。然后像 GPT-4 这样的系统更像是聪明高中生的智能。在接下来的几年里,我们正在寻求特定任务上的博士级智能。

Jeff: 比如?

Mira: 所以事情正在变化和迅速改进。

Jeff: 你是说一年后?

Mira: 是的,可能一年半左右吧。

Jeff: 到那时,你与 ChatGPT 对话时,它会显得比你更聪明?

Mira: 在某些方面,是的。在很多方面,是的。

Jeff: 也许一年就能实现了。

Mira: 我的意思是,是的,可能会。大概吧。大概。

Jeff: 相当接近了。这确实引发了其他问题。我知道你在安全方面已经表态很多,我很高兴也很自豪你在做这些。但我的意思是,人们确实想听你谈谈这个。

(下文由 GPT-4o 翻译)

Jeff:所以,未来的三年内,当它们极其智能时,你会怎么考虑?它们能通过所有的考试,并开始自己连接到互联网,进行各种操作。这是真实存在的风险吗?作为 CTO 和产品负责人,你是怎么考虑这些问题的?

 Mira:是的,我们正在大量思考这些问题。这些问题确实存在,未来会有具备代理能力的 AI 系统,它们能够连接到互联网,彼此沟通,完成任务,或者与人类协作,像今天我们与彼此协作一样。这些 AI 系统的安全性和社会影响非常重要,这些问题不是事后才考虑的。我们在开发技术的同时必须嵌入这些问题,并深入研究,以确保它们是安全的。

Mira:实际上,能力和安全并不是两个独立的领域,而是紧密相关的。指导一个更智能的系统,比指导一个不太智能的系统要容易得多。就像训练一只聪明的狗比训练一只笨狗要容易一样。因此,智能和安全是密不可分的。

 Jeff:它因为更智能而能更好地理解安全措施,是吗?

Mira:对,确实如此。所以目前有一个争论,即应该更多关注安全研究还是能力研究。我认为这有些误导。你必须同时考虑部署产品的安全性和周围的安全措施,但在研究和开发中,能力和安全实际上是相辅相成的。

从我们的角度来看,我们正以科学的方式来处理这个问题:尝试在完成训练前预测这些模型的能力,然后在此过程中为如何处理它们准备好防护措施。在这个行业中,这种方法并不常见。通常,我们训练这些模型,能力会自然涌现,我们称之为涌现能力,因为它们会突然出现。我们可以看到一些统计性能,但不知道这些统计性能是否意味着模型在翻译、生物化学或编程方面表现得更好。

发展这种新能力预测科学有助于我们为未来做准备。

Jeff:你说的所有安全工作其实和开发工作是一致的,对吗?

Mira:对,没错。

Jeff:是一条相似的路径。

Mira:对,所以你必须将其带入开发过程中。 

Jeff:那么,关于这些问题,比如 Volodymyr Zelensky 的视频,说“我们投降了”,或者 Tom Hanks 的视频,或某个牙医广告的视频,这些类型的使用呢?这是你的领域吗,还是需要一些法规来规范?你怎么看待这些问题?

Mira:是的,我的看法是,这些技术是我们开发的,所以它们的使用是我们的责任,但也是社会、民间社会、政府、内容创作者和媒体等共同的责任,以确保它们的正确使用。为了使其成为一种共同责任,我们需要让人们参与进来,提供工具让他们理解和提供保护措施。

Jeff:这些事情很难阻止,对吗?

Mira:是的,我认为不可能实现零风险,但问题在于如何尽量减少风险,并提供工具来做到这一点。例如,对于政府,我们需要带领他们一起前行,给他们早期访问的机会,教育他们了解情况。 

Mira:我认为 ChatGPT 最显著的作用之一是将 AI 带入公众意识,让人们直观地了解这项技术的能力和风险。这与阅读文章不同,尝试它、在你的业务中使用它,并看到它可以做什么,不可以做什么,才能真正了解其影响,并为未来做好准备。

Jeff:是的,这确实是个好点子。你们创建的这些接口,如 ChatGPT,让人们了解即将到来的技术。他们可以使用它,可以看到它的内在机制。

你认为在政府方面,现在有需要立即实施的某些法规吗?在未来一两年内,这些系统会变得非常智能,也许有点吓人,所以现在应该做些什么吗?

Mira:我们一直在呼吁对前沿模型进行更多的监管,这些模型将具有非常强大的能力,同时也有被滥用的风险。我们一直与政策制定者和监管机构保持开放合作。

Mira:对于短期和较小的模型,我认为应该允许生态系统中的多样性和丰富性,避免阻碍那些没有大量计算资源或数据的小公司或个人的创新。因此,我们一直在倡导对前沿系统进行更多的监管,因为它们的风险更高。

我们需要在这些前沿系统到来之前做好准备,而不是在变化已经发生时再去追赶。

Jeff:但你可能不希望华盛顿特区来监管你发布 GPT-5 的过程,对吧?

Mira:实际上,这取决于具体的法规。我们已经在做很多工作,这些工作如今已体现在白宫的自愿承诺中,也为联合国相关委员会制定 AI 部署原则提供了参考。

Mira:通常,正确的做法是先进行工作,了解实践中的情况,然后基于此制定法规。到目前为止,这正是我们所做的。要提前应对这些前沿系统,需要我们在能力预测方面做大量的科学研究,以制定正确的法规。

Jeff:希望政府里有能理解你们在做什么的人。

Mira:现在越来越多的人加入了政府,他们对 AI 有更好的理解,但还不够多。

Jeff:在你看来,哪些行业将最受 AI 影响?它已经在金融、内容、媒体和医疗等领域产生了影响。但你展望未来时,认为哪些行业会受到最大的影响?

Mira:我认为 AI 会影响到每个行业,尤其是在知识工作者方面。或许在物理世界的应用会稍微滞后一些,但知识工作者方面几乎没有一个领域不会受到影响。

Mira:目前在一些高风险领域,如医疗和法律,AI 的应用有些滞后。这是合理的,首先要确保低风险和中等风险的应用,然后才进入高风险领域。初期应该有更多的人类监督,然后逐渐增加 AI 的自主性,使其更具协作性。

Jeff:有没有你个人非常喜欢或者即将看到的应用案例?

Mira:是的,我认为基本上在任何你想做的事情的最初阶段,无论是创造新设计、编写代码、写论文或写电子邮件,AI 都可以大大简化这些过程。这是我最喜欢的应用场景。

AI 可以显著加快初稿的完成速度,使你能够专注于更具创造性和更复杂的任务,尤其是在编程中,你可以将许多繁琐的工作交给 AI。

Jeff:文档和类似的工作也是如此。

Mira:是的,文档编写也是。在行业中,我们已经看到许多应用。客户服务是一个重要应用(通过聊天机器人),还有写作和分析,因为现在我们已经将许多工具连接到核心模型,使其更加实用和高效。

Mira:你可以导入各种数据进行分析和过滤,也可以使用图像和浏览工具。如果你在准备论文,研究部分的工作可以更快、更严谨地完成。

我认为这是生产力的下一个层次,将这些工具添加到核心模型中,使其使用起来非常顺畅。模型可以决定何时使用分析工具、何时进行搜索等。
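
(译者注:"模型自行决定何时调用分析工具、何时搜索",在真实系统中是由模型根据上下文决定的。下面用几行 Python 以最粗糙的关键词规则示意"工具路由"这个概念,函数名与规则均为译者杜撰,仅帮助理解,并非实际实现。)

def route(query: str) -> str:
    """极简工具路由示意:根据请求特征选择工具(真实系统由模型自己判断)。"""
    if any(k in query for k in ("计算", "统计", "分析")):
        return "analysis_tool"   # 数据分析类请求 → 分析工具
    if any(k in query for k in ("最新", "新闻", "搜索")):
        return "web_search"      # 时效性请求 → 联网搜索
    return "direct_answer"       # 其余情况直接回答

print(route("帮我分析这份销售数据"))   # analysis_tool
print(route("搜索最新的 AI 新闻"))     # web_search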

Jeff:你是说它已经看过所有的电视剧和电影,并且可以开始编写剧本和制作电影了?

Mira:是的,它可以作为一种工具来做这些事情,我预计我们会与它合作,它会扩展我们的创造力。人们认为创造力是少数天才才能拥有的,但这些工具降低了门槛,让更多人能够参与进来,扩展他们的创造力。

Jeff:它可以很容易地给我提供 200 种不同的剧情结尾。

Mira:是的,你可以延续故事,故事永不结束,你可以不断创作。

Mira:但我认为它主要会成为创意领域的协作工具,更多的人会变得更加有创造力。

Jeff:目前存在一些恐惧。

Mira:是的,确实存在。

Jeff:但你认为这种恐惧会转变为人们找到方法,使工作的创意部分变得更好?

Mira:我认为是的,有些创意工作可能会消失,但如果这些内容质量不高,或许它们本不该存在。我真的相信,通过使用 AI 作为工具来进行教育和创作,它会扩展我们的智力、创造力和想象力。

Jeff:人们曾经认为 CGI 等技术会毁掉电影产业,但实际上没有。这次 AI 的影响更大,但每当有新技术出现,初始反应总是恐惧。

AI 在工作方面的影响怎么样?人们担心很多工作会被替代。你怎么看待 AI 对工作的影响,不仅仅是 OpenAI 的工作,而是整体上?

Mira:事实是,我们还不了解 AI 对工作的具体影响。第一步是帮助人们理解这些系统的能力,将它们融入工作流程,然后再预测和预估影响。

实际上,这些工具已经在被广泛使用,但我们没有对其进行系统研究,我们应该研究目前工作和教育的现状,这有助于预测未来的变化并做好准备。

我不是经济学家,但我可以预见许多工作会发生变化,一些工作会消失,一些工作会出现。我们不知道具体会怎样,但可以想象很多重复性的工作可能会被替代。

Jeff:像质量保证和代码测试之类的工作可能会消失,对吗?

Mira:是的,如果这些工作只是纯粹的重复性工作,且没有进一步的提升,这些工作可能会被替代。这只是一个例子,类似的很多工作都会受到影响。

Jeff:你认为会有足够多的新工作来弥补这些被替代的工作吗?

Mira:我认为会有很多新工作出现,但具体多少、变化的程度和失业的情况,我们还不清楚。

我认为没人真正知道,因为这些还没有被系统研究。经济将会转型,这些工具将创造巨大的价值,问题在于如何将这些价值分配到社会中,是通过公共福利、基本收入还是其他新的系统?还有很多问题需要探索和解决。

Jeff:高等教育在这方面有很大的作用,你提到的这些工作还没有完全展开。

Mira:是的。

Jeff:高等教育在 AI 发展的未来中应该扮演什么角色?

Mira:我认为真正要搞清楚如何利用这些工具和 AI 来推动教育的发展。AI 在教育领域的应用将是非常强大的,它可以提升我们的创造力和知识水平。我们有机会构建高质量且非常易于获取的教育资源,理想情况下是免费的,面向全世界的每一个人,涵盖任何语言和文化。

你可以为世界上的任何人提供个性化的教育。在达特茅斯这样的地方,课堂规模较小,老师能给予更多关注,但你仍然可以想象,甚至在这里,每个学生都能享受一对一的辅导,更不用说世界其他地方了。

Jeff:补充教学。

Mira:是的。因为我们没有花足够的时间学习如何学习,这通常要到大学阶段才会接触。而如何学习是非常基本的技能,如果掌握不好会浪费很多时间。课程、教材和问题集都可以根据个人的学习方式进行定制。

Jeff:你认为在像达特茅斯这样的地方,AI 可以补充教学吗?

Mira:绝对可以。

Jeff:就像私教一样。

开放提问

Jeff:我们可以开放提问吗?你介意回答现场观众的一些问题吗?

Mira:非常乐意。

Jeff:好。大家可以提问。

 Dave:我来开始吧。

Speaker:请稍等,我给你一个麦克风。

Dave:达特茅斯的第一批计算机科学家之一 John Kemeny 曾经说过,每个由人类构建的计算机程序都嵌入了人类的价值观,无论是有意还是无意。那么,我想问的是,你认为 GPT 产品中嵌入了哪些人类价值观?或者换句话说,我们应该如何在这些工具中嵌入尊重、公平、诚信和正直等价值观?

Mira:这是一个非常好的问题,也是一个非常难回答的问题。这是我们多年来一直在思考的问题。目前,如果你看这些系统,许多价值观是通过数据输入的,包括互联网数据、许可数据和人工标注数据。这些输入数据带有特定的价值观,这是一个集合。这些价值观很重要。

当你将这些产品投入使用时,你有机会通过大量用户获取更广泛的价值观。目前,ChatGPT 有一个免费版本,拥有全球超过 1 亿用户。这些用户可以提供反馈,如果他们允许我们使用他们的数据,我们会利用这些数据创建一个更符合人们期望的系统。但这是默认系统。我们希望在此基础上增加一个定制层,使得每个社区可以添加自己的价值观,比如学校、教会、国家甚至州,可以在这个默认系统之上提供更具体和精确的价值观。我们正在研究如何实现这一点。

但这显然是一个非常困难的问题,因为人类本身就对很多问题存在分歧,而技术问题又更加复杂。
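
(译者注:Mira 提到"用户反馈会被聚合,用来让系统更符合人们的期望"。下面是一段极简 Python 示意,展示把"赞/踩"反馈整理成偏好对的思路,这类偏好数据可供类似 RLHF(人类反馈强化学习)的流程训练奖励模型。数据结构与流程均为译者假设,并非 OpenAI 的实际实现。)

from dataclasses import dataclass

@dataclass
class Feedback:
    prompt: str
    response: str
    thumbs_up: bool  # 用户的"赞/踩"反馈

# 虚构的反馈日志,仅作示意
logs = [
    Feedback("介绍达特茅斯", "达特茅斯是一所位于新罕布什尔州的大学。", True),
    Feedback("介绍达特茅斯", "不知道。", False),
]

# 同一 prompt 下,被赞的回答作为"更优",被踩的作为"较差",组成偏好对
by_prompt = {}
for f in logs:
    by_prompt.setdefault(f.prompt, []).append(f)

pairs = []
for prompt, fs in by_prompt.items():
    good = [f.response for f in fs if f.thumbs_up]
    bad = [f.response for f in fs if not f.thumbs_up]
    pairs += [(prompt, g, b) for g in good for b in bad]

print(pairs)  # [(prompt, 更优回答, 较差回答)] → 可用于训练奖励模型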

Jeff:你能让它不生气吗?

Mira:生气?

Jeff:是的,不生气是其中的一个价值观吗?

Mira:这实际上应该由你决定。如果你作为用户...

Jeff:哦,如果你想要一个生气的聊天机器人,你可以拥有它。

Mira:是的,如果你想要一个生气的聊天机器人,你应该可以有一个生气的聊天机器人。

Jeff:好的,这边有个问题。

Joy: 你好,谢谢你。我是 Joy 博士。也祝贺你获得荣誉学位,以及你在 OpenAI 所做的一切。我很好奇你是如何看待创作权和生物识别权这两个问题的。你之前提到,可能有一些创造性的工作本就不该存在;与此同时,很多创作者正在思考同意与补偿的问题,以及无论是专有模型还是开源模型,其数据都是从互联网上获取的这一事实。我很想听听你对创作权所涉及的同意权和补偿权的看法。既然我们身处大学,你懂的,问题总是分成好几个部分(笑)。另一个需要思考的是生物识别权,比如涉及声音、人脸等等。考虑到最近关于"Sky"声音的争议,以及存在声音相似、长相相似的人,更不用说在这样一个重要的选举年里出现的各种虚假信息威胁,我很想听听你在生物识别权方面的看法。

Mira: 是的,所以……好的,我先从最后一部分说起。我们对语音技术进行了大量的研究,正因为这些技术带来了如此多的风险和问题,我们直到最近才发布它们。但同样重要的是,我们需要带领社会了解这项技术,通过设立防护措施和控制风险,让其他人研究并解决相关问题。例如,我们正在与一些机构合作,帮助我们思考与 AI 的互动问题,尤其是在语音和视频这类情感上极具感染力的模态出现之后。我们需要开始理解这些事情将如何发展,以及我们应该为哪些情况做好准备。在你提到的那个例子中,Sky 的声音并不是 Scarlett Johansson 的,也从未打算是她的,这是一个完全平行的过程:我负责挑选声音,而我们的 CEO 在和 Scarlett Johansson 沟通……不过,出于对她的尊重,我们撤下了这个声音。有些人觉得听起来有些相似,这些都是主观的。

Jeff: 是的,你可以设定一个红队流程,例如,如果某个声音被认为与某个非常知名的公众声音极其相似,你可能就不会选择那个。

Mira: 在我们的红队测试中,这个问题并没有出现,但这也说明我们需要更大规模的红队,以便在必要时更早发现这类问题。更广泛地说,对于生物识别权的问题,我认为我们的策略是先让少数人,比如专家或红队成员,帮助我们全面理解风险和能力,然后制定缓解策略;随着我们对这些策略的信心增强,再让更多的人接触到它。因此,我们目前不允许人们用这项技术克隆自己的声音,因为我们还在评估相关风险,对能否妥善处置潜在的滥用尚无把握。但对于目前小范围使用的几个特定声音,我们已经设置了防护措施,有足够的信心处理滥用问题,这基本上是一种扩大了的红队测试。当我们将其扩展到一千个用户,即我们的 Alpha 版本时,我们将与用户紧密合作,收集反馈、了解边缘情况,以此为将使用范围扩大到十万人做好准备;然后,用户数将增长到一百万,再到一亿,以此类推。但这一切都是在严格的控制下进行的,我们称之为迭代部署。如果我们无法对某些使用场景感到放心,我们就不会向更多用户开放这些场景,可能会以某种方式限制产品的功能,因为能力和风险总是并存的。
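
(译者注:下面用几行 Python 勾勒"迭代部署"的放量思路:每个阶段的风险评审通过后,才扩大到下一档用户规模。阶段划分与数字均为虚构示意,并非 OpenAI 的真实流程。)

# 虚构的放量阶段:(阶段名, 该阶段允许的用户规模)
STAGES = [("red_team", 100), ("alpha", 1_000), ("beta", 100_000), ("ga", 100_000_000)]

def allowed_users(review_passed: dict) -> int:
    """逐阶段放量:某阶段评审未通过,就停在上一个已通过阶段的规模。"""
    total = 0
    for name, cap in STAGES:
        if not review_passed.get(name, False):
            break
        total = cap
    return total

# 红队与 Alpha 评审都已通过,Beta 尚未通过 → 当前最多开放 1000 人
print(allowed_users({"red_team": True, "alpha": True}))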

同时,我们也在进行大量的研究,以帮助解决内容来源和真实性的问题,让人们有工具可以判断一件东西是否是深度伪造、是否在传播虚假信息等等。实际上,从 OpenAI 成立开始,我们就一直在研究虚假信息,并且已经开发了很多工具,比如水印和内容策略,这些都能帮助我们管理虚假信息的传播,尤其是在今年这样的全球选举年。我们一直在进一步加强这项工作。但这是一个极具挑战性的领域,我们作为技术和产品的开发者,需要投入大量的精力,同时还需要与公民社会、媒体和内容创作者合作,共同探讨如何解决这些问题。
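
(译者注:Mira 提到的水印工具的具体方案并未公开。作为参考,下面示意学术界提出的一种"绿名单"文本水印的检测思路:生成时让模型偏向一个由上文哈希决定的"绿名单"词表,检测时统计文本中绿名单词的占比,显著高于一半即疑似带水印。代码是译者的玩具实现,与 OpenAI 的实际工具无关。)

import hashlib

def is_green(prev: str, word: str) -> bool:
    """用哈希把 (前一个词, 当前词) 均匀分成"绿/红"两类。"""
    h = hashlib.sha256((prev + ":" + word).encode()).digest()
    return h[0] % 2 == 0

def green_fraction(text: str) -> float:
    """统计相邻词对中落在"绿名单"的比例;带水印文本会显著高于 0.5。"""
    words = text.split()
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

print(green_fraction("the cat sat on the mat"))  # 无水印文本应在 0.5 附近波动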

当我们开发像音频技术或 Sora 这样的技术时,在研究风险的红队成员之后,首批合作伙伴就是内容创作者,以此来了解技术如何帮助他们,并探讨如何打造一个既安全、实用、有益,又能推动社会进步的产品。这就是我们在 Dall-E 上所做的,也是我们在 Sora 视频生成模型上所做的。至于你问题的第一部分……

Joy: 创作权。

Mira: 所以对于...

Joy: 创作权

Joy: 关于报酬、同意,

Mira: 是的。

Joy: 控制和归功。

Mira: 是的,这也是非常重要和具有挑战性的。我们现在与众多媒体公司建立了合作关系,也赋予用户对其数据在产品中如何使用的大量控制权。如果他们不希望自己的数据被用来改进模型,或被我们用于任何研究或训练,那完全可以,我们就不会使用这些数据。对于创作者社区,我们会提前提供这些工具的使用权限,这样就能第一时间听到他们对如何使用这些工具的想法,以及如何打造最有用的产品。而且,这些都是研究的产物,我们并不需要不惜一切代价把它们做成产品;只有当我们找到一种真正有助于推动人类进步的方式时,才会去做。

我们也在尝试各种方法,让数据贡献者能够通过我们的工具获得补偿。这从技术角度和产品角度来看都非常复杂,因为需要弄清楚特定数量的数据在训练出的模型中创造了多少价值。单条数据的价值可能难以衡量,但如果能创建数据联合体和数据池,让人们把数据汇集起来贡献,效果或许会更好。过去两年里,我们一直在试验这种方法的各种版本,虽然还没有正式部署,但一直在技术层面进行试验,试图真正理解这个技术问题。我们已经有了一些进展,但这确实是一个非常棘手的问题。

Jeff: 的确如此。我想肯定会有很多新公司尝试为这个问题构建解决方案。

Mira: 是的,的确有其他公司在尝试。

Jeff: 这真的非常困难。

Mira: 确实。

Participant:你好,谢谢你抽出时间来和我们交流。我有一个简单的问题,如果你今天重新回到学校,你会重新选择学什么?你会再做些什么,不会再做些什么?

Mira:我想我还是会学习相同的东西,但可能会少一些压力。

是的,我想我还是会学习数学,并且可能会多学一些计算机科学课程。但我会少一些压力,这样可以更好地以好奇心和快乐的心态学习,这样更有成效。我记得当学生的时候,总是对未来感到有些压力。如果我知道现在的情况,我会对自己说,也对每个人说,"不要有压力",但不知为何,当时并没有真正听进去。当我与更年长的校友交流时,他们总是告诉我,"尽量享受当下,完全投入其中,少一些压力。"

我认为,在具体课程方面,现在特别重要的是要有广泛的知识面,了解各个领域的一些内容。在学校和工作后,我一直在研究组织中工作,不断学习。永不停息的学习是非常有用的,能够帮助你理解各个方面的一些内容。

Jeff:非常感谢,我相信你的生活也很有压力。(全场笑)

Mira:谢谢你们邀请我。(全场鼓掌)

Jeff:感谢你今天的到来,也感谢你为社会做的非常重要的工作。这确实非常重要,我很高兴你能在这个位置上。

Jeff:谢谢你能来到 Thayer 和达特茅斯。以这个建议作为结束,我觉得再合适不过了。希望我们学生们能够从中受益。再次感谢大家的到来,祝大家度过一个愉快的毕业周末。

(完)

以下是GPT-4o 常识课视频二维码,扫码或点击阅读原文可看



以下是原文:



(00:04) - Good afternoon, everyone. Great to see a nice packed room here in our new building. My name is Alexis Abramson, dean of Thayer School of engineering at Dartmouth, and it's truly a pleasure to welcome you all to this very special event, a conversation with Mira Murati, one of our nation's foremost leaders in artificial intelligence and also a Dartmouth Engineering alum.


(00:30) Before we get started, I wanna extend a special welcome to a special guest, Joy Buolamwini, who is also renowned for her work in AI, AI ethics, and algorithmic justice. She'll also be receiving her honorary degree from Dartmouth tomorrow. And a warm welcome to Mira and all of you who either are part of her family now, or are part of her family when she was here at Dartmouth, including her brother, Ernel Murati, also a Thayer alum from the class of 2016.


(01:03) Thank you to our partners at the Neukom Institute for Computational Science and the Department of Computer Science. From Dartmouth's very first seminal conference on artificial intelligence in 1956 to our current multidisciplinary research on large language models and precision health, Dartmouth has long been at the forefront of AI innovation.


(01:29) So we are especially thrilled to have Mira, Chief Technology Officer at Open AI and the Thayer School of Engineering's class of 2012 with us today. She is known for her pioneering work on some of the most talked about AI technologies of our time. At Open AI, she has spearheaded the development of transformative models like ChatGPT and Dall-E, setting the stage for future generative AI technologies.


(02:00) Now, during her time as a student at Thayer, she applied her engineering skills to design and build hybrid race cars with Dartmouth's Formula Racing Team. Tomorrow at commencement, she will receive an honorary doctorate of science from Dartmouth. Finally, moderating our conversation today is Jeff Blackburn, Dartmouth class of 1991 and current Dartmouth trustee.


(02:28) Jeff's extensive career is centered on the growth of global digital media and technology. He served as senior vice president of Global Media and Entertainment at Amazon until 2023, and has had various leadership positions at the company, his insights into the intersection of technology, and media, and entertainment will certainly make sure we have an engaging conversation today.


(02:57) So without further ado, I'll turn the conversation over. Please join me in welcoming Mira Murati and Jeff Blackburn. (audience applauding) - Thank you, Alexis. And this beautiful building, so nice. Mira, thank you so much for coming here and spending time. I can only imagine how crazy your days are right now.


(03:26) - It's great to be here. - It is so nice of you to take this time for everybody here. - Really happy to be here. - And I wanna get right to it because I know everybody just wants to hear what's going on in your life and what you're building 'cause it's just fascinating. Maybe we should just start with you, and you leave Thayer, you go to Tesla for a bit, then OpenAI.


(03:49) If you could just describe kind of that period and then joining OpenAI in the early days. - Yeah, so I was... Right after Thayer, I actually worked in aerospace briefly, and then I sort of realized that aerospace was kind of slow-moving and I was very interested in Tesla's mission and of course really innovative challenges in building basically a sustainable future for transportation, and I decided to join then.


(04:25) And after working on Model S and Model X, I thought I don't really wanna become a car person. I kind of want to work on different challenges, at the intersection of really advancing society forward in some way, but also in doing this really hard engineering challenges. And at the time when I was at Tesla, I got very interested in self-driving cars and sort of the intersection of these technologies, computer vision and AI, applying them to self-driving cars.


(05:03) And I thought, okay, I'd like to learn more about AI, but in different domains. And that's when I joined the startup where I was leading engineering and product to apply AI and computer vision in the domain of spatial computing, so thinking about the next interface of computing. And at the time, I thought it was going to be virtual reality and augmented reality.


(05:29) Now I think it's a bit different, but I thought, what if you could use your hands to interact with very complex information, whether it's formulas, or molecules, or concepts in topology? You can just learn about these things and interact with them in a much more intuitive way, and that expands your learning.


(05:57) So it turned out VR was a bit too early then. And so... But this gave me enough to learn about AI in a different domain and sort of I think my career has always been kind of at the intersection of technology and various applications, and it gave me a different perspective of how far along AI was and what it could be applied to- - So the Tesla self-driving, you saw machine learning, deep learning.


(06:26) You could see where this is going. - Vision, yes. - Yeah. - But not clearly- - Did you work with Elon? - I did, yes, in the last year especially. But it wasn't totally clear where it was going. At the time, it was still apply AI to narrow applications, not generally. You're applying it to very narrow specific problems, and it was the same in VR and AR.


(06:50) And from then I thought I don't really want to just apply it to specific problems. I want to learn about just the research and really understand what is going on, and, from there, then go apply to other things. So this is when I joined OpenAI, and Open AI's mission was very appealing to me. It was a nonprofit back then, and the mission hasn't changed.


(07:17) The structure has changed, but when I joined six years ago, it was a nonprofit geared to build safe, artificial general intelligence, and it was the only other company doing this, other than DeepMind. Now of course there are a lot of companies that are sort of building some version of this. - A handful, yeah. - Yes.


(07:39) And that's sort of how the journey started to OpenAI. - Got it. And so you've been building a lot since you were there. I mean, maybe we could just, for the group, just some AI basics of machine learning, deep learning, now AI. It's all related, but it is something different. So, what is going on there and how does that come out in a ChatGPT, or a Dall-E, or your video product? How does it work? - It's not something radically new.

(08:16) In a sense, we're building on decades and decades of human endeavor. And in fact, it did start here. And what has happened in let's say the last decade is this combination of these three things where you have neural networks, and then a ton of data, and a ton of compute. And you combine these three things, and you get this really transformative AI systems or models that it turns out they can do these amazing things, like general tasks, but it's not really clear how.


(08:57) Deep learning just works. And of course we're trying to understand and apply tools and research to understand how these systems actually work, but we know it works from just having done it for the past few years. And we have also seen the trajectory of progress and how the systems have gotten better over time.

(09:20) When you look at systems that like GPT-3, large language models that we deployed about three, yeah, 3 1/2 years ago. GPT-3 was able to sort of... First of all, the goal of this model is just to predict the next token. - [Jeff] It's really next word prediction. - Yes, pretty much. - Yeah. - And then we found out that if you give this model this objective to predict the next token, and you've trained it on a ton of data, and you're using a lot of compute, what you also get is this model that actually understands language


(10:05) at a pretty similar level to how we can. - [Jeff] 'Cause it's read a lot of books. it's read all the books. - It kinda knows- - Basically all the content- - What words should come next. - On the internet. But it's not memorizing what's next. It is really generating an understanding of its own understanding of the pattern of the data that it has seen previously.


(10:29) And then we found that, okay, it's not just language. Actually, if you put different types of data in there, like code, it can code too. So, actually, it doesn't care what type of data you put in there. It can be images, it can be video, it can be sound, and it can do exactly the same thing. - [Jeff] Oh, we'll get to the images.


(10:51) Yeah. (Jeff laughs) But yes, text prompt can give you images or video, and now you're seeing even the reverse. - Yes, yes, exactly. So you can do... So we found out that this formula actually works really well, data, compute, and deep learning, and you can put different types of data, you can increase the amount of compute, and then the performance of these AI systems gets better and better.


(11:19) And this is what we refer to as scaling laws. They're not actual laws. It's essentially like a statistical prediction of the capability of the model improving as you put in more data and more compute into it. And this is what's driving AI progress today. - [Jeff] Why did you start with a chatbot? - So, yeah, in terms of product, actually, we started with the API.


(11:50) We didn't really know how to commercialize GPT-3. It's actually very, very difficult to commercialize AI technology. And initially, we took this for granted, and we were very focused on building the technology and doing research. And we thought, here is this amazing model, commercial partners, take it and go build amazing products on top of it.


(12:16) And then we found out that that's actually very hard. And so this is why we started doing it ourselves. And we- - [Jeff] That led you to build a chatbot 'cause you just wanted to- - Yes, because we were trying to figure out, okay, why is it so hard for this really amazing successful companies to actually turn this technology into a helpful product? - I see.


(12:38) - And it's because it's a very odd way to build products. You're starting from capabilities. You're starting from a technology. You're not starting from what is the problem in the world that I'm trying to address. It's very general capability. - And so that leads to pretty quickly what you just described there, which is more data, more compute, more intelligence.


(13:06) How intelligent is this gonna get? I mean, it sounds like your description is the scaling of this is pretty linear, you add more of those elements and it gets smarter. Has it gotten smarter ChatGPT in the last couple years, and how quickly will it get to maybe human-level intelligence? - So yeah, these systems are already human-level in specific tasks, and of course in a lot of tasks, they're not.


(13:39) if you look at the trajectory of improvement, systems like GPT-3, we're maybe let's say toddler level intelligence. And then systems like GPT-4 are more like smart high schooler intelligence. And then in the next couple of years, we're looking at PhD-level intelligence for specific tasks. - [Jeff] Like? - So things are changing and improving pretty rapidly.


(14:12) - Meaning like a year from now? - Yeah, a year and a half let's say. - Where you're having a conversation with ChatGPT and it seems smarter than you. - In some things, yeah. In a lot of things, yes. - Maybe a year away from that. - I mean, yeah, could be. - Pretty close. - Roughly. Roughly. Well, I mean, it does lead to these other questions, and I know you've been very vocal on this, which I'm happy and proud that you are doing on the safety aspects of it, but, I mean, people do want to hear from you on that.


(14:50) So I mean, what about three years from now when it's unbelievably intelligent? It can pass every single bar exam everywhere and every test we've ever done. And then it just decides it wants to connect to the internet on its own and start doing things. Is that real, and is that... Or, is that something you're thinking about as the CTO and leading the product direction? - Yes, we're thinking a lot about this.

(15:19) It's definitely real that you'll have AI systems that will have agent capabilities, connect to the internet, talk to each other, agents connecting to each other and doing tasks together, or agents working with humans and collaborating seamlessly. So sort of working with AI like we work with each other today.

(15:42) In terms of safety, security, the societal impacts aspects of this work, I think these things are not an afterthought. It can be that you sort of develop the technology and then you have to figure out how to deal with these issues. You kind of have to build them alongside the technology and actually in a deeply embedded way to get it right.

(16:07) And for capabilities and safety, they're actually not separate domains. They go hand in hand. It's much easier to direct a smarter system by telling it, okay, just don't do these things, than you need to direct a less intelligent system. It's sort of like training a smarter dog versus a dumber dog, and so intelligence and safety go hand in hand.

(16:41) - [Jeff] It understands the guardrails better because it's smarter. - Right, yeah, exactly. And so there is this whole debate right now around, do you do more safety or do you do more capability research? And I think that's a bit misguided because of course you have to think about the safety into deploying a product and the guardrails around that.

(17:04) But in terms of research and development, they actually go hand in hand. And from our perspective, the way we're thinking about this is approaching it very scientifically. So let's try to predict the capabilities that these models will be, the capabilities that these models will have before we actually finish training.

(17:30) And then along the way, let's prepare the guardrails for how we handle them. That's not really been the case in the industry so far. We train these models, and then there are these emergent capabilities we call them, because they emerge. We don't know they're going to emerge. We can see sort of the statistical performance, but we don't know whether that statistical performance means that the model is better at translation, or at doing biochemistry, or coding or something else.

(18:09) And developing this new science of capability prediction helps us prepare for what's to come. And that means... - [Jeff] You're saying all that safety work, it's kind of consistent with your development. - Yes, that's right. - It's a similar path. - Yeah, so you have to kind of bring it along and- - But What about these issues, Mira, like the video of Volodymyr Zelensky saying, "We surrender," the Tom Hanks video, or a dentist ad? I can't remember what it was.

(18:41) What about these types of uses? Is that in your sphere or does it need to be regulation around that? How do you see that playing out? - Yeah, so I mean, my perspective on this is that this is our technology. So it's our responsibility how it's used, but it's also shared responsibility with society, civil society, government, content makers, media, and so on, to figure out how it's used.

(19:10) But in order to make it a shared responsibility, you need to bring people along, you need to give them access, you need to give them tools to understand and to provide guardrails. And I think- - [Jeff] Those things are kind of hard to stop though, right? - Well, I think it's not possible to have zero risk, but it's really a question of, how do you minimize risk? And providing people the tools to do that.

(19:44) And in the case of government, for example, it's very important to bring them along and give them early access to things, educate them on what's going on. - Governments. - Yes, for sure, and regulators. And I think perhaps the most significant thing that ChatGPT did was bring AI into the public consciousness, give people a real intuitive sense for what the technology is capable of and also of its risks.

(20:17) It's a different thing when you read about it versus when you try it and you try it in your business, and you see, okay, it cannot do these things, but it can do this other amazing thing, and this is what it actually means for the workforce or for my business. And it allows people to prepare. - Yeah, no, that's a great point.

(20:37) I mean, just these interfaces that you've created, ChatGPT, are informing people about what's coming. I mean, you can use it. You can see now what's underneath. Do you think there's... Just to finish on the government point. I mean, let's just talk the US right now. Do you wish there was certain regulations that we're actually just putting into place right now? Before you get to that year or two from now.

(21:04) It's extremely intelligent, a little bit scary. So are there things that should just be done now? - We've been advocating for more regulation on the frontier models which will have this amazing capabilities that also have a downside because of misuse. And we've been very open with policy makers and working with regulators on that.

(21:33) On the more sort of near term and smaller models, I think it's good to allow for a lot of breadth and richness in the ecosystem and not let people that don't have as many resources in compute or data not, sort of not block the innovation in those areas. So we've been advocating for more regulation in the frontier systems where the risks are much higher.

(22:04) And also, you can kind of get ahead of what's coming versus trying to keep up with changes that are already happening really rapidly. - But you probably don't want Washington, D.C. regulating your release of GPT-5, like that you can or cannot do this. - I mean, it depends, actually. It depends on the regulation.

(22:27) So there is a lot of work that we already do that has now been sort of, yeah, codified in the White House' commitments, and this- - So it's underway - Work already been done. And it actually informed the White House' commitments or what the UN Commission is doing with the principles for AI deployments.

(22:51) And usually, I think the way to do it is to actually do the work, understand what it means in practice, and then create regulation based on that. And that's what has happened so far. Now, getting ahead of these frontier systems requires that we do a lot more forecasting and science of capability prediction in order to come up with correct regulation on that.

(23:17) - [Jeff] Well, I hope the government has people that can understand what you're doing. - It seems like more and more folks are joining the government that have better understanding of AI, but not enough. - Okay. In terms of industries, you have the best seat maybe in the world to just see how this is gonna impact different industries.

(23:39) I mean, it already is in finance, and content, and media, and healthcare. But what industries do you think, when you look forward, do you think are gonna be most impacted by AI and the work that you're doing at OpenAI? - Yeah, this is sort of similar to the question that I used to get from entrepreneurs when we started building a product on top of GPT-3, where people would ask me, "What can I do with it? What is it good for?" And I would say everything.

(24:16) So just try it. And so it's kind of similar in the sense that I think it'll affect everything, and there's not going to be an area that won't be, in terms of cognitive work and the cognitive labor and cognitive work. Maybe it's gonna take a little bit longer to get into the physical world, but I think everything will be impacted by it.

(24:42) Right now we've seen... So I'd say there's been a bit of a lag in areas that have a lot of, that are high risk, such as healthcare or legal domains. And so there is a bit of a lag there and rightfully so. First, you want to understand and bring it in, use cases that are lower risk, medium risk, really make sure those are handled with confidence before applying it to things that are higher risk.

(25:14) And initially, there should be more human supervision, and then the delegation should change, and to the extent they can be more collaborative, but- - [Jeff] Are there use cases that you personally love, or are seeing, or are about to see? - Yeah, so I think basically the first part of anything that you're trying to do, whether it is creating new designs, whether it's coding, or writing an essay, or writing an email or basically everything, the first part of everything that you're trying to do becomes so much easier.

(26:02) And that's been my favorite use of it. so far I've really used it- - First draft for everything. - Yeah, first draft for everything. It's so much faster. It lowers the barrier to doing something and you can kind of focus on the part that's a bit more creative and more difficult, especially in coding.

(26:25) You can sort of outsource a lot of the tedious work. - Documentation and all that kinda stuff. - Yeah, documentation and... But in industry, we've seen so many applications. Customer service is definitely a big application with chatbots, and writing, also analysis, because right now we've sort of connected a lot of tools to the core model, and this makes the models far more usable and more productive.

(27:01) So you have tools like code analysis. It can actually analyze a ton of data. You can dump all sorts of data in there, and it can help you analyze and filter out the data, or you could use images and you could use browsing tool. So if you're preparing let's say a paper, the research part of the work can be done much faster and in a more rigorous way.

(27:30) So I think this is kind of the next layer that's going to be added to productivity, adding these tools to the core models, and making it very seamless. The model decides when to use say the analysis tool versus search versus something else. - [Jeff] Write a program. yeah, yeah. Interesting. Has it watched every TV show and movie in the world, and is it gonna start writing scripts and making films? - Well, it's a tool.

(28:08) And so it certainly can do that as a tool, and I expect that we will actually, we will collaborate with it, and it's going to make our creativity expand. And right now if you think about how humans consider creativity, we see that it's sort of this very special thing that's only accessible to this very few talented people out there.

(28:36) And these tools actually make it, lower the barrier for anyone to think of themselves as creative and expand their creativity. So in that sense, I think it's actually going to be really incredible. - Yeah, could give me 200 different cliffhangers for the end of episode one or whatever, very easily. - Yes.

(28:58) And you can extend the story, the story never ends. You can just continue. - Keep going. I'm done writing, but keep going. That's interesting. - But I think it's really going to be a collaborative tool, especially in the creative spaces where- - I do too. - Yeah, more people will become more creative.

(29:19) - There's some fear right now. - Yes, for sure. - But you're saying that'll switch and humans will figure out how to make the creative part of the work just better? - I think so, and some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place if the content that comes out of it is not very high quality, but I really believe that using it as a tool for education, creativity will expand our intelligence, and creativity, and imagination.

(29:55) - Well, people thought CGI and things like that were gonna wreck the film industry at the time. They were quite scared. This is, I think a bigger thing, but yeah, anything new like that, the immediate reaction is gonna be, "Oh god, this is..." But I hope that you're right about film and TV. Okay, the job part you raised, and let's forget Hollywood stuff, but there's a lot of jobs that people are worried about that they think are at risk.

(30:31) What's your view on job displacement in AI and really not even just the work you're doing at OpenAI, just over overall. Should people be really worried about that, and which kind of jobs, or how do you see it all working out? - Yeah, I mean the truth is that we don't really understand the impact that AI is going to have on jobs yet.

(30:59) And the first step is to actually help people understand what the systems are capable of, what they can do, integrate them in their workflows, and then start predicting and forecasting the impact. And also, I think people don't realize how much these tools are already being used, and that's not being studied at all.

(31:23) And so we should be studying what's going on right now with the nature of work, the nature of education, and that's going to help us predict for how to prepare for these increased capabilities. In terms of jobs specifically, I'm not an economist, but I certainly anticipate that a lot of jobs will change, some jobs will be lost, some jobs will be gained.

(31:49) We don't know specifically what it's going to look like, but you can imagine a lot of jobs that are repetitive, that are just strictly repetitive and people are not advancing further, those would be replaced. - People like QA, and testing code, and things like that, those jobs are- - Unless they are- - They're done.

(32:11) - Yes, and if it's strictly just that or strictly- - And it's just one example. There's many things like that. - Yeah, many things. - Do you think there'll be enough jobs created elsewhere to compensate for that? - I think there are going to be a lot of jobs created, but the weight of how many jobs are created, how many jobs are changed, how many jobs are lost, I don't know.

(32:36) And I don't think anyone knows really, because it's not being rigorously studied, and it really should be. And yeah, but I think the economy will transform and there is going to be a lot of value created by these tools. And so the question is, how do you harness this value? If the nature of jobs really changes, then how are we distributing sort of the economic value into society? Is it through public benefits? Is it through UBI? Is it through some other new system? So there are a lot of questions to explore and figure out.

(33:21) - There's a big role for higher ed in that work that you're describing there. It's just not quite happening yet. - Yeah. - What else for higher ed and this future of AI? What do you think is the role of higher ed in what you see and how this is evolving? - I think really figuring out how we use these tools and AI to advance education.

(33:49) Because I think one of the most powerful applications of AI is going to be in education, advancing our creativity and knowledge. And we have an opportunity to basically build super high quality education and very accessible and ideally free for anyone in the world in any of the languages or cultural nuances that you can imagine.

(34:18) You can really have customized understanding and customized education for anyone in the world. And of course in institutions like Dartmouth, the classrooms are smaller and you have a lot of attention, but still you can imagine having just one-on-one tutoring, even here, let alone in the rest of the world.

(34:42) - Supplementing. - Yes. Because we don't spend enough time learning how to learn. That sort of happens very late, maybe in college. And that is such a fundamental thing, how you learn, otherwise you can waste a lot of time. And the classes, the curriculum, the problem sets, everything can be customized to how you actually learn as an individual.

(35:11) - So you think it could really, at a place like Dartmouth, it could complement some of the learning that's happening. - Oh, absolutely, yeah. - Just have AIs as tutors and what not. Should we open it up? Do you mind taking some questions from the audience? Is that okay? - Happy to, yeah. - All right. Why don't we do that.

(35:30) Dave, you wanna start? - [Dave] Sure, if you don't. - [Speaker] Hold on one second. I'll give you a microphone. - One of Dartmouth's first computer scientists, John Kemeny, once gave a lecture about how every computer program that humans build embeds human values into that program, whether intentionally or unintentionally.

(35:54) And what I'm wondering is what human values do you think are embedded in GPT products, or, put a different way, how should we embed values in, like respect, equity, fairness, honesty, integrity, things like that into these kinds of tools? - That's a great question and a really hard one and something that we think about, we've been thinking about for years.

(36:22) So right now, if you look at these systems, a lot of the values are input, are basically put in in the data, and that's the data in the internet, license data, also data that comes through human contractors that will label certain problems or questions. And each of these inputs has specific value. So that's a collection of their values and that matters.

(36:56) And then once you actually put these products into the world, I think you have an opportunity to get a much broader collection of values by putting it in the hands of many, many people. So right now, ChatGPT, we have a free offering of ChatGPT that has the most capable systems, and it's used by over 100 million people in the world.

(37:22) And each of these people can provide feedback into ChatGPT. And if they allow us to use the data, we will use it to create this aggregate of values that makes the system better, more aligned with what people want it to do. But that's sort of the default system. What you kind of want on top of it is also a layer for customization where each community can sort of have their own values, let's say a school, a church, a country, even a state.

(37:58) They can provide their own values that are more specific and more precise on top of this default system that has basic human values. And so we're working on ways to do that as well. But it's actually, it's obviously a really difficult problem because you have the human problem where we don't agree on things, and then you have the technology problem.

(38:24) And on the technology problem, I think we've made a lot of progress. We have methods like reinforcement learning with human feedback where you give people a chance to provide their values into the system. We have just developed this thing we call the Spec that provides transparency into the values that are into the system.

(38:46) And we're building a sort of feedback mechanism where we collect input and data in how to advance the Spec. You can think of it as like a constitution for AI systems, but it's a living one that. It evolves over time because our values also evolve over time, and it becomes more precise. It's something we're working on a lot.

(39:12) And I think right now we're thinking about basic values. But as the systems become more and more complex, we're going to have to think about more granularity in the values that's... - [Jeff] Can you keep that from like getting angry? - Getting angry? - Yeah. Is that one of the values? - Well, that should be...

(39:33) No. So that should actually be up to you. So if you as a user- - Oh, if you want an angry chatbot you can have it. - Yes, if you want an angry chatbot, you should have an angry chatbot. Yeah. - Okay, right here, yeah. - Hello. Thank you. Dr. Joy here. And also, congratulations on the honorary degree and all you've been doing with OpenAI.

(39:59) I'm really curious how you're thinking about both creative rights and biometric rights. And so earlier you were mentioning maybe some creative jobs ought not to exist, and you've had many creatives who are thinking about issues of consent, of compensation, of having whether it's proprietary models or even open source models, where the data is taken from the internet.

(40:24) So really curious about your thoughts on consent and compensation as it deals with creative rights. And since we're in a university, do you know the multi-part question piece? So the other thing is thinking about biometric rights, and so when it comes to the voice, when it comes to faces and so forth. So with the recent controversy around the voice of Sky and how you can also have people who sound alike, people who look alike, and all of the disinformation threats coming up in such a heavy election year,

(40:54) would be very curious about your perspective on the biometric rights aspects as well. - Yeah, so... Okay, I'll start with the last part on... We've done a ton of research on voice technologies and we didn't release them until recently precisely because they pose so many risks and issues. But it's also important to kind of bring society along, give access in a way that you can have guardrails and control the risks, and let other people study and make advances in issues like, for example, we're partnering

(41:34) with institutions to help us think about human AI interaction now that you have voice and video that are very emotionally evocative modalities. And we need to start understanding how these things are going to play out and what to prepare for. in that particular case, the voice of Sky was not Scarlett Johansson's, and it was not meant to be, and it was a completely parallel process.

(42:05) I was running the selection of the voice, and our CEO was having conversations with Scarlett Johansson and... But out of respect for her, we took it down. And some people see some similarities. These things are subjective, and I think you can sort of... Yeah, you can kind of come up with red teaming processes where if the voice, for example, was deemed to be super, super similar to a very well-known public voice, then maybe you don't select that specific one.

(42:42) In our red teaming, this didn't come up, but that's why it's important to also have more extended red teaming to catch these things early if needed. But more broadly, with the issue of biometrics, I think our strategy here is to give access to a few people, initially experts or red teamers that help us understand the risk and capabilities very well.

(43:16) Then we build mitigations, and then we give access to more people as we feel more confident around those mitigations. So we don't allow for people to make their own voices with this technology because we're still studying the risks and we don't feel confident that we can handle misuse in that area yet.

(43:38) But we feel good about handling misuse with the guardrails that we have on very specific voices in a small state right now, which is essentially extended red teaming. And then when we extend it to a thousand users, our Alpha release, we will be working very closely with its users, gathering feedback and understanding the edge cases so we can prepare for these edge cases as we expand use to say 100,000 people.

(44:07) And then it's going to be a million, and then 100 million, and so on. But it's done with a lot of control, and this is what we call iterative deployment. And if we can all get comfortable around this use cases, then we just won't release them in this specific... To extended users or for these specific use cases, we will probably try to lobotomize the product in a certain way because capability and risk go hand in hand.

(44:44) But we're also working on a lot of research to help us deal with issues of content provenance and content authenticity so people have tools to understand if something is a deep fake or spread misinformation and so on. Since the beginning of OpenAI, actually, we've been working on studying misinformation and we've built a lot of tools like watermarking, content policies that allow us to manage the sort of, yeah, the possibility of misinformation, especially this year given that it's a global election year.

(45:30) We've been intensifying that work even more. But this is extremely challenging area that we, as the makers of technology and products, need to do a lot of work on, but also partner with civil society, and media, and content makers to figure out how to address these issues. When we make technologies like audio or Sora, the first people that we work with after the red teamers that study the risks are the content creators.

(46:04) to actually understand how the technology would help them and how do you build a product that is both safe, and useful, and helpful, and that actually advances society. And this is what we did with Dall-E, and this is what we're doing with SORA, our video generation model again. And the first part of your question.

(46:27) - [Dr. Joy] Creative rights. So for the- - Creative rights - [Dr. Joy] About compensation, consent, - Yes. - Control and credit. - Yeah, that's also very important and challenging. right now we work, we do a lot of partnerships with media companies and we also give people a lot of control on how their data is used in the product.

(46:51) So if they don't want their data to be used to improve the model or for us to do any research or train on it, that is totally fine. We do not use the data. And then for just the creator community in general, we give access to these tools early. So we can hear from them first on how they would want to use it and build products that are most useful.

(47:18) And also these things are research produced. so we don't have to build product at all costs. We'd only do it if we can figure out a modality that's actually helpful in advancing people forward. And we're also experimenting with methods to basically create our tools that allow people to be compensated for data contribution.

(47:47) This is quite tricky both from technical perspective and also just building a product like that because you have to sort of figure out how much a specific amount of data, how much value it creates in a model that has been trained afterwards. And maybe individual data would be very difficult to gauge how much value that would provide.

(48:14) But if you can sort of create consortiums of an aggregate data and pools where people can provide their data, maybe that'd be better. So for the past I'd say two years, we've been experimenting with various versions of this. We haven't deployed anything, but we've been experimenting on the technical side and trying to really understand the technical problem.

(48:40) And we're a bit further along, but it's a really difficult issue. - It is. I bet there'll be a lot of new companies trying to build solutions for that. - Yeah, there are other companies. - It's just so hard. - It is. - How about right there? Yeah. - [Participant] Thank you so much for your time and for taking time off to come talk to us.

(49:04) My question is pretty simple. If you had to come back to school today, you found yourself again at Thayer or at Dartmouth in general, what would you do again and what you would not do again? What would you major in or would you get involved in more things? Something like that. - I think I would study the same things but maybe with less stress.

(49:29) (all laughs) Yeah, I think I'd still study math and do... Yeah. Maybe I would take more computer science courses actually. but yeah, I would stress less because then you study with more curiosity and more joy, and that's more productive. But yeah, I remember, as a student, I was always a bit stressed about what was going to come after.

(50:03) And if I knew what I knew now, and to my younger self, I'd say, and actually everyone would tell me, "don't be stressed," but somehow it didn't... When I talk to older alums, they'd always say like, "Try to enjoy it and be fully immersed and be less stressed." I think, though, on specific courses, it's good to have, especially now, a very broad range of subjects and get a bit of understanding of everything.

(50:30) I find that both at school and after, because even now I work in a research organization, I'm constantly learning. You never stop. That is very helpful to kind of understand a little bit of everything. - [Jeff] Thank you so much, 'cause I'm sure your life- - Thank you. - Is stressful. (all laughing) (audience applauding) - Thank you so much.

(51:00) - Thank you for being here today and also thank you for the incredibly important work you're doing for society, quite honestly. It's really important and I'm glad you're in the seat. - Thank you for having me. - Thank you from all of us here at Thayer and Dartmouth as well. So I thought that would be a good place to end on too, some good advice for our students.

(51:23) What a fascinating conversation and just wanted to thank you all again for coming. Enjoy the rest of Commencement Weekend. (no audio) (gentle music)


