OpenAI Makes a Major Announcement: New Model in Training!
Source: 逆向狮

Early this morning, OpenAI announced two major pieces of news on Twitter.

The first announcement: a new model is now in training, though the specific model name has not been disclosed. (GPT-5.0?)

The original text reads: "OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI."


The second announcement: OpenAI has formed a Safety and Security Committee.

The original announcement follows:

       Today, the OpenAI Board formed a Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.

     OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.

     A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.

     OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee.

     Additionally, OpenAI will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials, Rob Joyce, who advises OpenAI on security, and John Carlin.

Committee membership, as stated in the announcement:

Committee leadership: Bret Taylor (Chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO).

Committee members: Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist).