OpenAI CEO Sam Altman is stepping down from the Safety and Security Committee, the internal group the company created in May to oversee critical safety and security decisions for its projects and operations. The committee will now operate as an independent board oversight group, chaired by Carnegie Mellon professor Zico Kolter and otherwise made up of existing OpenAI board members.
The committee has reviewed OpenAI’s safety record and been briefed by the company’s safety and security teams. It will continue to receive regular updates and retains the authority to delay a release it does not consider safe.
The committee’s mandate is to ensure that OpenAI’s AI models are safe and secure before they ship. OpenAI, for its part, is building a new framework for launching models and will report to the committee on a regular basis.
Critics have raised concerns about OpenAI’s safety policies: nearly half of the staff who once focused on long-term AI risks have left the company, OpenAI has increased its spending on lobbying, and the company has been involved in a new government board that advises on AI safety.
Even with Altman off the committee, there is little indication it will make decisions that meaningfully interfere with OpenAI’s commercial plans. Skeptics argue that OpenAI cannot be trusted to hold itself accountable, a concern sharpened by the company’s growing profit ambitions: OpenAI is reportedly raising billions of dollars in new funding, a round that could push it to abandon its hybrid nonprofit structure and prioritize profits over its founding mission.