News

OpenAI on Monday said its Safety and Security Committee, which the company introduced in May as it dealt with controversy over its security processes, will become an independent board oversight committee.
OpenAI is making big changes to its Safety and Security Committee, which oversees the safety of AI as the technology's capabilities grow, announcing that CEO Sam Altman will no longer be a member of the group.
The committee, established in May to oversee critical safety decisions, will now become an independent oversight board with the power to delay releases of new AI models, OpenAI announced Monday.
OpenAI, in response to claims that it isn't taking AI safety seriously, has launched a new page called the Safety Evaluations Hub. The page will publicly record safety metrics such as hallucination rates.
This OpenAI hub provides safety performance on four types of evaluations: harmful content, hallucinations, jailbreaks, and instruction hierarchy.
Discover how OpenAI’s Safety Evaluations Hub is improving transparency and security by addressing AI risks, including jailbreak attempts, harmful content, and hallucinations.
OpenAI Chief Executive Sam Altman said on Tuesday that the ChatGPT maker's AI safety leader, Aleksander Madry, was working on a new research project as the startup restructures its preparedness team.
OpenAI also offered updates on its Safety and Security Committee, initially launched by OpenAI's board of directors in May to add another layer of checks and balances to its operations.
OpenAI’s approach to safety testing for its GPT models has varied over time. For GPT-4, the company dedicated over six months to safety evaluations before its public release.
OpenAI promises greater transparency on model hallucinations and harmful content: the Safety Evaluations Hub is a new resource that should be regularly updated.
OpenAI gets a new board member. The startup has appointed Zico Kolter, a professor and director of the Machine Learning Department at Carnegie Mellon University, to its board of directors.
Robust Open Online Safety Tools (ROOST) is a new organization founded by Roblox, Discord, Google and OpenAI to build open-source online safety tools.