OpenAI recently published a beta version of their Preparedness Framework for mitigating AI risks. The framework lists four risk categories and definitions of risk levels for each, as well as defining OpenAI's safety governance procedures.

The Preparedness Framework is part of OpenAI's overall safety effort, and is particularly concerned with frontier risks from cutting-edge models. The core technical work of evaluating the models is handled by a dedicated Preparedness team, which assesses a model's risk level in four categories: persuasion, cybersecurity, CBRN (chemical, biological, radiological, nuclear), and model autonomy. The framework defines risk thresholds for deciding whether a model is safe for further development or deployment. It also defines an operational structure and process for preparedness, including a Safety Advisory Group (SAG) that is responsible for evaluating the evidence of potential risk and recommending risk mitigations. According to OpenAI:

We are investing in the design and execution of rigorous capability evaluations and forecasting to better detect emerging risks. In particular, we want to move the discussions of risks beyond hypothetical scenarios to concrete measurements and data-driven predictions. We learn from real-world deployment and use the lessons to mitigate emerging risks. We also want to look beyond what's happening today to anticipate what's ahead. For safety work to keep pace with the innovation ahead, we cannot simply do less; we need to continue learning through iterative deployment.

The framework document provides detailed definitions for the four risk levels (low, medium, high, and critical) in each of the four tracked categories.
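The threshold logic described above can be illustrated with a small sketch. This is not OpenAI's code: the category names follow the article, but the specific cutoffs (deploy only at "medium" or below, continue development only at "high" or below) and the rule of taking the highest category score as the overall score are assumptions made here for illustration.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """The four risk levels defined in the framework, in increasing severity."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four tracked risk categories named in the framework.
CATEGORIES = ("persuasion", "cybersecurity", "cbrn", "model_autonomy")

def overall_risk(scores: dict[str, RiskLevel]) -> RiskLevel:
    # Assumption for this sketch: overall risk is the worst category score.
    return max(scores[c] for c in CATEGORIES)

def can_deploy(scores: dict[str, RiskLevel]) -> bool:
    # Assumed threshold: post-mitigation score of MEDIUM or below to deploy.
    return overall_risk(scores) <= RiskLevel.MEDIUM

def can_develop(scores: dict[str, RiskLevel]) -> bool:
    # Assumed threshold: post-mitigation score of HIGH or below to develop further.
    return overall_risk(scores) <= RiskLevel.HIGH

scores = {
    "persuasion": RiskLevel.MEDIUM,
    "cybersecurity": RiskLevel.LOW,
    "cbrn": RiskLevel.HIGH,
    "model_autonomy": RiskLevel.LOW,
}
print(can_deploy(scores), can_develop(scores))  # a HIGH score blocks deployment only
```

Under these assumed rules, a single high-risk category is enough to block deployment even when the other three are low, which mirrors the framework's use of thresholds rather than averaged scores.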