1 minute read

Frontier Model Forum official site

Purpose

Governments and industry agree that appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others.

To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly.

The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.

Core objectives of the Forum

  • Advancing AI safety research

    Research will help promote the responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

  • Identifying best practices

    Best practices for the responsible development and deployment of frontier models are essential, as is helping the public understand the nature, capabilities, limitations, and impact of the technology.

  • Collaborating across sectors

    Policymakers, academics, civil society, and companies must work together and share knowledge about trust and safety risks.

  • Helping AI meet society’s greatest challenges

    Support efforts to develop applications to address issues like climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

Reference: AI Times

This is a follow-up to the pledge, made at the White House on July 22, to eight commitments on AI safety.

Reference: OpenAI