New Report from Partnership on AI Aims to Advance Global Policy Alignment on AI Transparency

SAN FRANCISCO--In response to the rapid development of policy frameworks to govern artificial intelligence (AI), Partnership on AI, a nonprofit multistakeholder organization dedicated to responsible AI, has published a new report, “Policy Alignment on AI Transparency,” identifying potential interoperability challenges and opportunities in leading AI policy frameworks.

The report provides a comparative analysis of eight leading policy frameworks, including the EU AI Act, the US White House Executive Order on AI, and the G7 Hiroshima Code of Conduct, with a focus on documentation requirements for foundation models, the large AI models powering generative AI applications.

“Without intentional coordination, we risk creating a fragmented landscape in which AI developers and deployers are unclear on best practices for safe, responsible AI,” said Stephanie Ifayemi, Head of Policy, Partnership on AI. “Our new report provides useful insights for policymakers seeking to prioritize interoperability and alignment. We hope it will drive forward a strong international baseline of good practices, in which governments converge on what ‘good’ looks like, can effectively hold organizations accountable, and can provide industry with clarity about best practices for safe, responsible AI.”

“As we navigate the complex landscape of AI governance, the need for coordinated, interoperable policy frameworks becomes increasingly clear,” said Rebecca Finlay, CEO, Partnership on AI. “By working together across borders and sectors, we can create a more coherent, effective approach to managing the risks and harnessing the potential of foundation models, ensuring accountability and transparency while fostering innovation in the global AI ecosystem.”

The report analyzes current and potential near-term interoperability challenges among the documentation requirements in leading policy frameworks. It also offers recommendations aimed at promoting interoperability and establishing a common baseline of best practices for accountability and transparency.

Since its founding in 2016, Partnership on AI has brought together stakeholders from across the AI ecosystem, including academia, civil society, and industry, to address the most pressing questions on artificial intelligence. In recent years, this has included policymakers. PAI helps to shape AI policy through extensive collaboration with key institutions worldwide, including the UN, OECD, G20, and Global AI Safety Institutes.

About Partnership on AI

Partnership on AI (PAI) is a nonprofit organization that brings together diverse stakeholders from academia, civil society, industry, and the media to create solutions that ensure artificial intelligence (AI) advances positive outcomes for people and society. PAI develops tools, recommendations, and other resources by inviting voices from the AI community and beyond to share insights and perspectives. These insights are then synthesized into actionable guidance that can be used to drive adoption of responsible AI practices, inform public policy, and advance public understanding of AI. To learn more, visit www.partnershiponai.org.

Contacts

Media
Holly Glisky
PAI@finnpartners.com