The Foundation Model Transparency Index



Original Paper: https://arxiv.org/abs/2310.12941

By: Rishi Bommasani, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Nestor Maslej, Betty Xiong, Daniel Zhang, Percy Liang

Abstract:

Foundation models have rapidly permeated society, catalyzing a wave of generative AI applications spanning enterprise and consumer-facing contexts.

While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies (e.g. social media).

Reversing this trend is essential: transparency is a vital precondition for public accountability, scientific innovation, and effective governance.

To assess the transparency of the foundation model ecosystem and help improve transparency over time, we introduce the Foundation Model Transparency Index.

The Foundation Model Transparency Index specifies 100 fine-grained indicators that comprehensively codify transparency for foundation models, spanning the upstream resources used to build a foundation model (e.g., data, labor, compute), details about the model itself (e.g., size, capabilities, risks), and the downstream use (e.g., distribution channels, usage policies, affected geographies).

We score 10 major foundation model developers (e.g. OpenAI, Google, Meta) against the 100 indicators to assess their transparency.

To facilitate and standardize assessment, we score developers in relation to their practices for their flagship foundation model (e.g. GPT-4 for OpenAI, PaLM 2 for Google, Llama 2 for Meta).

We present 10 top-level findings about the foundation model ecosystem: for example, no developer currently discloses significant information about the downstream impact of its flagship model, such as the number of users, affected market sectors, or how users can seek redress for harm.

Overall, the Foundation Model Transparency Index establishes the level of transparency today to drive progress on foundation model governance via industry standards and regulatory intervention.

Summary Notes


Figure: Scores for 10 major foundation model developers across 13 major dimensions of transparency.

Introduction


In recent years, foundation models like OpenAI's GPT-4 and Meta's Llama 2 have transformed artificial intelligence, enabling a surge in generative AI applications across industries.

Despite their rapid integration into society, transparency in the development and deployment of these models has been declining, mirroring the opacity seen in previous digital technologies like social media.

To address this critical issue, researchers from Stanford, MIT, and Princeton have introduced the Foundation Model Transparency Index (FMTI): a rigorous framework designed to assess and improve transparency in foundation models.

This blog post delves into the methodologies, findings, and implications of the FMTI, and discusses how it can drive progress towards more transparent and accountable AI.


Key Methodologies


The FMTI evaluates transparency across three primary domains: upstream, model, and downstream. Each domain is further divided into subdomains, encompassing 100 fine-grained indicators that collectively measure transparency.

These indicators cover various aspects, such as data sources, labor practices, computation resources, model capabilities, risks, and downstream impact.
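
To make this structure concrete, below is a minimal Python sketch of how the domain, subdomain, and indicator taxonomy could be represented. The subdomain and indicator names are illustrative placeholders loosely inspired by the dimensions in the figure above, not the paper's exact indicator wording, and this toy version contains far fewer than the actual 100 indicators.

```python
# A minimal sketch of how the FMTI taxonomy could be represented in code.
# The upstream/model/downstream split follows the paper; the subdomain and
# indicator strings here are illustrative placeholders, not the paper's exact wording.

TAXONOMY = {
    "upstream": {
        "data": ["data sources disclosed", "data curation described"],
        "labor": ["use of human labor disclosed", "wages disclosed"],
        "compute": ["compute usage disclosed", "hardware disclosed"],
    },
    "model": {
        "basics": ["model size disclosed", "architecture disclosed"],
        "capabilities": ["capabilities described", "evaluations reported"],
        "risks": ["risks described", "mitigations described"],
    },
    "downstream": {
        "distribution": ["release decision described", "distribution channels disclosed"],
        "usage policy": ["permitted uses disclosed"],
        "impact": ["affected market sectors disclosed", "redress mechanism disclosed"],
    },
}

# Flatten the taxonomy to list every (domain, subdomain, indicator) triple.
all_indicators = [
    (domain, subdomain, indicator)
    for domain, subdomains in TAXONOMY.items()
    for subdomain, indicators in subdomains.items()
    for indicator in indicators
]
print(len(all_indicators))  # 17 in this toy version; the actual index specifies 100
```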


The study assessed ten major foundation model developers, including OpenAI, Google, Meta, and Stability AI, scoring them against the 100 indicators to create a comprehensive transparency profile for each.

The scoring process involved a structured search protocol, rigorous independent evaluation by researchers, and feedback from the developers themselves to ensure accuracy and reproducibility.
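
To illustrate the aggregation arithmetic behind the scores reported below, here is a minimal Python sketch: each indicator is awarded 0 or 1 for a developer, the overall score is the number of indicators awarded (out of 100), and domain-level transparency is the percentage of indicators awarded within each domain. The `aggregate` function and the example awards are hypothetical illustrations, not the authors' scoring code or actual results.

```python
from collections import defaultdict

def aggregate(scores: dict[tuple[str, str, str], int]) -> tuple[int, dict[str, float]]:
    """scores maps (domain, subdomain, indicator) -> 0 or 1 for one developer."""
    # Overall score: number of indicators awarded (out of 100 in the real index).
    overall = sum(scores.values())

    # Domain-level transparency: percentage of indicators awarded per domain.
    per_domain_total = defaultdict(int)
    per_domain_awarded = defaultdict(int)
    for (domain, _subdomain, _indicator), awarded in scores.items():
        per_domain_total[domain] += 1
        per_domain_awarded[domain] += awarded
    domain_pct = {
        domain: 100 * per_domain_awarded[domain] / per_domain_total[domain]
        for domain in per_domain_total
    }
    return overall, domain_pct

# Toy example with three made-up indicator awards; the real assessment covers 100.
example = {
    ("upstream", "data", "data sources disclosed"): 0,
    ("model", "basics", "model size disclosed"): 1,
    ("downstream", "distribution", "distribution channels disclosed"): 1,
}
overall, by_domain = aggregate(example)
print(overall, by_domain)  # 2 {'upstream': 0.0, 'model': 100.0, 'downstream': 100.0}
```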


Main Findings

  1. Overall Transparency Scores: The highest overall score was 54 out of 100, with Meta leading the pack, while the average score across all developers was 37. This indicates significant room for improvement in transparency across the board.
  2. Domain-Level Scores: Transparency was lowest in the upstream domain, particularly concerning data, data labor, and compute. On average, developers scored only 22.5% in upstream transparency, compared to 42.7% in model transparency and 44.9% in downstream transparency.
  3. High-Scoring Subdomains: Developers were most transparent about model basics, capabilities, and downstream distribution channels. However, even in these areas, there were notable gaps, such as the lack of disclosure about model size and deprecation policies.
  4. Low-Scoring Subdomains: Transparency was poorest concerning data labor practices, compute usage, and downstream impact. No developer provided comprehensive information about the downstream applications, affected individuals, or redress mechanisms.
  5. Open vs. Closed Models: Open model developers (e.g., Meta, Hugging Face, Stability AI) were generally more transparent than their closed counterparts (e.g., OpenAI, Google). Open developers scored higher on upstream and model indicators but showed similar transparency levels in downstream indicators.


Implications and Applications


The FMTI's findings underscore the urgent need for increased transparency in the foundation model ecosystem. Here are some potential applications and implications:

  1. Policy and Regulation: Policymakers can use the FMTI to inform AI regulations, ensuring that transparency requirements are evidence-based and targeted at the most opaque areas. For example, specific transparency mandates on data labor practices and downstream impact could be prioritized.
  2. Industry Standards: The FMTI can guide the development of industry standards for transparency, helping to establish best practices and benchmarks that all developers should strive to meet. This can foster a culture of openness and accountability in AI development.
  3. Public Trust and Accountability: Increased transparency can enhance public trust in AI systems by providing users and stakeholders with the information they need to understand and evaluate the impact of these technologies. This is crucial for ensuring responsible AI deployment and mitigating potential harms.
  4. Research and Innovation: By identifying transparency gaps, the FMTI can spur further research into effective transparency measures and tools, such as better documentation practices, third-party audits, and user-friendly transparency reporting mechanisms.


Conclusion


The Foundation Model Transparency Index offers a comprehensive and actionable framework for evaluating and improving transparency in AI.

While the current state of transparency is far from ideal, the FMTI provides a clear path forward.

By adopting the recommendations and insights from this study, developers, policymakers, and stakeholders can work together to build a more transparent, accountable, and trustworthy AI ecosystem.


For those interested in diving deeper into the specifics of the FMTI, including detailed scores and justifications for each developer, the full report and supporting materials are publicly available on the project’s GitHub repository.
