
AI transparency issues: time to reveal the black box

AI transparency issues have been thrust into the spotlight by a recent study that evaluates the 10 largest AI companies and reveals how opaque their practices remain.

As this revolutionary development unfolds, with OpenAI’s ChatGPT attracting 100 million weekly users and other AI systems trailing close behind, one question looms: are we witnessing an opaque tech revolution that benefits only the large companies running these chatbots?

According to a recent report from Stanford University, the companies behind today’s AI systems aren’t transparent, which may ultimately harm science, policy-making, content creators, and users.

In a multidisciplinary effort to address transparency issues, researchers at Stanford University, the Massachusetts Institute of Technology (MIT), and Princeton University evaluated 100 transparency aspects of the most common AI systems. The Foundation Model Transparency Index (FMTI) uses a 100-point index to score how the 10 largest companies develop, operate, and use their AI foundation models. Foundation models are the core models upon which more specialized AI applications are built (including so-called large language models, or LLMs).

Ladies and gentlemen, let’s announce the winner with the highest transparency score: Meta’s Llama 2! But before we get too enthusiastic, note that Llama 2 received only 54 points out of 100. In fact, the scores of the top five AI systems ranged between 40 and 54.

The least transparent AI system, Amazon’s Titan Text, gathered a meager 12 points. However, Amazon explained that Titan Text is still in private preview, suggesting its score may be premature.

Foundation Model Transparency Index Total Scores, 2023. Source: 2023 Foundation Model Transparency Index.

The importance of AI transparency

While the results may seem acceptable at first glance, the authors stress concerns about the increasing opaqueness of these AI models. None of the companies offered market information, such as statistics about the users’ geographical locations. Strikingly, most of them don’t disclose information about their use of copyrighted content or labor practices, which have previously been flagged.  

With AI technology advancing rapidly, journalists and scientists must grasp certain aspects of the “black box” that AI represents. Although the inner workings of trained AI models remain unclear, we should at least know what data influences their output.

Policymakers also have a vested interest in these details. Current AI models pose serious issues and raise questions about bias, labor, and intellectual property. Policymakers need access to the training data of AI models and the working conditions associated with AI systems to make ethical and informed decisions.

Last but not least, the general public is interested in knowing where our information comes from for the same reasons described above. As these AI models become more prominent in content creation, source verification becomes increasingly crucial. (Our previous post on the threats of AI highlights these issues.)

“Transparency should be a top priority for AI legislation.”

Companies sometimes cite market competition as a reason for their lack of transparency. But the report indicates that secrecy is unnecessary to gain a competitive edge. “Our intent is to create an index where most indicators are not in contention with competitive interests; by looking at precise matters, the tension between transparency and competition is largely obviated,” says one of the authors, Rishi Bommasani.

To encourage AI transparency, the researchers propose a shared platform where companies can earn FMTI points by providing updated information. This approach could streamline efforts to track transparency improvements. “It will be much better if we only have to verify the information rather than search it out,” Bommasani explained.

In the age of AI, transparency has become an imperative, and it’s not just a concern for researchers and policymakers. It’s a matter of public interest.

As we continue to witness the rise of AI, we should emphasize the importance of understanding the sources and processes behind these technologies. The recent Foundation Model Transparency Index shows us we have a long way to go in tackling AI transparency issues. It’s not just about trust; it’s about accountability, fairness, and informed decision-making.

Whether you’re a journalist, scientist, policymaker, or an everyday user, the call for transparency should resonate with us all. As we move forward, it’s time for AI legislation to place transparency at the forefront. In the long term, secrecy is an obstacle to progress. Let’s prioritize transparency in the AI revolution.

What do you think? Should AI systems become more transparent in the interest of science and the public?
