  • No major foundation model developer is close to providing adequate transparency, revealing a fundamental lack of transparency in the AI industry.


As we transition from the “Wow!” stage of AI to the devil-in-the-details stage, various ways of holding AI developers accountable are arriving too. Here’s one from the Center for Research on Foundation Models that contains an impressive amount of data for us to analyze. Or at least to have ChatGPT analyze for us?

Some believe the development of AI will follow the arc of other information technologies, and of behemoths like Meta, Google, Amazon, Apple, Microsoft et al., which in their early days talked about “don’t be evil” (Google’s motto) but lost public-interest goals in the shuffle toward stock options, quarterly returns, and market dominance.


The 2023 Foundation Model Transparency Index was created by a group of eight AI researchers from Stanford University’s Center for Research on Foundation Models (CRFM) and Institute on Human-Centered Artificial Intelligence (HAI), MIT Media Lab, and Princeton University’s Center for Information Technology Policy. The shared interest that brought the group together is improving the transparency of foundation models.

The acknowledgements of those who helped are included below as evidence of what is perhaps a large number of motivated researchers hoping to guide the birth of AI in “appropriate” directions.


Acknowledgments. We thank Alex Engler, Anna Lee Nabors, Anna-Sophie Harling, Arvind Narayanan, Ashwin Ramaswami, Aspen Hopkins, Aviv Ovadya, Benedict Dellot, Connor Dunlop, Conor Griffin, Dan Ho, Dan Jurafsky, Deb Raji, Dilara Soylu, Divyansh Kaushik, Gerard de Graaf, Iason Gabriel, Irene Solaiman, John Hewitt, Joslyn Barnhart, Judy Shen, Madhu Srikumar, Marietje Schaake, Markus Anderljung, Mehran Sahami, Peter Cihon, Peter Henderson, Rebecca Finlay, Rob Reich, Rohan Taori, Rumman Chowdhury, Russell Wald, Seliem El-Sayed, Seth Lazar, Stella Biderman, Steven Cao, Toby Shevlane, Vanessa Parli, Yann Dubois, Yo Shavit, and Zak Rogoff for discussions on the topics of foundation models, transparency, and/or indexes that informed the Foundation Model Transparency Index. We especially thank Loredana Fattorini for her extensive work on the visuals for this project, as well as Shana Lynch for her work in publicizing this effort.


(One takeaway from the above list is the extremely broad range of names, suggesting a melting pot of sorts in the field. Whether this reflects a worldwide roster or a USA melting-pot roster is hard to know without talking to the project organizers. But it is of interest either way, as AI is clearly going to be a worldwide phenomenon, as well as quite possibly a nationalistic endeavor competing for world power.)