
Responsible Innovation Labs (RIL) Voluntary AI Compliance Agreements

11.14.2023

RIL, a group of more than 40 venture capital firms (listed at https://www.rilabs.org/), announced entry into “Voluntary Responsible AI Commitments for Startups & Investors,” prescribing terms for responsible AI development by the VCs and the startups they control. RIL presents the Commitments as “part of an effort to enact some guardrails for potentially thousands of startups across the AI industry.” See https://www.rilabs.org/responsible-ai.


The Responsible AI Commitments focus on:

  • Securing organizational buy-in by incorporating responsible AI practices and implementing internal governance processes (including a forum for diverse stakeholders to provide feedback on the impact of new products and identify risk-mitigation strategies);

  • Fostering trust through transparency and documentation concerning how and why AI systems are built and/or adopted (including use of third-party AI products);

  • Forecasting AI risks and benefits (using assessments to inform product development and risk-mitigation efforts);

  • Auditing data and system outputs for harm (including bias/discrimination), testing product safety (using adversarial testing), and documenting results; and

  • Making regular and ongoing improvements.


RIL’s “core working group” reportedly developed the AI “commitments and companion protocol” in collaboration with “startup founders, investors, and policymakers including the U.S. Department of Commerce.”


The protocol is available here.


Criticism from AI companies includes statements that:

  • “[i]t is literally impossible to *ensure* safety of a general purpose model, and attempts to do so are likely to *reduce* safety”;

  • “public statements like RAI endanger open-source AI research and contribute to regulatory capture”;

  • VCs/investors are ill-equipped to do things like audit AI models; and

  • most of the language is focused on VC/LP risk, rather than advancing AI development.



Members of RIL’s core working group represent:

  • RIL (Hemant Taneja, Co-founder (and CEO/Managing Director, General Catalyst); Jon Zieger, Founding Executive Director; Jama Adams, COO; Lauren Wagner, Advisor (and Fellow, Berggruen Institute));

  • VC firms (Hemant Taneja (above), CEO/Managing Director, General Catalyst (and Co-founder, RIL); Chris Kauffman, Principal, General Catalyst; Drake Pooley, Strategic Initiatives, General Catalyst; Kevin Guo, Co-founder/CEO, Hive; Aneesh Chopra, President, CareJourney (former U.S. CTO); Daniel Gross, Investor (formerly Apple, Y Combinator); Joy Tuffield, Growth Equity, Generation Investment Management);

  • Tech companies (Chloé Bakalar, Chief Ethicist, Meta; Jonathan Frankle, Chief Scientist, MosaicML; Liane Lovitt, Senior Policy Analyst, Anthropic; John Dickerson, Co-founder/Chief Scientist, Arthur.ai (and Professor, U. Maryland); Munjal Shah, Co-founder/CEO, Hippocratic AI; Navrina Singh, Founder/CEO, Credo AI (and Member, National AI Advisory Committee (NAIAC)); Paula Goldman, Chief Ethical and Human Use Officer, Salesforce (and Member, NAIAC); Rahul Roy-Chowdhury, CEO, Grammarly; Michelle K. Lee, VP, Amazon Web Services Machine Learning Solutions Lab (former Under Secretary of Commerce for IP and Director of the USPTO)); and

  • Academia (Dan Huttenlocher, Dean, MIT Schwarzman College of Computing).

RIL also claims to be working closely with, and to be endorsed by, OpenAI and Anthropic, which are “actively contributing to [its] developing protocol”; and claims that “interest in contributing” to the project has been expressed by the Stanford Human-Centered AI Institute, McKinsey, Schmidt Futures, the Ford Foundation, LinkedIn, the Data & Trust Alliance, the Henry R. Kravis Foundation, and the Business Roundtable.
