Future of AI governance: Mapping the regulatory landscape

Regulating AI will require collaboration among academia, industry, policy experts, and international agencies
A robot looking at the camera. — Pexels

Sam Altman, CEO of OpenAI, urged lawmakers to consider regulating AI during his Senate testimony last year. The strategies he proposed, namely creating an AI regulatory agency and requiring licensing for companies, drew considerable interest. Licensing is an understandable approach, since regulating the AI industry will need to account for companies’ economic power and political sway.

Do we need an agency to regulate AI?

Across the world, government officials and policymakers have already started to tackle the issues raised in Altman’s testimony. The European Union’s AI Act operates on a risk model that categorises AI applications into three levels of risk: unacceptable, high risk, and low risk. Regulating AI will also require collaboration among academia, industry, policy experts, and international agencies.
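To make the tiered approach concrete, here is a minimal Python sketch of how such a risk classification could be represented. The example applications and their tier assignments are illustrative assumptions for this sketch, not the Act’s legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers as summarised in this article's description of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted under strict obligations
    LOW = "low"                    # minimal transparency duties

# Hypothetical mapping from application type to tier; the entries below
# are assumptions for illustration, not the Act's actual annexes.
RISK_REGISTRY = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "spam_filter": RiskTier.LOW,
}

def classify(application: str) -> RiskTier:
    # Default unknown applications to HIGH so they receive scrutiny rather
    # than slipping through -- a deliberately conservative design choice.
    return RISK_REGISTRY.get(application, RiskTier.HIGH)

for app in ("hiring_screening", "spam_filter", "social_scoring"):
    print(f"{app}: {classify(app).value} risk")
```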

The Equal Employment Opportunity Commission and the Federal Trade Commission are among the federal agencies that have already issued guidance on some of the risks that come with AI. Other agencies, such as the Consumer Product Safety Commission, have a part to play as well.

Emphasising licensing of auditors, not companies

Altman proposed the idea of licensing companies before they release AI systems to the public. In particular, he was referring to artificial general intelligence, meaning AI systems with humanlike intelligence that could pose a threat to humanity. An alternative is to license the auditors who scrutinise such systems: algorithmic auditing would entail credentialing, standards of practice, and rigorous training.

Experts on AI fairness argue that addressing bias and fairness in AI cannot be achieved through technical methods alone; it requires broader risk-mitigation practices.
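As one illustration of what such a technical method looks like, and of its limits, here is a minimal Python sketch of a single statistic an algorithmic audit might report: the demographic parity gap, the largest difference in favourable-decision rates across groups. The decisions and group labels below are made up for the sketch.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest gap in positive-decision rates across groups.

    decisions: iterable of 0/1 model outcomes (1 = favourable decision)
    groups:    iterable of group labels aligned with `decisions`
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: a real audit would use actual model outputs and protected attributes.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(decisions, groups)
print(f"positive rates: {rates}, parity gap: {gap:.2f}")
```

A small gap on one metric does not make a system fair; as the experts note, meaningful auditing goes well beyond single-number checks.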

"Strengthening existing statutes on consumer safety, privacy and protection while introducing norms of algorithmic accountability would help demystify complex AI systems," said Anjana Susarla in The Conversation.

What about AI monopolies?

One thing that was not included in Altman’s testimony was the scale of investment needed to train large-scale AI models. Only a handful of companies, such as Meta, Amazon, and Microsoft, are responsible for building the world’s largest language models.

Given the lack of transparency around the training data these companies consume, AI ethics experts such as Emily Bender and Timnit Gebru have warned that large-scale deployment of such technologies without appropriate oversight risks amplifying machine bias at a societal scale.

Encouraging robust discussions among AI developers, policymakers, and stakeholders affected by the widespread implementation of AI is essential to establish the necessary accountability measures. Without robust algorithmic accountability practices, there is a risk of conducting superficial audits that merely create an illusion of compliance.