US, Britain, other countries sign agreement to make AI 'secure by design'

The agreement comprises recommendations such as proper testing and assessment of AI models before launch
AI letters are placed on a computer motherboard in this illustration taken June 23, 2023. — Reuters

Besides the United States and Britain, 16 other countries have signed a comprehensive accord on keeping artificial intelligence safe from rogue actors, urging companies to create AI systems that are "secure by design."

According to Reuters, the 18 countries urge companies developing and using AI to build and deploy it in a way that keeps the public safe from misuse. The agreement is not binding, however; it only lays out a set of recommendations.

It was imperative that so many countries put their names to the idea that AI systems needed to put safety first, said Jen Easterly, the director of the U.S. Cybersecurity and Infrastructure Security Agency.


"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly told Reuters.

She added that the guidelines serve as "an agreement that the most important thing that needs to be done at the design phase is security."

The pact is the latest and most significant in a series of initiatives by governments around the world to shape the development of AI.

Besides the United States and Britain, the countries part of the agreement include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.


Though it does not address ethical questions around the use of AI or how the data that feeds these models is gathered, the framework does tackle how to keep the technology from being hijacked by hackers and other intruders.

It also includes recommendations such as releasing models only after proper security testing and assessment.

The rise of AI has fuelled mounting concerns, including fears that it could increase fraud, drive job losses, disrupt democratic processes and cause other harms.

France, Germany and Italy also recently reached an agreement on how artificial intelligence should be regulated, supporting "mandatory self-regulation through codes of conduct" for so-called foundation models of AI, which are designed to produce a broad range of outputs.