White House Sets AI Safeguards, Tech Companies Accept Them

The promise of generative AI is matched only by its risks. Enthusiasm for the technology runs high, but even AI developers agree that it carries unpredictable risks.

Authorities in the US and EU have been scrambling to create legal frameworks for AI development that would mitigate its risks while allowing the public to enjoy its commercial and productivity-related benefits.

The Potential Risks of Generative AI

According to AI itself, the predictable risks it carries are numerous and significant. As the technology becomes more capable, new, unforeseeable risks may emerge.

Among the risks of AI we can currently comprehend are:

  • Malicious input manipulation. Bad actors may craft inputs that steer generative AI toward skewed or harmful output.
  • Fake content and misinformation. Through generative AI, malicious actors can create convincing fake content to manipulate public opinion, denigrate someone, or spread disinformation.
  • Biases and bias amplification. AI may amplify the biases present in its training data. If that data contains stereotypes, the AI will perpetuate and amplify them.
  • Privacy issues. AI may absorb identities and other sensitive personal information from its training data and later expose that information to the public.
  • Copyright issues. AI may generate content based on people’s work without crediting them.
  • Over-reliance on AI. People may be tempted to let AI think for them, and acting on unverified AI output can backfire spectacularly.
  • Unintended outcomes. Seemingly run-of-the-mill AI responses may trigger grave consequences for which no one assumes responsibility.

Addressing these issues and those that may arise in the future requires sweeping cooperation between developers, policymakers, and AI researchers. We are already witnessing the budding stages of this cooperation: the White House recently released a series of AI safeguards developers can observe before lawmakers enact relevant regulations, and organizations like OpenAI, Microsoft, Google, and Meta have already agreed to them. The commitment of these technology companies is entirely voluntary, however; the White House's proposal is not legally binding.

AI Safeguards

Safeguards are necessary to keep the risks of the new technology in check, and the ones the White House has proposed reflect the concerns above. Here's how the authorities and AI developers hope to mitigate the risks of AI.

  • Red-teaming. Developers have committed to letting third-party experts try to push their generative AI into rogue behavior. Researchers hope this will give them a better understanding of AI behavior and the risks it may create.
  • Information-sharing. Tech companies have agreed to share AI trust and safety information with the Government and other parties. Should they find that their creations are prone to bad behavior, they will share that information, allowing the authorities to tweak regulations to address emerging problems.
  • Focusing on cybersecurity. AI developers will invest heavily in cybersecurity measures.
  • Proactively researching AI risks. Although we know AI poses some risks, at this point we can't imagine what future risks may emerge. Only continuous, wide-scoped research can identify future risks, including societal ones, before they spin out of control.
  • Working with third parties. Developers have committed to working with third parties to identify risks and AI security vulnerabilities.
  • Watermarking. AI-generated visual and audio content will henceforth bear watermarks that make it easy to identify as AI-generated. This way, the stakeholders hope to head off the risks deepfake videos and photos may pose (a minimal sketch of the idea follows this list).
  • Solving actual problems. Developers will use cutting-edge versions of their products to solve real societal problems.
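The participating companies have not published their watermarking schemes, so the sketch below is purely illustrative: it hides a short provenance tag in the least significant bits of an image's red channel and reads it back. The MARKER string, the function names, and the LSB approach itself are all assumptions for demonstration; production watermarks rely on far more robust, tamper-resistant techniques.

```python
# Illustrative only: a naive least-significant-bit (LSB) watermark.
# Real AI provenance watermarks are more robust (frequency-domain or
# cryptographic), but the goal is the same: mark output invisibly.
from PIL import Image

MARKER = "AI-GENERATED"  # hypothetical provenance tag


def embed_watermark(img: Image.Image, text: str = MARKER) -> Image.Image:
    """Hide `text` in the lowest bit of the red channel, pixel by pixel."""
    out = img.convert("RGB")  # convert() returns a copy; original untouched
    pixels = out.load()
    width = out.size[0]
    bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite red LSB
    return out


def extract_watermark(img: Image.Image, length: int = len(MARKER)) -> str:
    """Read back `length` ASCII characters from the red channel's LSBs."""
    pixels = img.convert("RGB").load()
    width = img.size[0]
    bits = "".join(
        str(pixels[i % width, i // width][0] & 1) for i in range(length * 8)
    )
    return bytes(
        int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
    ).decode("ascii")


if __name__ == "__main__":
    generated = Image.new("RGB", (64, 64), "white")  # stand-in for AI output
    marked = embed_watermark(generated)
    print(extract_watermark(marked))  # -> AI-GENERATED
```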

The agreement expires once lawmakers hammer out relevant legislation to regulate AI. It is meant as a stopgap measure that lets the slow legislative process catch up with technological developments.

The fact that the agreement is non-compulsory underlines the limits of the Government's powers in this matter. The Biden Administration has already issued a series of guidelines for AI development, including the AI Risk Management Framework, which has no legal bearing either.

Managing AI Risk

The AI Risk Management Framework (AI RMF), published by the National Institute of Standards and Technology (NIST), does a good job of framing risk and laying out measures to understand and address the harms, risks, and impacts AI may present.

The AI RMF acts as a resource and guide for organizations developing or using artificial intelligence.

Its designers put it together to:

  • Improve the trustworthiness of AI systems
  • Promote responsible design
  • Encourage responsible deployment and use

The RMF is a practical guide with built-in flexibility, which it will need as the technology develops over time and sets the stage for new risks and opportunities.

Voluntary commitments from AI developers have become necessary even in the EU, where legislators are further ahead in assessing and addressing AI risks. Until relevant laws are in place, these commitments are the only check keeping the potential harms of AI at bay.
