
Government Regulation Of AI Is Coming, So Be Prepared

Hate it or love it, an artificial intelligence (AI) revolution is going to affect our job market. With the launch of ChatGPT in November 2022, it isn’t hyperbole to suggest our economy sits at a historic inflection point. The impact of AI depends on many variables. Importantly, government regulation will play a critical role in mitigating some of the doomsday projections faced by certain industries.

AI regulation in the U.S. is still in its infancy

The first domino to fall in AI regulation came at a Senate subcommittee hearing on May 16, 2023, in Washington D.C. Sam Altman, the CEO of OpenAI, testified before a panel of Senators that his own company’s product needs to be regulated. He encouraged them to adopt regulations that create licensing and testing requirements.

Altman welcomed the idea of creating a new federal agency tasked with overseeing AI development in the U.S. For example, to enter the market, the agency could require AI systems to meet certain security thresholds. In addition, the agency could restrict the capabilities of those systems to ensure reliability. However, this level of oversight and regulation of development could meet firm resistance because of its potential to stifle product advancement.

There’s also the threat of monopolization. The Federal Trade Commission (FTC) may be called upon in the near future to investigate major tech companies trying to corner the AI market. In fact, the FTC opened an investigation in mid-July into OpenAI’s alleged mishandling of personal data and consumer protections.

Other companies invested in the AI arms race, such as Google and Meta, should take note of the FTC’s actions. Whether it be consumer protection violations or unfair business practices that limit competitiveness in the market, the FTC has turned its watchful eye towards enforcing fairness and security in the AI market.

The White House hasn’t remained idle either. The Biden administration recently hosted personnel from Google, Meta, OpenAI, and four other major tech companies to discuss the associated risks and security measures needed when, inevitably, more advanced AI rolls out into the market. According to a fact sheet released by the executive branch, the administration has secured voluntary commitments from the companies to manage the risks posed by the rapidly growing development and use of AI technology.

While these commitments remain surface-level, they demonstrate a willingness to keep channels open and suggest that deeper, and preferably more substantial, commitments are possible down the road.

European Union set to lead on AI regulation

In contrast to the U.S., the European Union has adopted a distinct approach to regulate artificial intelligence through the introduction of the “AI Act.” Representing a groundbreaking piece of legislation, the Act marks the EU’s first significant effort to govern AI technologies. It classifies AI into three categories of risk: “low or minimal,” “unacceptable,” and “high.”

AI with “low or minimal” risk is deemed to pose little or no threat to people’s health, safety, or fundamental human rights. An AI algorithm that translates Italian to English, for instance, falls into this category. These applications of AI are widely accepted because their effects on human health, safety, or fundamental rights are limited.

On the other hand, AI is deemed “unacceptable” if it threatens health or safety or violates fundamental human rights. An example would be an AI algorithm designed to perpetuate harmful biases based on race, gender, or religion. The Act strictly prohibits the use of these technologies in the EU market.

The crux of the AI Act, however, lies in the regulation of “high-risk” AI technologies, which have significant implications for health, safety, or fundamental rights but do not reach the level of being considered “unacceptable.”

Examples of such high-risk technologies include AI-powered financial trading systems, AI in employment decision-making, and AI technologies in medicine. To ensure compliance, these technologies must meet mandatory requirements and demonstrate conformity through a rigorous assessment process. The Act mandates that conformity assessment for the most critical high-risk AI technologies be conducted by independent “notified bodies,” ensuring an extra layer of oversight.

The AI Act represents the first significant step toward the regulation of AI technologies. The EU Council and EU Parliament are in the process of finalizing the exact text. Once the details are agreed upon, the Act will most likely be signed into law within the year. Whether it becomes the global standard remains to be seen.

What happens next?

Forecasting where the U.S. federal government will ultimately come down on AI regulation is impossible at this point. Additionally, it remains unclear how effective any regulation of AI will be.

Recent actions by Congress, the FTC, and the White House suggest they’re interested, want to stay on top of the developing technology, and are already collecting information to inform future decisions. The federal government can wait and see how the AI Act in Europe will play out. Using that Act as a model, it can adopt similar legislation or pivot in another direction if the Act falls short.

Whatever path it takes, businesses should stay alert to how the new legal framework unfolds and, ultimately, affects their interests.

By Bryanna Devonshire and Nicolas Harris.  Ms. Devonshire is an attorney with Sheehan Phinney Bass & Green PA in Manchester, New Hampshire. Ms. Harris was a 2023 summer associate with Sheehan Phinney.

[9/2023]
