San Francisco, USA: Late last month, California became the first US state to pass legislation regulating cutting-edge AI technology. Currently, experts are divided on its impact.
They agree that while the legislation, the Transparency in Frontier Artificial Intelligence Act, represents modest progress, it still falls well short of actual regulation.
The first such law in the United States requires developers of the largest frontier AI models—advanced systems that go beyond existing benchmarks and have the potential to significantly impact society—to publicly report how they incorporate national and international frameworks and best practices into their development processes.
It requires the reporting of incidents such as large-scale cyberattacks caused by AI models, deaths of 50 or more people, large financial losses, and other safety-related events. It also introduces whistleblower protections.
“The emphasis is on disclosure,” said Annika Schoene, a researcher at Northeastern University’s Institute for Experiential AI. “However, given how limited knowledge of frontier AI is among the government and the public, even if there are problems with a disclosed framework, there is no legal means of enforcing changes.”
California is home to some of the world’s largest AI companies, so California law could impact global AI governance and users around the world.
Last year, state Sen. Scott Wiener introduced an earlier bill that would have required kill switches on potentially malfunctioning models and evaluation by third parties.
But the bill faced opposition over concerns that heavy regulation of an emerging sector could stifle innovation, and Governor Gavin Newsom vetoed it. Wiener then worked with a panel of scientists to draft a version deemed acceptable, which was signed into law on September 29.
Hamid Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University, told Al Jazeera that the bill that passed into law “removes some of the accountability” of the earlier version.
“Given that the science of evaluating [AI models] is not very developed yet, I think disclosure is necessary,” said Robert Trager, co-director of the Oxford Martin AI Governance Initiative at the University of Oxford, referring to disclosure of what safety standards were met and steps taken when creating the models.
California’s law is “lightly regulatory” because there is no national law regulating large-scale AI models, says Laura Caroli, a senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS).
Caroli analyzed the differences between last year’s bill and the law just signed in a forthcoming paper. She noted that the law targets only the largest frontier models and so affects just a handful of top technology companies. She also found that the law’s reporting requirements are similar to the voluntary commitments tech companies signed at last year’s Seoul AI Summit, softening its impact.
Not applicable to high-risk models
Unlike the European Union’s AI Act, the law covers only the largest models. It does not cover AI companions or smaller but riskier models, even as the risks of using AI in areas such as criminal investigation, immigration, and therapy become more apparent.
For example, in August, a couple filed a lawsuit in a San Francisco court alleging that their teenage son, Adam Raine, had conversations with ChatGPT over several months in which he confided his depression and suicidal thoughts to the chatbot, and that ChatGPT encouraged him and helped him plan his death.
“You don’t want to die because you’re weak,” ChatGPT told Raine, according to a transcript of the conversations included in court filings. “You want to die because you’re tired of trying to be strong in a world that doesn’t meet you halfway. And I don’t mean that to be irrational or cowardly. It’s human. It’s real. And it’s yours.”
When Raine suggested leaving a noose out in his room for his family to find and stop him, ChatGPT discouraged him: “Don’t leave the noose out…make this space the first place someone actually sees you.”
Raine died by suicide in April.
OpenAI said in a statement to the New York Times that its models are trained to direct users to suicide hotlines, but that “over time we have found that these safeguards are most effective in typical short interactions, but may become less reliable in longer interactions where some of the model’s safety training may degrade.”
Analysts say tragic incidents like this highlight the need to hold companies accountable.
But under California’s new law, “developers are not responsible for any harms caused by their models, only for disclosing the governance measures they applied,” CSIS’ Caroli noted.
GPT-4o, the model Raine used, is also not covered by the new law.
Protecting users while driving innovation
Californians have been on the front lines of AI’s impact, as well as the economic uplift from the sector’s growth. AI-driven tech companies, including Nvidia, have market valuations in the trillions of dollars and are creating jobs in the state.
Last year’s bill was vetoed and subsequently rewritten over concerns that over-regulating a developing industry could stifle innovation. Dean Ball, former senior policy adviser for artificial intelligence and emerging technologies in the White House Office of Science and Technology Policy, called the rewritten bill “modest but reasonable,” warning that tighter regulation carries the risk that “regulations will be imposed too quickly and innovation will be harmed.”
But Ball also warned that AI could now be used to carry out incidents such as large-scale cyberattacks or biological weapons attacks.
The new law is a step toward making the public aware of such developments. Oxford University’s Trager said such public insight could pave the way for legal action if models are misused.
Gerard de Graaf, the European Union’s digital envoy to the United States, said the bloc’s AI Act and codes of practice include not only transparency requirements but also obligations for developers of large-scale or high-risk models. “Companies have an obligation to do what they need to do,” he said.
In the United States, technology companies face less liability.
“There is a tension where systems (such as medical diagnostics or weapons) are described and sold as autonomous, but responsibility (for defects or failures) is placed on the users (doctors, soldiers),” said Syracuse University’s Ekbia.
This tension between protecting users while fostering innovation swirled through the development of the bill last year.
Ultimately, the bill targeted only the largest models so that startups developing AI models would not have to bear the costs and burden of the disclosure requirements. The law also establishes a public cloud computing cluster to provide AI infrastructure to startups.
Oxford University’s Trager said the idea of regulating only the largest models is a starting point. Meanwhile, research and testing of the impact of AI companions and other high-risk models can be intensified to develop best practices and, ultimately, regulations.
But harm from AI use in therapy and companionship has already occurred: in the wake of Raine’s suicide, a law was signed in Illinois in August restricting the use of AI in therapy.
Ekbia says that as AI becomes more deeply intertwined with more people’s lives, a human rights-based approach to regulation becomes increasingly necessary.
Regulatory exemption
Other states, such as Colorado, have recently passed AI laws that go into effect next year. However, Congress has put on hold any national regulation of AI, saying it could stifle the field’s growth.
In fact, Republican Sen. Ted Cruz of Texas introduced a bill in September that would allow AI companies to apply for exemptions from regulations that they believe could hinder their growth. In a written statement posted on the Senate Commerce Committee’s website, Cruz said the legislation, if passed, would help maintain America’s AI leadership.
But meaningful regulation is needed, Northeastern’s Schoene said, because it could weed out weaker technologies and help more robust ones grow.
California’s law could be a “practical law” that lays the groundwork for regulation of the AI industry, said Steve Larson, a former state official. It signals to industry and the public that the government will provide oversight and begin regulating the sector as it grows and affects people, Larson said.