A first-of-its-kind AI bill moving through Connecticut’s legislature went up in smoke this week after the state’s governor said he would veto it if it reached his desk.
“I’m just not convinced you want all 50 states to do their own thing. I’m not sure you want Connecticut to be the first state to do that,” Gov. Ned Lamont told CT Insider, adding that the bill comes too soon.
“I said, ‘Why don’t we lead, together, and work with all the other governors?’” he said. “If you don’t think the feds are going to take the lead on this, maybe the states should, but you shouldn’t have one state do it. You should have us do it collectively.”
Over the past few years, 18 states have passed narrow AI laws regulating automated employment decision tools, mandating that individuals be notified when they are interacting with AI systems, and taking other targeted measures intended to protect people from the harms of unsafe or ineffective AI systems. The Connecticut bill, which passed the state Senate but is unlikely to advance in the House after Lamont expressed his intention to veto it, is different because of how broad it is. It would introduce rules on the development and use of both general-purpose AI systems and models considered high-risk, including a requirement to implement a risk management policy before releasing high-risk systems and mandates that technical documentation be maintained and, in some cases, made public, along with many other requirements. The bill would also ban the dissemination of certain synthetic images, establish an AI advisory council, provide AI training for workers, establish a computing cluster, and more.
The unraveling of the Connecticut bill encapsulates many of the concerns and challenges surrounding AI regulation. As flawed AI systems proliferate and rapidly improving deepfakes threaten everything from individuals to businesses to elections, lawmakers face more reasons to act every day. At the same time, they are wary of acting too early or too boldly. From the state level to the federal level, efforts to create AI laws are caught in a push and pull between moving too fast (and potentially stifling innovation) and not moving fast enough (leaving society open to predictable, widespread harm).
Indeed, while Lamont supports aspects of the bill, one of his main reasons for opposing it as a whole stems from concern that it would deter businesses pursuing AI.
“I worry that if it’s too burdensome and too regulatory, all the startups around AI won’t be in Connecticut. They’ll be in Georgia or Texas. And I don’t want that to happen,” he said. The state Department of Economic and Community Development similarly opposed the legislation during a recent public hearing, warning that some of the proposed regulations could hinder early-stage businesses.
Some industry stakeholders have expressed strong opposition, which seems to have had an impact on the bill and its support. Officials from the Virginia-based Consumer Technology Association (CTA), the trade association that holds the annual CES conference in Las Vegas, sent a three-page letter to Connecticut lawmakers last month describing the proposed legislation as a threat to the industry, warning that it would “impose significant new duties on developers and deployers of AI” and “effectively impose strict new compliance obligations that reach far beyond Connecticut.” Douglas Johnson, the organization’s VP and author of the letter, has since expressed further concerns about a “fragmented approach” and called for policy at the federal level.
While tech executives’ warnings about stifled innovation are usually made in their own self-interest, both the CTA officials and Lamont have a point. A state-by-state approach would be cumbersome for companies from a compliance perspective. And as we’ve seen with the states’ piecemeal approach to data privacy legislation, state laws make less sense when we’re talking about technology that doesn’t start or end at state borders. The lack of a federal data privacy law in the United States, something the majority of countries around the world do have, has left Americans open to widespread and highly damaging data privacy breaches. Countries around the world are currently grappling with AI legislation, and the EU AI Act, the first comprehensive AI law to be passed, kicked things off in March. So while the lack of federal AI legislation doesn’t make the U.S. an outlier just yet, we’ve seen how this story plays out.
Lamont also acknowledged that these are extremely complex issues and expressed concern that the bill was moving too quickly, being pushed through in a matter of days after two weeks of intense drafting. Instead, he voiced support for a smaller, pending bill that would criminalize the distribution of deceptive synthetic media within 90 days of an election, as well as AI-generated deepfake porn. That seems like a course of action everyone can agree on, but wouldn’t it be more effective if Congress passed such a law instead?
And with that, here’s more AI news.
AI in the news
OpenAI is working on a web search feature for ChatGPT. That’s according to Bloomberg. The feature would let users ask ChatGPT questions and get answers drawn from the web, with references included in the responses; one version would also incorporate images. ChatGPT currently offers information from the web only to paying users and in a limited capacity. The new product appears to expand on that capability, and to compete with Google and unicorn AI search startup Perplexity AI, whose chatbot-style approach to finding information on the web has been gaining traction.
Google DeepMind and Isomorphic Labs unveil AI breakthrough for biology research. Google DeepMind and its spinoff Isomorphic Labs have created a new AI model that they say can help predict both the structure and the interactions of most molecules involved in biological processes, including proteins, DNA, and RNA, as well as some of the chemicals used to make new drugs. It could be a giant leap forward for biological research. The companies are allowing researchers working on non-commercial projects to query the model for free, and Isomorphic Labs is already using the system internally to accelerate drug discovery, one of the most anticipated use cases for generative AI. (Separate research from BCG found that AI-discovered drugs have an 80-90% success rate in Phase I clinical trials, compared with an average success rate of 40-65% for drugs discovered the traditional way.) You can read my colleague Jeremy Kahn’s coverage of the new model, called AlphaFold 3, here.
Stack Overflow users are rebelling against the company’s deal with OpenAI. That’s according to Tom’s Hardware. After the company signed an agreement allowing OpenAI to use its posts to train ChatGPT, users began deleting or editing their questions and answers to prevent them from being used for AI training. According to the report, users participating in the protest are now being suspended and banned en masse. Stack Overflow is a longstanding pillar of the developer community, where users share coding knowledge and ask and answer questions. With all the interest in teaching LLMs to code, it makes sense that companies like OpenAI would want to get their hands on the data. But just as writers, artists, musicians, and other creators take issue with AI companies training models on their work, programmers aren’t okay with seeing their contributions used for profit without their consent. It doesn’t help that for years, Stack Overflow had a strict policy banning the use of generative AI to contribute to the site.
Election officials undergo training to identify and respond to AI threats. This is an interesting and important read from The Washington Post detailing an intensive, multi-day training exercise held recently to prepare Arizona election officials for the upcoming election. Participants at the training in Phoenix studied AI-generated content that could be used to interfere with elections, from fake social media posts designed to suppress voter turnout to AI-generated voicemails, purportedly from the Secretary of State’s office, asking them to keep polling places open without court orders. Even the introductory video featuring Arizona Secretary of State Adrian Fontes that officials watched to begin the training, they later learned, was a highly convincing deepfake. One official who completed the training described the simulations as “mind-blowing” and disturbing. “By the end of the second day, you’re like: don’t trust anybody,” he told the Post.
Fortune on AI
AI panic looks a lot like early criticism of electricity. We know how that turned out. – Rachyl Jones
OpenAI is rolling out a new plan to protect creators’ works — here’s how it won’t actually solve AI’s copyright crisis. – Sharon Goldman
Is your company moving too slowly, or too fast, on gen AI? – John Kell
If they have AI skills, Gen Xers can swipe millennials’ jobs, LinkedIn and Microsoft data shows – Orianna Rosa Royle and Jane Thier
Data-driven tactics are great, but Liverpool FC’s real AI goal is to help fans get more of a kick out of their content – Molly Flett
Politicians and nonprofits will struggle to keep AI under control, but corporate boards can’t afford to fail – Jeffrey Sonnenfeld (commentary)
AI calendar
May 14: Google I/O
May 21-23: Microsoft Build in Seattle
May 21-22: AI Seoul Summit in Seoul, South Korea
June 5: FedScoop’s FedTalks 2024 in Washington, DC
June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore
July 15-17: Fortune Brainstorm Tech in Park City, Utah (Register Here)
July 30-31: Fortune Brainstorm AI Singapore (Register Here)
August 12-14: Ai4 2024 in Las Vegas
Eye on AI numbers
129%
That’s how much job postings for AI security and compliance-related roles have increased so far in 2024 compared with the same period last year, according to data ManpowerGroup shared with Eye on AI.
The workforce solutions company also saw a 117% year-over-year increase in prompt engineer listings. But overall, lead engineer and senior engineer roles in generative AI product development are the most sought-after AI positions, according to the firm. Anthropic is among the top companies posting these roles, with a staggering 459% increase in its listings.