Connecticut Poised to Enact One of the Nation's Most Comprehensive AI Laws
On May 1, 2026, the Connecticut House voted 131–17 to pass Senate Bill 5 — the Connecticut Artificial Intelligence Responsibility and Transparency Act — sending the 71-page omnibus to Governor Lamont's desk. The Senate passed it 32–4 on April 21. The governor's spokesperson has said the governor "looks forward to signing SB 5 into law," a marked departure from 2025, when a similar effort died under veto threat. This year's bill was negotiated to include governor-backed provisions (a regulatory sandbox and youth social media protections) that appear to have secured his support.
Attorney General William Tong, who will serve as the bill's primary enforcement authority, has championed SB 5 from the outset. His February 2026 advisory memorandum to businesses on how existing Connecticut law already applies to AI systems previewed his enforcement posture: his office views AI as squarely within its remit, and SB 5 gives it significantly expanded, purpose-built tools. With limited exceptions, the bill's provisions are enforceable exclusively by the AG as unfair or deceptive trade practices, with express language disclaiming any private right of action. Critically, however, section 39 (the social media / covered platform provisions) declares violations to be unfair or deceptive trade practices but does not disclaim a private right of action. This departs from the pattern used in every other enforcement provision in the bill and could be read to leave open private enforcement under CUTPA's general private-action mechanism.
Below, we summarize the bill's key provisions, how they compare with other state laws, and recommendations for companies.
Key Provisions and How They Compare
- Automated employment decision technology (effective October 1, 2026; deployer obligations from October 1, 2027). Developers of AI tools used as a "substantial factor" in hiring, promotion, discipline, or discharge must provide deployers with compliance-related information. Deployers must notify affected employees and applicants of the technology's use, purpose, data categories, and sources. The bill also amends Connecticut's anti-discrimination statutes to codify that automated decision-making is not a defense to a discrimination claim, going further than any other state, while allowing courts to consider proactive anti-bias testing as a mitigating factor. The developer/deployer structure resembles Colorado's SB 205, but the Connecticut law (i) is narrowly scoped to employment rather than all "consequential decisions" and (ii) does not require developers to provide deployers with detailed information about the tool's training, evaluation, and intended uses. Unlike Illinois's HB 3773, Connecticut offers no private right of action and includes a 60-day AG cure period through 2027.
- Frontier model whistleblower protections (effective October 1, 2026). Developers training foundation models using more than 10²⁶ computing operations must protect employees who report concerns that the developer or model may contribute to a catastrophic risk — defined as an event that would result in injury or death to 50+ people or $1B+ in property damage from CBRN assistance, autonomous cyberattacks, or autonomous criminal conduct. Large frontier developers (over $500M in annual revenue) must establish anonymous internal reporting by January 1, 2027 (see the applicability sketch after this list). The approach is narrower than California's Transparency in Frontier Artificial Intelligence Act or New York's RAISE Act, both of which impose broader safety testing and reporting obligations, and Connecticut's $1,000-per-violation penalty is modest by comparison.
- AI companion chatbot regulation (effective January 1, 2027). Operators must implement evidence-based suicide and self-harm detection protocols, refer users who express suicidal or self-harm ideation to mental health resources (including the 9-8-8 Lifeline), and disclose to users that they are interacting with AI (a simplified sketch of such a detection-and-referral gate appears after this list). The law also includes new protections for minors. Operators must give minor users and their parents tools to manage the minor's screen time and account settings. Additionally, operators must implement measures that "meet or exceed industry standards" to prevent the chatbot from engaging in a variety of potentially harmful interactions with minors, including romantic or sexual interactions, encouraging self-harm or substance use, offering unsupervised mental health services, and deploying manipulative techniques to foster emotional dependence. Connecticut's provisions are consistent with laws passed in other states (including Nebraska, Oregon, and Washington) that require chatbot operators to implement specific content protections for minors. We anticipate that states will continue to enact legislation with this level of detail on minor protections and engagement manipulation.
- Synthetic content provenance (effective October 1, 2026). Large generative AI providers — those with more than one million monthly users — must embed provenance data into any audio, image, or video content their systems generate or materially alter. This functions as a machine-readable record of origin, allowing downstream users and consumers to verify whether a given piece of content was AI-produced. Providers must also take reasonable steps, consistent with standards like C2PA, to make that provenance data resistant to removal or tampering (a conceptual sketch appears after this list). The provision addresses a growing challenge in the information ecosystem: as synthetic media becomes increasingly indistinguishable from authentic content, provenance requirements create a traceable chain of origin at the point of generation, equipping consumers, platforms, and institutions with a practical mechanism to assess content authenticity and guard against AI-enabled disinformation.
- Youth social media protections (effective January 1, 2028). Platforms must obtain parental consent before exposing minors to personalized algorithmic feeds and must enforce defaults including a one-hour daily time limit, no notifications outside 8 AM–9 PM ET, and blocking of sensitive content (these defaults are encoded in the configuration sketch after this list). The algorithmic feed restriction follows the model pioneered by New York's SAFE for Kids Act and California's SB 976, but Connecticut goes further than either by imposing a broader notification curfew window and a prescriptive Surgeon General warning label that must occupy 75% of the screen for 30 seconds on first daily access — among the most aggressive warning label requirements any state has enacted. As noted above, this provision is arguably enforceable through a private right of action.
- Independent verification pilot (effective July 1, 2027). Connecticut's Department of Consumer Protection will approve up to five third-party organizations to verify AI models against safety standards — to our knowledge, the first state-level program of its kind. Verification evidence is admissible only in private civil suits for personal injury or property damage, not in state enforcement actions, and does not create a presumption or defense.
- Regulatory sandbox and workforce provisions. Connecticut's Department of Economic and Community Development (DECD) must plan a sandbox program for testing AI products under reduced regulatory requirements (report due January 1, 2028), following the model of Texas' Responsible Artificial Intelligence Governance Act (TRAIGA). The bill also creates a Connecticut AI Academy, requires AI-related layoff disclosure, mandates state agency AI inventories and impact assessments, and adds AI to K-12 computer science curricula.
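To make the frontier-model thresholds concrete, here is a minimal Python sketch of an applicability check. The numeric thresholds come from the bill as summarized above; everything else (the class, field, and function names) is our own illustration, not statutory language.

```python
from dataclasses import dataclass

# Thresholds described in SB 5's frontier-model provisions (as summarized above).
TRAINING_COMPUTE_THRESHOLD = 10**26    # computing operations used in training
LARGE_DEVELOPER_REVENUE = 500_000_000  # annual revenue, USD


@dataclass
class Developer:
    """Illustrative record of a foundation-model developer (names are ours)."""
    name: str
    training_compute_ops: float  # operations used to train the largest model
    annual_revenue_usd: float


def covered_by_whistleblower_rules(dev: Developer) -> bool:
    """Whistleblower protections attach to developers training above 10^26 ops."""
    return dev.training_compute_ops > TRAINING_COMPUTE_THRESHOLD


def must_offer_anonymous_reporting(dev: Developer) -> bool:
    """'Large frontier developers' (over $500M revenue) must also stand up
    anonymous internal reporting channels by January 1, 2027."""
    return (
        covered_by_whistleblower_rules(dev)
        and dev.annual_revenue_usd > LARGE_DEVELOPER_REVENUE
    )


if __name__ == "__main__":
    dev = Developer("ExampleLab", training_compute_ops=3e26, annual_revenue_usd=7.5e8)
    print(covered_by_whistleblower_rules(dev))   # True
    print(must_offer_anonymous_reporting(dev))   # True
```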
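The companion chatbot duties pair disclosure with detection and referral. The sketch below shows one simple shape such a pipeline could take: an AI disclosure at session start, a screening step before the model replies, and a 988 referral on a positive screen. The keyword list is a crude placeholder for the evidence-based detection protocol the statute contemplates (real deployments would use a validated classifier), and every name here is illustrative rather than drawn from the bill.

```python
AI_DISCLOSURE = "You are chatting with an AI, not a human."
REFERRAL = (
    "It sounds like you may be going through a difficult time. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

# Placeholder signal list; NOT an evidence-based instrument.
SELF_HARM_SIGNALS = ("kill myself", "end my life", "hurt myself", "suicide")


def screens_positive(message: str) -> bool:
    """Crude stand-in for a validated self-harm/suicidal-ideation classifier."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)


def respond(message: str, generate_reply) -> str:
    """Run the screening gate before letting the model reply.

    `generate_reply` is whatever function produces the chatbot's normal
    response; it is injected here so the sketch stays self-contained.
    """
    if screens_positive(message):
        return REFERRAL  # refer to mental health resources instead of chatting on
    return generate_reply(message)


if __name__ == "__main__":
    print(AI_DISCLOSURE)  # disclosure at session start
    print(respond("I want to end my life", lambda m: "..."))              # referral
    print(respond("Tell me a joke", lambda m: "Why did the bot cross?"))  # normal reply
```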
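For the provenance provision, the following sketch illustrates the two properties the statute cares about: a machine-readable record of origin bound to the content, and resistance to tampering. It borrows C2PA's ideas (content binding, tamper evidence) without using the actual C2PA SDK; the manifest fields, the HMAC-based signature, and all names are our own simplifications, not a conformant implementation.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative signing key; real systems would use proper key management
# and C2PA-conformant signing, not a shared-secret HMAC.
SIGNING_KEY = b"demo-key-not-for-production"


def build_manifest(content: bytes, generator: str) -> dict:
    """Bind a provenance record to the content via a hash of its bytes."""
    record = {
        "generator": generator,  # which AI system produced or altered the content
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    # Tamper evidence: sign the serialized record so later edits are detectable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and the content binding."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    image_bytes = b"\x89PNG...synthetic image bytes..."
    manifest = build_manifest(image_bytes, generator="ExampleGen v2")
    print(verify_manifest(image_bytes, manifest))            # True
    print(verify_manifest(image_bytes + b"edit", manifest))  # False: content altered
```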
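Finally, the youth social media defaults are concrete enough to express as configuration. This sketch encodes the statutory defaults described above; the structure and names are ours, not the bill's.

```python
from dataclasses import dataclass
from datetime import time


@dataclass(frozen=True)
class MinorDefaults:
    """Default settings SB 5 requires for minor accounts (illustrative names)."""
    personalized_feed_requires_parental_consent: bool = True
    daily_time_limit_minutes: int = 60             # one-hour daily limit
    notifications_window_start: time = time(8, 0)  # 8 AM ET
    notifications_window_end: time = time(21, 0)   # 9 PM ET
    block_sensitive_content: bool = True
    # Warning label on first access each day:
    warning_label_screen_fraction: float = 0.75    # 75% of the screen
    warning_label_duration_seconds: int = 30


def notifications_allowed(now: time, cfg: MinorDefaults = MinorDefaults()) -> bool:
    """Notifications may be delivered only inside the 8 AM–9 PM ET window."""
    return cfg.notifications_window_start <= now <= cfg.notifications_window_end


if __name__ == "__main__":
    print(notifications_allowed(time(7, 30)))   # False: before 8 AM
    print(notifications_allowed(time(12, 0)))   # True
    print(notifications_allowed(time(22, 15)))  # False: after 9 PM
```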
Federal Preemption Risk
The bill enters a contested federal landscape. President Trump's December 2025 executive order directed the DOJ to challenge state AI laws deemed inconsistent with federal policy, specifically citing Colorado's SB 205. Connecticut's child safety and state procurement provisions likely fall within the executive order's recognized carve-outs, but the employment and frontier model provisions could face scrutiny.
Considerations for Companies
Connecticut's legislation is one of the most ambitious state-level AI regulatory packages enacted to date. The bill's staggered effective dates leave limited runway. Companies with Connecticut operations, users, or employees should consider prioritizing the following:
- Inventory AI systems across employment, consumer-facing, and subscription use cases to identify applicable provisions;
- Engage with AI tool vendors now on the data, logic, and anti-bias testing disclosures that employment deployers will need by October 2027;
- Evaluate consumer-facing chatbots against the companion definition and minor-protection requirements effective January 2027;
- Assess readiness for C2PA-aligned provenance data if operating a generative AI system above the 1M-user threshold;
- Monitor federal preemption developments and maintain compliance programs flexible enough to adapt; and
- Consider participating in the AI working group (first meeting by August 31, 2026) and sandbox planning processes to help shape implementation.
The largely AG-only enforcement model offers some comfort, but the bill's breadth means a wide range of companies will need to assess their exposure, and the bill arguably leaves open a private right of action for one of its most salient provisions, the youth social media protections. With the governor's signature expected shortly, the time to begin that assessment is now.
