78 Bills, 27 States, One Problem: Who's Tracking Your Chatbot Obligations?
If your chatbot serves a 16-year-old in Portland, you must remind them every hour that they’re talking to AI. In San Francisco, it’s every three hours. In Miami, their parent had to consent first. In Richmond, none of this applies yet; the bill was carried over to the next session.
Four states. Four different answers to the same compliance question. And this is just one obligation type — break reminders for minors — across four bills. The total count: 78 chatbot-related bills across 27 states, as of late February 2026.
That’s not a trend piece. It’s an operational problem.
The Bill That Passed Unanimously
Oregon SB 1546 cleared the state House 52-0. The Senate vote: 26-1. In a political environment where AI regulation divides along every conceivable line, one state found near-total consensus — and the resulting law is the most prescriptive chatbot safety statute in the country.
Effective January 1, 2027, SB 1546 imposes four distinct obligation categories on operators of AI systems that interact conversationally with users:
Disclosure frequency. Hourly reminders that the user is interacting with an AI system — specifically when the system suspects the user is a minor. Not a one-time disclosure at session start. Hourly. Every hour the conversation continues.
Emotional manipulation prohibition. Oregon bans engagement-maximizing reward systems, abandonment messaging, and other techniques designed to deepen emotional attachment between a minor and an AI system. Section 3(b) lists specific prohibited patterns — this isn’t a vague “don’t manipulate users” standard. The statute names the behaviors.
Annual reporting. Operators must file annual reports with the Oregon Health Authority detailing system interactions with suspected minors, disclosure compliance rates, and remediation actions taken.
Enforcement. Statutory damages of $1,000 per violation. Per violation. A chatbot serving 10,000 Oregon minors without hourly disclosures isn’t facing one fine — it’s facing 10,000.
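The arithmetic is worth making concrete. A minimal sketch, assuming each missed hourly disclosure to each suspected minor counts as a separate violation (function and variable names here are hypothetical, not from the statute):

```python
from datetime import datetime, timedelta

STATUTORY_DAMAGES_USD = 1_000  # Oregon SB 1546: $1,000 per violation

def exposure(suspected_minors: int, missed_reminders_each: int = 1) -> int:
    """Worst-case statutory exposure under a per-violation damages regime."""
    return suspected_minors * missed_reminders_each * STATUTORY_DAMAGES_USD

def reminder_due(last_disclosure: datetime, now: datetime,
                 interval: timedelta = timedelta(hours=1)) -> bool:
    """Is the next 'you are talking to an AI' reminder due?"""
    return now - last_disclosure >= interval

print(exposure(10_000))  # 10,000 minors, one missed reminder each: $10,000,000
```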
Four obligation types from a single state bill. Now multiply that by 27 states with active legislation.
California Got There First — But Oregon Went Further
California SB 243 was the first state chatbot safety law. Effective January 1, 2026, it established the baseline. Oregon studied that baseline and exceeded it on three axes: break-reminder frequency, prohibited manipulation patterns, and statutory damages.
| Requirement | California SB 243 | Oregon SB 1546 |
|---|---|---|
| Effective date | January 1, 2026 | January 1, 2027 |
| Break reminders | Every 3 hours | Every 1 hour |
| Emotional manipulation ban | No | Yes (specific prohibited patterns) |
| Suicide prevention protocol | Required (operational gate) | Required |
| Annual reporting | Yes (Office of Suicide Prevention, starting 2027) | Yes (Oregon Health Authority) |
| Disclosure to minors | Required | Required (hourly) |
| Parental consent | No | No |
| Statutory damages | Not specified | $1,000/violation |
Two things jump out.
First: California’s 3-hour break reminder versus Oregon’s 1-hour. If your chatbot serves users in both states, which standard do you follow? The stricter one. Always the stricter one. Your break reminder interval just dropped from 180 minutes to 60 because one state legislature voted differently.
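One way to operationalize “always the stricter one”: resolve the effective interval as the minimum across every jurisdiction in play. A sketch, with the two intervals from the table above hard-coded (a real system would pull these from tracked obligations):

```python
from datetime import timedelta

# Break-reminder intervals per the comparison table above.
BREAK_REMINDER_INTERVALS = {
    "CA": timedelta(hours=3),  # California SB 243
    "OR": timedelta(hours=1),  # Oregon SB 1546
}

def effective_interval(applicable_states: set[str]) -> timedelta | None:
    """Strictest-standard policy: the shortest applicable interval wins."""
    intervals = [BREAK_REMINDER_INTERVALS[s]
                 for s in applicable_states if s in BREAK_REMINDER_INTERVALS]
    return min(intervals) if intervals else None

print(effective_interval({"CA", "OR"}))  # 1:00:00, Oregon's hour wins
```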
Second: California SB 243 includes a hard operational gate. You cannot deploy a conversational AI system in California without a functioning suicide prevention protocol. That’s not a “should have” recommendation; it’s a condition of operation. No protocol, no deployment. Oregon requires a protocol too, but only California makes it a prerequisite before your system can serve any user, not just minors.
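A condition of operation belongs in code, not just in a policy memo. A hypothetical sketch of a hard deployment gate (the config keys are illustrative, not the statute’s language):

```python
class DeploymentBlocked(Exception):
    """Raised when a jurisdiction's operational precondition is unmet."""

def assert_operational_gates(config: dict) -> None:
    # Mirrors SB 243's gate: no suicide prevention protocol, no deployment.
    if config.get("serves_california") and not config.get("suicide_prevention_protocol"):
        raise DeploymentBlocked(
            "CA SB 243: a functioning suicide prevention protocol "
            "is a condition of operation."
        )

assert_operational_gates({
    "serves_california": True,
    "suicide_prevention_protocol": True,  # flip to False and startup fails
})
```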
The cross-jurisdictional math is already ugly, and we’re only comparing two states.
The Bills That Stalled — And Why They Still Matter
Not every bill becomes law. But stalled bills signal where regulation is heading — and they create a planning problem for compliance teams.
Florida SB 482 passed the Senate 35-2. Strong bipartisan support. Then it stalled in the House, where Speaker Danny Perez publicly argued that the “federal government should handle AI” rather than the states. The bill includes parental consent requirements for minor chatbot accounts and consumer AI transparency rights. It could resurface in the next session — or Florida could wait for federal action that may never come.
Virginia SB 796 passed the Senate 39-1. Nearly unanimous. Then it was continued to the next legislative session — effectively shelved but not killed. Virginia’s pattern of passing bills through one chamber before parking them means the obligations in SB 796 could become law with minimal notice.
Washington HB 2225 passed the Senate 43-5 on March 6, 2026. Strong momentum. The bill adds disclosure and transparency requirements for AI chatbots serving Washington residents.
Each of these bills creates a compliance planning dilemma. Do you build to the obligations in a bill that passed one chamber but stalled? Do you wait? If Florida’s SB 482 becomes law next session, organizations that didn’t prepare will face a scramble. Organizations that tracked the obligation requirements — even from the stalled version — can activate compliance controls they’ve already designed.
Tracking bills isn’t enough. You need to track the obligations inside them.
The Federal Question
Senators Josh Hawley and Richard Blumenthal introduced the GUARD Act at the federal level. Its approach is blunter than any state bill: ban AI companion systems for minors entirely.
No break reminders. No disclosure frequency. No emotional manipulation prohibitions. Just prohibition.
If the GUARD Act passes, it preempts the entire state-level chatbot safety debate for minors. Every disclosure requirement, every reporting obligation, every parental consent flow — replaced by a single prohibition.
If it doesn’t pass — and federal AI legislation has a poor track record of clearing both chambers — the state patchwork continues to grow. Every legislative session adds bills. Some pass. Some stall. Some die and come back stronger. The obligation count only moves in one direction.
The Cross-Jurisdictional Math
Consider a company operating a consumer chatbot that serves users across California, Oregon, and Florida (assuming SB 482 passes in a future session). Here’s a partial obligation inventory:
Disclosure obligations: Initial disclosure (all three states). Hourly reminders for suspected minors (Oregon). Three-hour reminders (California). Unclear frequency (Florida — bill language pending).
Age-related requirements: Parental consent before account creation (Florida). Hourly AI reminders for suspected minors (Oregon). Suicide prevention protocol (required in California and Oregon; an operational prerequisite in California).
Prohibited behaviors: Engagement-maximizing reward systems targeting minors (Oregon). Abandonment messaging (Oregon). No equivalent prohibition in California or Florida’s current text.
Reporting obligations: Annual report to Office of Suicide Prevention (California, starting 2027). Annual report to Oregon Health Authority (Oregon, starting 2028). No equivalent in Florida.
Enforcement exposure: $1,000/violation statutory damages (Oregon). Regulatory enforcement (California). Attorney General enforcement (Florida).
That’s 25+ distinct obligations across three states — for one product category. Some overlap. Some conflict. The break reminder conflict alone (1-hour versus 3-hour) requires a policy decision: apply the strictest standard nationally, or build state-specific logic that adjusts behavior based on user location.
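That policy decision is easier to make, and to audit, when the inventory is structured data rather than prose. A hypothetical sketch of obligation-level tracking with naive conflict detection:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obligation:
    state: str
    category: str                 # "disclosure", "reporting", "prohibition", ...
    requirement: str
    parameter: str | None = None  # e.g., a reminder interval

# A partial, illustrative inventory drawn from the list above.
INVENTORY = [
    Obligation("OR", "disclosure", "AI reminder for suspected minors", "1h"),
    Obligation("CA", "disclosure", "break reminder", "3h"),
    Obligation("FL", "age", "parental consent before account creation"),
    Obligation("OR", "prohibition", "engagement-maximizing reward systems"),
    Obligation("CA", "reporting", "annual report, Office of Suicide Prevention"),
    Obligation("OR", "reporting", "annual report, Oregon Health Authority"),
]

def parameter_conflicts(category: str) -> dict[str, set[str]]:
    """More than one parameter value in a category means a policy decision."""
    by_value: dict[str, set[str]] = {}
    for ob in INVENTORY:
        if ob.category == category and ob.parameter:
            by_value.setdefault(ob.parameter, set()).add(ob.state)
    return by_value

print(parameter_conflicts("disclosure"))  # {'1h': {'OR'}, '3h': {'CA'}}
```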
Most organizations don’t even have a list.
Where Companies Fail
“We comply with SB 243.”
That sentence appears in board presentations, vendor questionnaires, and internal compliance memos. It sounds definitive. It means almost nothing.
SB 243 contains at minimum 9 distinct requirements, among them: suicide prevention protocol implementation, break reminder timing and delivery, disclosure of AI nature, reporting cadence and content, data handling for minor interactions, and protocol update requirements. Each of these decomposes into specific evidence requirements: documentation, testing records, operational logs, audit trails.
“We comply with SB 243” is framework-level thinking applied to an obligation-level problem. An auditor won’t ask “do you comply with SB 243?” An auditor will ask: “Show me the evidence that your three-hour break reminder fires correctly for every minor interaction. Show me the suicide prevention protocol. Show me the annual report you filed.”
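Obligation-level tracking means each requirement carries its own evidence list. A hypothetical sketch of that decomposition (the obligation keys and evidence items are illustrative, not the statute’s language):

```python
# Illustrative mapping from SB 243 requirements to auditor-facing evidence.
SB243_EVIDENCE = {
    "break_reminder_3h": ["timer configuration", "delivery logs", "test records"],
    "suicide_prevention_protocol": ["protocol document", "escalation runbook"],
    "ai_nature_disclosure": ["UI copy", "session-start logs"],
    "annual_report": ["filed report", "submission receipt"],
}

def audit_gaps(evidence_on_file: dict[str, list[str]]) -> dict[str, list[str]]:
    """What an auditor would flag: required evidence not yet on file."""
    return {
        obligation: [item for item in required
                     if item not in evidence_on_file.get(obligation, [])]
        for obligation, required in SB243_EVIDENCE.items()
    }

# Only partial evidence on file -> every other item surfaces as a gap.
print(audit_gaps({"break_reminder_3h": ["timer configuration", "delivery logs"]}))
```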
The same gap exists at every level. Companies track headlines — “Oregon passed a chatbot safety bill” — but not the obligations. They know SB 1546 exists. They don’t know it has four distinct obligation categories with different evidence requirements, different enforcement mechanisms, and different effective dates.
Tracking regulations is necessary. But it’s the easy part.
Tracking obligations — the specific, enforceable requirements inside each regulation — is where compliance actually happens. And where most organizations lose the thread.
One Map, Not Twenty-Seven Spreadsheets
The chatbot safety wave is a compressed version of the broader AI compliance challenge. Same pattern. Different scale.
EU AI Act: 334 obligations. GDPR: 630. Five US state AI laws: 282. When chatbot safety bills pass in even half of the 27 states with active legislation, the obligation count for a single product category could exceed 200.
The operational question isn’t whether these laws will pass. Many of them already have. The question is whether your compliance tracking operates at the obligation level — specific requirements, mapped to specific evidence, tracked across jurisdictions — or at the headline level, where “we comply with SB 243” substitutes for the 9 specific proofs an auditor will request.
ReguLume maps obligations across regulations and jurisdictions into a single, searchable compliance workbench. Browse the full obligation database at regulume.com/compliance — free, no account required. We’re adding chatbot safety regulations to the platform as states finalize their requirements.
Because tracking 78 bills across 27 states in a spreadsheet was never going to scale.
Tracking the obligations inside them is the only approach that does.