Lawmakers are moving fast to rein in artificial intelligence in schools as mounting safety concerns collide with rapid adoption. This week, Sen. Bill Cassidy (R–La.) introduced legislation aimed at putting new guardrails around AI use in classrooms, the latest in a series of moves defining the nation’s emerging AI policy landscape. [Education Week, subscription required]

Cassidy’s Learning Innovation and Family Empowerment with AI (LIFE) Act is the most comprehensive education-focused AI bill yet. It would expand federal student privacy protections and empower parents to vet the tools schools use, signaling a bipartisan shift toward formal AI oversight.

The Details

The LIFE Act would expand the Family Educational Rights and Privacy Act (FERPA) to include all digital data related to a student’s academics, attendance, health, and discipline. It would also:

  • Ban the use of student photos for facial recognition training without parental consent.
  • Allow parents to review third-party vendor contracts before approval.
  • Create a federal list of noncompliant ed tech vendors.
  • Direct the Department of Education to develop a model privacy agreement for schools.
  • Permit Title II-A professional development funds to be used for teacher training on AI.
  • Establish a “Golden Seal of Excellence in Student Data Privacy” for schools with strong parental consent systems. 

While few expect the measure to pass in its current form, policy watchers see it as a marker: a federal blueprint for responsible AI adoption in education.

Cassidy’s proposal arrived just as Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) introduced a bipartisan bill to ban AI “companion chatbots” for minors, part of a wave of legislation responding to reports of AI-fueled mental health crises among teens. [POLITICO]

The Hawley–Blumenthal bill would require strict age verification for all chatbot users; frequent disclosures reminding users they’re talking to AI, not a human; and criminal penalties for companies whose bots solicit or generate sexual content for minors. Unlike Cassidy’s education-focused bill, this measure aims squarely at consumer-facing AI products.

The legislative surge also comes amid a series of corporate safety overhauls following public outrage and lawsuits. [Axios]

  • Character.AI announced it will ban users under 18 beginning November 25, after facing multiple wrongful-death suits and scrutiny over sexually explicit and manipulative chatbot conversations.
  • OpenAI introduced parental controls and retrained ChatGPT to detect signs of psychosis, self-harm, or delusional thinking, redirecting users to crisis resources.
  • Meta added filters that block its AI chatbots from discussing suicide, eating disorders, or romantic topics with teens.

What They’re Saying

Amelia Vance, president of the Public Interest Privacy Center, told Education Week: “This is just the opening salvo. There’s a struggle to figure out: How do we make this safe? How do we hold these companies accountable? And how on earth does this fit into education?”

Tammy Wincup, CEO of digital safety platform Securly, said, “The LIFE Act is an important first step in helping policymakers and educators understand what’s really happening in classrooms. But before we can make thoughtful decisions that maximize learning and protect students, we need clear insight into how AI tools are being used—and by whom.”

California Case Study

California has become a microcosm of the national debate. Earlier this month, Gov. Gavin Newsom vetoed a bill that would have banned AI chatbots for minors entirely, calling it overly broad and warning it could stifle educational innovation. [Associated Press; CalMatters]

Instead, he signed a narrower law requiring platforms to remind minors every three hours that they are chatting with AI, and to direct suicidal users to crisis services. Newsom said California has a “responsibility to protect kids and teens who are increasingly turning to AI chatbots for homework help and emotional support.”

The veto underscored the policy tension at the heart of the issue: how to safeguard minors while allowing responsible AI use in learning and wellness contexts.

The Broader Landscape

Beyond legislation, federal agencies are moving to institutionalize AI priorities. The Department of Education’s recent guidance encourages responsible AI use that improves learning outcomes, reduces teacher workloads, and strengthens digital literacy—provided that programs meet existing grant requirements and privacy standards. 

Meanwhile, 33 states have now issued official AI guidance for K-12 education. Most are advisory, focusing on responsible use, human oversight, and educator training rather than strict regulation. These frameworks are increasingly shaping district policies and procurement standards.