Anecdotally, I’ve seen two family court cases where one party submitted full AI chats — prompts and colorful complaints included — as formal filings. The complaints wouldn’t pass muster with a real lawyer, but the conflict was nurtured by AI nonetheless. In one case, the filing party was dinged for wasting the judge’s time.
I’ve posted a couple of times about instances I’m aware of where people are using AI in pro se court cases, especially in family court. A new study shows evidence of increasing AI use in pro se cases at the federal level, exacerbating existing bottlenecks. Trade-offs abound here.
A professor asked students to self-report AI usage on their homework, leading to plenty of confusion and uproar. The grading specifics aside, it’s clear people want more clarity up front about when and whether to use LLM tools. In the meantime, treating students like they’re guilty until proven innocent is a bad MO.
I’m following a guy in TX who uses AI to write and illustrate children’s books out of whole cloth, self-publishes them on Amazon, and is getting recognition in his region as a laudable children’s author. The books are categorically not good. It’s like people are rewarding his content strategy rather than the work itself.