Microposts

“Evidence from a study about workplace writers who use AI suggests that writers are outsourcing some of their research, editing, or drafting to AI, but that they retain responsibility for their writing.”

A lot of readers are fascinated by the “black box” of AI writing and are trying to reverse engineer what it does and why. John Gallagher goes down the rabbit hole and articulates some credible theories about why LLMs use lists and listing to create meaning, and why it matters.

French overlooks how smartphones and social media raised the stakes on debate and discussion, transforming campus discourse. Today’s students worry that one viral misstep (in any of countless directions) may define them forever.

Connected Places uses ICE as a case study to explore trust, safety, and community dynamics on decentralized social networks, examining how federation changes community moderation expectations we’ve developed from centralized platforms.

A new paper in Science Magazine explains how AI now allows propaganda campaigns to reach unprecedented scale and precision. The paper gets into the implications for organizations, institutions and nations.

The TikTok deal means American users will see a US-only algorithm. Brands and creators will likely see smaller audiences and higher costs for domestic reach. ByteDance faces split algorithms, divided workforces and parallel governance, complicating product delivery across global markets.

The promise of AI is that it makes work more productive, but the reality is proving more complex and less rosy.

I’m generally skeptical of anyone selling a solution to a social problem that relies on individual abstinence, so I tend to be annoyed with many arguments about the attention economy. I more or less land here on the question of AI, which I know many of my contemporaries will find similarly annoying.

Searching for Suzy Thunder: In the ’80s, Susan Headley ran with the best of them—phone phreakers, social engineers, and the most notorious computer hackers of the era. Then she disappeared.

While conspiring with a friend about life and work in these trying times, both of us confessed that we believe, at the root, that reading and writing are ultimately the cure for everything that ails us: collectively, individually, epistemically, existentially. Maybe that’s naive, but I’ll take it.

Make Canadian TV weird again (sponsored by The Red Green Show, probably).

UW-Madison is among universities seeing federal terminations of international student visas. Public research universities have come to rely on these students to offset funding cuts. The losses are both financial and cultural.

404 Media on Wikipedia, reciprocity and collaboration online, and how to protect the public commons in the age of AI.

Wondering whether I want to switch to something more robust like WordPress if I keep doing this thing, but at the same time I have enjoyed not working within and around the world of WP, which has dominated my CMS experience since the aughts.

This observation at the end of Manton’s post on AI and Wikipedia made me chuckle:

AI using Wikipedia reminds me of the FAQ on setting up a Little Free Library: _I think someone is stealing books from my library and selling them, what do I do?_ _Remember that the purpose of a Little Free Library is to share books—you can’t really steal from it._

Woof: ads are coming to ChatGPT.

I too have found it challenging to explain “federation” to non-technical audiences, so it was fun and instructive to see others give it a go.

The CEO of Instagram says that social media platforms will be under mounting pressure to help users tell the difference between human-made and AI content, and that going forward, it will be more practical to label real content than to label AI content.

“It’s really a software maturity story,” Sag says. “But that’s not very sexy.”

Folks are beginning to wonder why Twitter and Grok are still in the app stores, given the latest trend of using the LLM to generate non-consensual imagery of people (namely women and children) at alarming rates.

Blogging from the Ruins is an essay getting a ton of attention in the fediverse this week, making a strong case for intentionally building non-algorithmic intellectual communities on the open web.

A new study suggests that countries that report more positive experiences with social media also feel more positive about AI. It seems to come down to tech regulation and trust.

On the Media spends an hour exploring the media strategy behind the calls for debate. Tl;dr: controversy carries unpopular ideas much further than they would spread organically on their own.

New numbers from Pew Research on how teens used social media and AI chatbots in 2025.

One of my favorite low-cost cooking hacks is baking nice charcuterie on a frozen pizza. Today, I found out that Alimentari carries Smoking Goose products from my home state, so we’re having frozen pizza from Sal’s with capicola for dinner. Rawr, yum.

The deepfake goes mainstream, and it sounds like virtually everyone is unprepared for the negative social implications.

Our friend, the RSS feed.

Nieman Lab’s Predictions for Journalism 2026

Anthropic on how AI is changing how people work. This is a marketing piece, of course, but useful nonetheless.

Substack entrapment theory