Searching for Suzy Thunder: In the ’80s, Susan Headley ran with the best of them—phone phreakers, social engineers, and the most notorious computer hackers of the era. Then she disappeared.
Newsletter
While conspiring with a friend about life and work in these trying times, both of us confessed that we believe, at the root, that reading and writing are ultimately the cure for everything that ails us: collectively, individually, epistemically, existentially. Maybe that’s naive, but I’ll take it.
Make Canadian TV weird again (sponsored by The Red Green Show, probably).
Rules without lessons
Wednesday, January 21, 2026
If you spend time around cycling and pedestrian advocates, the debate between bans and regulations is familiar territory. When I got deep into road biking, learning to ride long distance in a red state with almost no bike infrastructure outside tight urban and exurban areas, one of the best things I did was take road classes through the League of American Bicyclists. You learn the rules of the road from a cyclist's perspective and practice skills like riding in car traffic under expert guidance, including how to change a flat on the side of the road at the height of summer, gritty with sweat and road grime.
The challenge is that bike education isn't standardized, so most cyclists never learn the fundamentals at all. Many of us learned as kids and haven't had a refresher since. I get stomach pain when I see people riding at night without a light, going too fast on a dedicated path, or riding on a pedestrian sidewalk as adults. But when I think about e-bike bans and pedestrian right-of-way debates, it strikes me that outside of getting a driver's permit for car drivers, there's essentially no infrastructure for learning how to share roads and paths safely. We're trying to regulate behavior most of us didn't learn in earnest.
UW-Madison is among universities seeing federal terminations of international student visas. Public research universities have come to rely on these students to offset funding cuts. The losses are both financial and cultural.
404 Media on Wikipedia, reciprocity and collaboration online, and how to protect the public commons in the age of AI.
Wondering whether I want to switch to something more robust like Wordpress if I keep doing this thing, but at the same time I have enjoyed not working within and around the world of WP, which has dominated my CMS experience since the aughts.
This observation at the end of Manton’s post on AI and Wikipedia made me chuckle:
AI using Wikipedia reminds me of the FAQ on setting up a Little Free Library: _I think someone is stealing books from my library and selling them, what do I do? Remember that the purpose of a Little Free Library is to share books—you can’t really steal from it._
Woof: ads are coming to ChatGPT.
Affinity as an organizing principle
Friday, January 16, 2026
Reading this blog post by a political scientist explaining the problem with our fractured information landscape, and how calls for more information and media literacy are not likely solutions:
“In short, decades of research have demonstrated that our political beliefs and behavior are thoroughly motivated and mediated by our social identities: i.e., the many cross-cutting social groupings we feel affinity with. And as long as we do not account for this profound and pervasive dependence, our attempts to address the epistemic failures threatening contemporary democracies will inevitably fall short. More than any particular institutional, technological, or educational reform, promoting a healthier democracy requires reshaping the social identity landscape that ultimately anchors other democratic pathologies.”
As always, this drives me back to Haraway’s cyborg, a useful metaphor for thinking about our political, environmental and social tangle and how it butts up against emerging tech and science. (In Haraway’s context, it was the rise of STEM as a driving force in academia at the dawn of the computer age.) Bagg’s argument lands in familiar territory for anyone who’s wrestled with the cyborg metaphor. Both reject the assumption that better information alone will save us from ourselves, whether from context collapse or the dualisms (binaries, heh) that structure how we think about technology, nature, humanity and politics.
Bagg arrives at something parallel from political science: We trust information that affirms the groups we belong to. (Business and marketing, for what it’s worth, tell us the same thing from a slightly different angle: you’re most likely to convert on a recommendation from a trusted friend. The next best thing in our current media landscape: a trusted influencer you identify with, which is why TikTok increasingly feels like QVC.) The problem isn’t that people lack access to truth, it’s that they’ve lost affinity with the experts, institutions and collaborative practices that produce expertise.
Both perspectives point toward the same conclusion: you have to recognize shared affinities through the slow work of creating conditions where people want to trust each other across differences.
I too have been challenged by defining “federation” to non-technical audiences, so it was fun and instructive to see others give it a go.
The CEO of Instagram says that social media platforms will be under mounting pressure to help users tell the difference between human-made and AI content, and that going forward, it will be more practical to label real content than AI-generated content.
“It’s really a software maturity story,” Sag says. “But that’s not very sexy.”
The trust gap
Friday, January 9, 2026
I suspect these three trends are connected: Women reportedly use AI at significantly lower rates than men—25 percent lower on average—in part because they’re more concerned about ethics, including privacy, consent and intellectual property. At the same time, countries with more positive social media experiences tend to be more open to AI, while Americans’ distrust is shaped by years of watching tech platforms erode trust. Meanwhile, one of the largest social platforms has turned its AI chatbot into a harassment tool—generating roughly one nonconsensual sexualized deepfake image per minute, disproportionately targeting women and girls.
When platforms enable abuse at scale, it makes sense that people most likely to be harmed would be most attuned to ethical concerns, and would thus be the most cautious about AI adoption.
Folks are beginning to wonder why Twitter and Grok are still in the app stores, given the latest trend of using the LLM to generate non-consensual imagery of people (namely women and children) at alarming rates.
Blogging from the Ruins is an essay getting a ton of attention in the fediverse this week, making a strong case for intentionally building non-algorithmic intellectual communities on the open web.
A new study suggests that countries that report more positive experiences with social media also feel more positive about AI. It seems to come down to tech regulation and trust.
On the Media spends an hour exploring the media strategy behind the calls for debate. Tl;dr: controversy carries unpopular ideas much further than they would spread on their own.
New numbers from Pew Research on how teens used social media and AI chatbots in 2025.
One of my favorite low-cost cooking hacks is baking nice charcuterie on a frozen pizza. Today, I found out that Alimentari carries Smoking Goose products from my home state, so we’re having frozen pizza from Sal’s with capicola for dinner. Rawr, yum.
The deepfake goes mainstream, and it sounds like virtually everyone is unprepared for the negative social implications.
Wednesday, December 10, 2025 →
Our friend, the RSS feed.
Nieman Lab’s Predictions for Journalism 2026
A meta lesson about AI assistance
Sunday, December 7, 2025
I just completed my first attempt at coding with AI, in this case having Claude assist me with putting together a simple client-side OPML parser for Dave Winer’s FeedLand service.
Winer’s original script is pretty slick and gives me a list of all my feeds with titles, URLs, and categories; click-to-expand functionality to see the five most recent posts from each feed; clickable post titles that open articles in new tabs; sort options (by title or by last update); and automatic updates when I change my FeedLand subscriptions.
You can check it out here: Feeds
The method from the official documentation didn’t work at first because Hugo (the static site generator behind micro.blog) was wrapping its templates around the script. The toolkit also requires server-side dependencies that don’t exist on static sites like micro.blog, and we hit a cascade of missing JavaScript dependencies (jsonStringify, servercall, etc.). Each fix revealed another dependency, leading to some sunk-cost frustration on my end. I kept trying because I wanted to see if Claude could pull it together. Through trial and error, we reached a version that rendered the OPML file correctly without server dependencies or complex external libraries.
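For what it’s worth, the core idea can be sketched like this. This is a simplified illustration, not the actual code on my Feeds page: it pulls each OPML `outline` element apart with a regex instead of a real XML parser, so it has zero dependencies and runs fine on a static site. (A proper parser like the browser’s `DOMParser` is safer for arbitrary OPML.)

```javascript
// Minimal sketch: turn a flat OPML subscription list into an array of
// feed objects, with no server-side helpers or external libraries.
// Assumes well-formed, double-quoted attributes, which is true of
// typical FeedLand/OPML exports.
function parseOpml(opmlText) {
  const feeds = [];
  const outlineRe = /<outline\b([^>]*)>/g;   // each <outline ...> element
  const attrRe = /(\w+)="([^"]*)"/g;         // name="value" pairs inside it
  for (const [, attrs] of opmlText.matchAll(outlineRe)) {
    const o = {};
    for (const [, name, value] of attrs.matchAll(attrRe)) {
      o[name] = value;
    }
    // Only outlines with an xmlUrl are actual feed subscriptions.
    if (o.xmlUrl) {
      feeds.push({
        title: o.text || o.title || o.xmlUrl,
        url: o.xmlUrl,
        html: o.htmlUrl,
      });
    }
  }
  return feeds;
}
```

From there, rendering is just a loop that builds list items and links from the returned array, which is roughly the shape of what Claude and I landed on.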
Time invested: ~3 hours (including wrong turns)
Time it should take: 10 minutes
AI extended my coding reach well beyond my practical skillset. I now have a dynamic, dedicated place to read and share news feeds as I wish. But even when generative AI works, and works well, I have significant concerns about its intellectual property implications, and this project brought those tensions into sharp focus. Claude could only help me because it was trained on documentation and intellectual work from the open source community, contributions made freely in the spirit of knowledge sharing, not to train commercial AI systems. I tapped into that expertise by paying Anthropic $15 a month. While I’m grateful for the accessibility this provides to non-developers like me, I recognize there’s an unresolved ethical question about whether this use respects the intent and labor of the original creators. The feat is incredible; the foundation it’s built on deserves careful consideration.
After the exercise was complete, I asked Claude how I could have improved my prompting to make this process easier, and in short, Claude said I could have been a web developer. But since I’m not, here’s what it recommended:
✅ When the process isn’t working, question the process mid-stream. Most people either give up or keep following bad advice deeper into rabbit holes. Stop and question the LLM’s process and ask for alternatives to force a reset.
✅ Push for usability. Keep bringing the conversation back to what you actually need the end result to do, not what’s technically impressive or “correct.” In my case, this meant repeatedly asking “can I click through to the articles?” rather than getting lost in discussions about CORS proxies or JavaScript syntax. Focus on outcomes, not implementation details.
✅ Ask for complete solutions. Instead of trying to mentally patch together incremental changes across multiple responses, ask the LLM to provide fresh, complete code each time. This prevents copy-paste errors and ensures you’re always working with a coherent, tested solution. There’s more than one way to crack an egg, but you want the whole egg regardless.
After all that, I got it to work but can’t figure out how to make it show up in my header menu, with or without Claude. TBD.
Anthropic on how AI is changing how people work. This is a marketing piece, of course, but useful nonetheless.
Just speaking this into the universe, but it would be exceedingly cool if someone pulled together a micro.blog plugin for Feedland blogrolls and page feeds.
An interesting update by Dave Winer on how blog comments might work in and around federation.
I love this: Sam at Yale Climate Connections suggests some slow fashion gift ideas for the holidays.
Wednesday, November 26, 2025 →
Determined to finish at least one more novel this year, and this one fits the bill. Currently reading: Moderation by Elaine Castillo 📚