I’ve been spending more time in tech spaces online and getting good information from folks like @manton, creator of Micro.blog. Like this reflection on how to think about AI now that vibe coding works. Something I’m thinking about: there’s an emerging tension between those who see value in being able to immediately prototype an idea and the people downstream who have to manage the outputs and code over time. The ability to proof-of-concept every idea sounds like a superpower until you’re the one maintaining the results.

Making the rounds on Twitter today, an AI programmer pulled together a dashboard comparing how LLMs respond when given a series of nonsense prompts. There is a notable difference in quality even between Claude’s Opus and Sonnet, in my experience. The lesser GPTs will take a wobbly claim as truth and run with it.

Some additional discussion of LLM models, including open weight and “staggered openness,” where orgs “release previous versions of proprietary models once a successor is launched, providing limited insight into the architecture while restricting access to the most current innovations.”

“Open source,” “open weight,” and “proprietary” describe different relationships between LLM model producers and users, governing what you see, modify and control. Comparing them isn’t necessarily about “best,” but whether you’re optimizing for transparency, compliance or performance. Massive investment in proprietary models means the best-resourced research teams, the largest training runs, and the most sophisticated safety work tend to happen behind closed doors.

A new report from Pew on how teens view and use AI says that more than half of teens turn to chatbots for help with school, and some are turning to AI for emotional support and companionship.

A brief history of writing, a forgotten technology: “how the challenges we face in today’s digital age can be traced back to one of the most significant yet underrated innovations of all time.”

“Digital silence” is a new ethic emerging on the visual web, where travelers are no longer posting about their travels. Reading along, thinking of how I rarely post about family out of respect for their privacy, and recently took several road trips without a single post.

History was unmade last year, as engineers began the massive project of ripping the first-ever transoceanic fiber-optic cable from the ocean floor.

My life is work right now, so I’ve been training my reading and writing habits in that direction in the hope it will be additive. So when a friend who works in tech suggested I pick up some Ellen Ullman, I snapped it up. Ullman was a programmer who wrote about her experience as a woman in tech in the 1990s, a diligent personal accounting of the early days of Silicon Valley that foreshadows so much of what people are worried about today. Through her first-person account of life as a programmer, she consistently reminds the reader that computers are made of boxes and wires, with choices made by mortals (often imbued with dreams of immortality) written on chips and tape, and that they can only know what we tell them. The Y2K essay was an especially welcome reminder in the era of the “singularity” — we’ve been here before.

Here’s a taste.

My friend Dave Bangert has been a Midwestern reporter for thirty years, and began his own newsletter project after layoffs and mergers emptied his local newsroom. Now his 8,000-subscriber project is a major influence in a Big Ten town that otherwise has a weak media market. In this podcast he talks about how it all works, what we lose as legacy media dries up, and what it means when your readers know exactly who you are and you run into them in the produce aisle.

As a true-blue Hoosier, here’s my hot take on the Chicago Bears moving to Hammond: no dunking on NWI.

I’m seeing a lot of disgust for Indiana in related Bears commentary that troubles me, especially the comments about NWI as a place of blight, beyond consideration or repair, obscuring the stakes behind a regional geography of race and class. Meanwhile, I am glad that people from Chicago suburbs have found occasion to think about their neighbors in Hammond and Gary. There is life outside of Schaumburg.

Indiana’s history as a state is thick with race and class problems, perhaps best captured in the old phrase calling it the “middle finger of the South thrust into the North.” That middle finger reaches right up and touches Chicagoland. Indiana’s policy approach to this dynamic is predictably toxic, having cleaved communities like Hammond and Gary from state investments and statehouse power. Indy loves business and it loves a chance to razz Chicago. Guess why.

“The Region” in Northwest Indiana is practically and functionally the south side of Chicago. You can drive from Gary through East Chicago and Whiting straight into Pullman and Bronzeville, without much marking a state line other than some signage and an iconic White Castle. But everything on the Indiana side of the line is governed from a statehouse that rarely considers them at all until there’s a chance to further demean and isolate them. Like so, with Indiana’s current plans for redistricting that further disenfranchise Gary residents.

Does it help to remember that Gary was founded as a company town by the world’s first billion-dollar corporation? I think so (and it’s a great article, btw). The town was the subject of serious curiosity and consternation during the early days of industrialization, especially when comparing the claims of the owners and capitalists to the realities of the workers. Poet Carl Sandburg famously wrote “The Mayor of Gary” to illustrate the casual venality the ownership class felt toward the steelworkers. This dynamic is still very much alive in the minds of people in surrounding communities, who are perpetually worried about a “Chicago influence.”

This is why we train a jaundiced eye on characterizing working class communities like they’re beyond consideration or redemption. The world isn’t a Costco. People live among the brownfields. I spent a lot of time in this area in my twenties – all my friends in The Region were somehow connected to US Steel, even still. Feel however you feel about the Bears deal, but we don’t do Gary slander in this house.

Yanking this back to the question of writing: if you want to catch an interesting news source from the area, Capital B’s Gary outlet does some amazing reporting and opinion.

The last time I went truly viral was in April of 2020 in the height of the COVID shutdown. I posted a tweet and walked away from my phone. By the time I checked back, my notifications were out of control.

The next morning, I got a message from an old friend familiar with my handle. “I think I saw your account on Good Morning America?” Hilariously, there I was, and I couldn’t even claim responsibility for the meme.

I’ve spent most of my time on the internet as a pseudonymous account, using my first name only, or using a handful of handles (including feministe, fauxrealtho, and flotisserie, paired with punny names like “Petty White” and “Frieda People,” a convention from the Tumblr years). Even as a visible personality online, I was only known by my first name and URL and/or handle. In recent years I de- and reactivated some of my socials, so some of these breadcrumbs no longer exist, but I’ll do my best.

Pseudonymity allows writers to explore complex ideas in digital spaces while protecting their identity, location, and other identifying details, and still maintain a throughline of persona and storytelling.

There are a lot of trade-offs in using a pseudonym, especially around how to claim credit for your work. But people use them because they give you the privacy to be honest, weird, and authentic, and to escape the creep of modern social media presence into high-stakes spaces like the workplace. This dynamic was kind of the impetus behind the era of “weird Twitter,” where people using pseudonymous handles routinely threw out funny, absurdist one-liners to impress their friends and followers while taking a turn at the social media slot machine. Not every post or joke lands in a way that converts to numbers, but some do, and it’s fun to try.

I’ve written a little previously about how the English program I was a part of used fiber arts to illustrate the fundamentals of technical writing, but one of their other methods of teaching the internet was through board games. Games, like the internet, and much like writing, provide rules and structure for communication and engagement, but everything that happens inside the container of game board and game play is a mystery until it emerges through human interaction. So goes the internet (and to some extent, so goes AI). Games and gaming were used as a way to think through what it means to create rules of play and then let a community rip through the model, and much of the thinking and many of the skills involved in game design apply in social digital spaces, from chat rooms and Teams channels to the open seas of the WWW. The longer you’ve been playing the slot machine, the more you get a feel for the kind of thing that will get seen and read, and for who among your readership will take your content to the next level.

Why does it matter? Because the more you understand what “works” to make ideas travel online, the more you can tap into it. Big Tech is under fire right now for amplifying some of the worst impulses of the internet, cranking engagement algorithms to exploit messages that produce outrage. Yes, big emotions create virality, but so do relatability and shareability.

So back to the meme: here’s how it went down.

The meme was a list of six hypothetical celebrity households: pick one to quarantine in.

It had been circulating on Facebook, started by a Christian influencer named Savannah Locke. I encountered it deep in a Real Housewives fan group and felt the pull of a good parlor game. In April 2020, everyone I knew was sitting at home, fretting about the COVID-19 pandemic looming over all of us, so I shared it with minimal ceremony. A couple of buddies with slightly larger followings hit retweet, and within two days the tweet was being cited by The Cut, Time, the Washington Post, CBS, and others. Know Your Meme documented it for posterity. Several outlets named me as the originator, but trust, I was only trying to delight my friends with low-grade Facebook content. I couldn’t find the original meme at the time – Locke appears to have had a name change that scrambled my search. But hey, as these things go, nobody earned a dime or promoted anything weird, so no harm, no foul. Business Insider managed to credit it correctly, so a special kudos to their editor.

The core game mechanic behind the meme is forced choice: constraints generate opinions, and opinions generate activity. Each “house” also represents a personality type. House 3 is chaos, House 6 is aspirational, House 5 is the one where someone is definitely cooking and someone is definitely yelling about it. The choices are arbitrary, which invites curiosity about who grouped what and why. In short, this meme offered a light conversation starter at the right time, with low stakes, high personal reveal, and endlessly discussable combinations. It also drew from existing memes and games popular online, like “where you sitting” in the proverbial lunch room. This one became news because of the timing and gameability, not because I was particularly clever, but hey.

What does it feel like to go so viral? It’s hilarious, strangely affirming, and also a little crazy-making. It opens the door to a whole lot of wild people and ideas on the internet, not all of them flattering or welcome. Virality is sometimes paired with incredible harassment and requires “more condolences than congratulations.” But as far as this particular experience went – hilarious, whoops, and wow. It just felt like a LeBron James, Post Malone, and Jennifer Aniston hang would be a good time.

A tweet features a selection of quarantine house groups, each with a different combination of celebrities.

In an Internet of Things (IoT), what happens when the companies behind the service model cease to be?

What to do when the biggest platforms for readers are kind of evil?

Making the rounds on social today: New study suggests that while 70% of financial firms use AI today, 80% report no impact on employment or productivity.

“Because RSS is an open protocol, and because there are so many different possible ways to follow and read the news, RSS readers ought to be a UI playground in the way Twitter clients once were, where innovation and experimentation are normal and celebrated.”

There are many hyperbolic essays on AI going viral this week. This essay gets into some of the emerging politics around AI in the United States and how they map onto electoral politics, and is irritated with the American left’s approach to AI.

I agree on one angle, that as a cohort, abstaining entirely from new tech is a bad approach.

Haraway (obvs) suggests we stay with the trouble, that sometimes our very presence in the room is what’s required to trouble existing narratives, and that it’s important that we share what we learn back with our people at home, whatever that means in our context.

For me: I spend a lot of time at work translating technical ideas and projects into an institutional voice, but on nights and weekends, I’m explaining the ins and outs of the internet to my working-class friends and family. Increasingly, I’m asked to explain LLMs and AI and how they relate to their needs around business, the news, entertainment, and legal aid.

The “cyborg’s mark” is a metaphor for writing, our duty to translate, and the inevitability of translation through experience. Perhaps staying with the trouble allows us to articulate our experiences in and around the science in ways that both advance our interests and that keep our communities safer. For those of us who straddle multiple worlds, translation is a responsibility.

Who decides what’s real in the age of AI? Instagram.

Widespread AI adoption reopens some basic questions in business: who your audience is, what work can reasonably be automated, what absolutely requires human oversight, and how the service and information environment actually holds together when we shuffle these circumstances around. All this, with many social and environmental risks and a side of existential doom.

The hype makes it harder to see a change that’s already here. To me, the shift from SEO to GEO is one of the clearest places where a change in technology produces a significant process change in the relationship between a writer and an audience, with all the tradeoffs around authority, context, and information architecture that implies. But web search is just one place this shows up.

I suspect the predictions of economic and social doom are overblown by a lot – in life as in business, generalities are easy and specifics are hard. Last-mile issues remain a reality of nearly any technical or software implementation, and differentiation is in many ways the art of business. There are serious incentives to promise a smooth, profitable near-future, especially around emerging markets, so I feel pretty comfortable forecasting that the era we currently live in is valuation masquerading as value. Instead, I think we should worry more about how the tech changes our relationship to information and to each other.

I’ve been writing about how writing and code are the same thing in digital environments, and about how that equivalence shaped the early web. AI changes that relationship.

The dominant conversation is about whether LLMs can write well, but I suspect that’s the wrong frame. Human storytelling will probably always be more interesting than generated storytelling, because humans love quirks and novelty that can’t be produced artificially.

The more consequential change is that AI-generated text doesn’t just sit on the web waiting to be read; it feeds back into the system that produced it. It becomes training data, source material, and eventually, architecture. Remember: when an LLM generates text, it’s producing word sequences based on statistical patterns. The output is one plausible response to your prompt, not a definitive one — but it gets indexed, linked, and cited like any other writing. Nothing on its surface tells you it doesn’t carry the same authority.
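To make “statistical patterns” concrete, here’s a toy sketch. It is nothing like a real LLM (which learns billions of parameters over billions of tokens), but it is the same idea in miniature: count which words follow which, then sample a continuation by frequency. The corpus and the `generate` helper are invented for illustration.

```python
import random
from collections import defaultdict

# A tiny made-up corpus; the principle scales up: next-word choice
# is driven by how often words have followed each other before.
corpus = (
    "the web is made of pages . "
    "the web is made of links . "
    "the web is read by people ."
).split()

# Count bigram frequencies: how often each word follows each other word.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=6):
    """Sample one plausible continuation, weighted by frequency."""
    out = [start]
    for _ in range(length):
        options = bigrams[out[-1]]
        if not options:  # dead end: no observed successor
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Run it a few times and you’ll get different, equally plausible sentences; that interchangeability is exactly why the output carries no built-in marker of authority.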

Researchers call what follows “model collapse,” a feedback loop where models trained on AI-generated content lose touch with the range of human-produced data. The rare and specific details disappear first, then the middle narrows. Eventually what’s left is smooth, confident, increasingly generic text that sounds authoritative whether it’s accurate or not, which becomes the training data for the next round.
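You can watch the narrowing happen in a toy simulation. This is a deliberately crude stand-in, not how real model training works: here the “model” is just a Gaussian refit to samples drawn from the previous fit, over and over. The mechanism it illustrates is the one researchers describe: the tails go first, then the middle narrows.

```python
import random

random.seed(0)  # deterministic run for illustration

def fit(samples):
    """Maximum-likelihood Gaussian fit: the 'model' trains on its data."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = (sum((x - mu) ** 2 for x in samples) / n) ** 0.5
    return mu, sigma

# Generation 0: rich "human-produced" data with real spread.
mu, sigma = 0.0, 1.0
history = [sigma]
for generation in range(200):
    # Each generation trains on a small batch of the previous model's output.
    samples = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = fit(samples)
    history.append(sigma)

print(f"spread: gen 0 = {history[0]:.3f}, gen 200 = {history[-1]:.3f}")
```

The small per-generation training set does the damage: rare values at the tails stop being sampled, so each refit sees a slightly narrower world than the last, and the spread shrinks generation over generation.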

I’m thinking about it in terms of the shift from SEO to GEO. SEO preserved a connection between writing and human judgment. Someone wrote content, search engines indexed it, readers got a list of links and decided which to trust by comparing to their own experience and knowledge. This system was gameable through various sleights of hand, but it assumed a reader with agency. The creator’s job was to be easy to find and worth finding. Streaming video complicated this process but still worked with the same basic ideas. Meanwhile, GEO operates on a different premise. The goal isn’t to get found by a person, but to be found by an algorithm assembling a response the user may or may not independently verify.

(Sad news: today, only about 8% of LLM users verify their output against source material.)

(This does not bode well.)

Consider what happens to the same piece of writing in each system. In the SEO world, your article gets indexed, shows up in search results, someone clicks through, reads it, evaluates whether you or your institution is credible on the topic, maybe skeets it or sends it to a colleague. A human encountered your work, weighed it, and decided it was useful; the algorithm responded and indexed accordingly.

In GEO, an AI system parses that same article, extracts the most clearly structured claims, and drops them into a synthesized answer alongside fragments from other sources the user never sees individually. The reader gets a confident, blended paragraph.

In the old way, the reader moved through the web. AI yanks that experience into a single response from a single interface. We don’t fully understand how AI systems decide what to cite, which makes this power shift feel especially risky. Worse, different people will get different responses from LLMs, even using the same prompts and source materials. We don’t know why.

Fewer entry points to the web means fewer opportunities for diverse or unexpected sources to gain traction, which means the training data gets narrower, which means the outputs get more generic, which means the architecture narrows further, which means fewer perspectives represented in the output. For the reader, it accelerates context collapse in much the same way. Fewer inputs means fewer opportunities to stress test your ideas against new information.

So, what to do?

If generative AI grows as predicted, SEO and GEO will coexist for a while, and working developers and communicators will need to understand both and how they layer. Strong SEO foundations give you a great head start on AI visibility too, so the fundamentals of good writing and web taxonomy still matter a lot.

But the production of knowledge, the keeping of data, and how it’s all indexed are subjects that are about to become very important, and very political. So I suspect that any fields that touch those topics will also become very important, and very political, very soon.