It looks like Wordpress has produced a short-form blogging app that duplicates the model we are trying over here on the even-more-indie web. I might give it a spin.
Tech
I’m reading about trends in book and phone bans in American public schools, and am reminded that reading novels was once considered an idle and immoral pastime, just as internet use is today. This 2016 article from JSTOR goes into the history of reading books and the fear that it “enfeebled the mind.”
Fellow Madisonians, someone pulled together a website ranking local businesses in Madison by how local they are (by what criteria, idk). This is one way I expect we’ll see AI used in the next couple of years: prototyping and/or executing ideas that result in dynamic websites.
Centaurs and Cyborgs on the Jagged Frontier by Ethan Mollick in 2023: “On some tasks AI is immensely powerful, and on others it fails completely or subtly. And, unless you use AI a lot, you won’t know which is which.”
Last night I had dinner with a friend in tech who recently attended a training on AI and analytics, where they made the observation that we’re in the “Napster era” of artificial intelligence. It’s an imperfect comparison but useful to consider.
I’ve posted a couple of times about instances I’m aware of where people are using AI in pro se court cases, especially family courts. A new study shows evidence of increasing numbers in pro se cases at the federal level, exacerbating existing bottlenecks. Trade-offs abound here.
A professor asked students to self-report AI usage on their homework, leading to lots of confusion and uproar. Points aside, it’s clear people want more clarity up front about when and whether to use LLM tools. In the meantime, treating students like they’re guilty until proven innocent is a bad MO.
I missed this 2025 article by Noah Hawley on Vonnegut, war and the atomic bomb, and it is worth the time.
What does it mean for the culture when everything (everything!) is content, even war and mass shootings?
Timothy Chester offers some thoughts on the place of AI-assisted software development in a modern research university, and suggests that just because you can doesn’t necessarily mean you should.
Out: Twitter on a vape. In: AI-powered crypto vape. Is this real? Who can tell anymore.
A nostalgic read on how music nerdery bloomed online.
Tom’s Hardware on Mythos and marketing hype. Additional commentary from Michael Corn, asking whether Mythos coverage reinforces or establishes perceptions about cybersecurity.
Garbage Day on what makes something cool to Gen Z, and how this is impacted by the degradation of “pillars of coolness” in previous generations like culture rags and MTV. GD predicts that “cool” will be measured by lack of polish and “how many human beings left their house to experience it.”
What the internet was made for: A Chicago music fan secretly recorded over 10,000 live concerts from Nirvana to Sonic Youth to Spoon, and has made them all available online.
I once worked in a role where I keyed million-dollar manufacturing orders into SAP, information that directly fed into factory specs for a manufacturing facility based in another country. Our regional office fed into a massive, global electrical engineering firm that ran on small margins (electricity delivery is a well-trod market), so our ability to deliver accurate orders on time was a differentiator in a field that is otherwise easily interrupted by chip shortages and logistics chains.
It was a big job. I learned a ton about electrical engineering, manufacturing and global logistics from a particular vantage point in North America. Our headquarters were based in Sweden, with locations around the world to support the electrical grid(s), covering both hardware and software solutions. My colleagues and I worked in positions that sat somewhere between B2B customer service, inside sales and data entry, and were expected to maintain a 99.8% accuracy rate because a single fat-finger error would cascade across myriad systems, impacting real-world operations to the tune of hundreds of thousands of dollars per error.
Once (and only once), I fat-fingered a serial number during data entry which ruined an entire shipment of widgets. In response, the factory in Mexico sent the incorrect order of widgets, about five pallets, to my location in the United States so I could correct the order by hand. One by one, I had to physically remove each widget from a pallet, then from its individual shipping container, make a correction on the widget itself, and repackage each one, signing my name on each unit to ensure it was corrected by an accountable employee. I can’t recall why the issue couldn’t have been corrected on the factory floor, but it wasn’t on the menu. It was going to stay my problem.
This was the one and only factory error I made in about five years of tenure, precisely because it was so painful to correct it. The process was a little embarrassing but nobody made it especially so. Instead my coworkers up and down the org chart relayed a simple expectation: the desk workers need to pay attention to the details because the alternative is too costly. A few old-timers made sure to razz me about it in good humor, but ultimately the error was mine and the fix was mine, and the experience stuck because the whole chain of responsibility understood the stakes and reinforced the consequences. They also trusted me to stick around and continue to do my best.
During my annual review that year, I was dinged for only having a 98% accuracy rate, and I knew why that was a fair assessment.
I thought about this when I read today’s NYT piece on whether 90% accuracy is good enough for LLMs.
Related-ish: The important legacy of the Sarbanes-Oxley Act.
Amid renewed calls for the death of feminism, or of millennial feminism or whatever (TERF alert), I found this old victory lap from the era where it looked like new media had permanently paved the way for a newer, fresher publishing industry. It also tips the hat to Jezebel’s place in the fray, “the first online women’s publication to successfully combine feminist writing with a for-profit motive.”
Poell argues AI is entangled with platform capitalism through shared infrastructure, reinforcing concentration of the market. The hype obscures local realities of adoption, putting public alternatives in the position of justifying their existence while advocating for their place in the market.
Angst about data center development continues to grow in the Rust Belt.
This post and the accompanying graphs have been a trending topic on social all weekend, particularly on Twitter/X, which is negatively implicated in the findings. In short, social media looks weirder than ever.
Silicon sampling is the practice of using LLMs to run surveys without talking to any people at all.
An observation on feminist writing and Jezebel: Throughout history, a lot of time and energy has been spent mediating how and whether certain kinds of people talk to one another. The feminist blogosphere, for all its faults, was the first time lateral, public, unmediated conversation happened among women at scale. Many kinds of women were there who had no space at other tables. And it was very messy, and very revealing, precisely because it happened in public for all to see.
Some thoughts:
- When I started in 2001, the most viewed website about feminism on the internet was a solo blog by a man in Portland.
- I just pulled Mindy Seu’s “Cyberfeminism Index” off my shelf, a veritable tome, and the feminist sites don’t even make the table of contents. Which is not an indictment of the book, but an example of how ephemeral all this is online.
- You can see this on Twitter in the live debate about what Jezebel was and wasn’t, a conversation happening mostly through collective remembering. Jezebel joined the pack of blogs at some point, and they were big dogs, but they weren’t responsible for the creation or steering of the digital movement. They were a glossy, spendy repackaging of the organic thing, backed by real investors. It’s wild to see the entirety of the movement attributed to Jezebel by young Twitter, given that most of the indies kept Jezebel at arm’s length until much later. They were pretty widely considered a corporate interloper in a grassroots arena, though in hindsight that wasn’t right either.
- The first time the indie bloggers gave Jezebel its due was when they published the untouched photographs of Faith Hill on the cover of Redbook, a women’s magazine. The Redbook post tore the veil off the beauty industry in real time. Until then, the idea that “celebrities are photoshopped” and “photos are retouched” was almost an urban myth, which is easy to forget given the ubiquity of filters and AI now. Then, you heard about it but couldn’t confirm it for yourself unless you had firsthand experience. Retouching was a discrete industry practice, digital tools were still expensive and clunky, and all of it was surrounded by a lot of tradecraft and secrecy. The mystery of celebrity image management exploded with an animated gif on a public-facing post, and it was rad.
This post is mostly an excuse to talk about my latest playlist: Digital Animal. This one consists of about 200 songs across genres, all reflecting on our human relationship to science and technology, futurism, digital culture and the internet.
When I’m chewing over a big idea, I like to compile resources in and around that idea to help support my thinking. Embracing my angst about artificial intelligence, I started compiling songs that reach back to the early 20th century, tapping into prior generations’ anxieties about telephones, television and early networking technology, adding more contemporary concerns as I went. The playlist runs from David Bowie and Blue Öyster Cult and Kraftwerk to Zapp and Radiohead, then forward into modern takes on social media, cell phones, and the internet from Missy Elliott, Gillian Welch and Charli XCX.
I like a playlist because it’s convenient, and because songs are one place where meaning and feeling are created simultaneously, and because it’s easy to spot salient patterns across disparate sources. Scholars in interdisciplinary studies have long argued that you cannot fully understand a thing without understanding what it feels like to live with it, and that cultural analysis may get you there faster than surveys will anyway. Meanwhile most writing about technology separates feelings from form and function. Art and music compress all three, and have the potential to surface ideas that professional and institutional language can’t. Art and culture frequently peg an issue down before emerging best practices are formalized in business and academia.
For anyone working in technology communications, that lag between culture and practice has practical consequences. The language we use to describe technical systems shapes what people can think and do about those systems. An institutional frame — efficiency, access, innovation, value — consistently misses important dynamics that people living inside those systems are experiencing as users and as people. Art keeps the human subject inside the frame, functioning as both anecdote and data.
Plus, it’s fun and we should collectively think about art as much as possible. So, treat every song like a portal.
What does it mean to be both digital and animal? Some observations:
- Grief and sadness sit next to fun and ecstasy. Many of the older songs see computers as friend and companion, treating tech with a kind of tender curiosity, a toy that makes you happy as much as a medium for expressing longing and desire. These nevertheless tend to be layered with a sense of loneliness, a minor key, a dynamic that bears out in user research.
- Between the era of telephones and the digital web, there are a charming number of songs about beepers, beeping and pagers. I’m hopelessly devoted to Missy’s “Beep Me 911” right now.
- Recent songs are concerned about understanding identity and the self under the pressure of social media. Songs talking about social media persistently trawl the gap between your real identity and your public identity and personal brand, and how this dynamic can hollow out your relationships and quality of life. Songwriters like Jazmine Sullivan consistently refer to tech as a place of escapism, like retreating into the creation of a new Tinder profile while avoiding reflection on your last relationship. Comparison is the product, and sometimes it stinks.
- Lots of songs from peak elder Millennial/Gen X songwriters are about watching the old analog world being swallowed by the new. The older the songs, the more aware they are of the machine across the room, and the more they lean into allegory. The contemporary songs don’t worry about the boundary between human and machine much at all, likely reflecting the high level of technical integration (and acceptance) we live with right now.
Is fretting about our relationship to tech and industry part of the human condition? Or is there something specific to tech that accelerates these anxieties and impulses?
A few favs:
- I’m enjoying almost everything from the band Automatic. One of the band members is the daughter of the drummer from Bauhaus, and their work is heavily influenced by 80s era synth pop and new wave. I have several of their songs represented on the list, and particularly like “Black Box” off their 2025 album.
- Erykah Badu’s “Cel U Lar Device,” which positions the phone as an instrument of interpersonal surveillance, has been on rotation in my house for (cough) years. It is also a reinterpretation of Drake’s hit “Hotline Bling,” itself forever memorialized as a popular meme format.
- The Talking Heads’ 1988 “(Nothing but) Flowers” imagines a sunny life after the fall of industrial civilization. For the doomer take on this theme, check out Nina Simone’s cover of “22nd Century.”
- Sophie’s “Faceshopping” is about the negative pressures of cultivating a public persona and brand, and tips a hat to our relatively new ability to use science and technology to curate a physical appearance that matches your internal (and digital) one.
- Nobody talks about Ladytron anymore.
Rapper Afroman is going ultra viral this week as his “Lemon Pound Cake” trial plays out in the news. He captured the raid on security cameras in his home and used the footage in a series of songs, videos and merch. He ultimately did not face charges after the search, and argues (with evidence) that the police broke his door and stole $400, which provides the platform and substance for everything that followed. He argues the police shouldn’t have been there at all, and didn’t follow protocol when they were, and that as a citizen and artist he’s expressing his feelings about it in his preferred medium. Is this a winning legal strategy? Time will tell.
In the meantime he’s winning at public opinion. The trial is shaping up in the public view as a defamation vs. free speech trial, with the artist’s prolific work about this no-knock raid performed at his house, itself arguably unethical, held up as harassment by the officers who did the job. True crime, legal experts and court watcher accounts are going gangbusters providing cultural and legal analysis alongside video of court testimony. It helps that the court footage is a rich text — both hilarious and revealing.
Meanwhile: another first amendment case in and around rap lyrics is playing out now. A brief history of rap and the First Amendment.
Come look over my shoulder while I explore how and whether LLMs are good writing tools: Here’s a wee version of the LLM comparison exercise I did with my team. We’ll make it a two-fer so you can see how the “good writing” skill works in practice, though we’ll see how that actually goes.
One of the more useful things you can do with an LLM is hold up a few ideas side by side and apply lenses to them. I know this history pretty well, so I asked a series of LLMs: why is Wisconsin’s cultural identity and cohesion stronger than Indiana’s, from a historical and business perspective?
Here are the answers in one doc, for comparison.
Each LLM will give us more or less the same story, different flavor. Within the industry, the differences across the models reflect “model personality.” Asking “why” instead of “whether” will probably drive the answer to favor Wisconsin. Using multiple lenses (two states, historical + business, identity + cohesion) forces the LLM to cross-reference across more of its training data, which tends to produce a more comprehensive answer.
For all the chatter about consciousness and whatever, remember that an LLM is at bottom a statistical model of human language and semantics, predicting likely next words, so being able to talk about language and communication, getting meta with the tool and how you think through language, helps a lot when using one. This is maybe the one thing I like about experimenting so hard with the tools. I’m thinking about the technical side of writing and enjoying it quite a lot.
Functionally: all of them acknowledge hard historical truths within the subject matter and don’t shy away from critical perspectives, which is good. Both Gemini and Copilot include in-line links, which lets you judge the output’s authority in the moment as a reader. I liked Copilot’s more than I expected here. Claude’s answers are more lyrical and do provide more context, but include no links, so the output doesn’t invite checking against outside sources. And you can see that even with the good writing skill calling out hard bans on certain structure, Claude plows right through them.
Model personality: Claude favors sociological answers to Copilot’s economic answers. Claude is also highly intellectual and narrative by comparison, and that narrative style can mask nuance by sinking relative context within the storytelling. Gemini simplifies, boosts and cheerleads where the others don’t, and really goes hard on Wisconsin’s reputation as a drinking and Packers state when there are stronger structural arguments in play. Copilot is tricky because it looks authoritative like a briefing, which also makes it easily “extractible” for the user, but every citation still needs to be checked unless this is one of those “good enough” tasks.
As a writer, something I find annoying across the whole spread is the semantic reveal. LLMs are semantic machines, and it is persistently revealed in ways that are weird to the human ear. All of them go out of their way to describe things as “structural,” “connective” as in “connective tissue,” “load-bearing” and “legible.”
Finally, I included a second tab where I asked Claude for analysis across the four outputs, where it suggests that my framing of the question is altogether kind of problematic. It shows how a strong prompt is sometimes also a bad approach.
There are a lot of possible takeaways here, but I’d rather set aside the question of which tool is “good” or “bad” or “better” and think more about the patterns across the tools and their implications.
One of the tricky things about consumer AI tools like Claude and Gemini is that the experience varies widely depending on the person using it, and it’s not always clear why. I have spent a lot of time learning the tools so I can advise around them in my work, and this variance of experience has become a frustrating part of the deal.
I manage a team of writers and creatives at work, and we are expected to have familiarity with the tools, despite complex and sometimes hostile feelings about the political and environmental implications around this sector. That’s quite a pickle, organizationally, managerially. Borrowing from Haraway, I thought okay, what if we take these tools seriously as a team of writers and creatives and put our professional standards up against them?
Among other exercises, I did a couple of comparisons on my team that are helpful for creating discussion around the “plausibility” question. People dismiss LLMs outputs as being merely plausible answers, rather than accurate or factual ones. And that’s correct, they are, and that’s the design. In many cases, plausibility is fine. Take Wikipedia, for example, which we understand to be a pretty good source, a plausible source, unless you’re writing a formal paper requiring original sources.
I digress – ultimately we needed to understand together that LLMs are not a WYSIWYG tool and talk through the implications.
I asked everyone to run the same paper through their LLM of choice, prompting it for a plain language summary. We then pasted the results into a shared doc and compared and contrasted them for discussion. The takeaways: the summaries were all similar in spirit but sometimes varied wildly in style and approach.
Knowing that LLM outputs are responsive and not static, we ran the exercise again later in the day and pasted the new outputs into the shared doc, comparing AM against PM. Again, similar in spirit but varied in style and approach. Some changed dramatically. One team member whose morning summary had been jokey and conversational received a much more staid and serious version in the afternoon.
At the time, I asked Claude to explain the variance: “Even with the same prompt and source material, LLMs don’t produce identical outputs each time. This is by design — there’s a degree of randomness (called “temperature”) in how the model selects words, which means each run produces a slightly different path through the text.”
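That “temperature” knob can be sketched in a few lines. This is a toy illustration only, not how any particular vendor implements sampling: the four-word vocabulary and the scores below are made up, and real models sample over tens of thousands of tokens.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Pick one token index from raw scores, with temperature scaling.

    Higher temperature flattens the distribution (more randomness);
    temperature near 0 approaches always picking the top-scoring token.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw over the vocabulary
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Invented vocabulary and scores, purely for illustration
vocab = ["summary", "overview", "recap", "digest"]
logits = [2.0, 1.5, 0.5, 0.1]

# At low temperature the model almost always says "summary";
# at high temperature the other words show up far more often.
for t in (0.2, 1.0, 2.0):
    rng = random.Random(42)
    picks = [vocab[sample_with_temperature(logits, t, rng)] for _ in range(1000)]
    print(f"temperature {t}: 'summary' chosen {picks.count('summary') / 10:.0f}% of the time")
```

Run the same prompt twice and the random draws land differently, which is exactly the AM/PM variance we saw in the shared doc.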
Anyway, this got our gears turning on how (and whether) to approach LLMs as a team and as individuals and led to good group discussion. (It’s important to create space for criticism and critical approaches here.) It also gave us more confidence as a team responding to this new layer of complexity in our work, and helping our professional contacts and peers think about how to approach the tools and when and whether to use them. There will be tasks where AI-based tools are “good enough,” and tasks where they are not.
The swirl of mystery and speculation around this sector have people up in arms, and it’s useful to have approaches that give people firsthand experience, and to see how the experience works for others. The god trick of the singular interface turns out to be a bear for navigating it in the workplace, where our work is foundational, prosocial and specific.
A smart take on nonconsensual deepfakes, considering them not just as a social issue but as a cybersecurity issue, as they’re often connected to broader financial and social harassment campaigns.
WaPo on the many issues of LLM house writing styles. I have much to add. Here are some notes on applying a writing skill to override house style on Claude.
You might call this a taste test: Obsessed with the story about the McDonald’s CEO and how his LinkedIn-style videos selling the McD’s franchise have escaped containment, leading to one of the funnier CEO/product marketing dynamics in recent history. Burger King’s CEO swooped in, holding widely marketed listening sessions with customers to demonstrate his love for the Whopper in contrast with the deeply weird McD’s videos. Must read: Internet long-hauler Katie Notopoulos on how direct-to-public marketing works when the public is more familiar with the product than the org’s leaders.
My teenager thinks fast food is cool and subversive (cue the sound of one hundred moms groaning) and she and her friends regularly talk shop. They are Taco Bell fans and think burger stops are kind of gauche. Not gauche: Baja Blast.
In the meantime, the fast food sector is becoming a playground for AI approaches, causing a lot of nervous discussion on social media and in tech spaces.