I agree on one angle, that as a cohort, abstaining entirely from new tech is a bad approach.
Haraway (obvs) suggests we stay with the trouble, that sometimes our very presence in the room is what’s required to trouble existing narratives, and that it’s important that we share what we learn back with our people at home, whatever that means in our context.
For me: I spend a lot of time at work translating technical ideas and projects into an institutional voice, but on nights and weekends, I’m explaining the ins and outs of the internet to my working-class friends and family. Increasingly, I’m asked to explain LLMs and AI and how they relate to their needs around business, the news, entertainment, and legal aid.
The “cyborg’s mark” is a metaphor for writing, our duty to translate, and the inevitability of translation through experience. Perhaps staying with the trouble allows us to articulate our experiences in and around the science in ways that both advance our interests and that keep our communities safer. For those of us who straddle multiple worlds, translation is a responsibility.
Widespread AI adoption reopens some basic questions in business: who your audience is, what work can reasonably be automated, what absolutely requires human oversight, and how the service and information environment actually holds together when we shuffle these circumstances around. All this, with many social and environmental risks and a side of existential doom.
I suspect the predictions of economic and social doom are overblown by a lot – in life as in business, generalities are easy and specifics are hard. Last-mile issues remain a reality of nearly any technical or software implementation, and differentiation is in many ways the art of business. There are serious incentives to promise a smooth, profitable near-future, especially around emerging markets, so I feel pretty comfortable forecasting that the era we currently live in is valuation masquerading as value. Instead, I think we should worry more about how the tech changes our relationship to information and to each other.
The dominant conversation is about whether LLMs can write well, but I suspect that’s the wrong frame. Human storytelling will probably always be more interesting than generated storytelling, because humans love quirks and novelty that can’t be produced artificially.
The more consequential change is that AI-generated text doesn’t just sit on the web waiting to be read; it feeds back into the system that produced it. It becomes training data, source material, and eventually, architecture. Remember: when an LLM generates text, it’s producing word sequences based on statistical patterns. The output is one plausible response to your prompt, not a definitive one — but it gets indexed, linked, and cited like any other writing. Nothing about its surface tells you it doesn’t carry the same authority.
Researchers call what follows “model collapse,” a feedback loop where models trained on AI-generated content lose touch with the range of human-produced data. The rare and specific details disappear first, then the middle narrows. Eventually what’s left is smooth, confident, increasingly generic text that sounds authoritative whether it’s accurate or not, which becomes the training data for the next round.
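The narrowing is easy to see in a toy simulation. This is a sketch under loud assumptions, not a model of any real training pipeline: the “model” here is just a bell curve fitted to data, and the collapse mechanism is the assumption that each generation of synthetic data underrepresents its rarest examples. Even that crude setup shows the spread shrinking generation over generation.

```python
import random
import statistics

def fit(data):
    # "Training": summarize the data as a fitted normal distribution.
    return statistics.mean(data), statistics.stdev(data)

def generate(mu, sigma, n, tail_cut=0.05):
    # "Generation": sample from the fitted model, then drop the
    # rarest samples from each tail -- a stand-in for how models
    # underrepresent low-probability data.
    samples = sorted(random.gauss(mu, sigma) for _ in range(n))
    k = int(n * tail_cut)
    return samples[k : n - k]

random.seed(0)
data = [random.gauss(0, 1) for _ in range(10_000)]  # human-produced data
for gen in range(6):
    mu, sigma = fit(data)
    print(f"generation {gen}: spread (stdev) = {sigma:.3f}")
    data = generate(mu, sigma, 10_000)  # next round trains on outputs
```

The rare, specific tails disappear first, then the middle narrows, exactly the sequence the researchers describe.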
I’m thinking about it in terms of the shift from SEO to GEO, from search engine optimization to generative engine optimization. SEO preserved a connection between writing and human judgment. Someone wrote content, search engines indexed it, readers got a list of links and decided which to trust by comparing them to their own experience and knowledge. The system was gameable through various sleights of hand, but it assumed a reader with agency. The creator’s job was to be easy to find and worth finding. Streaming video complicated this process but still worked on the same basic ideas. Meanwhile, GEO operates on a different premise. The goal isn’t to get found by a person, but to be found by an algorithm assembling a response the user may or may not independently verify.
Consider what happens to the same piece of writing in each system. In the SEO world, your article gets indexed, shows up in search results, someone clicks through, reads it, evaluates whether you or your institution is credible on the topic, maybe skeets it or sends it to a colleague. A human encountered your work, weighed it, and decided it was useful; the algorithm responded and indexed accordingly.
In the old way, the reader moved through the web. AI yanks that experience into a single response from a single interface. We don’t fully understand how AI systems decide what to cite, which makes this power shift feel especially risky. Worse, different people will get different responses from LLMs, even using the same prompts and source materials. We don’t know why.
Fewer entry points to the web means fewer opportunities for diverse or unexpected sources to gain traction, which means the training data gets narrower, which means the outputs get more generic, which means the architecture narrows further, which means fewer perspectives represented in the output. For the reader, it accelerates context collapse in much the same way. Fewer inputs means fewer opportunities to stress test your ideas against new information.
So, what to do?
If generative AI grows as predicted, SEO and GEO will coexist for a while, and working developers and communicators will need to understand both and how they layer. Strong SEO foundations give you a great head start in AI visibility too, so the fundamentals of good writing and web taxonomy still matter a lot.
But the production of knowledge, the keeping of data, and how it’s all indexed are subjects that are about to become very important, and very political. So I suspect that any fields that touch those topics will also become very important, and very political, very soon.
When I started building websites in the late ’90s, the line between writing and coding didn’t really exist. A person probably learned HTML because she had something to say and needed a place to put it. The internet was free and anonymous and it felt audacious to put your stuff online, like flinging a message in a bottle out to sea. The code was a container for the ideas that rendered them onscreen, and every post and page you published was both a piece of your thinking and a brick in something larger.
People forget that the early web was a writing community. Writers, or bloggers, built their own sites, maintained their own archives, linked to each other deliberately. A blogroll was both a reading list and a show of solidarity; a trackback was a way of saying, “I see you, I’m thinking with you.” The technical architecture (RSS feeds, permalinks, comment threads) existed to organize the writing and the writers’ thoughts, and to push their ideas forward on the open web.
This worked for a time. Communities of writers, most of them without institutional backing or media credentials, built new bodies of knowledge together through interacting as readers and writers, communicating across a foundation of code. The work didn’t stay online. It spilled into conference halls and state houses and newsrooms and policy discussions. The body of communication accumulated over time into something larger. These people influenced mainstream journalism, shaped public conversations, launched careers and movements. In many ways, the national political conditions we face today are a reaction to that movement, and to how it allowed regular people to influence the world through the democratization of mass communication.
Midway through the aughts, the brick-and-mortar publishers and venture capitalists started looking across the landscape at all the writers creating fantastic content, largely for free, and sucked them into their content and editorial teams. Google Reader lost institutional and financial support as writers moved off the open web and onto publishing platforms, often backed by VC money, that measured the quality of your work by engagement. The addition of algorithmic feeds further broke down this structure – the algorithm doesn’t measure whether your work contributed to shared understanding, but whether it generated a click, a share. The code changed, and the writing changed with it.
The writers changed with it, too. The blogger became the influencer. Bloggers operated in a gift economy of ideas: you wrote to think, to argue, to contribute, and your standing in the wider community came from the quality of your work and contributions over time. Was it a meritocracy? No, but the conditions made it possible for a regular person to talk with experts as peers, which upended traditional power structures around authority and expertise (in both directions, good and bad). Meanwhile, influencers operate in a heavily capitalized attention economy where engagement converts to dollars. The audience is a market to be pressed for money.
The gendered dimension of this shift matters as well. The early blogosphere was full of women writing sharp, rigorous work about politics, culture, parenthood, identity, and technology — work that was explicitly feminist and anti-racist and genuinely moved public conversations. This was the community I helped build (Feministe.us was my project, a community platform of writers and commenters whose coverage and discussion broadly fell under, but was certainly not limited to, the topic of feminism). When the monetized platforms absorbed that energy, the commercial model recast women’s online authority almost entirely in terms of consumer influence. What could we sell? And to whom? The framing around our work went from “this person has important ideas” to “this person can sell things to a niche market.” Meanwhile, men who’d built audiences through tech or political blogging were more likely to be absorbed into mainstream media as columnists and analysts, roles that kept their intellectual authority intact. The influencer label, with all its connotations of superficiality, landed disproportionately on women, and it stuck.
There’s a class piece here, too. The platform model offered something the early blogosphere mostly didn’t — a way to get paid. For women who’d been doing enormous amounts of unpaid intellectual labor building online communities, the question of monetization wasn’t shallow. The implications of information centralization and monetization were as present then as they are now with LLMs and AI. Some people figured out the social platforms and worked their way into viable digital careers. Platforms offered a lot of perks, but every perk came with a catch. Corporate interests introduced the problems of advertising, audience and sponsorship, which meant reorienting your individual practice around maximizing your commercial value over and above your intellectual contribution and community management skills. It often meant giving away some or all of your IP rights.
For most people, the new system didn’t offer a viable way to get from “respected independent writer” to “respected, protected and compensated writer.” Many of us found ourselves in positions too precarious to take the leap into freelancing and social media, and some, like me, got regular jobs doing regular stuff. Some married money. And in the meantime, some folks figured out how to get into real journalism, which looks much different in 2026.
Great storytelling helps people understand themselves and their world. We let some of that depth go on the Internet with the onslaught of digital marketing and all of its implications, and today the internet feels less useful and less trustworthy than it once did.
It feels like there’s something to take forward from the experience.
I’ve been running a series of informal experiments with my team to understand how we should advise people on using AI tools like ChatGPT, Claude and Gemini around our work.
In our last session, I gave everyone the same task: ask your AI tool to summarize a single article in plain language. Every person got a factually accurate summary, but the style and depth of analysis varied widely from person to person. We tested this further by entering the same prompt multiple times. Each time, the output was slightly different. We copied and pasted everything into a shared document so we could compare side by side.
Later that day, I asked everyone to reopen the tool they’d used that morning and repeat the prompt that had worked best: summarize this article in plain language. The outputs shifted again. Some changed dramatically. One team member whose morning summary had been light and conversational received a much more formal, serious version in the afternoon.
I then asked Claude to compare the morning and afternoon versions of one summary. “In short, the morning version emphasizes practical implications and accessibility; the afternoon version prioritizes completeness and specificity. They complement each other well — someone reading both would get a fuller picture than either one alone provides.”
Why would Claude produce these differences despite the same source material and prompt?
I asked the source.
Even with the same prompt and source material, LLMs don't produce identical outputs each time. This is by design — there's a degree of randomness (called "temperature") in how the model selects words, which means each run produces a slightly different path through the text.
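That temperature mechanism can be sketched in a few lines. This is a toy, not how any production model actually tokenizes or samples: the four-word “vocabulary” and its scores are invented for illustration. But the mechanics are the real ones — divide the model’s scores by the temperature, convert to probabilities, and draw at random, so a higher temperature flattens the distribution and makes rarer words more likely.

```python
import math
import random

def sample_next_word(logits, temperature=1.0):
    # Softmax with temperature: low T sharpens the distribution
    # toward the single most likely word; high T flattens it,
    # letting less likely words through more often.
    scaled = {w: score / temperature for w, score in logits.items()}
    top = max(scaled.values())
    exps = {w: math.exp(s - top) for w, s in scaled.items()}  # stable softmax
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    r = random.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding

# Hypothetical scores for the next word after "Once upon a time, ":
logits = {"the": 2.0, "a": 1.5, "one": 0.5, "zebra": -1.0}
random.seed(1)
print([sample_next_word(logits, temperature=0.7) for _ in range(5)])
print([sample_next_word(logits, temperature=1.5) for _ in range(5)])
```

Run it twice without the seed and you get different sequences from identical inputs — the same behavior my team saw between the morning and afternoon sessions.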
But the differences here go beyond random word choices. The morning version landed on a journalistic structure while the afternoon version landed on an academic one. That likely reflects what the paper calls sensitivity to framing and order effects — one of the cognitive biases it catalogs. Small, invisible differences in how the model’s internal processing unfolds at the start of generation can cascade into meaningfully different organizational choices. Once the morning version opened with a news-style framing, the rest followed that pattern. Once the afternoon version opened with a descriptive academic tone, it followed that thread instead.
This is actually a concrete example of the robustness issues the paper identifies. The underlying reasoning about the paper’s content is consistent across both summaries — neither gets the facts wrong. But the presentation varies in ways that weren’t requested, which means someone relying on just one version would get a subtly incomplete picture depending on which run they happened to see.
It’s a useful reminder that LLM outputs are one plausible version of a response, not the definitive one.
It’s important to understand how these tools work. What looks like an authoritative answer is really just one of many possible outputs the system could have generated. The text will sound plausible whether it’s accurate or not, and you may get different results each time you ask.
Researchers from Caltech and Stanford conducted the first comprehensive survey of how and why large language models (LLMs) — AI systems like ChatGPT or Claude — fail at reasoning tasks. Rather than focusing on what AI can do well, they cataloged and organized the ways it goes wrong.
Why it matters
LLMs are increasingly used in high-stakes settings, but their reasoning failures are often unpredictable and poorly understood. The researchers argue that systematically studying failures — like accident investigations in aviation — is essential for building more reliable AI.
The researchers created a framework with two dimensions:
• Type of reasoning: informal (intuitive, social), formal (logic, math), and embodied (physical, spatial)
• Type of failure: fundamental flaws built into the architecture; domain-specific weaknesses; and robustness problems (inconsistent performance when small details change)
Informal/intuitive reasoning
LLMs exhibit human-like cognitive biases — confirmation bias, anchoring, framing effects — but without the human ability to recognize and correct for them. They also struggle with “theory of mind” (understanding what others believe or intend), and with applying consistent moral or ethical reasoning.
Formal/logical reasoning
LLMs often can’t reverse simple logical relationships (if they know “A is B,” they may not infer “B is A”). They struggle to chain multiple reasoning steps together. Basic counting and arithmetic fail in ways that seem surprising given their other capabilities.
Embodied/physical reasoning
LLMs have poor intuitions about the physical world — gravity, spatial relationships, object properties — because they’ve learned only from text, not from physical experience. This extends to visual AI systems as well.
Many failures trace back to how LLMs are trained: they predict the next word in a sequence rather than reasoning deliberately. This makes them good at pattern-matching but unreliable when tasks require genuine logical inference, especially under slight variations in how a question is phrased.
Researchers have proposed fixes including better training data, techniques that force step-by-step reasoning (like “chain-of-thought” prompting), connecting LLMs to external tools like calculators or physics simulators, and architectural changes. However, no single fix is comprehensive — many improvements in one area don’t transfer to others.
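The external-tools idea is the easiest of these fixes to picture. Here’s a minimal sketch of the routing pattern, with loud caveats: the function names are hypothetical, the “model” is a stub, and real systems decide when to call a tool via the model itself. The point is just that arithmetic goes to a deterministic calculator instead of relying on the model’s pattern-matching.

```python
import ast
import operator

# Safe arithmetic evaluator: parses an expression into a syntax
# tree and computes it, rather than trusting a language model.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expr):
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question, expr=None):
    # Hypothetical router: if the question contains arithmetic,
    # hand it to the calculator; otherwise we'd fall back to the
    # model (stubbed out here).
    if expr is not None:
        return calculate(expr)
    return f"ask_the_model({question!r})"

print(answer("What is 37 * 41?", expr="37 * 41"))
```

The calculator never anchors, never loses count, and gives the same answer every run — which is exactly why the survey’s authors treat tool use as a partial patch rather than a cure: it helps only in the domains where a deterministic tool exists.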
My undergrad experience really shaped my approach to the internet. I was an English Education major in the early 00s, when the internet was around but still mostly the wheelhouse of scholars and nerds. I was a nerd, learning code as a vehicle for writing, primarily to amuse myself and my friends.
The English department was an embattled unit within a school within a college of a STEM-centric university whose administration was perennially annoyed by the Humanities and their writing requirements. One of the English department’s survival tactics was to grow their approach to technical writing, getting deep into the question of how technology changes, shapes and shifts reading, writing and literacy. Thus they organized loosely around an emerging field called “digital rhetoric.”
For a time this was the top rhetoric and composition program in the field, populated by scholars from scrappy programs. My closest mentor, an English PhD from Wayne State in Detroit, studied race and gender representation in video games and how programmers (particularly Black and trans programmers) write themselves into existence through code, design and other aesthetic and storytelling choices. Outsiders had a really hard time understanding how this work belonged in an English department, but ultimately, she was focused on the question of authorship and how the author is projected throughout her work, a classic literary debate. She treated video games as texts and gamers as an audience, an approach that foretold many things about our current political era.
In this space, “digital” doesn’t just mean content on a screen. The concept is more complex, including social, cultural and rhetorical dimensions, in addition to shifts through time. Digital “texts” and practices exist on a continuum with print and other media, rather than in isolation, transforming how persuasion and communication work, both separately and together.
I took all these lessons and ran with them. This is where my approach to the internet is situated, and there are a few truisms that I learned from that time and era that further position my writing and approach.
Writing is code, code is writing
Writing and code are fundamentally the same thing in digital contexts. Both are systems of symbols that create meaning and action through semantic rules. The line between “content” and “container” blurs in digital spaces. A blog post is the copy on the page – and it’s also the metadata, the responsive design that adapts to different screens, the accessibility markup that makes it readable by screen readers. Each of these elements is written (coded) and each carries rhetorical weight and communicates something to the audience, intended or not. If you understand these relationships, you understand how the internet works as a social and information system.
This convergence of digital and material amplifies the concept of intertextuality, the idea that all texts reference, respond to and build upon other texts. In the classroom, intertextuality often focuses on plays and novels, and explores how authors speak and refer to one another’s work over time. In music, this is the study of sampling and referencing and why.
In digital environments, intertextuality becomes literal and functional. Code libraries reference other code libraries. Websites link to and embed other websites. APIs allow different platforms to communicate and share data. A single digital text might pull content from multiple sources simultaneously – a Twitter embed, a YouTube video, a Google Map – creating a networked document that exists across multiple platforms and authors. Virality builds rhetorical velocity through layers of meaning being added by individual users in real time, creating new texts and contexts through iteration and sharing.
Writing makes reality
A lot of students of this era took up knitting. It was trendy, yes, but the professors also taught knitting as an applied example of technical writing, and how writing produces a material reality.
Knitting patterns are technical writing in its purest form. A pattern is a set of instructions that must be precise, unambiguous, and reproducible, the same goals as any technical document. Pattern writers use specialized notation (K2tog, SSK, yo) that functions like code, compressing complex physical actions into standardized symbols that individuals interpret using sticks and string. The pattern must account for different skill levels, anticipate common errors, and provide enough context for the knitter to understand not just what to do, but why.
Good instructions and an able translator may result in a wearable delight: a sweater, a scarf, a cozy and colorful pair of socks. When a pattern fails, the result is the same as failed technical documentation: confusion, wasted time and an unusable product. Piles of string. Dumb, useless sticks. It is an incredibly strong reminder that technical writing isn’t confined to manuals and protocols. It exists anywhere complex processes need to be communicated clearly and consistently so others can replicate results – including in your granny’s yarn basket.
So that’s how I learned to knit. Digital rhetors link physical practices to digital ones to illustrate highly conceptual ideas about writing and social networks. And one reason digital spaces like Ravelry deserve recognition as thoughtful, functional social platforms is that the digital knitting community makes this link between the conceptual and the material explicit. Because Ravelry was designed for information sharing among a particular audience, its decisions about information architecture and community management cascade reasonably from that mission, and it has remained a reasonably healthy community experience for most users despite its massive size and sprawling discussion. It remains an example of positive social dynamics online, unlike its behemoth competitors.
Always returning to Haraway
Many thinkers and texts built out this field of thought, but the Cyborg Manifesto sits at the forefront for me. Writing during the Reagan era, with the populace freaking out about the rise of biotechnology and personal computing all around her, Haraway entered debates about whether women should enter male-dominated, militaristic fields like engineering and computer science, bringing an overtly feminist lens to questions of technology and power.
One major takeaway from Haraway’s work is the importance of rejecting binary thinking around technology and science. This approach aligns with other humanist and feminist perspectives that foundationally believe technology is by, about, and for the human experience, thus providing new and novel sites for political struggle. This gave people frustrated by tech a permission structure for interacting with technology rather than avoiding or abstaining from it entirely.
If these questions of knowledge and power remain central to technology, we want the people making those decisions to share our values and interests, and to be in the room when decisions are made. This argument is ripe for various challenges, which is why it was such a provocative starting point for cyberpunks and cyberfeminists alike.
Sharing is caring
This was an open source culture, one that meant sharing not just finished products, but the breadcrumbs and half-finished attempts at learning along the way. It required the safety that supports a yes/and culture, where people can collaborate with transparency, in spite of, or in consideration of, the ugly stuff and the many unknowns.
We let public and private live alongside each other without rigid boundaries about professionalism and polish. Your serious professional work could sit next to a meme, which could sit next to a picture of your cat, and none of it diminished the other.
This was an intentional acknowledgment that people are multifaceted, and that the digital spaces we inhabit should reflect that complexity. Putting the personal and the real alongside the artificiality of digital communications builds a relationship between the viewer and creator in ways that carefully curated, brand-managed presences just can’t (also: yawn).
Vulnerability, humor, expertise, horror, scholarship, and joy coexist, as in real life.
Something else I noticed in my experimentation with AI and creative writing: Claude prefers a mid-length, declarative sentence, while I prefer a lot of variety in my prose. Sentence variety is a primary consideration in any text-based communication approach. Write accordingly.
I have a confession. While experimenting with AI over the last year, I wondered what would happen if I crammed an unfinished novel draft, one I actually care about, into Claude. Claude is pitched as the LLM for writers, with Claude 3.7 Sonnet and Claude 3 Opus widely regarded as the strongest models for creative writing, long-form content and human-like prose. Meanwhile, I majored in English and work in mass communications, so I’m trained to think about writing creatively, strategically and tactically. Writing and personal expression have been part of my daily life for most of my life. If this tool could in fact produce a quality story, someone like me should be able to make it happen. Instead, the experience left me confident that AI isn’t a good vehicle for creative, narrative writing.
Here’s what I found:
On the technical side, Claude struggled to maintain a narrative thread over time. The longer the chat, the more the bot drifted and eventually lost track of details and claims made about characters earlier in the plotline. It’s not a sustainable approach for narrative writers because continuity matters: outsource too much plotline to the bot and your characters lose relationship to one another.
LLMs like Claude work fine for writing support—they can function something like a synonym machine, helping writers work through technical questions of redundancy, register, length, and other semantic needs while drafting. But when you outsource world-building and meaning-making to an LLM, it becomes narratively confusing fast. Despite giving Claude extensive background on my primary characters and the world they live in, it would confidently declare that a character’s relationship to another was X, then claim the opposite on the next page. Dialogue was thin and expository. It preferred a sort of “maid and butler” style of dialogue where two characters artificially recap shared knowledge for the reader. Meanwhile Claude does not do feelings well, which is arguably the point of much narrative writing.
Ultimately my drafts were worse off than what I started with – less organized, more confusing, with so much narrative drift that almost nothing was usable, even as a first draft. A devil’s advocate might argue that my prompting wasn’t sophisticated enough to produce the results I wanted. Sure.
But then we have the second problem: Claude’s approach to storytelling isn’t narratively interesting. Fiction and narrative writers put tremendous energy into world-building and sensory experiences. The goal is to immerse the reader in a sensory experience so total that they can experience another world entirely – the original VR, if you will. A great writer even exploits your higher-level cognitive functions by reusing parts of the brain that evolved for action and perception, which is why a good story makes you think, feel, and wonder.
Claude does not feel or wonder. Claude collates.
A key part of this essay suggests that LLMs create meaning through triangulation – that by pinging other ideas and vocabulary, an LLM can get a human reader close, or close enough, to suffice in many cases of writing. In my experience, this is true enough in business writing, where tinkering with approach and register can become as important as precise verbiage.
But this misses the pleasure and the point of good storytelling, which is myriad but usually centers on the satisfaction of expanding your imagination and experience through narrative, by seeing your own messy, striving, failing, hopeful, and collective human experience reflected in another person’s expression. That kind of meaning-making doesn’t happen through triangulation. It happens through the labor of human thought, experience and skilled articulation. That’s art, babes.
This article gets into the mess of AI and creative writing, within the domain of the romance genre, which famously cranks out variations on romance themes at a rapid clip. It drills down into some of the debates about writing, authority and authorship in relationship to LLMs that are playing out across the publishing sector now. Remember: early research suggests that most writers who use LLMs as part of their workflow ultimately retain their sense of authorship in and around the tools, suggesting that even when writers adopt AI assistance, they still see themselves, not the tool, as the creative and accountable source. So based on my experience above, I suspect that if an AI approach to creative writing succeeds, it’s because the author is linking her approach to emerging tech, not because the work is good, and that’s a distinction worth making.
About fifteen years ago, when I began to take my work and career more seriously, I turned toward other women in my orbit who were in the same phase of life. For many years now we have supported each other through our various iterations, talking interviews and talent and resumes and obstacles and salary alongside art and life and leisure, staying connected through a mix of persistent digital communication and travel.
These women underpin everything I’ve built since, and I am so grateful for their observations and experience, and for the pleasure of seeing how we have grown into our choices over time.
Personal support means everything; assemble your team.
The article argues that Silicon Valley’s shift from long-term employment to talent migration has created a model where workers maximize their individual compensation through frequent job changes, fundamentally altering tech industry culture and economics for the worse.
This PBS doc recently came to Netflix, and I tucked in expecting a nice but predictably boring documentary about one of my favorite authors. To the contrary, it really plumbs Morrison’s writing craft, not only as an author but as an editor who brought a generation of incredible thinkers into the limelight. She talks in depth about her writing process, her approach to authorship and editing, and how she kept these roles separate at the peak of her midcentury literary career in NYC’s publishing industry.
There’s something so powerful about hearing her discuss the craft, her deliberate choices, the refusal to center whiteness, and the insistence that Black readers were her intended audience. She saw her critics and wrote around their critiques with authority and confidence. This was a breath of fresh air since I’m so immersed in the AI era, which threatens to change our perceptions around value of writing, the choices and experiences behind the author, and how we consider the influence and responsibility of authorship.
“Evidence from a study about workplace writers who use AI suggests that writers are outsourcing some of their research, editing, or drafting to AI, but that they retain responsibility for their writing.”
A lot of readers are fascinated with the “black box” of AI writing, and trying to reverse engineer what it does and why. John Gallagher goes down the rabbit hole and articulates some credible theories about why LLMs use lists and listing to create meaning, and why it matters.
French overlooks how smartphones and social media raised the stakes on debate and discussion, transforming campus discourse. Today’s students worry that one viral misstep (in countless directions) may define them forever.
The TikTok deal means American users will see a US-only algorithm. Brands and creators will likely see smaller audiences and higher costs for domestic reach. ByteDance faces split algorithms, divided workforces and parallel governance, complicating product delivery across global markets.
I’m generally skeptical of anyone selling a solution to a social problem that relies on individual abstinence, so I tend to be annoyed with many arguments about the attention economy. I more or less land here on the question of AI, which I know many of my contemporaries will find similarly annoying.
Searching for Suzy Thunder: In the ’80s, Susan Headley ran with the best of them—phone phreakers, social engineers, and the most notorious computer hackers of the era. Then she disappeared.
While conspiring with a friend about life and work in these trying times, both of us confessed that we believe, at the root, that reading and writing are ultimately the cure for everything that ails us: collectively, individually, epistemically, existentially. Maybe that’s naive, but I’ll take it.