Meta

I’m following a guy in TX who uses AI to write and illustrate children’s books whole cloth, self-publishes them on Amazon, and is getting recognized in his region as a laudable children’s author. The books are categorically not good. It’s as if people are rewarding his content strategy.

I’m collecting mentions of the old blog while they still exist, because a lot of what’s written about this cohort is hagiography composed well after the fact. I don’t think I saw this one way back when: a piece by Rebecca Traister from the first time I “quit blogging” in 2006. As ever, a big part of social media is quitting social media.

While poking around for examples of media coverage of the early blogosphere, I found this mention from the time Lisa Jervis of Bitch Magazine did a guest stint at Feministe. For context, Feministe hosted regular guest blogging seasons where we opened our comparatively gigantic platform to much smaller bloggers. What we didn’t pay in money, we could offer in audience and attention, with mixed results. Several people who guest blogged found the audience-and-attention experience terrible – which began its own media cycle.

The reality is that having a platform on Al Gore’s unregulated internet means you face a lot of gross commentary from inside and outside the proverbial house. Invitations for guest blogging were happily accepted, but the attention wasn’t reliably durable for writers and their goals. This reflection in Salon, “In Defense of Ladyblogging,” recaps what it’s like to have a massive platform on the open internet: “For every commenter who thoughtfully critiqued my message, there would be one who’d say I was a tool of the patriarchy, and another who’d accuse me of abusing my class privilege. It’s a vibrant, razor-sharp community and I was honored to be a part of it, but my point is, if explicitly feminist blogs are the only acceptable online outlet for feminists to inhabit, we’d get exhausted mighty quickly.” This is a limited telling, but there are some nuggets here that forecast the commentary around Lindy West’s book – particularly around the pressures of being smart in public while also being a person who wants to be liked and included.

An observation on feminist writing and Jezebel: Throughout history, a lot of time and energy has been spent mediating how and whether certain kinds of people talk to one another. The feminist blogosphere, for all its faults, was the first time lateral, public, unmediated conversation happened among women at scale. Many kinds of women who had no space at other tables were there. And it was very messy, and very revealing, because it all unfolded for everyone to see.

Some thoughts:

This post is mostly an excuse to talk about my latest playlist: Digital Animal. This one consists of about 200 songs across genres, all reflecting on our human relationship to science and technology, futurism, digital culture and the internet.

When I’m chewing over a big idea, I like to compile resources in and around that idea to help support my thinking. Embracing my angst about artificial intelligence, I started compiling songs that reach back to the early 20th century, tapping into prior generations’ anxieties about telephones, television and early networking technology, adding more contemporary concerns as I went. The playlist runs from David Bowie and Blue Öyster Cult and Kraftwerk to Zapp and Radiohead, then forward into modern takes on social media, cell phones, and the internet from Missy Elliott, Gillian Welch and Charli XCX.

I like a playlist because it’s convenient, because songs are one place where meaning and feeling are created simultaneously, and because it’s easy to spot salient patterns across disparate sources. Scholars in interdisciplinary studies have long argued that you cannot fully understand a thing without understanding what it feels like to live with it, and that cultural analysis may get you there faster than surveys will anyway. Meanwhile, most writing about technology separates feelings from form and function. Art and music compress all three, and have the potential to surface ideas that professional and institutional language can’t. Art and culture frequently pin an issue down before best practices are formalized in business and academia.

For anyone working in technology communications, that lag between culture and practice has practical consequences. The language we use to describe technical systems shapes what people can think and do about those systems. An institutional frame — efficiency, access, innovation, value — consistently misses important dynamics that people living inside those systems are experiencing as users and as people. Art keeps the human subject inside the frame, functioning as both anecdote and data.

Plus, it’s fun and we should collectively think about art as much as possible. So, treat every song like a portal.

What does it mean to be both digital and animal? Some observations:

Is fretting about our relationship to tech and industry part of the human condition? Or is there something specific to tech that accelerates these anxieties and impulses?

A few favs:

Developing a pet theory that you could steer AI outputs toward your vision by way of an organized link blog revolution.

Second wavers in tech and engineering talked a lot about the “god trick”: presenting knowledge and information, particularly around math and science, in a way that suggests its objectivity is eternal, immortal, unknowable. Work by Safiya Umoja Noble and others extended this lens to Internet search and architecture, finding that the search algorithm was never neutral; instead it was a series of business decisions wearing neutrality like a costume, creating a customer service experience. LLMs take that same trick and compress it further.

It’s an old idea, and one I’ve been drawing from while I tinker with Claude, which is purportedly the best in the game. The “god trick” is baked right into the AI interface: one input, one output, an authoritative-seeming answer offered without named perspectives behind it, trained on text produced overwhelmingly by a narrow demographic that has historically had access to both literacy and publishing, then filtered through programmers and new media drawing from the same well. Smushed together, it gives the impression that consensus exists where there are in fact many, many loose ends.

I increasingly find it annoying that even “good” AI outputs seem fixated on words like “key,” “core,” “exist,” “actually,” “never,” and possibly the worst sentence structure of all time, “it’s not X, it’s Y.” I’ve begun to recognize how LLMs work like autocorrect for phrases and ideas: drawing from ranked search sources first before fanning out to more obscure ones, trying to determine and assert what’s important to me, a user known by demographics and data. It feels like a big linguistics machine, which is pretty cool in some regards, but also aggressively semantic. The math doesn’t always connect me to what I want to find, because I am situated in my individual context in ways LLMs cannot understand – with my memory, in my body, with my unique experiences, which shape and translate meaning for me as I interact with the world (and the web).

And so it is for you, in your body and memory and experience. An LLM can approximate the outputs of an experience without having access to the experience itself. Sometimes that’s useful; sometimes it’s reckless.

Overall the dynamic reminds me of the famous scene from Good Will Hunting: Claude is a smart kid, and he’s never been outta Boston.

McKenzie Wark for Verso Books on Donna Haraway, in 2015: “Creating any kind of knowledge and power in and against something as pervasive and effective as the world built by postwar techno-science is a difficult task. It may seem easier simply to vacate the field, to try to turn back the clock, or appeal to something outside of it. But this would be to remain stuck in the stage of romantic refusal. Just as Marx fused the romantic fiction that another world was possible with a resolve to understand from the inside the powers of capital itself, so too Haraway begins what can only be a collaborative project for a new international. One not just of laboring men, but of all the stuttering cyborgs stuck in reified relations not of their making.”

Sharing Digital Animal, a new, curated playlist of songs about all the angst and joys of living with modern technology.

Closer to the machine

There is something about the AI moment that reminds me a lot of when the internet was new. A lot of what was imagined and promised about the internet was never realized. But much was.

I’ve been reading Ellen Ullman’s memoirs, “Life in Code” and “Close to the Machine,” and her observations about proximity to technology feel relevant here. Being close to the machine means understanding its actual capabilities and limitations apart from the prevailing sales narratives. It also means a kind of loneliness, because you are working in a space that others don’t yet see clearly or fully understand.

I suspect people thinking seriously about AI right now will experience something similar: a stretch of hostility and discomfort while the rest of the world catches up and the consumer market level-sets on the promises being made. In the interim, the hype will not match the reality, and the reality will sometimes exceed the hype in ways no one predicted. And for a while, how it works and why it matters won’t be legible to everyone at once.

Crunching for clarity

In 1999, academic and theorist Judith Butler famously won an award for the worst academic sentence, raising good questions about how we read difficult texts, who gets to access academic ideas, and the role of academic and plain language in and around the academy:

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

This was my era in college, and I remember feeling challenged and overwhelmed as an undergrad trying to read theory for the first time. So today, I asked Claude.ai to put it in plain language, just to see what would happen. Here’s the after:

Scholars used to think that capitalism shaped society in consistent, predictable patterns. But newer thinking focuses on how power works—how it repeats, overlaps, and gets reshaped over time. This brought time into the picture and moved away from older theories that treated social structures as fixed. Instead, recognizing that structures can change opened up a new way of understanding power: it's tied to specific moments and strategies, and it's always being renegotiated.

A meta lesson about AI assistance

I just completed my first attempt at coding using AI, in this case having Claude assist me with putting together a simple client-side OPML parser using Dave Winer’s Feedland service.

Winer’s original script is pretty slick, and includes a list of all my feeds with titles, URLs, and categories; click-to-expand functionality to see the 5 most recent posts from each feed; clickable post titles that open articles in new tabs; sort options (by title or by update); and automatic updates when I change my FeedLand subscriptions.

You can check it out here: Feeds

The method in the official documentation didn’t work at first because Hugo (the blogging software behind micro.blog) was wrapping client-side templates around the script. The toolkit requires server-side dependencies that don’t exist on static sites like micro.blog, and we hit a cascade of missing JavaScript dependencies (jsonStringify, servercall, etc.). Each fix revealed another dependency, leading to some sunk-cost frustration for me. I kept trying because I wanted to see whether Claude could pull it together. Through trial and error, I got to a point where the OPML file rendered correctly without server dependencies or complex external libraries.
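The core of the workaround was parsing the OPML subscription list entirely client-side. As a rough illustration only (hypothetical function and variable names; the script Claude and I actually landed on does more, including fetching recent posts per feed), pulling feed titles and URLs out of an OPML file can be this small:

```javascript
// Sketch: extract feed subscriptions from an OPML document, no server
// dependencies. Each subscription is an <outline> element; real feeds
// carry an xmlUrl attribute, while folder outlines do not.
function parseOpml(opmlText) {
  const feeds = [];
  const outlineRe = /<outline\b([^>]*?)\/?>/g; // match each <outline ...> tag
  let match;
  while ((match = outlineRe.exec(opmlText)) !== null) {
    // Collect the tag's key="value" attributes into a plain object.
    const attrs = {};
    const attrRe = /(\w+)="([^"]*)"/g;
    let attr;
    while ((attr = attrRe.exec(match[1])) !== null) {
      attrs[attr[1]] = attr[2];
    }
    // Keep only outlines that point at an actual feed.
    if (attrs.xmlUrl) {
      feeds.push({ title: attrs.text || attrs.title || "", url: attrs.xmlUrl });
    }
  }
  return feeds;
}
```

In a browser, the more idiomatic route is fetch() plus DOMParser, which handles attribute quoting and XML entities properly; the regex pass above just keeps the sketch dependency-free.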

Time invested: ~3 hours (including wrong turns)

Time it should take: 10 minutes

AI extended my code reach well beyond my practical skillset. I now have a dynamic, dedicated place to read and share news feeds as I wish. But even when generative AI works, and works well, I have significant concerns about its intellectual property implications, and this project brought those tensions into sharp focus. The AI could only help me because it was trained on documentation and intellectual work from the open source community – contributions made freely in the spirit of knowledge sharing, not to train commercial AI systems. I tapped into their expertise by paying Anthropic $15 a month. While I’m grateful for the accessibility this provides to non-developers like me, I recognize there’s an unresolved ethical question about whether this use respects the intent and labor of the original creators. The feat is incredible; the foundation it’s built on deserves careful consideration.

After the exercise was complete, I asked Claude how I could have improved my prompting to make this process easier, and in short, Claude said I could have been a web developer. But since I’m not, here’s what it recommended:

When the process isn’t working, question the process mid-stream. Most people either give up or follow bad advice deeper into rabbit holes. Instead, stop, question the LLM’s process, and ask for alternatives to force a reset.

Push for usability. Keep bringing the conversation back to what you actually need the end result to do, not what’s technically impressive or “correct.” In my case, this meant repeatedly asking “can I click through to the articles?” rather than getting lost in discussions about CORS proxies or JavaScript syntax. Focus on outcomes, not implementation details.

Ask for complete solutions. Instead of trying to mentally patch together incremental changes across multiple responses, ask the LLM to provide fresh, complete code each time. This prevents copy-paste errors and ensures you’re always working with a coherent, tested solution. There’s more than one way to crack an egg, but you want the whole egg regardless.

After all that, I got it to work but can’t figure out how to make it show up in my header menu, with or without Claude. TBD.

What is this site and why am I doing it?

A while back, I stopped posting on most social media and moved to the fediverse. I still browse the social platforms to keep up with trends and friends, but I only post on my private IG and here.

What I share here is separate from but related to my professional life — I’m thinking out loud and making room for rough, unfinished ideas. I write mainly for myself, but if others find it useful, that’s great. The practice of reading and reflecting makes your thinking stick, and I am from a certain time and place, so this is how I approach learning and communicating about what I’m learning. It’s a habit.

While this is my preferred approach, I acknowledge that sharing unfinished ideas publicly is risky, and you have to accept accountability for the messiness that comes with it. But I also know that working through vulnerability in the act of writing lets you tap into your most creative, innovative self and test your ideas against an evolving sense of what’s good. The potential for an audience, however real or implied, keeps you more honest and less self-indulgent. Despite the trade-offs, I think it’s worthwhile.

As I add to this page, I’ll be thinking out loud about digital rhetoric and communication alongside emerging technology, and linking back to foundational ideas I see reflected online today. Occasionally I’ll say something longer.