Tech

Rapper Afroman is going ultra viral this week as his “Lemon Pound Cake” trial plays out in the news. He captured the raid on security cameras in his home and used the footage in a series of songs, videos and merch. He ultimately did not face charges after the search, and argues (with evidence) that the police broke his door and stole $400, which provides the platform and substance for everything that followed. He argues the police shouldn’t have been there at all, and didn’t follow protocol when they were, and that as a citizen and artist he’s expressing his feelings about it in his preferred medium. Is this a winning legal strategy? Time will tell.

In the meantime he’s winning at public opinion. The trial is shaping up in the public view as a defamation vs. free speech fight: the artist’s prolific work about the no-knock raid on his house, a raid that was itself arguably unethical, is being held up as harassment by the officers who did the job. True crime, legal-expert and court-watcher accounts are going gangbusters providing cultural and legal analysis alongside video of court testimony. It helps that the court footage is a rich text, both hilarious and revealing.

Meanwhile: another First Amendment case in and around rap lyrics is playing out now. A brief history of rap and the First Amendment.

Come look over my shoulder while I explore how and whether LLMs are good writing tools: here’s a wee version of the LLM comparison exercise I did with my team. We’ll make it a two-fer so you can also see how the “good writing” skill works in practice, or doesn’t.

One of the more useful things you can do with an LLM is hold up a few ideas side by side and apply lenses to them. I know this history pretty well, so I asked a series of LLMs: why is Wisconsin’s cultural identity and cohesion stronger than Indiana’s, from a historical and business perspective?

Here are the answers in one doc, for comparison.

Each LLM will give us more or less the same story, different flavor. Within the industry, these differences across models are chalked up to “model personality.” Asking “why” instead of “whether” will probably drive the answer to favor Wisconsin. Using multiple lenses (two states, historical + business, identity + cohesion) forces the LLM to cross-reference more of its training data, which tends to produce a more comprehensive answer.

For all the chatter about consciousness and whatever, remember that an LLM is at bottom a giant statistical model of human language and semantics, so being able to talk about language and communication, getting meta with the tool and how you think through language, helps a lot when using one. This is maybe the one thing I like about experimenting so hard with the tools. I’m thinking about the technical side of writing and enjoying it quite a lot.

Functionally: all of them acknowledge hard historical truths within the subject matter and don’t shy away from critical perspectives, which is good. Both Gemini and Copilot include in-line links, which lets you judge the output’s authority in the moment as a reader. I liked Copilot’s more than I expected here. Claude’s answers are more lyrical and provide more context, yet include no links in the output, so they don’t encourage checking against outside sources. And you can see that even with the good writing skill calling out hard bans on certain structures, Claude plows right through them.

Model personality: Claude favors sociological answers to Copilot’s economic answers. Claude is also highly intellectual and narrative by comparison, and that narrative style can mask nuance by sinking relative context within the storytelling. Gemini simplifies, boosts and cheerleads where the others don’t, and really goes hard on Wisconsin’s reputation as a drinking and Packers state when there are stronger structural arguments in play. Copilot is tricky because it looks authoritative, like a briefing, which also makes it easily “extractable” for the user, but every citation requires verification unless this is one of those “good enough” tasks.

As a writer, something I find annoying across the whole spread is the semantic reveal. LLMs are semantic machines, and that machinery persistently shows itself in ways that are weird to the human ear. All of them go out of their way to describe things as “structural,” “connective” as in “connective tissue,” “load-bearing” and “legible.”

Finally, I included a second tab where I asked Claude for analysis across the four outputs, where it suggests that my framing of the question is altogether kind of problematic. It shows how a strong prompt is sometimes also a bad approach.

There are a lot of possible takeaways here, but I’d rather set aside the question of which tool is “good” or “bad” or “better” and think more about the patterns across the tools and their implications.

One of the tricky things about consumer AI tools like Claude and Gemini is that the experience varies widely depending on the person using it, and it’s not always clear why. I have spent a lot of time learning the tools so I can advise around them in my work, and this variance of experience has become a frustrating part of the deal.

I manage a team of writers and creatives at work, and we are expected to have familiarity with the tools, despite complex and sometimes hostile feelings about the political and environmental implications around this sector. That’s quite a pickle, organizationally, managerially. Borrowing from Haraway, I thought okay, what if we take these tools seriously as a team of writers and creatives and put our professional standards up against them?

Among other exercises, I did a couple of comparisons with my team that are helpful for creating discussion around the “plausibility” question. People dismiss LLM outputs as merely plausible answers, rather than accurate or factual ones. And that’s correct, they are, and that’s the design. In many cases, plausibility is fine. Take Wikipedia, for example, which we understand to be a pretty good source, a plausible source, unless you’re writing a formal paper requiring original sources.

I digress. Ultimately we needed to understand together that LLMs are not WYSIWYG tools, and to talk through the implications.

I asked everyone to run the same paper through their LLM of choice, prompting it for a plain language summary. We then copied and pasted the results into a shared doc and compared and contrasted for discussion. We came away with several takeaways, including that the summaries were all similar in spirit but sometimes varied wildly in style and approach.

Knowing that algorithms are responsive and not static, we did it again later in the day, and copied and pasted our outputs into the shared doc. We compared and contrasted the difference between AM and PM. Again, it was similar in spirit but varied in style and approach. Some changed dramatically. One team member whose morning summary had been jokey and conversational received a much more staid and serious version in the afternoon.

At the time, I asked Claude to explain the variance: “Even with the same prompt and source material, LLMs don’t produce identical outputs each time. This is by design — there’s a degree of randomness (called ‘temperature’) in how the model selects words, which means each run produces a slightly different path through the text.”
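For the mechanically curious, here’s a toy sketch of what that randomness looks like, in TypeScript. To be clear, this illustrates the standard sampling idea, not Anthropic’s actual code; the candidate words and scores are made up.

```typescript
// A toy illustration of temperature sampling (not any vendor's actual code).
// "logits" are the model's raw scores for each candidate next word.
// Temperature must be > 0: low values sharpen the distribution toward the
// top-scoring word; high values flatten it, letting unlikely words through.
function sampleNextToken(logits: number[], temperature: number): number {
  const scaled = logits.map((l) => l / temperature);

  // Softmax: turn scaled scores into probabilities that sum to 1.
  // Subtracting the max first avoids floating-point overflow.
  const maxScore = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - maxScore));
  const total = exps.reduce((a, b) => a + b, 0);
  const probs = exps.map((e) => e / total);

  // Draw one index at random, weighted by those probabilities.
  let r = Math.random();
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1; // guard against rounding drift
}

// Hypothetical scores for three candidate words. Run this twice and you
// may get different answers: the AM/PM drift in miniature.
const candidates = ["summary", "overview", "recap"];
const scores = [2.0, 1.5, 0.3];
console.log(candidates[sampleNextToken(scores, 0.7)]);
console.log(candidates[sampleNextToken(scores, 0.7)]);
```

Dial the temperature toward zero and the picks converge on the top score every time; raise it and the long tail starts showing up.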

Anyway, this got our gears turning on how (and whether) to approach LLMs as a team and as individuals, and led to good group discussion. (It’s important to create space for criticism and critical approaches here.) It also gave us more confidence as a team in responding to this new layer of complexity in our work, and in helping our professional contacts and peers think about how to approach the tools and when and whether to use them. There will be tasks where AI-based tools are “good enough,” and tasks where they are not.

The swirl of mystery and speculation around this sector has people up in arms, and it’s useful to have approaches that give people firsthand experience, and let them see how the experience works for others. The god trick of the singular interface turns out to be a bear to navigate in the workplace, where our work is foundational, prosocial and specific.

A smart take on nonconsensual deepfakes, considering them not just as a social issue but as a cybersecurity issue, as they’re often connected to broader financial and social harassment campaigns.

WaPo on the many issues of LLM house writing styles. I have much to add. Here are some notes on applying a writing skill to override house style on Claude.

You might call this a taste test: Obsessed with the story about the McDonald’s CEO and how his LinkedIn-style videos selling the McD’s franchise have escaped containment, leading to one of the funnier CEO/product marketing dynamics in recent history. Burger King’s CEO swooped in, holding widely marketed listening sessions with customers and demonstrating his love for the Whopper, in contrast with the deeply weird McD’s videos. Must read: Internet long-hauler Katie Notopoulos on how direct-to-public marketing works when the public is more familiar with the product than the org’s leaders.

My teenager thinks fast food is cool and subversive (cue the sound of one hundred moms groaning) and she and her friends regularly talk shop. They are Taco Bell fans and think burger stops are kind of gauche. Not gauche: Baja Blast.

In the meantime, the fast food sector is becoming a playground for AI approaches, causing a lot of nervous discussion on social media and in tech spaces.

Ryan Broderick is currently my fav reporter on the Internet culture beat, and on this podcast he’s talking about Epstein and why he was interested in putting his resources toward developing the early internet (spoiler: for crimes).

Ruling: AI art is not copyrightable.

Apparently one thing LLMs excel at is deanonymization at scale. The original promise of pseudonymity online was social and normative more than technical: decent people don’t try to unmask you, because why. What strikes me today is how what used to be unacceptably antisocial behavior online is now both automated and unremarkable.

Over the last couple of weeks, I asked a couple of chatbots what could be known about me from this pseudonymous site, where I am more intentional about what I choose to reveal and conceal. They pulled the obvious but also drew conclusions, based on a few geographic points I’d made in context, that were both revealing and correct. I also noticed that they only drew from the top two pages of information; anything beyond page two of posts wasn’t part of the compute. Archives are for humans?

People assume that there is some computer magic on the backend where the LLMs connect all your account logins behind the scenes, but no: they do all this through inference, by linking your digital trail, your friends, your breadcrumbs of likes and hearts and follows, and obvs your posts, into a picture of who you are, practically and demographically.

Developing a new pet theory that you could crank AI outputs toward your vision by staging an organized link-blog revolution.

The Guardian has a long-running series where readers answer one another’s questions, which gives a pretty good point-in-time survey of how people of a certain demographic are feeling on a given subject. This week’s question is on AI futures: “what would happen to the world if computer said yes?”

Reading about a new “slow” RSS app called Current.

Second-wavers in tech and engineering talked a lot about the “god trick” of presenting knowledge and information, particularly around math and science, in a way that suggests its objectivity is eternal, immortal, unknowable. Work by Safiya Umoja Noble and others extended this lens to Internet search and architecture, finding that the search algorithm was never neutral; instead it was a series of business decisions wearing neutrality like a costume, creating a customer service experience. LLMs take that same trick and compress it further.

It’s an old idea, and one I’ve been drawing from while I tinker with Claude, which is purportedly the best in the game. The “god trick” is baked right into the AI interface: one input, one output, an authoritative-seeming answer, offered without named perspectives behind it, trained on text produced overwhelmingly by a narrow demographic that has historically had access to both literacy and publishing, and by programmers and new media drawing from the same well. Smushed together, it gives the impression that consensus exists where there are in fact many, many loose ends.

I increasingly find it annoying that even “good” AI outputs seem fixed on phrases like “key,” “core,” “exist,” “actually,” “never,” and possibly the worst sentence structure of all time, “it’s not X, it’s Y.” I’ve begun to recognize how LLMs work like autocorrect for phrases and ideas, drawing from ranked search sources first before fanning out to more obscure ones, trying to determine and assert what’s important to me, a user known by demographics and data. It feels like a big linguistics machine, which is pretty cool in some regards, but also aggressively semantic. The math doesn’t always connect me to what I want to find, because I am situated in my individual context in ways LLMs are not able to understand: with my memory, in my body, with my unique experiences, which shape and translate meaning for me as I interact with the world (and the web).

And so for you, in your body and memory and experience. An LLM can approximate the outputs of an experience without having access to the experience itself. Sometimes this is useful, sometimes it’s reckless.

Overall the dynamic reminds me of the famous scene from Good Will Hunting: Claude is a smart kid, and he’s never been outta Boston.

An open letter from employees of Google and OpenAI in support of Anthropic, against the DoD: notdivided.org

McKenzie Wark for Verso Books on Donna Haraway, in 2015: “Creating any kind of knowledge and power in and against something as pervasive and effective as the world built by postwar techno-science is a difficult task. It may seem easier simply to vacate the field, to try to turn back the clock, or appeal to something outside of it. But this would be to remain stuck in the stage of romantic refusal. Just as Marx fused the romantic fiction that another world was possible with a resolve to understand from the inside the powers of capital itself, so too Haraway begins what can only be a collaborative project for a new international. One not just of laboring men, but of all the stuttering cyborgs stuck in reified relations not of their making.”

New research shows that social media advertising suppresses voting in targeted communities, and is the first to quantify the effect of this kind of microtargeting on voter turnout.

Sharing Digital Animal, a new, curated playlist of songs about all the angst and joys of living with modern technology.

I’ve been spending more time in tech spaces online and getting good information from folks like @manton, creator of Micro.blog. Like this reflection on how to think about AI now that vibe coding works. Something I’m thinking about: there’s an emerging tension between those who see value in being able to immediately prototype an idea and the people downstream who have to manage the outputs/code over time. The ability to proof every idea sounds like a superpower until you’re the one driving and maintaining the results.

“Open source,” “open weight,” and “proprietary” describe different relationships between LLM producers and users, governing what you can see, modify and control. Comparing them isn’t necessarily about “best,” but about whether you’re optimizing for transparency, compliance or performance. Massive investment in proprietary models means the best-resourced research teams, the largest training runs, and the most sophisticated safety work tend to happen behind closed doors.

A new report from Pew on how teens view and use AI says that more than half of teens turn to chatbots for help with school, and some are turning to AI for emotional support and companionship.

A brief history of writing, a forgotten technology: how the challenges we face in today’s digital age can be traced back to one of the most significant yet underrated innovations of all time.

“Digital silence” is a new ethic emerging on the visual web, where travelers are no longer posting about their travels. Reading along, thinking of how I rarely post about family out of respect for their privacy, and recently took several road trips without a single post.

Closer to the machine

There is something about the AI moment that reminds me a lot of when the internet was new. A lot of what was imagined and promised about the internet was never realized. But much was.

I’ve been reading Ellen Ullman’s memoirs, “Life in Code” and “Close to the Machine,” and her observations about proximity to technology feel relevant here. Being close to the machine means understanding its actual capabilities and limitations apart from the prevailing sales narratives. It also means a kind of loneliness, because you are working in a space that others don’t yet see clearly or fully understand.

I suspect people thinking seriously about AI right now will experience something similar: a stretch of hostility and discomfort while the rest of the world catches up and the consumer market level-sets on the promises being made. In the interim, the hype will not match the reality, and the reality will sometimes exceed the hype in ways no one predicted. And for a while, how it works and why it matters won’t be legible to everyone at once.

Crunching for clarity

In 1999, academic and theorist Judith Butler famously won an award for the worst academic sentence, raising good questions about how we read difficult texts, who gets to access academic ideas, and the role of academic and plain language in and around the academy:

The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.

This was my era in college, and I remember feeling challenged and overwhelmed as an undergrad trying to read theory for the first time. So today, I asked Claude.ai to put it in plain language, just to see what would happen. Here’s the after:

Scholars used to think that capitalism shaped society in consistent, predictable patterns. But newer thinking focuses on how power works—how it repeats, overlaps, and gets reshaped over time. This brought time into the picture and moved away from older theories that treated social structures as fixed. Instead, recognizing that structures can change opened up a new way of understanding power: it's tied to specific moments and strategies, and it's always being renegotiated.

The real problem is that it's not our quagmire

TikTok is not much better or worse than other major social platforms, I say. The primary arguments against TikTok, including data collection, algorithmic manipulation, potential foreign government access, addiction and influence on public opinion, apply with equal or greater force to American platforms. Meta has faced billions in fines for privacy violations, enabled documented election interference, and its algorithms have been linked to mental health harms and the amplification of extremist content globally, including content that fueled genocide in Myanmar. Google and other domestic platforms vacuum up vastly more user data with fewer restrictions.

The distinguishing factor isn’t the behavior but the ownership: TikTok’s parent company ByteDance is subject to Chinese law and intelligence relationships, while Meta and Google are subject to U.S. law and intelligence relationships. That’s a legitimate policy distinction, but rarely articulated honestly. Instead, the debate has been framed around purportedly unacceptable harms that American tech companies perpetrate routinely, creating a kind of security theater that lets domestic platforms escape equivalent scrutiny while positioning a foreign competitor for a forced sale or ban.

Affinity as an organizing principle

Reading this blog post by the political scientist Bagg, explaining the problem with our fractured information landscape and why calls for more information and media literacy are not likely solutions:

“In short, decades of research have demonstrated that our political beliefs and behavior are thoroughly motivated and mediated by our social identities: i.e., the many cross-cutting social groupings we feel affinity with. And as long as we do not account for this profound and pervasive dependence, our attempts to address the epistemic failures threatening contemporary democracies will inevitably fall short. More than any particular institutional, technological, or educational reform, promoting a healthier democracy requires reshaping the social identity landscape that ultimately anchors other democratic pathologies.”

As always, this drives me back to Haraway’s cyborg, a useful metaphor for thinking about our political, environmental and social tangle and how it butts up against emerging tech and science. (In Haraway’s context, it was the rise of STEM as a driving force in academia at the dawn of the computer age.) Bagg’s argument lands in familiar territory for anyone who’s wrestled with the cyborg metaphor. Both reject the assumption that better information alone will save us from ourselves, whether from context collapse or the dualisms (binaries, heh) that structure how we think about technology, nature, humanity and politics.

Bagg arrives at something parallel from political science: We trust information that affirms the groups we belong to. (Business and marketing, for what it’s worth, tell us the same thing from a slightly different angle: you’re most likely to convert on a recommendation from a trusted friend. The next best thing in our current media landscape: a trusted influencer you identify with, which is why TikTok increasingly feels like QVC.) The problem isn’t that people lack access to truth, it’s that they’ve lost affinity with the experts, institutions and collaborative practices that produce expertise.

Both perspectives point toward the same conclusion: you have to recognize shared affinities through the slow work of creating conditions where people want to trust each other across differences.

The trust gap

I suspect these three trends are connected: Women reportedly use AI at significantly lower rates than men—25 percent lower on average—in part because they’re more concerned about ethics, including privacy, consent and intellectual property. At the same time, countries with more positive social media experiences tend to be more open to AI, while Americans’ distrust is shaped by years of watching tech platforms erode trust. Meanwhile, one of the largest social platforms has turned its AI chatbot into a harassment tool—generating roughly one nonconsensual sexualized deepfake image per minute, disproportionately targeting women and girls.

When platforms enable abuse at scale, it makes sense that people most likely to be harmed would be most attuned to ethical concerns, and would thus be the most cautious about AI adoption.

A meta lesson about AI assistance

I just completed my first attempt at coding with AI, in this case having Claude assist me in putting together a simple client-side OPML parser using Dave Winer’s FeedLand service.

Winer’s original script is pretty slick, and includes a list of all my feeds with titles, URLs, and categories; click-to-expand functionality to see the 5 most recent posts from each feed; clickable post titles that open articles in new tabs; sort options (by title or by update); and automatic updates when I change my FeedLand subscriptions.

You can check it out here: Feeds

The method in the official documentation didn’t work at first because Hugo (the blogging software behind micro.blog) was wrapping its own templates around the script. The toolkit also requires server-side dependencies that don’t exist on static sites like micro.blog, and we hit a cascade of missing JavaScript dependencies (jsonStringify, servercall, etc.). Each fix revealed another dependency, leading to some “sunk cost” frustration for me. I kept trying because I wanted to see if Claude could pull it together. Through trial and error, I got to a point where the OPML file rendered correctly without server dependencies or complex external libraries.
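For the curious, the dependency-free version we ended up with looks something like the sketch below. This is an illustrative TypeScript reconstruction, not my actual script: the OPML URL is a placeholder, the Feed shape is my own naming, and a real cross-origin fetch can still hit CORS walls (hence the CORS proxy discussions mentioned below).

```typescript
// A minimal sketch of a client-side OPML reader (illustrative, not my
// production script). It needs only browser built-ins: fetch and DOMParser.
interface Feed {
  title: string;
  xmlUrl: string;
  category: string;
}

async function loadFeeds(opmlUrl: string): Promise<Feed[]> {
  const response = await fetch(opmlUrl);
  const xmlText = await response.text();

  // OPML is XML; each subscription is an <outline> element whose
  // attributes carry the feed's title, URL and category.
  const doc = new DOMParser().parseFromString(xmlText, "text/xml");
  return Array.from(doc.getElementsByTagName("outline"))
    .filter((node) => node.getAttribute("xmlUrl"))
    .map((node) => ({
      title: node.getAttribute("title") ?? node.getAttribute("text") ?? "(untitled)",
      xmlUrl: node.getAttribute("xmlUrl") ?? "",
      category: node.getAttribute("category") ?? "",
    }));
}

// Usage: log feeds sorted by title (the URL here is hypothetical).
loadFeeds("https://example.com/myfeeds.opml").then((feeds) => {
  feeds.sort((a, b) => a.title.localeCompare(b.title));
  for (const feed of feeds) console.log(feed.title, feed.xmlUrl);
});
```

From an array like that, the feed list, click-to-expand panels and sort options are plain DOM work.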

Time invested: ~3 hours (including wrong turns)

Time it should take: 10 minutes

AI extended my code reach beyond my practical skillset by quite a lot. I now have a dynamic and dedicated place to read and share news feeds as I wish. Though even when generative AI works and works well, I have significant concerns about the intellectual property implications of AI, and this project brought those tensions into sharp focus. The AI could only help me because it was trained on documentation and intellectual work from the open source community, contributions made freely in the spirit of knowledge sharing, not to train commercial AI systems. I tapped into their expertise by paying Anthropic $15 a month. While I’m grateful for the accessibility this provides to non-developers like me, I recognize there’s an unresolved ethical question about whether this use respects the intent and labor of the original creators. The feat is incredible; the foundation it’s built on deserves careful consideration.

After the exercise was complete, I asked Claude how I could have improved my prompting to make this process easier, and in short, Claude said I could have been a web developer. But since I’m not, here’s what it recommended:

When the process isn’t working, question the process mid-stream. Most people either give up or keep following bad advice deeper into rabbit holes. Stop and question the LLM’s process and ask for alternatives to force a reset.

Push for usability. Keep bringing the conversation back to what you actually need the end result to do, not what’s technically impressive or “correct.” In my case, this meant repeatedly asking “can I click through to the articles?” rather than getting lost in discussions about CORS proxies or JavaScript syntax. Focus on outcomes, not implementation details.

Ask for complete solutions. Instead of trying to mentally patch together incremental changes across multiple responses, ask the LLM to provide fresh, complete code each time. This prevents copy-paste errors and ensures you’re always working with a coherent, tested solution. There’s more than one way to crack an egg, but you want the whole egg regardless.

After all that, I got it to work but can’t figure out how to make it show up in my header menu, with or without Claude. TBD.

When AI is wrong, who pays?

Rhetoric of intertextuality

“… every text is connected to other texts by citations, quotations, allusions, borrowings, adaptations, appropriations, parody, pastiche, imitation, and the like. Every text is in a dialogical relationship with other texts. In sum, intertextuality describes the relationships that exist between and among texts. What follows is a discussion of the strategies of intertextuality.”