A new paper in Science Magazine explains how AI now allows propaganda campaigns to reach unprecedented scale and precision, and gets into the implications for organizations, institutions and nations.
TikTok, I’d argue, is not much better or worse than other major social platforms. The primary arguments against TikTok, including data collection, algorithmic manipulation, potential foreign government access, addiction and influence on public opinion, apply with equal or greater force to American platforms. Meta has faced billions in fines for privacy violations and enabled documented election interference, and its algorithms have been linked to mental health harms and the amplification of extremist content globally, including their role in the genocide in Myanmar. Google and other domestic platforms vacuum up vastly more user data with fewer restrictions.
The distinguishing factor isn’t the behavior but the ownership: TikTok’s parent company ByteDance is subject to Chinese law and intelligence relationships, while Meta and Google are subject to U.S. law and intelligence relationships. That’s a legitimate policy distinction, but one rarely articulated honestly. Instead, the debate has been framed around purportedly unacceptable harms that American tech companies perpetrate routinely, creating a kind of security theater that lets domestic platforms escape equivalent scrutiny while positioning a foreign competitor for a forced sale or ban.
The TikTok deal means American users will see a US-only algorithm. Brands and creators will likely see smaller audiences and higher costs for domestic reach. ByteDance faces split algorithms, divided workforces and parallel governance, complicating product delivery across global markets.
The promise of AI is that it makes work more productive, but the reality is proving more complex and less rosy.
I’m generally skeptical of anyone selling a solution to a social problem that relies on individual abstinence, so I tend to be annoyed with many arguments about the attention economy. I more or less land here on the question of AI, which I know many of my contemporaries will find similarly annoying.
Searching for Susy Thunder: In the ’80s, Susan Headley ran with the best of them—phone phreakers, social engineers, and the most notorious computer hackers of the era. Then she disappeared.
While conspiring with a friend about life and work in these trying times, both of us confessed that we believe, at the root, that reading and writing are ultimately the cure for everything that ails us: collectively, individually, epistemically, existentially. Maybe that’s naive, but I’ll take it.
Make Canadian TV weird again (sponsored by The Red Green Show, probably).
If you spend time around cycling and pedestrian advocates, the debate between bans and regulations is familiar territory. When I got deep into road biking, learning to ride long distances through a red state with almost no bike infrastructure outside tight urban and exurban areas, one of the best things I did was take road classes through the League of American Bicyclists. You learn the rules of the road from a cyclist’s perspective and practice skills like riding with car traffic under expert guidance, including how to change a flat on the side of the road in the height of summer, gritty with sweat and road grime.
The challenge is that bike education isn’t standardized, so most cyclists never learn the fundamentals anyway. Many of us learned as kids and haven’t had a refresher since. I get stomach pain when I see people riding at night without lights, speeding down dedicated paths, or pedaling along pedestrian sidewalks as full-grown adults. But when I think about e-bike bans and pedestrian right-of-way debates, it strikes me that outside of getting a driver’s permit, there’s essentially no infrastructure for learning how to share roads and paths safely. We’re trying to regulate behavior most of us never learned in earnest.
UW-Madison is among universities seeing federal terminations of international student visas. Public research universities have come to rely on these students to offset funding cuts. The losses are both financial and cultural.
404 Media on Wikipedia, reciprocity and collaboration online, and how to protect the public commons in the age of AI.
Wondering whether I want to switch to something more robust like WordPress if I keep doing this thing, but at the same time I have enjoyed not working within and around the world of WP, which has dominated my CMS experience since the aughts.
This observation at the end of Manton’s post on AI and Wikipedia made me chuckle:
AI using Wikipedia reminds me of the FAQ on setting up a Little Free Library: _I think someone is stealing books from my library and selling them, what do I do?_ _Remember that the purpose of a Little Free Library is to share books—you can’t really steal from it._
Woof: ads are coming to ChatGPT.
Reading this blog post by a political scientist explaining the problem with our fractured information landscape, and why calls for more information and media literacy are unlikely to fix it:
“In short, decades of research have demonstrated that our political beliefs and behavior are thoroughly motivated and mediated by our social identities: i.e., the many cross-cutting social groupings we feel affinity with. And as long as we do not account for this profound and pervasive dependence, our attempts to address the epistemic failures threatening contemporary democracies will inevitably fall short. More than any particular institutional, technological, or educational reform, promoting a healthier democracy requires reshaping the social identity landscape that ultimately anchors other democratic pathologies.”
As always, this drives me back to Haraway’s cyborg, a useful metaphor for thinking about our political, environmental and social tangle and how it butts up against emerging tech and science. (In Haraway’s context, it was the rise of STEM as a driving force in academia at the dawn of the computer age.) Bagg’s argument lands in familiar territory for anyone who’s wrestled with the cyborg metaphor. Both reject the assumption that better information alone will save us from ourselves, whether from context collapse or the dualisms (binaries, heh) that structure how we think about technology, nature, humanity and politics.
Bagg arrives at something parallel from political science: We trust information that affirms the groups we belong to. (Business and marketing, for what it’s worth, tell us the same thing from a slightly different angle: you’re most likely to convert on a recommendation from a trusted friend. The next best thing in our current media landscape: a trusted influencer you identify with, which is why TikTok increasingly feels like QVC.) The problem isn’t that people lack access to truth, it’s that they’ve lost affinity with the experts, institutions and collaborative practices that produce expertise.
Both perspectives point toward the same conclusion: you have to build shared affinities through the slow work of creating conditions where people want to trust each other across differences.
I too have struggled to define “federation” for non-technical audiences, so it was fun and instructive to see others give it a go.
The CEO of Instagram says that social media platforms will be under mounting pressure to help users tell the difference between human-made and AI content, and that going forward, it will be more practical to label real content than AI content.
“It’s really a software maturity story,” Sag says. “But that’s not very sexy.”
I suspect these three trends are connected: Women reportedly use AI at significantly lower rates than men—25 percent lower on average—in part because they’re more concerned about ethics, including privacy, consent and intellectual property. At the same time, countries with more positive social media experiences tend to be more open to AI, while Americans’ distrust is shaped by years of watching tech platforms erode trust. Meanwhile, one of the largest social platforms has turned its AI chatbot into a harassment tool—generating roughly one nonconsensual sexualized deepfake image per minute, disproportionately targeting women and girls.
When platforms enable abuse at scale, it makes sense that people most likely to be harmed would be most attuned to ethical concerns, and would thus be the most cautious about AI adoption.
Folks are beginning to wonder why Twitter and Grok are still in the app stores, given the latest trend of using the model to generate nonconsensual imagery of people (namely women and children) at alarming rates.