One of the more useful things you can do with an LLM is hold up a few ideas side by side and apply lenses to them. I know this history pretty well, so I asked a series of LLMs, why is Wisconsin’s cultural identity and cohesion stronger than Indiana’s, from a historical and business perspective?
Each LLM will give us more or less the same story, different flavor. Within the industry, the differences across the models reflect “model personality.” Asking “why” instead of “whether” will probably drive the answer to favor Wisconsin. Using multiple lenses (two states, historical + business, identity + cohesion) forces the LLM to cross-reference across more of its training data, which tends to produce a more comprehensive answer.
Functionally: all of them acknowledge hard historical truths within the subject matter and don’t shy away from critical perspectives, which is good. Both Gemini and Copilot include in-line links, which lets you judge the output’s authority in the moment as a reader. I liked Copilot’s more than I expected here. Claude’s answers are more lyrical and provide more context, yet they don’t encourage checking against outside sources, because no links appear in the output. And you can see that even with the good writing skill calling out hard bans on certain structures, Claude plows right through them.
Model personality: Claude favors sociological answers to Copilot’s economic answers. Claude is also highly intellectual and narrative by comparison, and that narrative style can mask nuance by sinking relative context within the storytelling. Gemini simplifies, boosts and cheerleads where the others don’t, and really goes hard on Wisconsin’s reputation as a drinking and Packers state when there are stronger structural arguments in play. Copilot is tricky because it looks authoritative, like a briefing, which also makes it easily “extractable” for the user, but every citation requires authentication to verify unless this is one of those “good enough” tasks.
As a writer, something I find annoying across the whole spread is the semantic reveal. LLMs are semantic machines, and that machinery is persistently revealed in ways that sound weird to the human ear. All of them go out of their way to describe things as “structural,” “connective” (as in “connective tissue”), “load-bearing” and “legible.”
Finally, I included a second tab where I asked Claude for analysis across the four outputs, where it suggests that my framing of the question is altogether kind of problematic. It shows how a strong prompt is sometimes also a bad approach.
There are a lot of possible takeaways here, but I’d rather set aside the question of which tool is “good” or “bad” or “better” and think more about the patterns across the tools and their implications.
LLMs have a default house writing style with identifiable patterns: sentence fragments for emphasis, “not X, but Y” constructions, lots of hard contrast, atmospheric openings, heavy use of em dashes, and heavy use of marketing language. This reflects the semantic construction of an LLM. Custom instructions can override these defaults. A custom skill is a set of instructions within your account that modify how the model generates text. When you paste instructions into your profile settings, Claude reads them at the start of every conversation and adjusts its output accordingly.
I began using Claude daily for light writing tasks about six months ago, and over that time I started cataloging the patterns I was consistently editing out, including the terrible “not X, but Y” construction that showed up in nearly every response, and persistent em dashes used as all-purpose connectors when other punctuation is more appropriate.
I went through several iterations of bullying Claude into submission, narrowing the scope each time, before arriving at this version, which focuses specifically on writing mechanics and hard prohibitions.
You’ll need a paid Claude plan (Pro, Max, Team, or Enterprise). Free-tier accounts don’t have access to custom skills.
• Within the app, navigate to Customize > Skills and select Create new skills
• Select add a new skill, then Write skill instructions
• Copy and paste the text from this file into the skill, making note of the name and description boxes. Feel free to tinker. (A hypothetical example of what the instructions can look like follows these steps.)
• Save your changes.
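To give a flavor of what goes in the instructions box, here is a hypothetical snippet in the spirit of my file, not the actual text (that lives in the linked file): “Never use the ‘not X, but Y’ construction. Do not use em dashes as all-purpose connectors; choose a comma, colon, or period instead. Avoid sentence fragments for emphasis, and avoid marketing language like ‘key’ and ‘core.’”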
Note: The instructions in the linked file are Claude’s work, not mine. They came out of months of conversation, where Claude would analyze my style notes, and the file evolved from there. They read a little strangely because of that process. If I’d written them from scratch, they’d sound different. But looking at the file you can see what Claude responds to and how it works.
Claude will apply these instructions to every new conversation going forward. Existing conversations won’t pick up the change, so start a fresh chat to test it. If and when Claude struggles to apply the skill, call it out specifically in the prompt, such as, “Revise this for length using the good writing skill.”
The skill specifies constraints in a few categories and the instructions are plain text. As you go, you can also ask Claude to analyze previous conversations for suggested additions to the skill, which Claude will produce and implement within the chat. Each rule operates independently, so removing one doesn’t affect the others.
Claude processes custom instructions at the start of every conversation, before it generates any output. The instructions function as constraints on the model’s default behavior. The model doesn’t always follow every instruction perfectly and the results vary by task. You will still need to edit.
It’s an old idea, and one I’ve been drawing from while I tinker with Claude, which is purportedly the best in the game. The “god trick” is baked right into the AI interface: one input, one output, an authoritative-seeming answer, offered without named perspectives behind it, trained on text produced overwhelmingly by a narrow demographic that has historically had access to both literacy and publishing, and by programmers and new media drawing from the same well. Smushed together, it gives the impression that consensus exists where there are in fact many, many loose ends.
I increasingly find it annoying that even “good” AI outputs seem fixed on phrases like “key,” “core,” “exist,” “actually,” “never,” and possibly the worst sentence structure of all time, “it’s not X, it’s Y” — and I’ve begun to recognize how LLMs work like autocorrect for phrases and ideas, drawing from ranked search sources first before fanning out to more obscure sources, trying to determine and assert what’s important to me, a user known by demographics and data. It feels like a big linguistics machine, which is pretty cool in some regards, but also aggressively semantic. The math doesn’t always work to connect me to what I want to find because I am situated in my individual context in ways LLMs are not able to understand, with my memory, in my body, with my unique experiences, which shape and translate meaning for me as I interact with the world (and the web).
And so for you, in your body and memory and experience. An LLM can approximate the outputs of an experience without having access to the experience itself. Sometimes this is useful, sometimes it’s reckless.
There is something about the AI moment that reminds me a lot of when the internet was new. A lot of what was imagined and promised about the internet was never realized. But much was.
I’ve been reading Ellen Ullman’s memoirs - “Life in Code” and “Close to the Machine” - and her observations about proximity to technology feel relevant here. Being close to the machine means understanding its actual capabilities and limitations apart from the prevailing sales narratives. It also means a kind of loneliness, because you are working in a space that others don’t yet see clearly or fully understand.
I suspect people thinking seriously about AI right now will experience something similar: a stretch of hostility and discomfort while the rest of the world catches up and the consumer market level-sets on the promises being made. In the interim, the hype will not match the reality, and the reality will sometimes exceed the hype in ways no one predicted. And for a while, how it works and why it matters won’t be legible to everyone at once.
The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.
This was my era in college, and I remember feeling challenged and overwhelmed as an undergrad trying to read theory for the first time. So today, I asked Claude.ai to put it in plain language, just to see what would happen. Here’s the after:
Scholars used to think that capitalism shaped society in consistent, predictable patterns. But newer thinking focuses on how power works—how it repeats, overlaps, and gets reshaped over time. This brought time into the picture and moved away from older theories that treated social structures as fixed. Instead, recognizing that structures can change opened up a new way of understanding power: it's tied to specific moments and strategies, and it's always being renegotiated.
TikTok is not much better or worse than other major social platforms, I say. The primary arguments against TikTok, including data collection, algorithmic manipulation, potential foreign government access, addiction and influence on public opinion, apply with equal or greater force to American platforms. Meta has faced billions in fines for privacy violations, has enabled documented election interference, and its algorithms have been linked to mental health harms and the amplification of extremist content globally, including the perpetuation of genocide in Myanmar. Google and other domestic platforms vacuum up vastly more user data with fewer restrictions.
The distinguishing factor isn’t the behavior but the ownership: TikTok’s parent company ByteDance is subject to Chinese law and intelligence relationships, while Meta and Google are subject to U.S. law and intelligence relationships. That’s a legitimate policy distinction, but rarely articulated honestly. Instead, the debate has been framed around purportedly unacceptable harms that American tech companies perpetrate routinely, creating a kind of security theater that lets domestic platforms escape equivalent scrutiny while positioning a foreign competitor for a forced sale or ban.
If you spend time around cycling and pedestrian advocates, the debate between bans and regulations is familiar territory. When I got deep into road biking, learning to ride long distances through a red state with almost no bike infrastructure outside tight urban and exurban areas, one of the best things I did was take road classes through the League of American Bicyclists. You learn the rules of the road from a cyclist’s perspective and practice skills like riding with car traffic under expert guidance, including how to change a flat on the side of the road in the height of summer, gritty with sweat and road grime.
The challenge is that bike education isn’t standardized, so most cyclists never learn the fundamentals anyway. Many of us learned as kids and haven’t had a refresh since. I get stomach pain when I see people riding at night without lights, going too fast on a dedicated path, or adults riding their bikes on a pedestrian sidewalk. But when I think about e-bike bans and pedestrian right-of-way debates, it strikes me that outside of getting a driver’s permit for car drivers, there’s essentially no infrastructure for learning how to share roads and paths safely. We’re trying to regulate behavior most of us didn’t learn in earnest.
“In short, decades of research have demonstrated that our political beliefs and behavior are thoroughly motivated and mediated by our social identities: i.e., the many cross-cutting social groupings we feel affinity with. And as long as we do not account for this profound and pervasive dependence, our attempts to address the epistemic failures threatening contemporary democracies will inevitably fall short. More than any particular institutional, technological, or educational reform, promoting a healthier democracy requires reshaping the social identity landscape that ultimately anchors other democratic pathologies.”
As always, this drives me back to Haraway’s cyborg, a useful metaphor for thinking about our political, environmental and social tangle and how it butts up against emerging tech and science. (In Haraway’s context, it was the rise of STEM as a driving force in academia at the dawn of the computer age.) Bagg’s argument lands in familiar territory for anyone who’s wrestled with the cyborg metaphor. Both reject the assumption that better information alone will save us from ourselves, whether from context collapse or the dualisms (binaries, heh) that structure how we think about technology, nature, humanity and politics.
Bagg arrives at something parallel from political science: We trust information that affirms the groups we belong to. (Business and marketing, for what it’s worth, tell us the same thing from a slightly different angle: you’re most likely to convert on a recommendation from a trusted friend. The next best thing in our current media landscape: a trusted influencer you identify with, which is why TikTok increasingly feels like QVC.) The problem isn’t that people lack access to truth, it’s that they’ve lost affinity with the experts, institutions and collaborative practices that produce expertise.
Both perspectives point toward the same conclusion: you have to recognize shared affinities through the slow work of creating conditions where people want to trust each other across differences.
I suspect these three trends are connected: Women reportedly use AI at significantly lower rates than men—25 percent lower on average—in part because they’re more concerned about ethics, including privacy, consent and intellectual property. At the same time, countries with more positive social media experiences tend to be more open to AI, while Americans’ distrust is shaped by years of watching tech platforms erode trust. Meanwhile, one of the largest social platforms has turned its AI chatbot into a harassment tool—generating roughly one nonconsensual sexualized deepfake image per minute, disproportionately targeting women and girls.
When platforms enable abuse at scale, it makes sense that people most likely to be harmed would be most attuned to ethical concerns, and would thus be the most cautious about AI adoption.
Winer’s original script is pretty slick, and includes a list of all my feeds with titles, URLs, and categories; click-to-expand functionality to see the 5 most recent posts from each feed; clickable post titles that open articles in new tabs; sort options (by title or by update); and automatic updates when I change my FeedLand subscriptions.
The official documentation method didn’t initially work because Hugo (the blogging software behind micro.blog) was wrapping client-side templates around the script. The toolkit requires server-side dependencies that don’t exist on static sites like micro.blog, and we hit a cascade of missing JavaScript dependencies (jsonStringify, servercall, etc.). Each fix revealed another dependency, leading to some “sunk cost” frustrations for me. I kept trying because I wanted to see if Claude could pull it together. Through trial and error, I got to a point where the OPML file was rendered correctly without server dependencies or complex external libraries.
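For the curious, here’s a minimal sketch of the general pattern that finally worked: fetch the OPML file and render a linked feed list using browser APIs alone, with no server round-trip. This is illustrative only, not Winer’s toolkit and not my exact code; the OPML URL and container ID below are placeholders.

```typescript
// Minimal client-side OPML rendering sketch (illustrative; placeholder URL and IDs).
const OPML_URL = "https://example.com/my-subscriptions.opml"; // hypothetical location of the exported OPML

async function renderFeedList(containerId: string): Promise<void> {
  // Fetch the OPML file; it's plain XML, so the browser can parse it directly.
  const xmlText = await (await fetch(OPML_URL)).text();
  const doc = new DOMParser().parseFromString(xmlText, "text/xml");

  // Each subscription is an <outline> element with an xmlUrl attribute.
  const outlines = Array.from(doc.querySelectorAll("outline[xmlUrl]"));

  const list = document.createElement("ul");
  for (const outline of outlines) {
    const title = outline.getAttribute("text") ?? outline.getAttribute("title") ?? "Untitled feed";
    const href = outline.getAttribute("htmlUrl") ?? outline.getAttribute("xmlUrl") ?? "#";

    const link = document.createElement("a");
    link.href = href;
    link.target = "_blank"; // open the feed's home page in a new tab
    link.textContent = title;

    const item = document.createElement("li");
    item.appendChild(link);
    list.appendChild(item);
  }

  document.getElementById(containerId)?.appendChild(list);
}

renderFeedList("feed-list");
```

This skips the fancier features, such as categories, sort options, and click-to-expand recent posts, which is exactly where the toolkit’s server-side dependencies started piling up.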
Time invested: ~3 hours (including wrong turns)
Time it should take: 10 minutes
AI extended my code reach beyond my practical skillset by quite a lot. I now have a dynamic and dedicated place to read and share news feeds as I wish. Though even when generative AI works and works well, I have significant concerns about the intellectual property implications of AI, and this project brought those tensions into sharp focus. The AI could only help me because it was trained on documentation and intellectual work from the open source community, contributions made freely in the spirit of knowledge sharing, not to train commercial AI systems. I tapped into their expertise by paying Anthropic $15 a month. While I’m grateful for the accessibility this provides to non-developers like me, I recognize there’s an unresolved ethical question about whether this use respects the intent and labor of the original creators. The feat is incredible; the foundation it’s built on deserves careful consideration.
After the exercise was complete, I asked Claude how I could have improved my prompting to make this process easier, and in short, Claude said I could have been a web developer. But since I’m not, here’s what it recommended:
✅ When the process isn’t working, question the process mid-stream. Most people either give up or keep following bad advice deeper into rabbit holes. Stop and question the LLM’s process and ask for alternatives to force a reset.
✅ Push for usability. Keep bringing the conversation back to what you actually need the end result to do, not what’s technically impressive or “correct.” In my case, this meant repeatedly asking “can I click through to the articles?” rather than getting lost in discussions about CORS proxies or JavaScript syntax. Focus on outcomes, not implementation details.
✅ Ask for complete solutions. Instead of trying to mentally patch together incremental changes across multiple responses, ask the LLM to provide fresh, complete code each time. This prevents copy-paste errors and ensures you’re always working with a coherent, tested solution. There’s more than one way to crack an egg, but you want the whole egg regardless.
After all that, I got it to work but can’t figure out how to make it show up in my header menu, with or without Claude. TBD.
“… every text is connected to other texts by citations, quotations, allusions, borrowings, adaptations, appropriations, parody, pastiche, imitation, and the like. Every text is in a dialogical relationship with other texts. In sum, intertextuality describes the relationships that exist between and among texts. What follows is a discussion of the strategies of intertextuality.”
In recent history I stopped posting on most social media and moved to the fediverse. I still browse the social platforms to keep up with trends and friends, but I only post on my private IG and here.
What I share here is separate from but related to my professional life — I’m thinking out loud and making room for rough, unfinished ideas. I write mainly for myself, but if others find it useful, that’s great. The practice of reading and reflecting makes your thinking stick, and I am from a certain time and place, so this is how I approach learning and communicating about what I’m learning. It’s a habit.
While this is my preferred approach, I acknowledge that sharing unfinished ideas publicly is risky and you have to accept accountability for the messiness that comes with that. But I also know that working through vulnerability in the act of writing lets you tap into your most creative, innovative self and test your ideas against an evolving sense of what’s good. The potential for an audience, however real or implied, keeps you more honest and less self-indulgent. Despite the trade-offs, I think it’s worthwhile.
As I add to this page, I’ll be thinking out loud about digital rhetoric and communication alongside emerging technology, and linking back to foundational ideas I see reflected online today. Occasionally I’ll say something longer.
While I’m cleaning up the cruft around my social presence, I’m finding more references to the heyday of blogging that explain how people organized online (Web 1.0) before the era of platforms (Web 2.0). One is this interview the folks at LGM did with Jill Filipovic, my one-time co-blogger and comrade at Feministe; they interviewed me on the subject a few years ago. Jill and I differed (and still do) on the meat of many issues, but have a lot of our thinking in common about how to handle disagreement and advance our ideas in common (and risky!) spaces. She’s welcome in my foxhole anytime.
I appreciate this oral history project by LGM because the articulation of our processes then (before automation) provides a lot of instruction about how to organize and think about communication outside of increasingly toxic and irresponsible social platforms today. One idea that is relevant today is pluralism. In our case, we practiced pluralism on subjects and perspectives within a defined domain, “feminism.” In practice, making it work before true automation meant we lived with a lot of decision points around moderating a raucous community. Back then, we had our CMS and set up a list of community expectations, iterating as we went, then let the community rip. Collectively, this approach created a robust and vibrant interactive community of peers and moved our interests forward as a cohort. Between her interview, where she talks through the decision points we faced, and mine, where I talk more about the tools and their limitations, there is a lot here for people who want to organize digitally while thinking outside of the algorithm.
There are certain pieces of writing I return to when thinking about our relationship with technology. Donna Haraway’s “A Cyborg Manifesto,” published in 1985, is one of them. Despite being nearly forty years old, it continues to offer insights into how we understand technology’s role in our lives.
Haraway used the cyborg, a hybrid of machine and organism, as a metaphor for understanding identity in an increasingly technological and scientific world. Her central argument was this: because the traditional boundaries we’ve relied on are breaking down with the rise of STEM, computers and factory automation, a tenuous new order is emerging, blurring the lines between human and machine, physical and digital, natural and artificial, gender and biology, moral and immoral. Many women in academia resisted the political push for STEM, concerned about the impact on the humanities. This was critical stuff in the mid-1980s, especially in a global (and academic) context of collective civil rights struggle across very different coalitions, amid the science and all of its implications. She wrote it as a salvo appealing to fellow academic feminists not to be so skeptical of new and emerging computer technology that they lose out on emerging opportunities. To her, these new technologies implied new political landscapes and platforms for discussion and iteration.
Haraway challenged the either/or categories that dominate these debates: online versus offline, human versus machine, authentic versus artificial, even good and bad. Instead, she proposed we’re already living in a world of hybrids and overlaps and contingencies and compromises, where identity and experience are shaped by our relationships with technology and science and capitalism rather than existing separately from it. Whether you wear glasses, take daily medicine, strum a guitar, drive a car, or regularly log into a device for work or leisure, our lives are heavily augmented by layers of tech already. Your cyborg self is already here. We are already deeply technical creatures, living in concert with machines.
Haraway invites us to dabble in the Matrix, to take off our trench coats and stay awhile, to see what the moral relativism and ambiguity feel like. Within this web of complexity lies a lot of opportunity.
Tl;dr: the cyborg metaphor is a permission structure and a thought exercise. Instead of asking whether to accept tech, she asks you to consider something more pragmatic: how your cyborg self might use and shape technology to assert your particular existence, politics and interests across the network. This is an if/then that is simultaneously empowering, cynical, dystopian, ironic and futurist, but it allows us to set aside some limiting binaries and narratives when thinking about the specter of new technology.
Purdue Exponent students distributed 3,000 copies of a special “solidarity edition” newspaper in Bloomington after IU spanked their student paper for insubordination, ending the IDS print edition and firing their director. The media landscape in Indiana is bleak, generally, after years of disinvestment, so student reporters fill a social and political gap that the free market left behind. Given those conditions, the wider community depends on student media, much like public radio, to fill the information gaps. Also, these campuses are situated in communities where it can be very socially uncomfortable to be a squeaky wheel. So. As an alum, I’m proud of the Exponent for this brave and newsworthy show of heart. 💐
I graduated from college right before the 2008 recession and bounced through some unpromising temp jobs until an opportunity emerged for a permanent position. Sometimes you just need to get in where you fit in, and so I did. That’s how I came to work for a regional cable company that used federal money to expand the new national broadband network, extending out to the rural communities dotting central Indiana.
It was a front row seat to the national broadband expansion efforts of the early 2000s. Our business ran right across the state, spanning the 80 or so miles from Attica to Kokomo, which included several small cities with large manufacturers, two public research universities, and several community and liberal arts colleges. The strip of broadband fiber at the core of our service followed existing highways and electrical lines that split the corn and soybean fields from town to town, feathering out to more rural areas from there.
I worked a variety of roles there that put me face-to-face with a classic technical problem: the last mile. On many occasions, someone would come in looking pensive, and explain that the fiber had been extended all the way from town to their hamlet, and yet there was no plan to connect their property to the pole. Over time the pattern was clear: while the network was expanded, running a physical line to each individual property was too expensive and too specific to solve at scale. These customers often left without a path forward despite all their efforts and ours.
The “last mile problem” refers to the logistical challenges and high costs associated with the final leg of delivering goods or services to the end customer. It’s often the most difficult and expensive part of the supply chain, despite being a relatively short distance. The pattern shows up everywhere: public transit can get commuters most of the way most of the time, but that final leg of the journey remains specific and individual and problematic. E-commerce companies promise drone delivery solutions, scooter and bike-share apps claim to solve urban mobility gaps, but that final leg remains persistently stubborn at scale, running up against the messy realities of sidewalks, intersections, and actual human behavior.
Tl;dr: I’ve been turning over this suspicion that AI automation will hit a classic “last mile problem,” especially in the public sector.
AI systems, particularly LLMs, are like those systems—they work incredibly well in their intended domain, processing and manipulating information. But because they’re fundamentally an information-only approach, they create their own last mile problem when we try to implement them in physical and context-specific environments. Public institutions are uniquely specific — they are often the originators and producers of knowledge and the keepers of original policy, tasked with making the rubber hit the road. Additionally, the comprehensive data protection required of public workers and institutions fundamentally hamstrings potential applications.
Actually implementing recommendations is where you hit the last mile. This is the work of public administration.
Imagine an AI application trained on every facilities management manual ever written and tuned to synthesize best practices for HVAC optimization. It can analyze years of energy usage data and recommend precise temperature adjustments for different zones of a building, but can’t feel that the third floor is always stuffy, or know that the facilities manager retired last year and took decades of institutional knowledge with him. Building A’s HVAC system was installed in 1987 and has a manual keypad. Building B’s system interfaces with the campus-wide monitoring system, but unreliably, and investigation is slated for later, someday, when resources allow. Professor Smith has taught in Room 204 for 25 years and will blow up your spot before moving to a different classroom for maintenance. Your engineers who manage these spaces are balancing human teams, who have time off and training and other priorities they manage in life. So, you need staff who understand the quirks of each area, the history of each system, the politics of which departments will accept changes and which will flood your inbox with complaints. You need someone who knows that the third floor always runs hot because of a design flaw from 1974, and that the solution isn’t more precise control but a $50,000 renovation that’s been deferred for a decade because a glittering new project across campus takes priority.
Imagine this tangle of questions and contingencies times infinity on every university campus in existence. Universities are like cities—they’ve been built and rebuilt over decades or centuries, with layers of systems and fiefdoms that weren’t designed to work together. AI recommendations assume a level of standardization that simply doesn’t exist. Every AI implementation in higher ed requires navigating multiple constituencies with different priorities and power structures. It’s like trying to redesign traffic patterns in a neighborhood where the residents, business owners, commuters, and city planners all have veto power and conflicting interests.
So. When looking at efficiency efforts spinning up across the education sector, I’m feeling pensive, trying to understand how exactly the house gets connected to the pole.
The promise of new tech in higher ed needs to more deeply consider the translation costs: the human labor, institutional knowledge, knowledge documentation and local adaptation required to bridge between the usefulness of tech and specific realities of public university work. Public employees want modernization and don’t want to fall behind. We want systems that work. We are also balancing a great deal of change and pressure as a sector, with fewer material resources than ever. We need less marketing and more right-sizing in the claims around AI against the political and tech realities of public administration.
This disconnect between technological promise and implementation reality becomes even more critical as higher education faces increased political scrutiny. When tech vendors promise that AI will solve efficiency problems or reduce administrative costs, institutions are under immense pressure to deliver measurable results quickly. But the translation costs we experience don’t disappear just because the political pressure to modernize increases.
The institutions that thread this needle will be the ones that accurately assess these translation costs upfront and set expectations accordingly—not the ones that assume the technology will magically bridge the gap between digital and physical, abstract and specific.
As communications professionals in higher education, we work for institutions built on the pursuit of knowledge and innovation, yet many of us feel uncertain about how to thoughtfully integrate one of the most significant technological advances of our time: artificial intelligence.
Over the past year, my team has wrestled with questions that didn’t exist in our profession just a few years ago. Should we use AI to draft articles and email copy? How do we disclose AI-generated content, or do we? When does AI assistance cross the line from helpful tool to ethical concern?
These aren’t abstract questions any longer. Over the last year, I had to overcome AI resistance of my own to develop practical, hands-on approaches to AI use that align with our institutional values while acknowledging the realities of modern communications work (more on that below). What I’ve learned is that the answers aren’t found in blanket policies or rules, but in applying our existing professional ethics to these new tools. Here is where I am today on the journey from AI praxis to practice.
Mission first
The foundation of responsible AI use in our field starts with a principle we already know: everything we do should advance our institution’s educational mission. Higher education exists to create, share, and preserve knowledge while fostering critical thinking and diverse perspectives, in service of students, faculty, researchers, workers and the world.
The bulk of our work comes from conversations with colleagues, understanding of our campus dynamics and processes, and professional judgment about what our community needs to hear. This inevitably means more work upfront, but it maintains the authenticity and institutional knowledge that our audience deserves, regardless of whether AI tools are part of the process.
Transparency without paranoia
Do I need to mention AI every time I use it? The answer isn’t simple, but I’ve found a helpful framework: consider whether your audience would feel misled if they knew how AI was involved in creating the content.
When I use AI to polish grammar and shape format, that feels similar to using spell-check – it’s helpful but not something that changes the fundamental nature of the content we wish to communicate. But when AI helps generate the main structure for a story about campus policy changes, that’s a different ball game. The audience expects those priorities and framing decisions to come from human judgment about what matters to our community.
Internally, we differentiate between the two by defining whether you are “automating” a process using AI or “augmenting” it. Full disclosure: my area of experience is in augmentation, not automation.
That said, I’ve started recommending simple disclosures when AI plays a substantial role in content creation. A line like “This article was developed with AI assistance” maintains trust while allowing us to thoughtfully benefit from these tools. It’s not about being defensive, it’s about being transparent with the people we serve, especially as the tech and attitudes around it evolve over time. Additional qualifications can be included here, such as whether and how the information was shaped or shared by AI (privacy implications abound).
Here, it’s important to remember to only use your university-approved tools, because university enterprise AI tools are modified to meet campus rules and requirements related to data handling.
The accuracy imperative
Perhaps nowhere are the stakes higher than with accuracy. In higher education communications, we’re not just sharing information—we’re stewarding public trust in our institutions and, by extension, in higher education itself. In addition, much of the information we are communicating is original, in that it’s new information that cannot be generated using the limitless soup of generative AI.
Every piece of AI-generated content requires human verification, especially anything involving numbers, research findings, or claims about institutional achievements. This means checking sources, confirming statistics, and ensuring that quotes are accurate and properly sourced. It’s more work, but the alternative—publishing incorrect information—could undermine years of relationship-building with community stakeholders and partners.
The promise of speed and efficiency that comes with generative AI must be balanced with the work of close reading, the skill and practice of carefully analyzing a passage’s language, content, structure, and patterns in order to understand what a passage means, what it suggests, and how it connects to our larger body of work. I firmly believe that close reading, learned in the Humanities and Social Sciences, will become increasingly important to understand, shape and steer AI output, especially with regard to public communication best practices.
Inclusion as a practice
AI bias isn’t an abstract concern—it shows up in subtle but significant ways in the work. I’ve noticed, for example, that AI tools often default to formal, academic language that might exclude first-generation college students, or suggest examples and metaphors that assume certain cultural backgrounds.
This has made me more intentional about prompt engineering—the way I request AI assistance. I build digital accessibility and plain language best practices into my prompts, in alignment with institutional best practices. One tip is to draft the original using my chosen, intentional language, then ask for revisions using as much of the original verbiage as possible. The difference is significant, and it allows me to focus on higher-order communication strategy while demonstrating both accuracy and inclusive values in what we publish.
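As a hypothetical illustration, not a prompt lifted from an actual project, such a request might read: “Revise the draft below for plain language and digital accessibility: short sentences, common words, descriptive link text. Keep as much of my original wording as possible and do not introduce new information.”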
Privacy and the long view
Working at a public university means balancing transparency with appropriate privacy protections. We work within strict guidelines about what information can be included in AI prompts, particularly around student data, personnel information, and strategic planning discussions. Again, it’s important to only use your university-approved tools, because university enterprise AI tools are modified to meet campus rules and requirements related to data handling.
The challenge is that AI tools work best with context, but providing that context can sometimes mean sharing information inappropriately. I’ve learned to be creative about how I frame requests to AI tools—giving enough context for useful output while protecting sensitive information about individuals and institutional operations.
I focus AI prompts on publicly available information rather than including details from internal planning discussions or individual faculty concerns. It requires more thoughtful preparation, but it ensures we’re protecting appropriate confidentiality.
Speed vs. strategy
The efficiency of AI is seductive, especially when facing tight deadlines and endless communication requests. But I’ve learned that speed can’t come at the expense of quality or authenticity.
Authentic institutional voice and authority doesn’t emerge from algorithms—it requires the deliberate application of human judgment to ensure our plans and communications reflect our campus culture, embody our values, and resonate with our specific audiences. The strategic thinking we bring—our ability to read context, navigate relationships, and understand the subtle dynamics of higher education communication—cannot be automated.
Consider my own practice: I frequently engage AI as a collaborative thinking tool, particularly for structural planning and format development. However, AI’s default tendency toward comprehensive, multi-layered approaches often produces unnecessarily complex frameworks for university communication realities. This is where professional judgment becomes critical. Strong strategic foundations and institutional knowledge allow us to right-size AI’s expansive suggestions into focused, contextually appropriate communication plans that actually serve our goals and communities.
Looking ahead
What I’ve learned over this past year is that responsible AI use isn’t about following a rigid set of rules. It’s about applying the professional ethics we already have to new technological capabilities. The core principles that guide good communications work—accuracy, transparency, service to mission, respect for audience—remain the same.
What’s different is that we now have tools that can enhance our ability to live up to those principles, if we use them thoughtfully. AI can help us communicate more clearly, research more efficiently, and reach broader audiences. But only if we maintain our professional judgment about when, how, and why to use these tools.
As our field continues to evolve, I’m convinced that the communications professionals who thrive will be those who can harness the power of AI while maintaining the human insight, ethical judgment, and institutional knowledge that define excellence in our profession. The technology will keep changing, but our commitment to serving our institutions and communities through ethical, effective communication remains constant. That’s the foundation we build on, whether we’re writing with pen and paper, collaborating in a digital document, or prompting the most sophisticated AI tool on campus.
This post reflects my ongoing learning about AI ethics in communications practice and was generated using the assistance of AI (Claude, Gemini). Cross-posted on LinkedIn.
While institutions are backing away from making position statements, news outlets are making lengthy statements of intent. This one from Madison is interesting because it also acknowledges that a portion of their audience may be intentionally avoiding news about the White House. This is a traditional trust-building strategy applied in a new way: explaining how the news outlet plans to allocate their resources in an era of “flooding the zone.”
Posting about my latest quilt for posterity. This is a “wonky star” quilt using 10” precuts, measuring about 60 X 70”. I completed the sides last night and will hand quilt all of it with a mix of big stitch quilting.
Been playing with micro.blog for a bit now and very much recommend the experience. The interface is friendly and the team behind the app is ultra responsive. My fav features are Bookshelves - and the ability to push new posts to Bluesky and Mastodon.
It’s soup season - and I’m a newly-minted member of the bean club. This is more or less my favorite lentil soup recipe, except I deglaze the pot with a healthy dose of dry red wine before adding the lentils and stock, and I stew it with a big sachet of fresh thyme.
This butcher shop on the east side is owner-operated, and features meats only from local, humane farms. In addition, they are a “whole animal” butcher, in that they buy and break down a whole animal at a time, thus offering an array of extremely fresh products from sausage to steaks to specialty cuts (picanha, anyone?). One of the owners was working the counter this Saturday when we came in and greeted us with some suggestions for the day. We got a few pounds of pork shoulder to make carnitas at home, and while they were preparing our order, I perused the grocery area.
It’s clear that whoever curates their grocery knows their stuff. They had great fixings to pair with their meats, including a selection of curated wines, hot sauces and other sundries that reflect thoughtful food experience, travel and research. I got some wine, a series of hot sauces, and some tinned fish. Our tacos were delicious.
Last week I joined an old friend on a road trip through Wisconsin. We saw a folk show, stayed at a vintage motel, and camped in the crook of Green Bay.
In our early years, we were both shit-kicking dirtbags, rebels, people who thumbed our noses at convention and were told (and fully believed) we wouldn’t amount to much. Today, we’re regular middle-aged ladies secure in our work, home and ambition, figuring out what the rest of our lives will look like. In that way, this trip was a fun reset: a reminder of where I’ve been and where I’m going.
I’m grateful to be surrounded by so many incredible women.
A quick read by a singular voice, heavy on descriptions of the New York art and music scene of the 90s. Like many punk memoirs, it’s a tribute to the many names that made the movement and a memorial for a city that no longer exists. Gordon’s voice is kind and bold, curious and smart. Her descriptions of growing up in LA and coming of age in New York are painterly and poetic. Her takes are generous for all but one person: her ex-husband.
All my respect to her for opening and closing the books with her raw reflections on Thurston’s mundane and deeply uncool betrayal.
The Bookshelves feature is easily my favorite part of Micro.blog. It sits right at the intersection of medium and function: as a reader I want to keep track of things, but I don’t need so much infrastructure around it. Just some checkmarks and a place to dash off my immediate thoughts.
I’ve been keeping my virtual bookshelf up to date while pushing myself to take on a bunch of literary fiction, but I’m tired, reader. So instead of laboring on with a pile of good books I didn’t really want to read, I lined up a bunch of rock n’ roll memoirs for the summer while I manifest camping, hammocks and time otherwise spent by a lake.
The internet says Joann Fabrics is going to declare bankruptcy, sending a huge market of individual crafters, many with no other access to in-person retail craft spaces, into a tailspin. It’s likely they will ask their creditors to restructure their debt, which would let them keep some stores open. The whole market relies heavily on in-person shopping (it’s a textural and sensory shopping experience, which is the point!) and hasn’t pivoted well to e-commerce.
This is one of my favorite pet subjects. Globally, the arts and crafts market overwhelmingly caters to women and children and it’s HUGE, commanding a very dedicated and loyal customer base. And still, it struggles.
Despite an influx of crafters during the shutdown, retail craft stores have struggled to strike a balance between sustainable e-commerce and in-person retail strategies. Other issues: for months after the pandemic, the Joann’s in my neighborhood struggled to keep the place stocked and staffed, exacerbated by skyrocketing shipping costs and shifts in the retail worker market after the shutdowns. Kids went back to school, cooling the market for arts and crafts activities on which to spend their time. And with lagging incomes and cost of living increases eating into people’s spending money, customers just don’t have the discretionary budget they may be used to.
In my experience, customers don’t love shopping at a Michaels or a Joann’s, but they appreciate the ability to get what they need, mostly on demand, and to do so in-person where you can handle the materials before you buy them. Fiber arts people, for example, put a lot of importance on the weight, texture and color of their tools and materials - and for good reason! Pleasant tools make for a pleasant experience - and for pleasant outcomes. Indie retailers corner this market by keeping inventory low, building relationships with customers, creating affinity using social media marketing and by nurturing community with digital learning and forums. Crafters from around the world can share tricks, tools, patterns and finished items with like-minded people. The large-scale retailers can’t compete with that and haven’t really tried.
It’s unclear what’s next, but I’m thinking of all the people who live in places that can’t sustain a standalone fabric or yarn store. Rural makers can sometimes find tools and materials in resale markets like Facebook Marketplace, and sometimes you can find decent stuff at the local flea, or at a specialty shop, such as a small machine repair shop that works on sewing machines. A lot of those folks won’t have a store to go to, and will have to travel to shop in person or resort to online retailers that don’t meet their needs.
I’ve been waiting for this one because it’s a story I know well. It is an impeccably reported book covering how young women navigated a compromised, stigmatized, coercive landscape around unplanned pregnancies in the late 20th century. Sisson is a comprehensive writer whose reporting is deeply empathetic, based on her personal experience as an activist and academic working in reproductive justice alongside extensive research. She discusses the history of the adoption movement at length, connecting it to other institutional family separation movements, and considers it alongside the choice to abort unplanned pregnancies and against the decision to parent anyway, often in a deeply compromised social and political climate.
My deep appreciation to her for telling these stories and telling them well.
Currently reading: Relinquished by Gretchen Sisson 📚
I finished my latest quilt on my birthday, a Halloween quilt made from precuts (no pattern) that I started in 2021? 2022? I don’t know how old it is. It’s been languishing.
I don’t love shoving a huge quilt through my lil sewing machine, so I still hand quilt these puppies using 6-strand embroidery floss. It’s hard on my hands but gives the final quilt so much texture and weight.
These photos are of the quilt fresh out of the washing machine. Super crinkly and cat-approved.