⚡ AI's last mile problem in higher ed

I graduated from college right before the 2008 recession and bounced through some unpromising temp jobs until an opportunity emerged for a permanent position. Sometimes you just need to get in where you fit in, and so I did. That’s how I came to work for a regional cable company that used federal money to expand the new national broadband network, extending out to the rural communities dotting central Indiana.

It was a front row seat to the national broadband expansion efforts of that era. Our business ran right across the state, spanning the 80 or so miles from Attica to Kokomo, a corridor that included several small cities with large manufacturers, two public research universities, and a handful of community and liberal arts colleges. The strip of broadband fiber at the core of our service followed existing highways and electrical lines that split the corn and soybean fields from town to town, feathering out to more rural areas from there.

I worked a variety of roles there that put me face-to-face with a classic technical problem: the last mile. On many occasions, someone would come in looking pensive, and explain that the fiber had been extended all the way from town to their hamlet, and yet there was no plan to connect their property to the pole. Over time the pattern was clear: even as the network expanded, running a physical line to each individual property was too expensive and too site-specific to solve at scale. These customers often left without a path forward despite all their efforts and ours.

The “last mile problem” refers to the logistical challenges and high costs of the final leg of delivering goods or services to the end customer. It’s often the most difficult and expensive part of the supply chain, despite covering a relatively short distance. The pattern shows up everywhere: public transit can get commuters most of the way most of the time, but that final leg of the journey remains specific, individual, and problematic. E-commerce companies promise drone delivery, and scooter and bike-share apps claim to close urban mobility gaps, but these optimizations stay stubborn at scale, running up against the messy realities of sidewalks, intersections, and actual human behavior.

Tl;dr: I’ve been turning over this suspicion that AI automation will hit a classic “last mile problem,” especially in the public sector.

AI systems, particularly LLMs, are like those efficient highway systems—they work incredibly well in their intended domain, processing and manipulating information. But they are fundamentally an information-only approach, and that creates a last mile problem of its own when we try to implement them in physical and context-specific environments. Public institutions are uniquely specific — they are often the originators and producers of knowledge and the keepers of original policy, tasked with making the rubber meet the road. Additionally, the comprehensive data protections required of public workers and institutions fundamentally hamstring potential applications.

AI systems excel at processing information about the physical world, but they have no direct experience with how things actually work in space, time and human context.

Actually implementing recommendations is where you hit the last mile. This is the work of public administration.

Imagine an AI application trained on every facilities management manual ever written and tuned to synthesize best practices for HVAC optimization. It can analyze years of energy usage data and recommend precise temperature adjustments for different zones of a building, but it can’t feel that the third floor is always stuffy, or know that the facilities manager retired last year and took decades of institutional knowledge with him. Building A’s HVAC system was installed in 1987 and has a manual keypad. Building B’s system interfaces with the campus-wide monitoring system, but unreliably, and investigation is slated for later, someday, when resources allow. Professor Smith has taught in Room 204 for 25 years and will blow up your spot before moving to a different classroom for maintenance. And the engineers who manage these spaces are balancing human teams of their own: people with time off, training, and other priorities in life.

So, you need staff who understand the quirks of each area, the history of each system, the politics of which departments will accept changes and which will flood your inbox with complaints. You need someone who knows that the third floor always runs hot because of a design flaw from 1974, and that the solution isn’t more precise control but a $50,000 renovation that’s been deferred for a decade because a glittering new project across campus takes priority.

Imagine this tangle of questions and contingencies multiplied endlessly across every university campus in existence. Universities are like cities—they’ve been built and rebuilt over decades or centuries, with layers of systems and fiefdoms that weren’t designed to work together. AI recommendations assume a level of standardization that simply doesn’t exist. Every AI implementation in higher ed requires navigating multiple constituencies with different priorities and power structures. It’s like trying to redesign traffic patterns in a neighborhood where the residents, business owners, commuters, and city planners all have veto power and conflicting interests.


So. When looking at efficiency efforts spinning up across the education sector, I’m feeling pensive, trying to understand how exactly the house gets connected to the pole.

The promise of new tech in higher ed needs to be weighed against the translation costs: the human labor, institutional knowledge and its documentation, and local adaptation required to bridge the general usefulness of the tech and the specific realities of public university work. We want modernization and don’t want to fall behind. We want systems that work. We are also balancing a great deal of change and pressure as a sector, with fewer material resources than ever. We need less marketing and more right-sizing of the claims around AI against the political and technical realities of higher ed administration.

This disconnect between technological promise and implementation reality becomes even more critical as higher education faces increased political scrutiny. When tech vendors promise that AI will solve efficiency problems or reduce administrative costs, institutions are under immense pressure to deliver measurable results quickly. But the translation costs we experience don’t disappear just because the political pressure to modernize increases.

The institutions that thread this needle will be the ones that accurately assess these translation costs upfront and set expectations accordingly—not the ones that assume the technology will magically bridge the gap between digital and physical, abstract and specific.
