🚀 Checkpoint at the Edge

In economics there is a concept called the production possibilities frontier. It is a curve that represents the maximum output an economy can produce given its resources and technology. Every point on the curve is efficient. Every point inside the curve is underperforming. The curve itself is the boundary of what is possible.

I keep thinking about this model, except applied to technical knowledge instead of economic output. There is a frontier of human capability in any given domain—the edge of what the most skilled practitioners can do. For most of history, reaching that edge required one of two paths: work at a company operating at that frontier for years, absorbing institutional knowledge through osmosis and production fires, or dedicate yourself to the domain independently for a very long time, building up the skill tree node by node.

Both paths were slow. Both were bottlenecked by access. You needed to be at the right company, on the right team, working on the right problems. Or you needed years of uninterrupted focus and the resources to sustain it.

AI has changed the speed of placement.

[Figure: before AI, most people sit far from the frontier; after AI, everybody gets placed near the frontier.]

If you sit down today with Claude Code or a similar tool and decide you want to understand, say, how a real-time collaborative text editor works, you do not need to get hired at Google Docs. You do not need to spend two years reading papers on CRDTs and operational transforms. You can build one. You will hit the same problems that the production teams hit—conflict resolution edge cases, cursor position synchronization, undo history across concurrent edits—because those problems are inherent to the domain, not to the company. The AI does not skip the problems. It walks you through them, or it walks into them and you watch.
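To make the CRDT idea concrete, here is a minimal sketch of the simplest member of that family, a last-writer-wins register. Real collaborative editors use sequence CRDTs or operational transforms, and the class, field names, and tie-break rule here are illustrative choices of mine, not any particular editor's design; the point is only the core convergence property that the harder variants build on.

```python
from dataclasses import dataclass

# A last-writer-wins (LWW) register: the simplest CRDT. Each write is
# tagged with a (logical clock, replica id) stamp; merging keeps the
# value with the larger stamp, so every replica resolves the same
# conflict the same way, regardless of the order messages arrive.
@dataclass
class LWWRegister:
    replica_id: str
    value: str = ""
    stamp: tuple = (0, "")  # (clock, replica_id); replica id breaks ties

    def write(self, value: str, clock: int) -> None:
        # Local write: tag the value with this replica's clock and id.
        self.value = value
        self.stamp = (clock, self.replica_id)

    def merge(self, other: "LWWRegister") -> None:
        # Deterministic conflict resolution: larger stamp wins.
        if other.stamp > self.stamp:
            self.value = other.value
            self.stamp = other.stamp

# Two replicas write concurrently, then sync in opposite orders.
a = LWWRegister("alice")
b = LWWRegister("bob")
a.write("hello", clock=1)
b.write("world", clock=1)  # concurrent with alice's write

a.merge(b)
b.merge(a)
assert a.value == b.value  # both converge, whatever the sync order
```

The edge cases the paragraph mentions live exactly here: a register this simple throws away one of the concurrent writes, which is why text editors need the much more intricate sequence CRDTs that preserve both users' insertions.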

This does not mean you solve them at the same level. The Google Docs team solved cursor synchronization for a hundred million concurrent users with five-nines reliability. You solved it for a demo running on localhost. The problems are the same in kind, not in degree. But that distinction matters less than you might think, because the frontier of understanding is not about scale—it is about having encountered the problem at all. The physics of the problem space do not change just because you are working with an AI. Most people never get far enough into a domain to see where the hard problems live. AI gets you there.

Auto-Play

It feels like those player pianos that play themselves. You press a key and the piano takes over, rolling through the piece on its own. You can sit there and watch the keys move. AI-assisted development has that same quality. You can tell it to continue, to keep building, to auto-play through the next hundred decisions, and what comes out looks like the software is supposed to look.

The thing that surprised me is how forgiving the long run is, as long as you are steering. Individual commits go sideways all the time. A bad abstraction gets introduced. A dependency gets pulled in that should not be there. An edge case gets handled in a way that creates three more edge cases. If you leave auto-play running with no hand on the wheel, those mistakes compound and the codebase turns into something that passes its tests but fights you on every change. But if you are paying attention—catching the bad abstractions early, nudging the direction when it drifts—then 1,000 commits down the line, the short-term mistakes wash out. The trajectory converges. The models are good enough now that with a human applying judgment at the architectural level, the long-run shape of a project lands in roughly the right place.

And if you do trace the trajectory far enough, you end up operating the thing. Building a collaborative text editor is one kind of education. Running it for six months while users find the edge cases you never imagined is another. The build phase is where AI compresses the timeline most dramatically, but the operate phase follows naturally from it—and that phase is where a different kind of understanding develops. The 3am debugging session where a user’s document silently drops three paragraphs and you have no reproduction case. The slow realization that your conflict resolution strategy breaks under a network condition you never tested for. AI can help you build through those problems faster, but the operational intuition—the scar tissue—still accumulates at the speed of real usage. The difference is that more people now get far enough to start accumulating it.

The Skill Tree Collapses

[Figure: skill tree — a trunk (programming) branching into systems, graphics, networking, kernels, compilers, databases, shaders, raytracing, protocols, distributed, p2p. Traversal time from trunk to any leaf: before, months to years; after, days.]

The old model of technical skill acquisition looked like a tree. You start at the trunk—basic programming, data structures, algorithms—and you branch outward. Each branch is a specialization. Database internals. Compiler design. Graphics programming. Network protocol design. Operating system kernels. Each branch has sub-branches, and the further you go, the fewer people have been there, and the longer it took them to get there.

What AI does is compress the traversal time for those branches. Not to zero. You still need taste, judgment, the ability to evaluate whether the AI’s output is good or garbage. But the time between “I know nothing about GPU shader programming” and “I have a working compute shader that does particle simulation” has gone from months to days. Maybe hours.
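For a sense of what "a working compute shader that does particle simulation" amounts to, here is a minimal CPU sketch of the per-particle update such a shader would run once per GPU thread. NumPy stands in for the GPU's parallelism, and the gravity constant, floor bounce, and array shapes are my illustrative assumptions, not from the post.

```python
import numpy as np

# Per-particle update a compute shader would run in parallel, one thread
# per particle: integrate velocity under gravity, bounce off the floor.
# NumPy applies it to every particle at once, mimicking GPU parallelism.
def step(pos: np.ndarray, vel: np.ndarray, dt: float = 1 / 60) -> None:
    gravity = np.array([0.0, -9.8])
    vel += gravity * dt          # accelerate every particle
    pos += vel * dt              # integrate position
    below = pos[:, 1] < 0.0     # particles that fell through the floor
    pos[below, 1] = 0.0
    vel[below, 1] *= -0.5       # bounce back with damping

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(1024, 2))  # 1024 particles in 2D
vel = np.zeros((1024, 2))
for _ in range(120):            # simulate two seconds at 60 fps
    step(pos, vel)
```

The GPU version is the same handful of lines in a shading language, minus the loop over particles, which the hardware provides; the conceptual distance between this sketch and the real thing is exactly the gap AI now helps you cross in hours rather than months.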

There is a tension here that I do not want to gloss over. In the old model, the slow traversal was part of what built the taste. The months you spent debugging a shader by hand were what gave you the intuition to recognize when a shader is wrong. Compress that traversal to a weekend and you arrive at the destination without having built the same evaluative capacity along the way. You can see the frontier. You cannot always tell whether what you are looking at is the real edge or a convincing mirage.

This is a real cost. But I think it is a cost that shows up at the individual level more than the collective level. Any single person who speed-runs a domain with AI will have shallower intuition than someone who walked the whole path. But across thousands of people exploring thousands of branches, the ones who stick around—who keep building, who hit the same problems repeatedly, who develop taste through volume if not through slowness—will sort themselves to the front. The filter has changed from “who had access” to “who kept going.”

This means more people end up near the tips of more branches. Someone who spent their career in web development can now credibly explore compiler internals over a weekend. Not at the level of someone who has spent a decade on it. They are at base camp, not the summit. But they can see the summit. They can see what the open problems look like. And some of them will stay long enough to climb.

Here is a concrete example. In December 2025, Apple ML Research released SHARP, an open-source model that turns a single 2D photo into a 3D Gaussian splat in under a second on a standard GPU. Another developer built an iOS app around it called SHARP Memories, which required a server backend and a user account because the full model is too large to run on-device. I saw it, thought about the problem differently, and built Hologramm—an app that runs a quantized version of the SHARP model entirely on an iPhone 17 Pro Max, no server required. It processes a photo in about 90 seconds on-device, comparable to what the server-based app was achieving remotely. It also converts photos into Apple spatial scenes that load almost immediately.

I had no prior experience with 3D Gaussian splatting, neural radiance fields, or on-device ML inference at this scale. The model came out two months ago. The app is already on the App Store. That timeline would have been unthinkable without AI collapsing the distance between “I have no idea how this works” and “I shipped a product that uses it.”

Crowding the Frontier

[Figure: the old frontier and the expanded new frontier.]

Here is the thought I keep coming back to. If AI places everyone within striking distance of the frontier in their domain of interest, and if more people are exploring more branches of the skill tree than ever before, then the frontier itself should start expanding faster.

This is not guaranteed. Placement near the frontier is not the same as extending it. There is a real risk that the frontier just gets more populated at the same level—a crowd of people who arrived by teleport, holding the artifacts of frontier work without the depth to push past it. If everyone takes the same AI-assisted path to the same problems, they may all share the same blind spots. Diversity of approach has always mattered for innovation, and a thousand people who all learned CRDTs from the same model may generate less novelty than ten people who each struggled through it differently over a decade.

But this concern is not new to AI. Every generation of educational technology has faced the same monoculture worry. Textbooks were supposed to homogenize thinking—everyone learning from the same Knuth or the same SICP. YouTube tutorials were supposed to produce a generation of developers who all solve problems the same way. In practice, the material is a starting point, not a ceiling. People diverge because they apply what they learn to different problems, in different contexts, under different constraints. The models themselves have randomness built in. Two people who prompt the same question get different code, explore different rabbit holes, hit different walls. I saw this firsthand when I gave Claude an open-ended prompt and compared the results with the developer who inspired the experiment—same model, same instruction, completely different output. The path is less uniform than it looks from the outside.

I do not think monoculture is the likely outcome. The frontier of human knowledge has always been expanded by relatively small numbers of people who were in the right place with the right preparation at the right time. The bottleneck was never talent. It was access. The old system filtered for persistence and institutional luck in roughly equal measure. AI does not remove the need for persistence—you still have to keep going after the initial placement, still have to develop real depth, still have to do the sustained work that turns base camp into a summit attempt. What it removes is the institutional luck part. You no longer need to be on the right team at the right company to encounter the right problems.

The production possibilities frontier in economics expands when there is a technological improvement or an increase in resources. The technical knowledge frontier works the same way. AI is the technological improvement. The increase in resources is all the people who now have a checkpoint near the edge, who previously would have been stuck somewhere in the interior of the curve, constrained not by ability but by access and time.

We went from a world where reaching the frontier in niche software domains required years of specialized experience at the right institution, to a world where anyone with curiosity and a terminal can simulate that experience by building through the problem space themselves. The number of domains where this does not work yet is shrinking fast.

My bet is that the frontier expands, and that it expands faster than it did under the old system. Not because everyone who arrives at base camp will push further, but because the number who do will be larger than the number who made it to base camp at all under the old rules. The crowd at the frontier will be noisy. Most of it will be tourists. But the fraction that stays will be enough.
