Artificial General Software

We’ve spent years debating artificial general intelligence. Meanwhile, something quieter happened: artificial general software. Computing is moving from clicking through menus to stating goals. In that world, switching costs fall, because the interface is no longer a product-specific workflow but ordinary language. Once that happens, the defensible advantage shifts away from features and toward trust, rooted in control and provenance.

Consider how much of modern work is really translation. You have an outcome in your head. You break it into app-shaped pieces. You learn each app’s vocabulary, its hierarchy of menus and abstractions, its particular way of carving up the problem. Then you stitch the pieces back together. Very little of that effort goes into the underlying problem. Most of it goes into accommodating software built around older constraints. Rich Sutton’s bitter lesson applies here (Rich Sutton, “The Bitter Lesson,” 2019): general, scalable methods outlast the handcrafted. Every bespoke workflow is a liability waiting to be absorbed by something more general.

I see this constantly while building Careswitch. Customers ask for the same thing over and over: one platform that does everything. Export to ADP for payroll. Sync invoices to QuickBooks for accounting. Pull candidates from Indeed for recruiting. Their instinct makes sense. Every seam between systems is a place where work gets lost, where someone has to manually translate one app’s output into another app’s input. Vertical integration has always been the traditional answer to that pain. But it solves the wrong problem. The issue isn’t simply that a business uses several systems. It’s that someone on the team has to carry context across each boundary and re-enter it in the form the next system expects. An agent that works across systems dissolves the seam without needing to own both sides of it.
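The seam described above is easy to make concrete. The sketch below is purely illustrative: the field names are hypothetical and match no real ADP or QuickBooks schema. It shows the kind of translation a person otherwise performs by hand at each boundary, and that an agent can perform instead:

```python
# Hypothetical record shapes, for illustration only; no real system's schema.
scheduling_record = {"caregiver": "J. Doe", "shift_hours": 8.5, "rate_usd": 22.0}

def to_payroll_entry(rec):
    """Carry context across the boundary: rename fields and derive what
    the next system expects, instead of asking a person to re-enter it."""
    return {
        "employee_name": rec["caregiver"],
        "hours_worked": rec["shift_hours"],
        "gross_pay": round(rec["shift_hours"] * rec["rate_usd"], 2),
    }

entry = to_payroll_entry(scheduling_record)
```

The translation itself is trivial; the cost was never the arithmetic but the human attention spent carrying it across every seam, every day.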

Language is what makes that absorption possible. “Remove the background.” “Draft a Q4 plan from last year’s deck.” “Refactor the module and add tests.” Each is a goal with context, the sort of thing you’d say to a capable colleague. The gap between intent and execution has always been where software lives; what’s changing is that the gap is shrinking to nearly nothing.

Copilots inside apps will give way to agents that work across them. That part is predictable. The harder, more interesting question is what happens to trust. Menu-driven software has a built-in form of accountability: you see each step because you’re the one taking it. When the interface becomes language and the system handles the rest, that accountability has to be reintroduced deliberately. The builders who win will be the ones who do that with previews, versioning, and rollbacks, letting people verify outcomes they didn’t manually produce.
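The preview-versioning-rollback loop can be sketched in a few lines. This is a minimal toy, not any product's implementation: a store that lets a user inspect what an agent's edit would produce before committing it, and undo it afterward.

```python
import copy

class VersionedStore:
    """Toy document store that keeps every prior version,
    so agent edits can be previewed first and rolled back later."""

    def __init__(self, state):
        self.history = [copy.deepcopy(state)]

    @property
    def current(self):
        return self.history[-1]

    def preview(self, edit):
        """Show what an edit would produce without committing it."""
        candidate = copy.deepcopy(self.current)
        edit(candidate)
        return candidate

    def apply(self, edit):
        """Commit an edit as a new version."""
        self.history.append(self.preview(edit))

    def rollback(self):
        """Discard the most recent version."""
        if len(self.history) > 1:
            self.history.pop()

store = VersionedStore({"title": "Q4 plan", "status": "draft"})
proposed = store.preview(lambda d: d.update(status="final"))
# The user inspects `proposed` before anything changes, then approves:
store.apply(lambda d: d.update(status="final"))
store.rollback()  # and can always undo
```

The point of the pattern is that accountability lives in the store, not in the user's memory of each click: every outcome is inspectable before it lands and reversible after.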

Artificial general software reframes computing: from feature lists to outcomes you can verify. The tools that win will be the ones that make verification easy and honesty the default.