Artificial General Software
We’ve spent years debating artificial general intelligence. Meanwhile, artificial general software is arriving. As language models mature, the interface to computing shifts from clicking through menus to stating goals and letting an agent act. Switching costs fall. Trust—rooted in control and provenance—becomes the defensible advantage.
Work sprawls across a dozen tools that fracture attention. We steer software more than we ship results because each app imposes its own abstractions. The bitter lesson from machine learning applies to tools as well: general methods that scale outlast handcrafted ones. The more general the system, the less your intent must be translated into an app's grammar, and the lower the cognitive tax of modes and file types.
Language makes intent the interface. Say “remove the background,” “draft a Q4 plan from last year’s deck,” or “refactor the module and add tests.” This is a change of mode, not another feature. When goals can be stated plainly, scaffolding becomes optional. You don’t learn the tool; the tool learns you—absorbing preferences, constraints, and history.
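To make the shift concrete, here is a toy sketch in Python. Everything in it is hypothetical; `Intent` and `run_intent` are illustrative names, not any real API. The point is only the shape of the call: context rides along with the goal instead of being re-entered through menus.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: `Intent` and `run_intent` are illustrative
# names, not a real API. The point is the shape of the call.

@dataclass
class Intent:
    goal: str                                    # plain-language outcome
    context: dict = field(default_factory=dict)  # preferences, constraints, history

def run_intent(intent: Intent) -> str:
    # Stub: a real agent would plan and call tools here.
    return f"plan for: {intent.goal!r} with {len(intent.context)} context item(s)"

# Tool-first, the user translates the goal into the app's grammar:
#   select layer -> magic wand -> tolerance=32 -> invert selection -> delete
# Intent-first, the user states the goal and the system owns the translation:
print(run_intent(Intent("remove the background", {"keep": "foreground subject"})))
```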
From here, in-app copilots give way to delegated workflows and then to hubs that orchestrate across data and domains. A capable agent proposes a plan, calls the right tools, shows previews, requests approval, and logs sources.
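One way to picture that loop, as a hedged sketch rather than any particular product: every name below (`TOOLS`, `run`, the two stub tools) is hypothetical, and the tools return canned strings so the example stays self-contained.

```python
from datetime import datetime, timezone

# Hedged sketch of the loop above: plan, preview, approve, act, log.
# All names here (TOOLS, run, the stub tools) are hypothetical.

TOOLS = {
    "search_decks": lambda q: f"found 3 decks matching {q!r}",
    "draft_plan": lambda src: f"drafted outline based on {src!r}",
}

def run(goal: str) -> None:
    print(f"goal: {goal}")
    # 1. Propose a plan as explicit, inspectable tool calls.
    plan = [("search_decks", "Q4"), ("draft_plan", "last year's deck")]
    audit = []
    for tool, arg in plan:
        call = f"{tool}({arg!r})"
        # 2. Show a preview and 3. request approval before acting.
        if input(f"run {call}? [y/N] ").strip().lower() != "y":
            print(f"skipped {call}")
            continue
        # 4. Call the tool and log when, what, and what came back.
        result = TOOLS[tool](arg)
        audit.append((datetime.now(timezone.utc).isoformat(), call, result))
        print(result)
    # 5. The audit trail ships with the output, not as an afterthought.
    for ts, call, result in audit:
        print(f"[{ts}] {call} -> {result}")

if __name__ == "__main__":
    run("draft a Q4 plan from last year's deck")
```

The design point is the ordering: preview and approval sit between plan and execution, and the audit trail is produced as a side effect of acting rather than reconstructed afterward.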
As the interface opens, priorities shift. The test is simple: the system should infer intent, stay reliable under variation, and keep you in control with previews, versioning, rollbacks, and clear explanations. Governance moves to the foreground, with policies and audit trails that span tools. Builders move from feature factories to stewardship—grounding and evaluation, guardrails that travel with context, supervised orchestration, and crisp interfaces for intent and review. Less wiring, more judgment.
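As a sketch of what keeping the user in control can mean mechanically, assuming nothing beyond the ideas named above: a version history that only ever appends, so a rollback is itself a recorded action, and a policy gate enforced at the document rather than inside any one tool, so the rule travels with the context. All names are illustrative.

```python
# Illustrative sketch, not a real system: an append-only version history
# plus a policy check that holds for every tool touching the document.

class VersionedDoc:
    def __init__(self, content: str):
        self.history = [content]  # append-only: every state is kept

    @property
    def current(self) -> str:
        return self.history[-1]

    def apply(self, new_content: str, actor: str = "agent") -> None:
        # Policy gate at the data layer: the same rule applies no matter
        # which tool, or which agent, proposes the change.
        if actor == "agent" and new_content == "":
            raise PermissionError("policy: agents may not empty a document")
        self.history.append(new_content)

    def rollback(self, steps: int = 1) -> None:
        # Undo by re-appending an earlier state; the trail stays intact.
        self.history.append(self.history[-(steps + 1)])

doc = VersionedDoc("Q4 plan, v1")
doc.apply("Q4 plan, v2 (agent edit)")
doc.rollback()                       # one-step undo of the agent's change
assert doc.current == "Q4 plan, v1"
print(doc.history)                   # the full audit of every state
```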
Artificial general software reframes computing—from operating tools to expressing intent, from feature lists to outcomes you can trust. We are early, but the direction is clear.