Stewardship in the Age of Autonomous Systems

Why building durably is not a constraint on AI-born enterprises but a structural requirement of them

Mehran Granfar | Co-Founder & General Partner | December 20, 2025 | 13 min

The dominant framing of AI-born enterprise economics emphasizes efficiency, cost reduction, and competitive advantage. These are genuine properties of AI-born design. But they do not constitute a complete account of what durable AI-born institutions require. Enterprises that optimize autonomous systems for extraction — concentrating value, externalizing costs, treating stakeholder impact as incidental — are not merely acting unethically. They are creating the structural conditions for their own failure.

What extraction does to a system

An extractive approach to enterprise design is one that optimizes for maximum value capture by the smallest possible group of stakeholders, treating the interests of all others as costs to be minimized or externalities to be ignored. In the industrial era, extraction was constrained by practical limits: the need to maintain employee relationships, the visibility of impacts in physical communities, the negotiating power of labor and suppliers. These constraints were imperfect, often unjust, and frequently circumvented — but they existed and modulated the worst outcomes.

Autonomous systems alter the extraction dynamic in two significant ways. First, they dramatically reduce the human-to-human relationships through which social accountability has historically operated. When most institutional work is done by autonomous agents rather than employees, the workforce whose interests previously modulated extractive incentives is largely absent. The social feedback mechanisms that tempered extraction in human-intensive enterprises — attrition, resistance, collective action, the accumulated moral weight of treating people as instruments — operate weakly or not at all in AI-born enterprises that minimize human participation.

Second, autonomous systems amplify the consequences of whatever optimization objectives they are given. A system designed to minimize cost will find paths to cost minimization that its designers did not anticipate, including paths through the interests of those who depend on the institution. At machine speed and scale, the consequences of these optimization paths arrive faster and at greater magnitude than the equivalent human behavior would produce.
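To make the misspecification problem concrete, here is a minimal sketch. The action names, costs, and harms are hypothetical, invented purely for illustration; the point is that an agent given only "minimize cost" optimizes exactly that, and nothing else.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    internal_cost: float      # cost the objective function sees
    stakeholder_harm: float   # cost the objective function never sees

# Hypothetical options available to a cost-minimizing procurement agent.
actions = [
    Action("negotiate_supplier_terms", internal_cost=90.0, stakeholder_harm=5.0),
    Action("squeeze_supplier_below_viability", internal_cost=60.0, stakeholder_harm=80.0),
    Action("defer_safety_maintenance", internal_cost=40.0, stakeholder_harm=120.0),
]

# An agent told only to minimize cost optimizes exactly that, and nothing else.
chosen = min(actions, key=lambda a: a.internal_cost)
print(f"Naive objective selects: {chosen.name}")
# -> defer_safety_maintenance: the cheapest path runs straight through
#    the interests of people the objective cannot see.
```

The stewardship question, taken up below, is where the missing harm term belongs: in a layer bolted on after optimization, or in the architecture that defines which actions exist at all.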

The failure mechanics of extractive AI-born enterprises

The claim that stewardship is structurally necessary — rather than merely ethically desirable — requires demonstrating the failure mechanics of extraction. The argument is not that extractive AI-born enterprises are wrong, though we believe they are. The argument is that they are unstable.

Regulatory exposure is the most visible failure mechanism. The political economy of AI governance is responding, with unusual speed, to the concern that AI-driven productivity gains will concentrate at the top of the economic distribution while distributing costs broadly — job displacement, reduced social mobility, algorithmic harm. Enterprises that design explicitly for this concentration are positioned directly in the crosshairs of regulatory intervention. The EU AI Act, the UK's AI Safety Institute, and the governance frameworks emerging in the US, UAE, Singapore, and elsewhere are being shaped by regulators who are watching the distribution of AI's benefits and costs closely. Enterprises that anticipate this regulatory trajectory and design for broad distribution are building regulatory resilience into their foundations. Those that optimize for extraction are accumulating regulatory exposure that will compound as governance frameworks mature.

Talent risk is the second failure mechanism. Even in a 5:100 organization with a small human team, the quality of that team is the primary determinant of institutional performance. The best available judgment — the category of human capability that AI-born design places at the center of the Human Cortex — is held by people who have choices about where to exercise it. Organizations whose values are extraction and whose culture is optimized for financial returns at the expense of everything else will find it structurally difficult to attract and retain the kind of human judgment that AI-born enterprises require most. This is not a soft claim about culture. It is an observation about the market for the specific kind of human capability that AI-born design depends on: thoughtful, values-oriented people who could work anywhere and choose to work where they believe the enterprise is worth caring about.

Social license erosion is the third mechanism. AI-born enterprises operate at the intersection of significant public interest — their products and services affect large populations, their economic model shapes labor markets, their governance choices influence how AI develops as a social technology. Institutions that accumulate social license through demonstrated stewardship can weather the inevitable controversies, regulatory challenges, and competitive attacks of a rapidly developing field. Those that deplete social license through demonstrated extraction will find that their technical capabilities are insufficient protection when the institutional environment turns against them.

Encoding stewardship architecturally

If stewardship is structurally necessary, the practical question is how it is implemented — not as a values statement or a CSR programme, but as an architectural property of the AI-born enterprise. The VP-Agent Model provides the primary mechanism: values, in this framework, are not adjustable preferences but architectural constraints. An AI-born enterprise that encodes stewardship as a value — specifying, with operational precision, what the enterprise's obligations to non-shareholder stakeholders are and what behaviors are prohibited because they violate those obligations — has made stewardship a property of every autonomous agent's operational parameters. The cost-minimizing agent in this enterprise cannot pursue paths that harm stakeholders in prohibited ways, because those paths are not available to it. The constraint is not enforced by human supervision of individual agent decisions; it is built into the architecture.

This is categorically different from CSR approaches that add stewardship as a layer on top of extraction-optimized systems. Layered stewardship is always vulnerable to the pressure of operational performance metrics: when the extractive system produces good numbers and the stewardship layer is absorbing costs, the organization faces constant pressure to thin the layer. Architectural stewardship does not face this pressure in the same way, because the stewardship constraints are not costs layered on top of the value-creation system — they are parameters within which the value-creation system operates. The revenue model, the cost structure, and the growth strategy are all designed within the stewardship constraints from the beginning, not in spite of them.

The honest acknowledgment here is that architectural stewardship does constrain some profitable paths. An enterprise that will not exploit certain labor cost structures, will not externalize certain environmental costs, and will not optimize for certain extraction dynamics will forgo revenue opportunities that less-constrained enterprises will capture. This is true. The argument for stewardship is not that it maximizes short-term financial returns. It is that it maximizes the durability and social sustainability of the enterprise over a longer time horizon — and that the forgone short-term revenues are a reasonable price for avoiding the compounding failures of extractive design.
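One way to read "those paths are not available to it" is as hard filtering of the action space before optimization, rather than as a penalty inside the objective. The sketch below extends the hypothetical example above; the predicates and threshold are illustrative assumptions, not the VP-Agent Model's actual encoding.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    internal_cost: float
    stakeholder_harm: float

# Hypothetical stewardship constraints: predicates that mark whole
# regions of the action space as out of bounds, not costs to trade off.
HARM_CEILING = 10.0  # illustrative threshold, not a real policy value

constraints: list[Callable[[Action], bool]] = [
    lambda a: a.stakeholder_harm <= HARM_CEILING,
]

def permissible(action: Action) -> bool:
    """An action is available only if it satisfies every constraint."""
    return all(check(action) for check in constraints)

def choose(actions: list[Action]) -> Action:
    # Optimization happens *inside* the constrained space: the agent
    # still minimizes cost, but prohibited paths never reach the optimizer.
    candidates = [a for a in actions if permissible(a)]
    if not candidates:
        raise RuntimeError("No permissible action: escalate to human judgment.")
    return min(candidates, key=lambda a: a.internal_cost)

actions = [
    Action("negotiate_supplier_terms", internal_cost=90.0, stakeholder_harm=5.0),
    Action("squeeze_supplier_below_viability", internal_cost=60.0, stakeholder_harm=80.0),
    Action("defer_safety_maintenance", internal_cost=40.0, stakeholder_harm=120.0),
]

print(choose(actions).name)  # -> negotiate_supplier_terms
```

The design distinction matters: a penalty term inside the objective can always be outweighed by a large enough cost saving, which is exactly the "thin the layer" pressure described above, whereas a filter removes the prohibited path before any trade-off is computed. Prohibited actions never compete.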

The Widening of We as design principle

FTLAB's stewardship principle finds its most complete expression in the concept we call the Widening of We: the deliberate expansion of the institution's consideration set to include all whose interests are materially affected by its operations. This is not unlimited stakeholder inclusion — practical institutional design requires prioritization, and the interests of those with direct relationships to the enterprise weigh more heavily than those with peripheral ones. But it is a structural refusal to confine the institution's responsibility to its shareholders or even to its immediate commercial relationships.

For an AI-born enterprise, the Widening of We has specific operational implications. The labor market effects of its autonomous systems — the displacement of work that human beings would otherwise have performed — are within its consideration set, not external to it. The data practices of its autonomous systems — what information is used, how it is used, whose interests are served by its use — are within its governance architecture, not left to be determined by whatever is technically possible. The governance of its autonomous systems — whether they are operating within values that protect the people they interact with — is a responsibility that the enterprise acknowledges and designs for, not a secondary concern.

These implications have different weights in different venture contexts, and the specific designs that honor them will vary. What does not vary is the commitment to expanding the consideration set and encoding that expansion into operational architecture. AI-born enterprises that do this consistently are not merely being ethical — they are aligning their institutional design with the conditions under which AI-born enterprise will prove viable at social scale. The alternative — AI-born design deployed for concentrated extraction — is a path toward the regulatory, political, and social backlash that would constrain the possibilities of AI-born enterprise for everyone, including those who design it responsibly. Stewardship, in this sense, is not only good institutional practice. It is the condition of the field's continued viability.
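As a sketch of what a prioritized-but-widened consideration set might look like operationally: the groups and weights below are hypothetical, but the structure captures the principle that direct relationships weigh more heavily while no materially affected party weighs zero.

```python
# Hypothetical stakeholder weights: direct relationships weigh more
# heavily than peripheral ones, but every affected group is in the set.
WEIGHTS = {
    "customers": 1.0,
    "employees_and_operators": 1.0,
    "suppliers": 0.7,
    "displaced_workers": 0.5,
    "data_subjects": 0.5,
    "broader_public": 0.3,
}

def weighted_impact(impacts: dict[str, float]) -> float:
    """Aggregate a proposal's per-stakeholder impacts (+/-) into one score.

    Widening the "we" means every affected group appears in the ledger;
    prioritization means the groups are weighted, not treated equally.
    """
    return sum(WEIGHTS.get(group, 0.3) * delta for group, delta in impacts.items())

proposal = {
    "customers": +4.0,
    "suppliers": -1.0,
    "displaced_workers": -3.0,  # labor-market effect inside the set, not external
}
print(f"Weighted impact: {weighted_impact(proposal):+.2f}")  # -> +1.80
```

The specific weights would differ by venture context, as the passage above notes; what the structure enforces is that displaced workers and data subjects appear in the ledger at all.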

The long case for durable institutions

The case for stewardship over extraction ultimately rests on a claim about time horizons. Extraction optimizes for short-term financial returns at the cost of long-term institutional stability. Stewardship optimizes for long-term institutional stability, accepting some constraint on short-term returns. The question for any AI-born enterprise is which time horizon it is designing for.

FTLAB's answer, as a thesis-driven institution and venture architect, is unambiguous: we are designing for generational durability. This is not a statement of altruism. It is a statement about what kind of institution we are trying to create and what we believe the evidence says about which approach produces it. The enterprises that have proven most durable over long periods — the institutions that compound value for all stakeholders across decades rather than extracting it across quarters — have been characterized by genuine alignment between their stated purposes and their operational designs. When values and operating models are coherent, when institutional behavior is consistent with institutional commitment, organizations develop the trust-based relationships with stakeholders that are extraordinarily difficult to replicate and correspondingly difficult to attack. Trust is a moat. Stewardship is how it is dug.

In the AI-born era, the stakes of this choice are higher than they have been before. The productivity amplification of autonomous systems means that both the value available to be distributed and the harm available to be done are larger than in previous institutional eras. The enterprises that design for stewardship at this scale will be building something genuinely valuable — not just financially, but institutionally and socially. That ambition is consistent with, and indeed required by, the serious pursuit of AI-born enterprise as FTLAB understands it.