
Areas of Inquiry
Seven research streams that define our intellectual programme and shape every venture we build.
The Research Programme
Every venture we build, every framework we develop, and every engagement we undertake is shaped by a coherent research programme. These seven streams represent the questions we believe are most consequential for the design of AI-born institutions — questions that are simultaneously theoretical and urgently practical.
Each stream generates knowledge that informs the others. Research on autonomy boundaries shapes how we design governance. Understanding of economics influences how we structure ventures. Knowledge methodology informs how we validate findings across all streams. The interconnection is not incidental — it is architectural.
These are not closed bodies of knowledge to be mastered and applied. They are active frontiers — domains where our understanding evolves with every venture we build, every pattern we observe, every assumption we are forced to revise. The research programme is alive because the world it investigates is still forming.
Research Streams
The 5:100 Ratio
Small teams, autonomous scale
What does it mean for a team of five to achieve the output of one hundred? This research stream investigates the specific dynamics of small human teams coordinating autonomous systems — the organisational physics of AI-born enterprises. It examines where traditional assumptions about headcount, hierarchy, and coordination break down, and what emerges in their place.
Key Questions
Where are the boundaries of the small-team thesis — at what scale or complexity does the model require modification?
What team compositions produce the strongest outcomes when coordinating autonomous systems?
How do communication, trust, and decision-making differ in 5:100 configurations versus traditional structures?
What institutional functions, if any, remain resistant to this ratio?
Autonomy Boundaries
Where human judgment overrides machine capability
The governance challenge at the heart of AI-born enterprises: determining where human judgment must override machine capability. This is not a static boundary but a dynamic, context-dependent frontier that each venture must navigate. As autonomous systems become more capable, the question intensifies — not whether to draw the line, but how to draw it wisely.
Key Questions
What decision types benefit most from human override, and which are better delegated to autonomous systems?
How should autonomy boundaries evolve as systems mature and trust accumulates?
What governance mechanisms ensure boundaries are respected under pressure?
How do cultural and regulatory contexts shape appropriate boundary placement?
The Economics of AI-Born Enterprises
New economic logic for new institutional forms
How does economic logic differ when enterprises are designed from inception around autonomous systems? Traditional assumptions about unit economics, capital requirements, scaling dynamics, and value capture may not hold. This stream investigates the financial architecture of AI-born ventures — from cost structures to revenue models, from capital efficiency to the distribution of value created.
Key Questions
How do unit economics differ in AI-born versus traditional enterprises?
What capital structures best support AI-born ventures through their development phases?
How should productivity gains be distributed among stakeholders?
What new metrics are needed to assess the financial health of AI-born enterprises?
Governance of Autonomous Systems
Institutional stewardship in the age of machine agency
How should autonomous systems be governed in institutional contexts? This stream examines governance architectures that maintain human agency and institutional coherence as systems become more capable. The New Triumvirate model — Intent-Setter, Guardian, Architect — provides one framework, but governance at scale across portfolios of AI-born ventures raises questions that extend beyond any single model.
Key Questions
How does governance scale across a portfolio of AI-born ventures without creating new bureaucracy?
What accountability structures work when autonomous systems make consequential decisions?
How do we prevent governance from becoming either too rigid (inhibiting the system) or too loose (losing control)?
What role should external oversight play in the governance of institutional autonomous systems?
Values Alignment in Practice
From aspirational principles to architectural constraints
How are values effectively encoded into the operating models of autonomous systems? This stream moves beyond the theoretical discourse on AI alignment into the practical challenge of building institutions whose autonomous systems reflect specific, articulable human values. The VP-Agent Model provides a framework — distinguishing between non-negotiable values and adjustable preferences — but implementation at institutional scale remains a frontier.
Key Questions
How do you verify that autonomous system behaviour reflects encoded values across all operating conditions?
How should conflicts between values be resolved when they arise in practice?
How do institutional values evolve, and how do you update the values encoded in running systems?
What is the relationship between individual values, institutional values, and the values embedded in autonomous systems?
Institutional Design at the Frontier
New organisational forms for a new era
What organisational forms emerge when industrial-era assumptions about intelligence, coordination, and capability are relaxed? This is the research stream with the longest time horizon — examining not just how to build better enterprises, but what enterprise itself becomes when its foundational constraints change. It draws on institutional theory, organisational science, and the emerging practice of AI-born design.
Key Questions
What entirely new institutional forms become possible when coordination no longer requires hierarchy?
How do concepts like culture, identity, and purpose manifest in organisations where most work is done by autonomous systems?
What can we learn from non-Western and historical models of institutional organisation?
How might the distinction between institution, market, and network blur in AI-born contexts?
The AI-Born Knowledge Methodology
How institutions learn when machines learn too
How should knowledge be generated, validated, and applied in the context of AI-born enterprises? This epistemological stream examines how institutional learning changes when autonomous systems are both subjects and instruments of knowledge creation. The Knowledge Flywheel — where research informs ventures, ventures generate evidence, consulting reveals patterns, and licensing validates generalisability — provides the operating model, but the methodology itself continues to evolve.
Key Questions
How do you distinguish between genuine institutional learning and mere data accumulation?
What role does tacit knowledge play in AI-born enterprises, and how is it preserved?
How should findings from one venture context be generalised to others?
What constitutes evidence that the core thesis is being validated or falsified?
How the Streams Connect
The seven streams are not independent silos of inquiry. They form a web of mutual implication — each stream shaping, and being shaped by, the others. Understanding the connections between them is as important as understanding any single stream in isolation.
The 5:100 Ratio establishes the organisational premise that makes the other six streams necessary. If small teams are to coordinate autonomous systems at scale, then questions of autonomy boundaries, governance, values alignment, and institutional design become urgent rather than theoretical. Economics provides the viability test. Knowledge methodology provides the epistemological foundation.
Foundation Layer
The 5:100 Ratio, Economics
The structural and economic premises that define the possibility space. What is organisationally possible, and what is economically viable?
Governance Layer
Autonomy Boundaries, Governance, Values Alignment
The control and coherence mechanisms that ensure human agency is maintained. How do we govern systems that are more capable than any individual?
Horizon Layer
Institutional Design, Knowledge Methodology
The long-range inquiry into what institutions become and how they learn. What forms emerge when foundational constraints change?
A finding in one stream is never isolated. It ripples through the others, refining questions, challenging assumptions, and opening new lines of inquiry. This interconnection is what makes the programme a programme — not merely a collection of interests.
What We Do Not Yet Know
A thesis-driven institution must articulate not only what it believes, but what it does not yet know. These research streams are not closed bodies of knowledge — they are active frontiers where our understanding is tested, refined, and sometimes overturned through encounter with reality.
We maintain these open questions not as a confession of weakness, but as a commitment to intellectual honesty. The most dangerous institution is one that mistakes its frameworks for certainties. We hold our frameworks firmly enough to act on them, and loosely enough to revise them when evidence demands it.
The questions listed in each stream above are not rhetorical. They represent genuine frontiers of our understanding — places where our research is most active, our assumptions most contested, and our learning most rapid. We invite collaborators, critics, and fellow researchers to engage with them.
Falsifiability
Every thesis we hold is articulated clearly enough to be tested and potentially disproven. We design our ventures and research to generate evidence that could contradict our assumptions.
Transparency
We publish what we learn — including findings that challenge our own frameworks. Negative results are as valuable as positive ones for advancing understanding.
Humility
We recognise that the field of AI-born institutional design is nascent. Our frameworks are working hypotheses, not settled doctrine. We hold them with conviction but without dogma.
The research programme is not an abstraction. It is the engine that drives every venture, every framework, and every engagement we undertake.
We are looking for researchers, practitioners, and institutional partners who share our commitment to rigorous, applied inquiry at the frontier of AI-born institutional design.