<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[John's Substack]]></title><description><![CDATA[My personal Substack]]></description><link>https://essays.jdthorpe.com</link><image><url>https://substackcdn.com/image/fetch/$s_!aHjO!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21ab7877-97d0-4135-b582-cf7e1c58cc71_800x800.png</url><title>John&apos;s Substack</title><link>https://essays.jdthorpe.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 03:36:55 GMT</lastBuildDate><atom:link href="https://essays.jdthorpe.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[John Thorpe]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[jdthorpe@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[jdthorpe@substack.com]]></itunes:email><itunes:name><![CDATA[John Thorpe]]></itunes:name></itunes:owner><itunes:author><![CDATA[John Thorpe]]></itunes:author><googleplay:owner><![CDATA[jdthorpe@substack.com]]></googleplay:owner><googleplay:email><![CDATA[jdthorpe@substack.com]]></googleplay:email><googleplay:author><![CDATA[John Thorpe]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[On Organizational Selection]]></title><description><![CDATA[How hybrid workflows stabilize, spread, and quietly reshape the operating logic of AI-enabled organizations]]></description><link>https://essays.jdthorpe.com/p/on-organizational-selection</link><guid isPermaLink="false">https://essays.jdthorpe.com/p/on-organizational-selection</guid><dc:creator><![CDATA[John Thorpe]]></dc:creator><pubDate>Wed, 18 Mar 2026 11:32:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/eef73a0e-8497-4c95-9b69-e3d8a127c336_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If the first three essays described the conditions under which AI-era organizations are changing, this fourth one asks a narrower but more consequential question: <strong>which of those changes actually endure?</strong></p><p>By this point in the argument, the underlying sequence looks something like this:</p><ol><li><p><strong>Molting</strong> describes how inherited organizational structures come under pressure as capability expands and historical boundaries fit the work less well</p></li><li><p><strong>Memory</strong> describes the continuity layer beneath that change: what allows adaptation to become cumulative rather than forgetful</p></li><li><p><strong>Agency</strong> describes the redistribution of initiation, routing, interpretation, and action across humans and systems</p></li><li><p><strong>Selection</strong> is what happens next: once hybrid human-system workflows begin to proliferate, some stabilize, some are abandoned, and some linger in an ambiguous half-life, never fully trusted yet never fully removed</p></li></ol><p>Over time, organizations do not merely accumulate new capabilities. They select among arrangements of people, agents, tools, workflows, and judgments. They decide, explicitly or not, which patterns deserve to survive.</p><p>That is the deeper point. 
The real question is not simply what AI systems can do. It is which human-AI configurations become institutionalized.</p><p>Capability is abundant compared to selection. Many things can be made to work, at least in some provisional sense. <em>Many organizations are now experiencing a proliferation (perhaps a blizzard) of point tools, local automations, and ad hoc agentic routines generated by individuals or teams to solve small or large problems, often without much structure in place to maintain coherence.</em></p><p>A model can retrieve, summarize, route, classify, recommend, draft, trigger, escalate, and increasingly act across connected systems. But the existence of a capability does not by itself determine whether it should become a routine, whether it should remain exploratory, whether it should be bounded tightly, or whether it should be suppressed altogether. In organizations, especially real ones with regulatory exposure, operational history, political constraints, and uneven tolerance for error, the harder question is not emergence. It is survival.</p><p><strong>Shifting the focus from capability to institutionalization</strong></p><p>Much discussion still treats implementation as though the organization were simply choosing whether to &#8220;use AI&#8221; in a workflow. But that framing is too coarse.</p><p>What actually emerges in practice is usually a proliferation of micro-arrangements: a model drafts the first pass but a human signs off; a model classifies documents and initiates a check before a reviewer intervenes; a model suggests an escalation but cannot execute it; a model operates autonomously in one narrow corridor but only in recommendation mode in another. These are not all the same thing. They are different architectures of judgment, accountability, speed, and memory. And once they appear, the organization begins, whether consciously or not, to sort among them.</p><p>Selection is therefore not <em>only</em> about performance. It is about fit.</p><p>A workflow survives when it is not merely possible, but tolerable, legible, economical, and sufficiently trusted inside the institution that surrounds it. Some workflows survive because they are genuinely robust. Others survive because they reduce cycle time enough that no one wants to give them up. Others because they align with managerial incentives, or because they offload labor in a way that appears efficient from a distance. Some survive because they are easy to measure. Some die because they are too politically uncomfortable, even if technically promising. Some die because they ask too much of the surrounding memory and governance architecture. 
And some, perhaps most dangerously, survive because they are just competent enough to avoid immediate scrutiny while quietly degrading judgment over time.</p><p>That last category deserves more attention than it usually receives.</p><p>In many organizations, the greatest danger is not catastrophic failure. It is the institutionalization of &#8220;good enough&#8221; in places where &#8220;good enough&#8221; gradually compounds into something much worse. A workflow that performs adequately at low volume, under the supervision of unusually attentive people, can become something very different once it is normalized, scaled, and passed into the hands of a broader organization with thinner oversight and different incentives. What was once an exploratory shortcut becomes a standard operating assumption. The organization no longer experiences the workflow as a choice. It begins to experience it as part of reality.</p><p><strong>Selection is not just a local implementation issue. It becomes part of how the institution evolves</strong></p><p>This is where the connection to the earlier essays becomes more important.</p><ul><li><p>Molting created the conditions under which old boundaries softened.</p></li><li><p>Memory determined whether the new patterns were cumulative or forgetful.</p></li><li><p>Agency redistributed the practical locus of action and interpretation.</p></li></ul><p>But once these elements begin interacting, selection determines the trajectory of the institution. The organization is not merely adapting. It is choosing its future operating logic, often through a long sequence of local stabilizations that no one ever quite names as strategy.</p><p>In that sense, selection is partly ecological. Workflows compete for survival within a constrained environment shaped by time pressure, tool availability, staffing, trust, auditability, management attention, and local norms. The workflows most likely to spread are not necessarily those with the deepest strategic value. They are often those that fit most easily into the surrounding substrate.</p><p>A workflow that saves fifteen minutes for a busy team and produces outputs that appear plausible may spread faster than one that preserves subtle institutional knowledge but requires careful review. A workflow that creates clean dashboards may win over one that captures richer rationale in messy form. A workflow that offers the appearance of standardization may be favored over one that exposes ambiguity more honestly. Selection does not always reward epistemic quality. Often it rewards ease of adoption under organizational constraints.</p><p>This matters especially in enterprise settings, where the path from experiment to routine is often less governed than leaders imagine.</p><p>A useful pattern may begin with a single team. Someone builds a prompt workflow, or an extraction layer, or a recommendation engine around a narrow task. It works well enough. It spreads informally. Another team adapts it. A third team operationalizes it through a connected tool surface. After a few months, an exploratory behavior has become an institutionally consequential one without ever passing through a clean moment of explicit design. By the time leadership notices, the workflow is already embedded in expectations, timelines, and dependencies. 
Selection has already occurred.</p><p><strong>At that point, another distinction becomes necessary: the difference between what a system can do and what an organization should stabilize</strong></p><p>That is one reason emergent capability and selected capability need to be distinguished clearly. Emergent capability refers to what a system can do under certain conditions, including things not explicitly designed into it. Selected capability refers to what the organization decides, intentionally or otherwise, to stabilize into routine use.</p><p>The distinction matters because not every emergent behavior should be operationalized.</p><p>Some should remain exploratory because they are informative but not yet trustworthy. Some should be pressure-tested further to understand their boundary conditions. Some should be instrumented heavily before any wider use. Some may reveal useful latent potential that deserves deliberate cultivation. Others may be precisely the sort of seductive but brittle behavior that should never cross into production.</p><p>Organizations are not always adept at making these distinctions with consistency or rigor. Historically, those judgments unfolded more slowly because the cost of building the underlying software infrastructure was high, creating natural friction in the form of review, approval, and governance layers. <em>AI is rapidly dissolving that friction.</em> As experimentation becomes cheaper and deployment faster, organizations can move from capability discovery to operational use before they have adequately determined whether a behavior is trustworthy, bounded, and suitable for institutional adoption.</p><p>When a system demonstrates a surprising capability, the natural reaction is often one of excitement or opportunism: if it can do this, perhaps we should put it to work. But capability discovery is not operational validation. The fact that a reasoning model can generalize across a tool environment, infer structure, or produce an apparently sophisticated recommendation does not tell us enough about when it will fail, how it will degrade under scale, whether it will amplify bias in the surrounding workflow, or what kinds of organizational reasoning it may quietly displace.</p><p>Selection requires a more disciplined posture than discovery. It requires asking not only whether a system can do something, but what kind of institutional consequences follow if it does that thing repeatedly, at volume, under ordinary rather than ideal conditions.</p><p><strong>This becomes even sharper in regulated and judgment-heavy environments, where selection is never just technical</strong></p><p>Consider domains where documentation quality varies, exceptions matter, and the costs of drift are not always immediately visible. A model may appear highly effective at synthesizing supplier information, surfacing risk patterns, or initiating downstream checks. In many cases it may be effective. But the key question is not whether it performs impressively in isolated cases. The key question is what kind of routine is being selected.</p><p>Is the organization selecting a pattern in which the model handles triage and humans calibrate the ambiguous edge cases? Is it selecting a pattern in which human review exists mostly as a thin ceremonial layer over model-generated interpretation? Is it selecting for speed over articulation, standardization over nuance, or action over reflection? These are not technical details. 
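They are institutional design choices, even when no one names them that way.</p><p>One way to make such a choice nameable is to write it down. What follows is a minimal sketch, in Python, of what an explicit selection record might look like; the names here (<code>WorkflowPolicy</code>, the status values, the triage example) are hypothetical illustrations, not an existing system.</p><pre><code class="language-python"># A hypothetical way to name a selection decision explicitly: each stabilized
# workflow carries a declared policy rather than an implicit one.
from dataclasses import dataclass

@dataclass
class WorkflowPolicy:
    name: str
    autonomy: str          # "autonomous" | "recommend" | "draft_only"
    review_points: list    # where a human must sign off
    reversible: bool       # can the action be cheaply undone?
    escalation_path: str   # who is accountable when it misfires
    status: str = "exploratory"  # "exploratory" | "bounded" | "operational"

# The triage pattern described above, written down as a choice rather than
# left to drift into default:
triage = WorkflowPolicy(
    name="supplier-document-triage",
    autonomy="recommend",               # the model proposes; it does not act
    review_points=["ambiguous_cases"],  # humans calibrate the edge cases
    reversible=True,
    escalation_path="quality-lead",
)</code></pre><p>The particular fields matter less than the act of declaration: autonomy, review, reversibility, and accountability become stated properties of a workflow instead of accidents of adoption.</p>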
<p>This also means selection is not reducible to trust in the model itself. Organizations often speak as though the central question is whether the system is trustworthy enough. But workflows are not selected on the basis of model quality alone. They are selected through a bundle of interacting considerations: the reversibility of the action, the visibility of the failure mode, the availability of human expertise, the economic pressure to compress cycle time, the tolerance for false positives versus false negatives, the clarity of escalation paths, and the extent to which the organization can capture and learn from mistakes.</p><p>A brittle workflow may be acceptable in a reversible, low-stakes domain with rich feedback. A more accurate workflow may be unacceptable if its reasoning cannot be inspected or if failures are too costly to diagnose. The real unit of selection is not &#8220;the model.&#8221; It is the entire human-system arrangement.</p><p><strong>That in turn leads to a governance question: not how to add oversight after the fact, but how to create conditions in which better workflows are more likely to survive.</strong></p><p>This is where oversight needs to be understood differently. Oversight is often imagined as a layer placed on top of an automated process, a final checkpoint between system action and institutional risk. But that image is too static. In practice, oversight is part of the selective environment itself.</p><p>Workflows that require an unrealistic amount of review will not survive, regardless of principle. Workflows that cannot expose enough rationale for meaningful inspection may survive for a while, but they do so by externalizing hidden risk. Workflows that allow selective, high-leverage review at the right points in the process are more likely to stabilize well.</p><p>The challenge, then, is not merely to add humans back into the loop. It is to design selective conditions in which the right kinds of workflows are more likely to survive.</p><p>Memory matters here again. The organization cannot select well if it cannot remember why a workflow was adopted, where it performs poorly, what exceptions repeatedly arise, or which human interventions meaningfully improved the output. Without this, selection becomes path dependent in the worst way.</p><p>The workflow that spreads first or integrates most easily becomes the default, regardless of whether it is actually the best arrangement. Weak institutional memory turns early convenience into long-term lock-in. Strong memory allows the organization to revisit and refine what it has selected. It makes stabilization more intelligent and less accidental.</p><p><strong>And selection does not merely preserve workflows. It changes the environment in which future workflows will be judged</strong></p><p>This is particularly important because selection is not a one-time event. It is recursive. Once a workflow is selected, it changes the conditions under which future workflows are evaluated. A model that takes over first-pass triage changes what humans pay attention to, what gets documented, what skills remain sharp, and what the organization begins to treat as normal response time.</p><p>A workflow that compresses articulation into summary may increase throughput, but it may also reduce the stock of interpretable rationale available for future training, governance, or learning. 
A workflow that routes fewer edge cases to experienced humans may appear efficient while quietly eroding the embodied expertise that once made escalation meaningful. Selection does not merely choose among workflows. It reshapes the environment in which subsequent selection occurs.</p><p>That, in turn, creates a more sobering possibility. Some workflows may be selected not because they preserve or improve institutional judgment, but because they consume the conditions necessary for better alternatives to emerge. Once a certain arrangement has scaled, it may crowd out the slower, richer, more interpretive patterns from which a more resilient system might have been built. The organization becomes adapted to a thinner form of reasoning because it is cheaper, faster, and easier to distribute. Over time, what is lost is not only quality in any immediate sense, but the institutional capacity to recognize what kind of quality has been lost.</p><p><strong>Selection should be treated as a governance question at least as much as an operational one</strong></p><p>Organizations need a way to distinguish between workflows that are merely efficient and workflows that are worth stabilizing. They need to ask which human-system arrangements improve judgment, which merely accelerate action, and which quietly relocate risk into less visible places. They need to decide what remains exploratory, what graduates into bounded operational use, and what should be actively prevented from hardening into routine. And they need to do so with some humility, because the selection pressures inside real institutions are rarely clean. Economics, politics, fatigue, tool design, and managerial appetite all shape what survives.</p><p>One useful way to think about this is to borrow, carefully, from evolutionary language without pretending the analogy is complete. Variation is now abundant. AI systems make it easier to generate many possible workflow configurations, some explicitly designed, others discovered in use. Selection determines which of these become durable. Retention occurs through memory, process, tooling, training, incentive structures, and integration into broader operating routines.</p><p>In that sense, organizations are not just adopting AI. They are evolving new composite forms of work. The danger is that selection may optimize locally for speed, convenience, or optics while degrading the deeper qualities that make institutions resilient: judgment, interpretability, principled escalation, and the ability to learn from exceptions.</p><p>So the practical challenge is not to prevent selection. Selection is unavoidable. The challenge is to become more deliberate about it. 
That means:</p><ul><li><p>Treating exploratory workflows as provisional until they have been evaluated under realistic conditions</p></li><li><p>Instrumenting not only outputs but failure modes, reversibility, escalation behavior, and downstream memory effects</p></li><li><p>Distinguishing between places where autonomy is useful, places where recommendation is sufficient, and places where the primary role of the system should be to enrich human judgment rather than substitute for it</p></li><li><p>Noticing when &#8220;temporary&#8221; workarounds are becoming de facto policy</p></li><li><p>Recognizing that the workflows which most deserve to survive may not always be the ones that spread most naturally on their own</p></li></ul><p><strong>From there, the final implication is strategic</strong></p><p>The organizations that navigate this period best are unlikely to be those that simply expose the largest action surface to AI systems or operationalize every newly discovered capability as quickly as possible. They will be the ones that develop a disciplined selective logic: a way of deciding which forms of hybrid agency create cumulative advantage and which merely create hidden fragility.</p><p>In some cases that will mean accelerating aggressively. In others it will mean slowing down long enough to understand what is being stabilized. In still others it will mean preserving certain domains of human interpretation not because machines are incapable, but because the institutional costs of thinning that layer are too high.</p><p>The earlier essays argued that organizations must learn to molt without mistaking movement for adaptation, remember without turning memory into brittle centralization, and distribute agency without simply deferring judgment to probabilistic systems. Selection brings these concerns into a sharper frame. Once new workflows appear, the question becomes which arrangements of structure, memory, and agency will endure.</p><p>Some will become indispensable. Some will prove deceptively competent. Some should never have survived first contact with reality.</p><p>The future of AI-era organizations may depend less on whether they can generate new forms of work than on whether they can select among them wisely. Because once hybrid agency becomes real, the institution is no longer just changing. It is choosing what it becomes.</p>]]></content:encoded></item><item><title><![CDATA[On Agency and Decision Routing]]></title><description><![CDATA[How judgment, feedback loops, and emergent capability reshape decision-making in AI-enabled organizations]]></description><link>https://essays.jdthorpe.com/p/on-agency-and-decision-routing</link><guid isPermaLink="false">https://essays.jdthorpe.com/p/on-agency-and-decision-routing</guid><dc:creator><![CDATA[John Thorpe]]></dc:creator><pubDate>Sun, 08 Mar 2026 13:04:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/81c95067-d507-4ca9-8c9f-bc0ae17c5781_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>In my first two posts, I explored two layers of organizational adaptation under AI acceleration. The first was structural. I argued that SaaS organizations are beginning to undergo a kind of molting as role boundaries soften, execution compresses, and the historical shells that once organized software work no longer fit the pace or nature of capability now sitting at the fingertips of talent.</p><p>The second was more internal. I argued that under acceleration, organizational memory becomes more of a metabolic concern. Institutions do not metabolize change through structure alone. They metabolize it through memory, or through what they retain, reinterpret, and carry forward. The risk emerging here is a kind of divestiture of organizational knowledge into continuous AI-mediated decision-making, especially where automation or convenience short-circuits traditional routes of contextual judgment and replaces them with decontextualized &#8220;good enough&#8221; responses at scale.</p><p>But there is another layer sitting between structure and memory that feels increasingly important to understand, particularly as reasoning models begin to interact more directly with systems and workflows. That layer is agency. More specifically, it is the question of how agency gets distributed as AI moves from being a passive object of use to an increasingly active participant in the completion and routing of work. 
This becomes especially consequential in vertical software domains where decision points were never truly binary or deterministic to begin with &#8212; put differently, where judgment, interpretation, and contextual tradeoffs were always part of the workflow.</p><p>That sentence probably needs immediate qualification, because &#8220;agency&#8221; is one of those words that expands too quickly if left unattended. I am not using it here in the strongest philosophical sense, as though models possess intention, consciousness, moral standing, or some mystical inner life. I mean something narrower and more organizationally relevant. I mean the practical locus of initiation, routing, interpretation, escalation, and action.</p><ul><li><p>Who, or what, decides what happens next?</p></li><li><p>What is allowed to interpret signals?</p></li><li><p>What is allowed to initiate movement across systems?</p></li><li><p>What is allowed to decide that ambiguity is tolerable, or that it is time to escalate to a human?</p></li></ul><p>In the context of modern organizations, these questions matter more than whether one wants to grant AI &#8220;real&#8221; agency in some metaphysical sense. If a system can synthesize data, trigger a workflow, and selectively decide when to invoke a human network, then something meaningful about agency has shifted, even if one ultimately insists that humans remain legally, ethically, and institutionally responsible for the result. This is especially true for decisions that involve second- or third-order complexity, non-predictable cognition, and domains where there is neither a high concentration of standardized data nor the institutional scale required to build highly specialized automation architectures in every workflow.</p><p>Historically, this question was comparatively simple. Even in highly digitized enterprises, agency was still mostly human-routed. Humans initiated workflows. Humans interpreted context. Humans decided when a deviation mattered, when a supplier looked risky despite technically compliant documentation, when a formulation result was interesting enough to pursue, or when a quality signal merited waking up the right person. Software systems, even quite sophisticated ones, were generally subordinate in this arrangement. They stored artifacts. They executed deterministic logic. They preserved outcomes. They did not meaningfully participate in deciding which latent possibilities within the system were worth pursuing.</p><p>There are, of course, important exceptions. Data science and machine learning have long supported higher levels of automation through pattern recognition in domains such as credit risk or autonomous driving. But those are generally environments with enormous scale, highly specialized training regimes, and substantial confidence-building infrastructure. Many of the workflows now being automated with reasoning models are different. They are not narrowly machine-learned from massive scale in the classical sense. They rely more on probabilistic reasoning applied to messy, judgment-heavy contexts. In that sense, what is emerging here is not just automation. <em>It is applied judgment at scale</em>.</p><p>It is therefore incredibly important that companies have insight into how and why agents are making decisions, including what tools they are using and in what way. Otherwise, organizations are not really automating judgment so much as deferring it to probabilistic systems without meaningful oversight.</p><p>This distinction matters. 
Traditional enterprise software could contain an extraordinary amount of logic and still not really alter the locus of agency. A C# application with hundreds of layers of hardcoded business rules could encode institutional preferences, guardrails, and compliance logic, but the system still operated inside a narrow corridor of explicit design. It did what it had been told to do. It did not meaningfully discover new affordances inside its own architecture. The organization still supplied the routing intelligence.</p><p>That is part of what is beginning to change.</p><p>As reasoning models improve and, more importantly, as they gain access to tools, the system begins to participate in the routing layer itself. It no longer merely answers questions about work. It starts to influence the sequence of work. That influence may be modest at first. A model retrieves a set of documents, synthesizes the signal, and recommends a next step. But even that is already different from the historical pattern, because the first-order interpretive layer is no longer exclusively human. And once the model can call tools, query systems, invoke downstream workflows, extract data from documents, chain actions, or escalate selectively, the pattern becomes more consequential.</p><p>At that point, the workflow is no longer best described as human &#8594; system &#8594; human. In many cases, it is more accurate to describe it as AI &#8594; system &#8594; AI, or AI &#8594; human &#8594; system, or even AI &#8594; system &#8594; human &#8594; AI, with the human entering only where ambiguity, risk, or policy requires it.</p><p>In my world, for example, in some cases the human may disappear from the immediate loop altogether. A model analyzes incoming risk data and triggers a deterministic risk workflow before any operator reviews it. A document extraction pipeline interprets supplier data and initiates compliance checks. A system classifies signals, sequences actions, and then returns to the model for further synthesis. None of this requires a science-fiction notion of AI personhood. It only requires accepting that the practical routing of work is no longer fully human-mediated.</p><p>That is why I think the most interesting organizational shift is not simply that AI helps people think faster, but that AI is beginning to sit in the place where people historically mediated between systems, data, and each other.</p><p>Put more plainly, a new intermediary has entered the organization, and in many contexts it is becoming the default intermediary because it is simply more efficient and good enough, <em>or presumed good enough</em>.</p><p>That efficiency is not incidental to the risk. <em>It is the risk.</em> When something becomes the path of least resistance for interpretation and action, it quietly absorbs authority whether or not the organization has formally granted it.</p><p>The deeper problem is that once this intermediary becomes efficient enough, &#8220;probably good enough&#8221; can quietly become operationally sufficient. In systems with large action surfaces, large decision sets, and relatively small human oversight layers, there is often no efficient way to validate every judgment. Precision can degrade not because anyone explicitly accepted lower standards, but because the scale of action outpaces the scale of review. In regulated environments, that is not a trivial tradeoff. 
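It creates a real need for guardrails, recursive QA mechanisms, and much better visibility into how and why agentic systems are making decisions.</p><p>As a rough illustration of what that visibility could look like, here is a minimal sketch in Python, assuming a hypothetical agent whose tool calls can be wrapped; the function names and the sampling scheme are illustrative, not a prescription.</p><pre><code class="language-python"># A minimal sketch of tool-call auditing: every action an agent takes leaves
# a trace with its stated rationale, and a random slice is routed to human QA.
import random
import time

AUDIT_LOG = []          # in practice: durable, queryable storage
QA_SAMPLE_RATE = 0.05   # recursive QA: review a sample of ordinary actions

def audited(tool_name, tool_fn):
    """Wrap a tool so each invocation is recorded before it executes."""
    def wrapper(rationale, **kwargs):
        AUDIT_LOG.append({
            "ts": time.time(),
            "tool": tool_name,
            "args": kwargs,
            "rationale": rationale,  # the agent must say why
            "qa_review": random.random() &lt; QA_SAMPLE_RATE,
        })
        return tool_fn(**kwargs)
    return wrapper

# Example: a compliance check the agent is allowed to trigger.
check_supplier = audited(
    "check_supplier",
    lambda supplier_id: {"supplier_id": supplier_id, "status": "queued"},
)
check_supplier("document dates conflict with audit window", supplier_id="S-114")</code></pre><p>The point of the sketch is the shape: action, rationale, and review probability live in the same record, so &#8220;how and why&#8221; is answerable after the fact rather than reconstructed from memory.</p>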
<p>This is one of the reasons I am increasingly interested in the concept of decision routing rather than the more generic language of assistance or copiloting. &#8220;Copilot&#8221; is too flattering and too vague. It implies a bounded and intelligible division of labor. But in practice, what we are seeing in many environments is less tidy. The model does not just sit beside the operator waiting to be consulted. It increasingly sits between people and systems, between systems and systems, and sometimes between people and each other. The model becomes part of the routing fabric through which knowledge, work, and judgment move.</p><p>Once that happens, the architecture of tools becomes inseparable from the architecture of agency. Whatever tools an agent can call defines, in practical terms, the surface area of its possible action. The tools are not just conveniences. They are the affordances from which latent capability emerges.</p><p>This is where some of our work in Formulation AI has become especially interesting to me. As we have been testing agents with tools, one of the more useful instincts has been not simply to validate the workflows we explicitly designed, but to pressure test what the system might do that we did not explicitly solve for.</p><p>That sounds trivial when stated abstractly, but it represents a very different epistemic posture than traditional enterprise software testing. Instead of asking whether a known function works correctly, the question becomes what general capabilities emerge when a very strong reasoning system is given access to these tools, data structures, and operational surfaces. I increasingly think these frontier models need safe environments in which both opportunities and risks can be explored before they are normalized into production behavior.</p><p>A simple example would be asking the system to find all ingredients in a recipe rather than naming one explicitly. Perhaps everything we originally built assumed single-ingredient replacement. But we never explicitly limited the agent in that respect. Once the tools are available, the scope of what the system can attempt becomes only partially deterministic.</p><p>In our case, superficially this looks like a small prompt variation. In practice, it is probing several deeper things at once. Can the system infer the structure of the recipe? Can it generalize beyond a named entity to a class of entities? Can it chain retrieval, parsing, and synthesis without being told exactly how? Can it use the tool environment as a substrate for abstraction rather than just execution? These are not just product questions. They are questions about the extent to which the system is behaving as a deterministic wrapper around known logic versus a reasoning actor operating within a data-rich and affordance-rich environment.</p><p>We have observed second-order behaviors as well. In some cases, the agent begins offering follow-on tasks for which it does not actually possess the tools &#8212; for example, suggesting that it contact a supplier and retrieve samples. Even where the action cannot yet be executed, the system is already reasoning one step beyond its formal affordance surface. 
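That is useful as a signal, but it also reinforces the point: <em>intended capability and emergent capability are not the same thing.</em></p><p>The shape of that kind of probe can be sketched in a few lines of Python. The harness below is a stub standing in for a real tool-calling agent; the tool names and prompts are hypothetical, not our actual test suite.</p><pre><code class="language-python"># Capability probing: rather than asserting that one known function works,
# observe which tools the agent reaches for when the request is broader than
# anything explicitly designed. StubAgent stands in for a real agent.
ALLOWED_TOOLS = {"get_recipe", "find_substitutes", "update_formulation"}

class StubAgent:
    """Stand-in for a tool-calling reasoning model."""
    def run(self, prompt):
        # A real agent chooses its own calls; the stub illustrates the shape.
        if "all ingredients" in prompt:
            return ["get_recipe", "find_substitutes",
                    "contact_supplier"]  # proposed, but no such tool exists
        return ["find_substitutes"]

def probe(agent, prompt):
    used = set(agent.run(prompt))
    return {
        "prompt": prompt,
        "tools_used": sorted(used &amp; ALLOWED_TOOLS),
        "out_of_surface": sorted(used - ALLOWED_TOOLS),  # emergent reach
    }

# The designed case, then the generalized case no one specified:
print(probe(StubAgent(), "Replace the stevia in recipe R-201"))
print(probe(StubAgent(), "Find all ingredients in R-201 and suggest swaps"))</code></pre>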
<p>In classic software, the scope of meaningful behavior was largely bounded by what had been explicitly engineered. With tool-using agents, it becomes increasingly possible for the system to exhibit useful behaviors that no one directly specified but that nevertheless become possible because of the interaction between model capability and tool surface. That is exciting, obviously. It is also a governance problem.</p><p><em>Once a capability can emerge from architecture rather than from a product requirement, then the question of what the system can do becomes partly empirical. One has to discover the capability surface, not just design it.</em></p><p>This is where I think a lot of current discourse still undershoots the real issue. Much of the debate about AI in organizations still assumes a relatively stable relationship between task, workflow, and system. Either the system can do the thing or it cannot. Either the human retains control or the model is &#8220;autonomous.&#8221; But in practice, a more interesting and difficult reality is emerging.</p><p>The system may be able to do something useful that no one explicitly planned for, but only under certain conditions, and only if the right tools are exposed, and only if the organizational tolerance for emergent action is sufficiently high. That is not a classic automation problem. It is closer to capability cartography. One is mapping the boundary of delegated agency inside a sociotechnical environment.</p><p>At that point, the governance questions become unavoidable.</p><ul><li><p>When should the system be permitted to act?</p></li><li><p>When should it escalate?</p></li><li><p>When should it recommend?</p></li><li><p>When should it be prohibited from generalizing?</p></li><li><p>Who owns the risk when a system-initiated action was logically available but organizationally undesirable?</p></li><li><p>Who decides whether an emergent capability should remain exploratory, become operationalized, or be actively suppressed?</p></li></ul><p>These are not just questions of safety or compliance. They are questions of institutional design.</p><p>They also require visibility. If organizations cannot inspect how agents are arriving at decisions, which tools they are invoking, and what reasoning paths they are implicitly following, then they are not really automating judgment. They are deferring it to probabilistic systems without meaningful oversight.</p><p>This is also where the relationship to the previous essay on memory becomes more than thematic. Agency and memory are not separate topics. They are entangled. Every routing decision has memory consequences. Every time an agent handles a task without human involvement, the organization gains efficiency but risks losing an opportunity for articulated human interpretation.</p><p>Every time an agent escalates to a human, the system has a chance to absorb not just an answer but a rationale. The real question is whether the architecture captures that rationale in a way that compounds institutional knowledge, or whether the insight disappears into ephemeral conversation and the system remains permanently dependent on intermittent human rescue.</p><p>In that sense, agency design and memory design are mutually reinforcing. The more the system routes, the more critical it becomes to determine what happens when the routing hits a human. 
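Is that human reasoning merely instrumental, used to get through the moment? Or does it become part of the institution&#8217;s evolving memory substrate? If the latter, then the system can begin to compound judgment over time. If the former, then the system may accelerate output while quietly hollowing out the very context that made good decisions possible.</p><p>The difference can be made concrete with a small sketch, assuming a hypothetical escalation path; the store and the matching rule are illustrative stand-ins for a real memory substrate.</p><pre><code class="language-python"># A minimal sketch of escalation that keeps the rationale, not just the
# answer, so a later, similar case can surface the earlier judgment.
MEMORY = []  # stand-in for a durable institutional memory substrate

def escalate(case_tags, decision, rationale=None):
    """Route an ambiguous case to a human; decide what survives of it."""
    record = {"tags": set(case_tags), "decision": decision}
    if rationale:
        record["rationale"] = rationale  # the part that compounds
    MEMORY.append(record)
    return decision

def recall(tags):
    """Surface prior rationales before re-escalating a similar case."""
    return [r for r in MEMORY
            if r.get("rationale") and set(tags) &lt;= r["tags"]]

escalate({"supplier", "late_docs"}, "hold shipment",
         rationale="docs technically compliant, but a pattern of lateness")
print(recall({"late_docs"}))  # the rationale returns with the judgment</code></pre><p>Drop the <code>rationale</code> argument and the same code still runs; the organization simply stops learning from its own escalations.</p>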
<p>This is why I do not think the real question is whether AI will replace human judgment. That frame is too blunt, and in practice it obscures the more interesting design problem. The better question is where human judgment should sit inside increasingly automated routing systems.</p><p>Retrieval, synthesis, pattern detection, extraction, and workflow initiation can increasingly be handled by systems. Interpretation, exception handling, risk calibration, political sensitivity, patience, restraint, and broader contextual perspective remain much harder to formalize. It is not that humans should keep all the interesting work, nor that systems should inherit all the repetitive work.</p><p><em>It is that the organization has to decide where judgment has the highest leverage, and then architect feedback loops so that when judgment is invoked, the resulting insight does not vanish.</em></p><p>That is also why I think the language of &#8220;human in the loop&#8221; is starting to become insufficient. In some environments, especially highly automated ones, the human will not be continuously in the loop. The more useful question is whether the human is appropriately in the routing architecture, and whether the system knows when to surface embedded institutional knowledge, when to route to a human network, and when to capture the resulting reasoning back into memory.</p><p>In these environments, organizational performance may depend less on knowing who to ask and more on how well routing decisions are designed across humans and systems. That is not simply a retrieval problem. It is an agency design problem.</p><p>And once agency becomes distributed in this way, another process begins to matter. Some human-system arrangements will prove reliable and scalable. Others will prove fragile, noisy, or deceptively competent. Some patterns of routing will become institutionalized. Others will be discarded. In other words, once agency becomes hybrid, selection begins.</p><p>That is probably the next layer of the story. For now, what seems most important is recognizing that AI is not merely making individual workers faster. It is beginning to change how decisions move, how authority gets exercised, and how action gets initiated inside organizations.</p><p>The companies that navigate this well will not simply be those that automate the most tasks. They will be the ones that understand where agency is shifting, where it should remain human, where it can safely become systemic, <em>and how to ensure that the resulting flows of judgment, action, and memory actually compound over time.</em></p>]]></content:encoded></item><item><title><![CDATA[On Institutional Knowledge and Metabolism]]></title><description><![CDATA[AI is not just accelerating execution; it is reshaping how institutional knowledge is retained, routed, and applied.]]></description><link>https://essays.jdthorpe.com/p/on-institutional-knowledge-and-metabolism</link><guid isPermaLink="false">https://essays.jdthorpe.com/p/on-institutional-knowledge-and-metabolism</guid><dc:creator><![CDATA[John Thorpe]]></dc:creator><pubDate>Sun, 01 Mar 2026 22:49:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0eb91126-f545-496f-adbd-fc4803d16a79_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>In my first post, I suggested that AI acceleration is pushing SaaS organizations into a kind of structural molting. Roles blur. Boundaries soften. Execution compresses. Institutions shed shells that no longer fit the pace of capability now at the fingertips of their talent.</p><p>But structure is only one part of the story.</p><p>Beneath structure there are forces that are more difficult to characterize and quantify: memory, intuition, and what many organizations refer to as institutional knowledge (or &#8220;tribal knowledge&#8221;). These are the components of what often shows up as &#8220;good business sense&#8221; in practice.</p><p>If organizations are going to operate under continuous technical acceleration, the question is not <em>only</em> how they reorganize. It is how they remember, persist intuition, and apply tribal knowledge. Without developing mechanisms to continuously accrete these types of knowledge into LLM context or training, talent using AI tools operates increasingly in knowledge silos - making discrete decisions <em>or delegating discrete decisions to agents </em>that, in the aggregate, may erode historical advantages conferred by decisions informed by institutional knowledge.</p><p>When architected or applied poorly, AI can fundamentally erode institutional knowledge as reliance on agentic intermediaries increases.</p><p>However, when architected and applied right, AI holds the promise of significantly optimizing the management, preservation, and diffusion of organizational knowledge. 
Increasingly sophisticated agentic memory management and context injection tools can help ensure that knowledge doesn&#8217;t remain siloed, even when applied in more autonomous settings. I think AI will also play a major role in both knowledge discovery and continuous characterization - or surfacing knowledge that appears to be emerging from patterns of interaction or large data sets that historically may have gone unnoticed or required significant investment in post-hoc data science analysis.</p><p>Importantly, like my previous note on molting, the key will be building, analyzing, and constantly refining over time the structures through which organizational knowledge <em>can be </em>both captured <em>and </em>diffused through AI. It must emerge from everyday work and the systems we use, not from something that feels clunky or artificially imposed.</p><p>This manifests itself in important ways - there has already been a shift toward greater reliance on AI for decision-making in nearly all areas where people make decisions in organizations. As I mentioned in my previous post, this shift was largely organic - not mandated. On-demand super-intelligence became simply <em>easier </em>to access and involved less friction. Anecdotally, this pattern appears to have accelerated as reasoning-focused models improved output quality and reduced iterative friction.</p><p>In practice, this inserts a new intermediary, AI, between traditional mechanisms of shared intuition and decision-making. Conversations, meetings, and Slack threads have historically routed experience directly between people and teams. Increasingly, that routing passes first, or entirely, through an AI model. At the operator level, it is simply more efficient, and that efficiency is part of the risk.</p><p>So one of the questions becomes:</p><ul><li><p>How does historical and evolving institutional knowledge disperse into new forms of decision-making within new forms of organization?</p></li></ul><p>To see why this matters, it helps to look at examples of where institutional memory actually lives. I&#8217;ll provide examples in a context that I&#8217;m familiar with: Food and Beverage manufacturing.</p><p><em><strong>Procurement</strong></em></p><p>In the context of procurement, institutional knowledge has long resided in personal networks. A seasoned buyer&#8217;s &#8220;rolodex&#8221; was never just a list of suppliers. It encoded judgment. Who delivers under pressure. Who flexes on minimum order quantities. Who quietly substitutes spec-adjacent materials. Who passes audits but struggles when timelines tighten. Much of this never entered a formal system. It lived in accumulated experience.</p><p>At TraceGains, we built platforms like Gather Marketplace to try to formalize elements of the &#8220;rolodex&#8221;. Supplier documentation practices, performance signals, network visibility, capturing conversations, creating shared workspaces for shared analysis - all of these reduced reliance on purely personal recall.</p><p>These tools were, in part, mechanisms to support institutional knowledge gain and retention, but were never meant to replace it. Institutional knowledge emerges from accumulated personal and team experience interacting with evolving market dynamics. 
Traditional SaaS platforms can only indirectly capture and surface fragments of this.</p><p>Procurement offers one lens, but formulation reveals the same dynamic.</p><p><em><strong>Formulation</strong></em></p><p>In food science, institutional memory often takes the form of knowledge derived from long histories of trial and error. A stabilizer that failed under shear in 2017. A protein substitution that technically worked but altered mouthfeel enough to trigger consumer complaints. A reformulation rejected not for technical reasons, but because it conflicted with a customer&#8217;s unspoken preference.</p><p>The spreadsheet records the outcome. The scientist remembers the rationale.  That memory is critical to reduce re-work and costly R&amp;D development errors.</p><p>Even when documentation exists, the layered reasoning behind acceptance or rejection is rarely captured in full. The &#8220;why&#8221; remains distributed across people, email threads, and informal discussions&#8230; In food science, in many cases literal paper notebooks.</p><p><em><strong>Quality and safety</strong></em></p><p>Quality leaders carry what might be called institutional texture. The auditor who fixates on supplier verification. The near miss that almost became a recall. The supplier whose documentation is technically compliant but consistently late. These experiences shape escalation thresholds and risk posture long after the event is archived.</p><p>Such knowledge is not easily reducible to fields in a database.</p><p><em>Historically, organizations have distributed institutional knowledge through people rather than silicon systems. Some information is encoded in databases, documentation, or conversations. But the deeper layers&#8230; the second- and third-order interpretations about what those facts mean, when they matter, and how they interact&#8230; tend to accumulate organically through lived experience. Even where formal data science insights exist, they typically inform institutional judgment rather than replace it. The interpretation, prioritization, and contextual weighting of information still emerges through human interaction over time.</em></p><p>There is a concept in organizational psychology known as transactive memory (a concept I encountered while researching this post). Very briefly, transactive memory asserts that teams do not store all knowledge within each individual. Instead, they maintain a shared understanding of who knows what. One person understands allergen regulation. Another remembers the failed co-man trial from five years ago. Another knows which supplier consistently underestimates lead times.</p><p>The strength of the organization lies not only in expertise, but in the reliability of this cognitive routing system.</p><p>AI complicates this arrangement by partially or entirely disintermediating traditional cognitive routing systems.</p><p>As I posited in my previous blog post, this disintermediation is happening whether organizations want to acknowledge it or not.  So, accepting that, the question becomes how to develop cognitive routing systems that <em>include </em>AI.</p><p>When models can retrieve, summarize, and synthesize institutional data, they begin to occupy space within that transactive memory network. The routing question subtly shifts. 
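It is no longer only &#8220;Who knows this?&#8221; It becomes &#8220;Does the system know this?&#8221; And more importantly, &#8220;When should we trust it?&#8221;</p><p>A deliberately naive sketch of what an explicit routing table might look like follows; the topics, trust scores, and threshold are invented for illustration, and in practice would be earned and recalibrated over time rather than hand-assigned.</p><pre><code class="language-python"># A cognitive routing table that includes AI: for each topic, record where
# the knowledge lives and how much trust the system's answer has earned.
ROUTES = {
    "allergen_regulation": {"human": "regulatory_lead",  "system_trust": 0.9},
    "coman_trial_history": {"human": "senior_scientist", "system_trust": 0.4},
    "supplier_lead_times": {"human": "buyer_network",    "system_trust": 0.7},
}

def route(topic, trust_threshold=0.8):
    """Decide whether the system answers or a human network is engaged."""
    entry = ROUTES.get(topic)
    if entry is None:
        return ("human", "triage")  # never let unknown topics default to AI
    if entry["system_trust"] &gt;= trust_threshold:
        return ("system", topic)    # AI now occupies this memory slot
    return ("human", entry["human"])

print(route("allergen_regulation"))  # ('system', 'allergen_regulation')
print(route("coman_trial_history"))  # ('human', 'senior_scientist')</code></pre><p>However crude, making the table explicit forces the question of trust to be answered somewhere visible instead of settled by whichever path has less friction.</p>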
<p><em>To understand how AI affects organizational metabolism, we need to be more precise about what organizational memory actually is and where it resides.</em></p><p>Organizational memory is not a single thing. It has structure.</p><p>Some memory resides within individuals. This is experiential knowledge &#8212; the buyer who senses when a supplier&#8217;s tone signals risk, the scientist who remembers how a stabilizer behaved at scale, the quality lead who recalls the internal debate that nearly escalated into a recall. This form of memory is embodied and interpretive. It rarely appears in full documentation.</p><p>Some memory resides in systems. Databases, specification repositories, audit archives, supplier documentation portals. This is structured, queryable memory. It persists beyond tenure. It scales. It reflects what the organization chose to formalize.</p><p>Some memory resides in networks. The relationships between teams, suppliers, customers, and regulators. These networks encode knowledge that no single document contains. Knowing who to escalate to. Who can unblock a stalled project. Who interprets regulatory nuance conservatively.</p><p>And then there is a more diffuse layer: pattern memory. Observations that have not yet been synthesized into explicit policy. A sense that complaints spike after certain formulation adjustments. A quiet awareness that a supplier category consistently introduces documentation friction. These are emerging signals embedded in data and experience.</p><p>AI interacts differently with each layer.</p><p>It can retrieve structured system memory with precision. It can surface latent statistical patterns from data. It can approximate relational networks if sufficiently mapped. But embodied experiential knowledge remains difficult to capture unless it is deliberately externalized.</p><p><em>Once we recognize that organizational memory has structure, we can also see that it has failure modes.</em></p><p>Memory decays.</p><p>When knowledge resides primarily within individuals, it exits with them. Tenure becomes a proxy for continuity. Attrition becomes structural amnesia. The organization retains artifacts, but loses human or team interpretations of the meaning and significance of those artifacts.</p><p>Database memory persists longer. Specifications and audit records outlive people. But persistence is not coherence. Repositories accumulate artifacts without hierarchy. Information remains, but meaning fragments. This is especially true in SaaS contexts where high degrees of configurability complicate the <em>software&#8217;s </em>ability to derive meaning across customer sites and contexts.</p><p>There is also distortion.</p><p>AI systems summarize. They compress. They may privilege patterns that are statistically &#8220;loud&#8221; over those that are strategically consequential. A near miss that shaped internal behavior may carry less weight in the data than a routine compliance cycle repeated dozens of times. What was emotionally formative for a team may be numerically insignificant for a model.</p><p>Summarization introduces bias not through malice, but through abstraction. 
What is numerically dominant is not always strategically decisive.</p><p>And then there is centralization.</p><p>Organizations often respond to knowledge challenges by attempting consolidation (e.g., a single system of record, a unified repository, a master knowledge graph). Centralization promises clarity, but, as many can attest, these systems can be exceedingly difficult to maintain over time. Further, distributed memory has advantages. When expertise is spread across people and teams, it introduces friction, but also redundancy and resilience.</p><p>In AI-augmented environments, this balance becomes architectural rather than accidental. Total centralization risks brittleness. Total distribution risks inconsistency. Most organizations operate somewhere between the two, often without recognizing that this equilibrium determines their adaptive capacity.</p><p><em>I think these failure modes matter more under periods of acceleration like we&#8217;re experiencing now with AI.</em></p><p>In a slow-moving organization, rediscovering lessons is inefficient but survivable. Mistakes repeat at tolerable intervals. Institutional memory can be informal and still functional.</p><p>In a high-acceleration environment, poor memory compounds.</p><p>As AI compresses production cycles and lowers the cost of experimentation, iteration increases. Workflows recombine. Decisions happen faster. Under these conditions, institutional memory ceases to be archival. It becomes metabolic.  Organizations do not metabolize change through structure alone. They metabolize it through memory; what they retain, reinterpret, and carry forward.</p><p>If structural molting changes the outer shell of an organization, institutional memory determines what survives the shed. Weak memory turns change into reset. Strong memory makes change cumulative.</p><p>AI does not eliminate the need for expertise. It redistributes it. It becomes part of the memory architecture itself &#8212; influencing what is recalled, how it is summarized, and which patterns are surfaced.</p><p>The leadership question, then, is not whether AI should participate in knowledge workflows. It already does (for better or worse). The question is how memory is designed for <em>agentic application </em>in sustainable ways that allow for continuous evolution based on learned <em>experiences</em>.</p><ul><li><p>What knowledge should be retained for context and training?</p></li><li><p>How can that knowledge be captured, interpreted, and/or codified?</p></li><li><p>What should remain distributed?</p></li><li><p>Where must humans validate interpretation?</p></li><li><p>How should AI systems be embedded into decision loops?</p></li></ul><p>These are not tooling decisions. They are institutional ones.</p><p>If organizational performance once depended, in part, on knowing who to ask, it may now depend on how well routing decisions are designed across humans and systems. In highly automated or co-piloted environments, the system itself increasingly determines when to leverage embedded institutional knowledge and when to surface or escalate to human networks. The challenge is ensuring that when human judgment is engaged, the resulting insight is incorporated back into institutional memory rather than remaining isolated.</p><p>That is not simply a retrieval problem.</p><p>It is a design problem. 
<p><em>Poorly designed AI layers compress decision cycles but hollow out memory.</em></p><p><em>Well-designed AI layers compound institutional memory and increase signal quality.</em></p><p>The organizations that win will not be those that automate the fastest, but those that remember best while they accelerate.</p>]]></content:encoded></item><item><title><![CDATA[On Organizational Molting]]></title><description><![CDATA[AI Acceleration and the Emergent Restructuring of Work]]></description><link>https://essays.jdthorpe.com/p/on-organizational-molting</link><guid isPermaLink="false">https://essays.jdthorpe.com/p/on-organizational-molting</guid><dc:creator><![CDATA[John Thorpe]]></dc:creator><pubDate>Sat, 28 Feb 2026 13:20:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/23b4ba2d-9f4b-40e4-b24c-98f2c8fa15d6_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;m living in the chaotic middle of the so-called SaaSpocalypse, leading an AI and knowledge engineering team that is helping to evolve a legacy SaaS platform while, at the same time, developing new AI-native platforms.</p><p>I&#8217;m starting to write more to force myself to step back from the day-to-day turbulence of innovating during what feels like a major technical phase change and take time to observe and reflect. How are jobs changing? How are teams changing? What does that tell us about emerging forms of organization around software development?</p><p>To start, like everyone else, we are in the thick of the same difficult questions many teams are facing, for example:</p><ul><li><p>Which emerging AI capability sets fit into legacy platforms to transform existing workflows?</p></li><li><p>Which capabilities can only be realized on AI-native architectures?</p></li><li><p>What AI tools should our teams deploy? How does that affect historical processes? 
What level of trust can we place in them?</p></li></ul><p>I try to anchor my thinking in systems. What has been most interesting over the last couple of years is how the transformation of labor across traditional SaaS departments has not felt centrally mandated. It has felt local, tool-driven, and person-led.</p><p>Many teams are simplifying and piloting new workflows themselves, often starting at home or as pet projects before those workflows quietly migrate into the enterprise substrate.</p><p>There has been a real but largely unnamed emergence of new types of work and coordination. The value of these tools at the individual level was so obvious and transformative that organizations began changing whether it was formally acknowledged or not.</p><p>The word that keeps coming to mind is molting.</p><p>I suspect the metaphor slipped in subconsciously after weeks of hearing all of the hype about Moltbook, but I am keeping it. I am a faux naturalist, and this gives me an excuse to think about biology.</p><p>Invertebrates such as lobsters and spiders molt to grow larger, shedding rigid exoskeletons that no longer fit. Reptiles shed to expand. Mammals molt seasonally. In our house, this mostly means our floors are perpetually covered in fur from two dogs.</p><p>In biology, molting is not a strategic planning session. Spiders do not gather to debate timing. It is triggered. Growth pressure builds. The old shell tightens. The organism sheds.</p><p>The metaphor feels uncomfortably appropriate for SaaS right now.</p><p>A caveat, though. Molting does not necessarily imply headcount expansion or contraction. In biological systems, shedding can occur as a result of changing environmental conditions. The organism is not always becoming bigger or smaller. It may be adapting to a new scale <em>or to a new context.</em></p><p>Inside some of our teams, one source of molting pressure seems to be coming from expanding output. Teams using AI are producing materially more. Prototypes compress. Workflows accelerate. Certain historical bottlenecks dissolve (and new bottlenecks emerge). That increased productive capacity creates tension with legacy processes and role boundaries, while also introducing a new challenge: the time and discernment required to evaluate which outputs are actually valuable.</p><p>In some areas, we are actively expanding teams. In others, efficiencies are accelerating historical workflows. The question is not simply about headcount. It is also about what higher levels of output, faster iteration cycles, and broader individual capability mean for how work is coordinated.</p><p>The shell that tightens may not be headcount. It may be inherited structure.</p><p>In many places, especially in the early wave, organizations tried to bolt constrained AI tools such as chatbots and limited-use wrappers onto workers who had already swallowed the red pill. The result was predictable. Dissonance. Resistance. Quiet workarounds.</p><p>The organism had already grown. The exoskeleton had not caught up.</p><p>What I am seeing now is less about AI adoption and more about structural shedding. Titles remain the same, but daily workflows, tools, and boundary lines have shifted materially. Product managers prototype: product or feature ideation is literally functional prototype development. Engineers write specifications and orchestrate coding agents. QA manages and audits continuous loop testing. Sales automates research. 
Customer success builds tailor-made internal onboarding tools.</p><p>The shell is cracking in places, even if the nameplate on the door has not changed.</p><p>Molting is not always pleasant. It is not euphoric. After shedding, organisms are soft. Vulnerable. Exposed.</p><p>That vulnerability maps cleanly to the present moment. Best practices shift weekly. Security risks are real. Data governance is complicated. Systems emergence, by definition, produces unexpected structures from complex substrates. Not all of them are elegant. Many teams are effectively molting alongside rapidly improving coding tools.</p><p>Over the past year, I have been running a slightly nerdy experiment alongside all of this. Every couple of months, I rebuild roughly the same type of application using whatever AI stack is current at that moment. Usually it is something in food science, regulatory compliance, or risk modeling, which sounds dull until you try to encode it and discover how many edge cases the real world contains.</p><p>I am hardly the only one doing this. Plenty of people are stress-testing these tools inside their own domains. The interesting part is not that each version improves. It is the magnitude. This has been widely documented and quietly felt.</p><p>The most recent cycle, using Antigravity, Codex, the latest GPT models, and Gemini, felt different immediately. Tasks that used to require careful sequencing and multiple layers of coordination began to compress. Context persisted longer. Iteration tightened. The system did not just answer isolated questions. It participated in the build.</p><p>If molting requires growth pressure, this feels like that pressure.</p><p>We are swimming in hype, no doubt. This is the sort of moment that produces sweeping declarations and commemorative hoodies. Slopes flatten. Technologies stall. Regulation appears precisely when it is least convenient. It would be unwise to assume inevitability.</p><p>Still, even discounting enthusiasm, the transformation gradient is real.</p><p>When the cost of producing structured thought declines, even modestly, the ecosystem responds. More prototypes appear. Internal tools emerge without ceremony. Someone casually mentions they automated a workflow over the weekend that once required a quarterly roadmap discussion.</p><p>These are small signals. But they accumulate.</p><p>This does not look like the death of software. If anything, it looks suspiciously like proliferation. As Steven Sinofsky has argued, when the cost of creation drops, the surface area expands. Cheaper inputs tend to produce more complex ecosystems, not fewer organisms.</p><p>The more subtle shift is inside teams.</p><p>The once-comfortable boundaries between product, design, engineering, and QA are becoming negotiable. An idea moves to prototype in hours. Specifications become collaborative loops. Testing is more embedded and tightly coupled with faster, AI-driven engineering cycles. The handoffs, once ritualized, grow shorter and occasionally vanish.</p><p>No one announces this. There is no &#8220;End of Department X&#8221; memo. It simply becomes less obvious where one function stops and another begins.</p><p>One caution here.</p><p>Acceleration does not imply an explosion of bespoke, user-specific apps and shadow agents running everywhere. Large organizations cannot function that way. Agreed processes, shared data models, and consistent systems remain foundational. You do not replace institutional workflow with improvisation. 
You do not replace Salesforce with a prompt.</p><p>But something more subtle may already be happening.</p><p>Much of the experimentation around AI tools is not happening in sanctioned architecture reviews. It is happening at the edges. In pet projects. In internal prototypes. In workflow automations that begin as personal efficiency hacks and quietly demonstrate value.</p><p>That bottom-up activity is not the end state. It is the discovery phase.</p><p>In prior eras, formalization preceded experimentation. Today, experimentation often precedes formalization. Teams are discovering what should become standardized by first building it informally.</p><p>The risk is uncontrolled sprawl. The opportunity is insight.</p><p>If anything, the challenge for leadership is neither to suppress improvisation nor to canonize every experiment. It is to observe carefully which emergent workflows represent genuine institutional leverage and should be codified into shared systems.</p><p>Improvisation expands the frontier. Process consolidates the gains.</p><p>The two are not substitutes. They are phases.</p><p>And when boundaries blur, organizations adjust. Not dramatically. Gradually. Like furniture being rearranged in the dark while everyone pretends the room looks the same.</p><p>The louder prediction is that all of this ends in widespread professional redundancy. Perhaps some roles shrink. Historically, though, when production costs fall, effort tends to migrate rather than disappear entirely.</p><p>When compute became cheap, we did not stop building software. We built far more of it. When information became searchable, we did not stop researching. We researched differently.</p><p>The deterministic view that agents will produce a permanent class of unused white-collar labor assumes institutions fail to redeploy attention. That seems less like a technological law and more like a management decision.</p><p>As execution compresses, differentiation shifts upward, toward systems design, coordination, incentive alignment, proprietary workflows, and the ability to make sense of historical data without drowning in it.</p><p>The spectacle will continue. Benchmarks will rise. Headlines will oscillate.</p><p>But the pressure to molt is quieter.</p><p>It shows up in how quickly something real can be built, how few approvals it requires, and how many teams begin to behave as though acceleration is normal.</p><p>Molting is not dramatic from the inside. It feels messy. Awkward. Slightly itchy.</p><p>But it is how organisms grow.</p><p>And if the pressure continues, even without theatrics, the larger story, as AI embeds further into the stack, may not be machine intelligence itself, but how institutions shed the shells that once made sense and what new forms take their place.</p><p>Above all, we should take a humble and experimental approach. It feels as though AI acceleration calls traditionally rigid organizational structures into question. If change becomes the norm, the goal should not be constructing the next hardened shell too quickly. It should be learning how to operate in softer states.</p><p>Historically, companies respond to disruption by re-orging. New titles. New reporting lines. A new box diagram that attempts to restore clarity. That instinct is understandable. Exoskeletons provide safety. 
They define roles, boundaries, and authority.</p><p>But if the environment itself is shifting quickly, prematurely hardening around a new model may simply create the next shell that will need to be shed.</p><p>Instead of asking &#8220;What is the new permanent structure?&#8221;, it may be more useful to ask &#8220;What kinds of teams are adaptive under continuous acceleration?&#8221;</p><p>We are already seeing hints:</p><ul><li><p>Small cross-functional pods that own a problem end to end, rather than passing it through departments</p></li><li><p>Individuals who are less defined by title and more by surface area of agency</p></li><li><p>Teams that treat AI systems as collaborators embedded in the workflow rather than external tools bolted on top</p></li><li><p>Organizations that invest less in rigid process documentation and more in shared context and rapid feedback loops</p></li></ul><p>In some cases, a product manager becomes part prototyper. An engineer becomes part systems integrator and auditor of model behavior. QA becomes continuous risk management embedded in build cycles. Customer success builds internal automation to serve clients better rather than escalating everything upstream.</p><p>These are not radical revolutions. They are quiet recombinations.</p><p>The common thread is not a new org chart. It is an increase in trust, communication density, and shared visibility into systems. When boundaries are fluid, coordination becomes more important, not less.</p><p>Core values. Clear communication. Distributed ownership. Comfort with experimentation. Psychological safety for trying things that may not harden into policy.</p><p>Those feel more durable than titles.</p><p>AI acceleration does not just test technical architecture. It tests institutional metabolism. How quickly can a team sense change, adapt, and integrate new capability without collapsing into chaos or calcifying into bureaucracy?</p><p>It may be that the organizations that thrive are not the ones that declare the cleanest new structure, but the ones that tolerate ambiguity long enough for a better one to emerge.</p><p>If molting is inevitable, the question is not how quickly we can construct the next hardened structure. It is how well we can operate while the structure is in motion.</p><p>In systems domains, emergence is not managed by dictating form. It is shaped by adjusting conditions. Strong feedback loops. Clear interfaces. Shared context. Trust. Distributed competence. The ability for small experiments to occur without threatening the whole.</p><p>Applied to organizations, this suggests a quieter discipline. Invest in communication density. Increase visibility into workflows. Shorten feedback cycles. Allow teams to recombine around problems. Strengthen shared principles rather than rigid process. The goal is not structural perfection. It is adaptive capacity.</p><p>If acceleration continues, the advantage will not go to the organizations that define the cleanest new hierarchy. 
It will go to the ones that can sense change early, redistribute effort intelligently, and integrate new capability without collapsing into chaos or retreating into bureaucracy.</p><p>In other words, they will be the ones that metabolize change most effectively.</p>]]></content:encoded></item></channel></rss>