Cognitive Cloaking & the COODA Loop
Cognitive Cloaking
Over the past 8 months humanity has gradually unlocked a new ability: Cognitive Cloaking. Each of us can now:
- Encapsulate our thinking processes, those of others, or a blend via digital documents
- Spawn tailored agents—autonomous computer programs—that leverage these documents to inform task completion
Stop and absorb that.
The single tap of a button, paired with pre-defined document definitions and a natural language prompt, is enough to spawn any number of agents to operate on one or more tasks. An agent, when seeded with a particular set of digital documents and tools, dons a particular Cognitive Cloak. Each agent is based on one or more LLMs, subagents, their toolset, skills, and system prompts. The cloak is the encapsulation: it disguises the underlying LLM, enabling it to take on a specific form.
It's worth noting the encapsulations aren't solely about today's typical listing-of-dos-and-don'ts—basic preference imbuing—but can be much richer:
- mental model imbuing
- worldview imbuing
- skillset imbuing
- tailored tooling
- subagents
At its core, an agent is a model with tools in a loop. A Cognitive Cloak wraps and imbues an agent with the following pre-seeded and just-in-time (JIT) seeded elements; a minimal sketch follows the list:
- mental models - decision making frameworks to operate within
- worldviews - guiding principles, beliefs, and/or lack thereof
- skillsets - granular capabilities informed by specific documents
- tailored tools - rules to guide use of aforementioned imbuing in addition to custom tools
- subagents - autonomy mechanisms leveraging the above items
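To make the composition concrete, here is a minimal Python sketch of a cloak wrapping a model-with-tools loop. All names (CognitiveCloak, call_model, run_agent) are hypothetical and the model call is stubbed; this is an illustration of the idea, not any particular product's API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical names for illustration only.
@dataclass
class CognitiveCloak:
    mental_models: list[str]    # decision-making frameworks to operate within
    worldviews: list[str]       # guiding principles and beliefs (or their absence)
    skillsets: list[str]        # documents that inform granular capabilities
    tools: dict[str, Callable]  # custom tools plus rules for using the imbuing above
    subagents: list["CognitiveCloak"] = field(default_factory=list)  # autonomy mechanism

    def system_prompt(self) -> str:
        """Collapse the cloak into the seed context the underlying model receives."""
        return "\n".join([
            "Mental models: " + ", ".join(self.mental_models),
            "Worldviews: " + ", ".join(self.worldviews),
            "Skills: " + ", ".join(self.skillsets),
            "Tools available: " + ", ".join(self.tools),
        ])


def call_model(prompt: str, context: str) -> dict:
    """Stand-in for a real LLM call; returns a tool request or a final answer."""
    return {"type": "final", "content": f"(model output for: {prompt})"}


def run_agent(cloak: CognitiveCloak, task: str, max_steps: int = 10) -> str:
    """A model with tools in a loop: ask, act, feed results back, repeat."""
    context = cloak.system_prompt()
    for _ in range(max_steps):
        reply = call_model(task, context)
        if reply["type"] == "final":
            return reply["content"]
        # Otherwise the model asked for a tool; run it and append the result.
        tool = cloak.tools[reply["tool"]]
        context += f"\nTool {reply['tool']} returned: {tool(**reply['args'])}"
    return "step budget exhausted"
```

In practice the cloak half would live in version-controlled documents (system prompts, skill files, subagent definitions) rather than a dataclass, which is exactly what makes it easy to save, share, and iterate on.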
You get to decide which agent wears which cloak. Naturally, cloaks can be saved, shared, version-controlled, and iterated on with ease. This new ability is massively underappreciated: we have lightweight cognitive cloning that works today.
Anthropic, a leading AI lab, has recognized this as they've learned their Claude Code product is not just for code. They've begun to train toward and productize for specific use cases outside software engineering, such as:
- Finance (back in July 2025)
- Life Sciences (a few days ago on October 20th, 2025)
These are prime Cognitive Cloaks for Claude to don.
Software Engineers
Software engineers were the first, and are currently the primary, subset of humanity using and exploring this newfound ability. The majority who have adopted agents in their workflows are using the aforementioned basic preference imbuing. A subset of them, myself included, is exploring the richer variants.
If you can codify your synthesis of a particular topic or domain, then there is a high likelihood that agent autonomy can directionally boost your production. Similarly, if you can digitize information and/or interface with the tangible world via technologies like the Model Context Protocol (MCP), then a cloak can further boost production, potentially crossing the chasm from digital to physical.
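As one hedged illustration of that digital-to-physical bridge, the sketch below assumes the official MCP Python SDK's FastMCP helper; the server name and tools (read_shelf_sensor, dispatch_picker) are invented, and the hardware calls are stubbed.

```python
from mcp.server.fastmcp import FastMCP

# A hypothetical bridge between a cloaked agent and the physical world.
mcp = FastMCP("warehouse-bridge")

@mcp.tool()
def read_shelf_sensor(shelf_id: str) -> float:
    """Return the weight (kg) currently reported by a shelf's load sensor."""
    # In a real deployment this would talk to actual hardware.
    return 42.0

@mcp.tool()
def dispatch_picker(shelf_id: str, quantity: int) -> str:
    """Queue a pick job so a human (or robot) restocks or retrieves items."""
    return f"pick job queued for shelf {shelf_id} x{quantity}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; the agent connects as an MCP client
```

A cloaked agent connected to such a server as an MCP client can fold real-world readings into its loop and trigger real-world effects.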
Emergence
This new ability emerged around the public release of Claude Code on Feb. 24th, 2025. Though software engineers were experimenting with agents prior to Claude Code's release, I anticipate it will be the primary historical marker looking back.
The buzz in 2024 predicted 2025 would be the "year of the agent", and indeed it is. Recently Andrej Karpathy (OpenAI founding member, former Director of AI at Tesla, and respected AI educator) claimed we're not in the year of the agent but the decade of the agent. The models, agents, and cloaks will evolve and become immensely more powerful in this timeframe.
Instantiation Lever
A single cloak alone is powerful, but it is worth stressing the plurality unlock: nothing stops you from instantiating many cloaked agents at once. This instantiation lever, like the core ability itself, is massively undervalued at present. Some software engineers, myself included, have been exploring and practicing it over the past eight months or so. Fleet management is a byproduct. The potential is huge.
The majority are mesmerized by vibe coding and digital artifact generation while missing the forest for the trees. Again, we have lightweight cognitive cloning that works today.
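A toy sketch of the lever: run_cloaked_agent and the cloak names below are hypothetical stand-ins for the earlier sketch (or a provider's SDK), but the shape is the point: one block of intent, a whole fleet in flight concurrently.

```python
import asyncio

# Hypothetical stand-in for a long-running cloaked agent session.
async def run_cloaked_agent(cloak_name: str, task: str) -> str:
    await asyncio.sleep(0.1)  # placeholder for the actual agent loop
    return f"[{cloak_name}] finished: {task}"

async def run_fleet() -> None:
    assignments = [
        ("finance-analyst", "reconcile Q3 invoices"),
        ("life-sciences-reviewer", "summarize trial protocol deviations"),
        ("refactoring-engineer", "migrate the billing module to the new API"),
    ]
    # The instantiation lever: one prompt's worth of effort, N agents in flight.
    results = await asyncio.gather(
        *(run_cloaked_agent(cloak, task) for cloak, task in assignments)
    )
    for line in results:
        print(line)

asyncio.run(run_fleet())
```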
COODA Loop
The COODA Loop is a variant of the OODA Loop that emerged while I was developing the video game Bugg and the Bionic Seeds. The OODA Loop is a famous decision-making model (Observe, Orient, Decide, Act) to which I've prepended a step that was originally useful during game development:
- Calibrate
- Observe
- Orient
- Decide
- Act
In simple terms, game agents followed the OODA loop to interact with the world. The Calibrate abstraction effectively enabled or disabled parts of an agent's sense suite at runtime, which directly impacted how it Observed. Richer gameplay resulted.
The OODA loop was designed for humans, where sight is the implied primary sense behind Observe. We don't have this constraint in video games or in autonomous computer agents. Thus calibration is another means by which we can shape a cloak.
Calibration occurs via the tools, commands, skills, and subagents the root agent—now cloaked agent—has access to.
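Here is a minimal sketch of a single COODA tick, in the spirit of the game agents described above; the sense names, calibration rule, and helper functions are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    senses: dict[str, bool]  # sense suite: sense name -> enabled?
    goal: str

def calibrate(agent: Agent, environment: dict) -> None:
    """Calibrate: enable or disable parts of the sense suite before observing."""
    # Hypothetical rule: when it's dark, trade sight for hearing.
    agent.senses["sight"] = environment.get("light_level", 1.0) > 0.2
    agent.senses["hearing"] = not agent.senses["sight"]

def observe(agent: Agent, environment: dict) -> dict:
    """Observe: only enabled senses contribute observations."""
    return {name: environment[name]
            for name, enabled in agent.senses.items()
            if enabled and name in environment}

def orient(observations: dict) -> str:
    """Orient: interpret the observations."""
    return "threat" if observations else "clear"

def decide(orientation: str, goal: str) -> str:
    """Decide: pick a course of action."""
    return "evade" if orientation == "threat" else f"pursue {goal}"

def act(decision: str) -> None:
    """Act: carry the decision out in the world."""
    print(f"acting: {decision}")

def cooda_tick(agent: Agent, environment: dict) -> None:
    calibrate(agent, environment)
    act(decide(orient(observe(agent, environment)), agent.goal))

bug = Agent(senses={"sight": True, "hearing": False}, goal="find seeds")
cooda_tick(bug, {"light_level": 0.05, "hearing": "rustling nearby"})  # prints "acting: evade"
```

The same gating applies to a cloaked agent: which tools, commands, skills, and subagents are enabled at a given moment determines what it can Observe.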
Predictions
I predict instantiating, shepherding, and decommissioning cloaked agents will become standard practice in software development and in white-collar work generally. If robotics makes the advancements it is poised to sooner rather than later, then blue-collar shepherds will soon follow. Only the number of cloaked agents one is able to productively shepherd will vary. Fleet management software, cloaked-agent providers, and real-time auditing applications will be vital, and open source options will play their part. I anticipate a single company (though many will compete) will come to encompass and streamline these functions while making a pretty penny in the process.
As with all product and tool adoption life cycles, there will be many laggards. Some will opt out completely. Progress, contribution, creation, and the rest will still manifest among these opt-outs, but those leveraging cloaked agents will make astronomical progress comparatively. The instantiation lever is just too powerful. It is thus lucrative, and I expect interesting exploits, individual and organizational, in the coming months and years.
What a time to be alive.

