A personal operating system. Ten AI agents, one operator, one household. Built to support my life, my work, and four projects in parallel.
Persek OS started as a way to learn as much as possible, as fast as possible. I built things, found better ways to do them, rebuilt them, scrapped parts I no longer needed, and kept changing the system as the models got better.
Over time, it has shifted from pure learning mode into something more intentional: a set of components I can keep using as AI changes. The mechanics may change every few months. The outputs matter more. Intelligence briefings, durable project context, working memory, and decision history carry forward even when I swap the tools underneath them.
It is made of many components: the agent council, my local operating layer, coordination, memory, review loops, durable project artifacts, and the specific systems that produce briefs, project context, and reusable knowledge. I spent months working mostly through Claude Code. Now I am moving the workflow into Codex. That is the point: the system should grow with me, not depend on one model or one interface.
What sits around this panel is the rest of the canvas: the architecture, the agent roster, and seven subsystems with dedicated sub-canvases. Pan, click the minimap, or use Next to explore.
The AI discourse on X right now makes me want to close the tab. Make a million dollars overnight. Agents build everything while you sleep. One prompt, done. It's engagement bait, and most people know it.
Building is hard. Shipping is hard. Maintaining what you've shipped is harder. AI and agents help me do more than I could do alone, by a lot. But they are not a panacea, and anyone selling them that way is selling something.
I built this mostly because I wanted to learn. The more I learn, the more I realize I can do, and the cooler the things I can build. So I keep building. I am constantly changing the system, redoing things I thought were done, pulling out complexity I added a week ago. I spend a lot of my time fixing things I broke.
This is where I am today. It may look completely different in two weeks, especially as the models keep getting better. I am not pretending this is optimal. I am not even sure it is good. I am not selling anything. I am having fun. I am learning. It is helping support my life. That is the whole pitch.
Operator at the top, a chief of staff underneath, a council of specialists below that, and my local operating layer underneath all of it. Claude Code and Codex are the active work surfaces; this layer is the connective tissue around them: memory, coordination, review loops, and artifacts that should survive when the active LLM or coding interface changes.
Persek OS is not one tool. It is a set of connected subsystems that each do one job: coordinate work, preserve memory, turn signal into action, research deeply, keep knowledge reusable, learn from repeated patterns, and check whether the whole thing is still healthy.
The individual cards around the canvas are the close-ups. This card is the zoomed-out view: each subsystem is useful alone, but the OS comes from the connections between them.
I have tried this a few ways: one general agent, many specialists with no coordinator, and a coordinator-first model. What has stuck is somewhere in the middle. Cal holds the cross-system view, but I often work directly with the specialist agents when the work calls for it.
Several knowledge surfaces, each with a clear boundary, plus recurring review that keeps them from drifting.
Different kinds of knowledge live in different places, with different rules for who writes, who reads, and how often things expire. The system doesn't treat memory as one bucket because real knowledge isn't one shape.
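One way to picture the "different rules per surface" idea is as explicit access and expiry policies rather than one shared bucket. This is a hypothetical sketch, not the actual implementation; the surface names, agent names as writers/readers, and TTL values are all assumptions for illustration.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical sketch: each knowledge surface declares who writes,
# who reads, and when entries expire, instead of sharing one bucket.
@dataclass(frozen=True)
class MemorySurface:
    name: str
    writers: frozenset[str]   # agents allowed to write
    readers: frozenset[str]   # agents allowed to read ("*" = anyone)
    ttl: timedelta | None     # None = durable, never expires

SURFACES = [
    MemorySurface("working_memory", frozenset({"cal"}),
                  frozenset({"cal", "iris"}), timedelta(days=7)),
    MemorySurface("project_context", frozenset({"cal"}),
                  frozenset({"*"}), None),
    MemorySurface("decision_history", frozenset({"operator"}),
                  frozenset({"*"}), None),
]

def can_write(surface: MemorySurface, agent: str) -> bool:
    # Writes are refused unless the surface explicitly allows the agent.
    return agent in surface.writers or "*" in surface.writers
```

The point of the boundary is in the last function: a surface with a narrow writer set cannot drift because some other agent decided to dump state into it.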
Collectors without readers rot. Every signal the system captures is paired with a review path, an owner, and a human approval point before durable changes are made.
Four learning surfaces, each with its own rhythm. Different signals move at different speeds. Some improve quickly, some need review over time, and durable rules require repeated evidence.
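The "collectors without readers rot" rule can be made concrete as a record shape: no signal exists without an owner, and nothing becomes durable without both repeated evidence and a human approval point. A minimal sketch, assuming hypothetical field names and an arbitrary evidence threshold:

```python
from dataclasses import dataclass
from enum import Enum

class Cadence(Enum):
    FAST = "improves quickly"
    SLOW = "needs review over time"
    DURABLE = "requires repeated evidence"

@dataclass
class Signal:
    summary: str
    owner: str                    # every captured signal has an owner
    cadence: Cadence
    evidence_count: int = 0
    approved_by_human: bool = False

def can_promote(sig: Signal, min_evidence: int = 3) -> bool:
    # A signal becomes a durable rule only with repeated evidence
    # AND an explicit human approval, never automatically.
    return sig.evidence_count >= min_evidence and sig.approved_by_human
```

Evidence alone never promotes a signal; the human approval point is a hard gate, which is what keeps durable changes from being made silently.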
Iris triages the firehose. Cal converts it into tracked work. Many sources, a lot of daily noise, and a filter tuned around what I actually care about.
By month six, the same firehose is producing a brief that looks nothing like a stranger's. The system gets quieter and more pointed as it learns what I value.
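The triage idea can be sketched as a weighted filter: items are scored against learned interest weights, and anything below a threshold never reaches the brief. The tags, weights, and threshold here are illustrative assumptions, not Iris's actual scoring.

```python
# Hypothetical sketch of firehose triage: interest weights are
# tuned over time, so the same stream produces a quieter, more
# pointed brief as negative weights learn what to suppress.
def score(tags: list[str], weights: dict[str, float]) -> float:
    return sum(weights.get(t, 0.0) for t in tags)

def triage(items: list[dict], weights: dict[str, float],
           threshold: float = 1.0) -> list[dict]:
    return [it for it in items if score(it["tags"], weights) >= threshold]

weights = {"agents": 0.8, "memory": 0.6, "hype": -1.0}
items = [
    {"title": "New memory benchmark", "tags": ["memory", "agents"]},
    {"title": "Make $1M overnight with agents", "tags": ["agents", "hype"]},
]
```

Note that the hype item mentions agents, a topic with positive weight, but the negative weight on "hype" sinks it below the threshold anyway. Learning what to suppress matters as much as learning what to surface.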
Two shapes of research, one bridge between them. Rex for one-shot investigation with source-grounded synthesis. The LLM Wiki for accumulative topic knowledge that compounds over years.
Every Rex investigation makes the wiki richer. Every wiki article makes the next Rex run faster. The bridge is what keeps both sides honest.
Eight layers across two classes of failure. Static rot: the system's instructions drift. Operational silence: the system keeps running without producing useful output.
Each kind of failure is silent. Both compound. Both kill trust if caught after the fact. The layers catch each kind where it actually starts.
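The two failure classes reduce to two timestamp checks, which is a useful way to see why both are silent: neither produces an error, only an absence. A minimal sketch with assumed thresholds:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the two failure classes as health checks.
# Static rot: instructions have gone too long without review.
def static_rot(last_reviewed: datetime, now: datetime,
               max_age: timedelta = timedelta(days=30)) -> bool:
    return now - last_reviewed > max_age

# Operational silence: the system keeps running but has stopped
# producing useful output.
def operational_silence(last_output: datetime, now: datetime,
                        max_gap: timedelta = timedelta(days=2)) -> bool:
    return now - last_output > max_gap
```

Both checks fire on elapsed time rather than on any event, which is the point: a system that fails by doing nothing cannot be caught by watching what it does.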
Two stores share the knowledge layer. The LLM Wiki is for topics: compiled articles built from many sources. gBrain is for entities: people, projects, companies, concepts that have a life of their own and accumulate a timeline.
Topics aggregate; entities persist. They stay separate so each can do its job well, and a small set of bridges keeps them connected.
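The topic/entity split can be sketched as two record shapes plus an explicit link type. Field names here are assumptions; the real stores presumably carry much more, but the separation of concerns is the same.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: topics aggregate sources, entities persist
# and accumulate a timeline, and bridges are small explicit links
# rather than one merged store.
@dataclass
class WikiArticle:            # LLM Wiki: topic store, aggregates
    topic: str
    sources: list[str] = field(default_factory=list)

@dataclass
class Entity:                 # gBrain: entity store, persists
    name: str
    kind: str                 # person, project, company, concept
    timeline: list[str] = field(default_factory=list)

@dataclass(frozen=True)
class Bridge:                 # a small set of typed connections
    article_topic: str
    entity_name: str
```

Keeping the bridge as its own type, instead of embedding entity references inside articles, is what lets each store evolve its own schema without breaking the other.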