evbogue.com

Wednesday, May 13, 2026  ·  Independent publishing
AI

Agentic Harnesses and the Discipline of Attention

Yoga is not a productivity hack for LLMs. It is a discipline for paying attention before the machine runs away with your mind.


The funny thing about using AI all day is that it makes ancient philosophy feel less decorative.

Not because a language model is mystical. Please do not start a company around that sentence.

The reason yoga matters here is simpler and more annoying: LLMs expose how sloppy your attention already was.

The new productivity lie is that generated motion equals work.

You ask a question. The model answers. The answer is smooth. The answer is confident. The answer has headings. The answer has the sterile little rhythm of something that has never been embarrassed at a party.

Then your mind does what minds do.

It identifies.

It thinks: this sounds right, so it must be right. This sounds useful, so I must use it. This sounds like my idea, so maybe it is my idea. This sounds finished, so maybe the work is finished.

That is the trap.

Without a harness, an LLM session becomes a thought spiral with shell access.

It can keep generating. It can keep refining. It can keep explaining itself. It can turn one vague idea into eleven subtasks, then into a plan, then into a half-built framework for managing the plan, then into documentation for the framework, then into a cheerful summary explaining why the framework is almost ready.

Congratulations. You have invented samsara with a progress bar.

The yoga problem is not that thoughts exist. The yoga problem is that you mistake the thought for yourself.

The LLM problem is not that outputs exist. The LLM problem is that you mistake the output for the work.

Different century. Same stupid human.

The first useful idea in yoga is that the mind is not one clean thing. It is activity. It is weather. It is churn.

Near the start of the Yoga Sutras, Patanjali delivers the famous definition: yoga is the stilling of the movements of the mind. The usual Sanskrit phrase is citta vritti nirodha. You do not need to become the kind of person who says this at dinner. You just need to understand the diagnosis.

The mind produces modifications. Waves. Patterns. Stories. Reactions.

Most people live inside those modifications and call it a personality.

LLMs produce modifications too.

Tokens. Drafts. Summaries. Plans. Edits. Fake certainty. Reasonable-sounding errors. Better versions of your lazy thought. Worse versions of your good thought. Ten names for the same product. A conclusion that feels inevitable because the paragraph before it walked there in a nice suit.

If you are not trained, you identify with the stream.

You become the person who believes the output because it arrived quickly. You become the person who ships the patch because the agent said tests passed. You become the person who posts the essay because the prose sounds like prose.

Yoga says: watch the movement before you obey it.

That is also the first rule of using agents.

An agentic harness is just a way to keep the machine from freelancing inside your life.

The model can draft. The model can edit. The model can search. The model can run tests. The model can open files, inspect diffs, propose patches, write summaries, and build little tunnels between tools that used to require three tabs and a bad mood.

Good.

Let it work.

But give it a harness.

A harness is not a vibe. It is not saying "be careful" in the system prompt and hoping the machine develops character.

A harness is structure.

It says what the agent can touch. It says what the agent must verify. It says what counts as done. It says when the agent has to stop and ask. It says which tools are allowed, which files are off limits, and what evidence has to exist before the human trusts the result.
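That list of "it says" clauses can be written down. Here is a sketch, not a real framework; every name in it, from `HarnessPolicy` on down, is invented for illustration:

```python
from dataclasses import dataclass, field
from fnmatch import fnmatch

@dataclass
class HarnessPolicy:
    """What the agent can touch, and what counts as done. All hypothetical."""
    allowed_tools: set = field(default_factory=set)     # tools the agent may call
    writable_globs: list = field(default_factory=list)  # files the agent may edit
    forbidden_globs: list = field(default_factory=list) # files it must never touch
    required_evidence: set = field(default_factory=set) # what must exist before trust
    ask_human_on: set = field(default_factory=set)      # actions that force a stop-and-ask

    def may_use(self, tool: str) -> bool:
        return tool in self.allowed_tools

    def may_write(self, path: str) -> bool:
        # Off-limits files win, no matter what else matches.
        if any(fnmatch(path, g) for g in self.forbidden_globs):
            return False
        return any(fnmatch(path, g) for g in self.writable_globs)

    def is_done(self, evidence: set) -> bool:
        # "Done" means every required piece of evidence exists,
        # not that output appeared.
        return self.required_evidence <= evidence

policy = HarnessPolicy(
    allowed_tools={"read_file", "edit_file", "run_tests"},
    writable_globs=["src/*.py", "tests/*.py"],
    forbidden_globs=["src/secrets.py", ".env"],
    required_evidence={"tests_passed", "human_reviewed_diff"},
    ask_human_on={"delete_file", "network_call", "publish"},
)
```

The point of the sketch is the shape, not the library: structure is something the agent runs into, not something it is asked to feel.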

This is where yoga becomes more useful than the usual AI discourse.

Most AI writing about agents is either sales copy or panic. The sales people want you to believe your company can be replaced by a workflow diagram. The panic people want you to believe the workflow diagram is about to replace civilization.

Both are boring.

The real question is smaller:

Can you hold attention steady while a machine produces plausible movement?

That is yoga.

Not the leggings version. Not the room-temperature cucumber water version. The old version. The discipline version.

The Yoga Sutras do not begin with peak experience. They begin with restraint.

Practice and detachment.

Abhyasa and vairagya.

Practice is returning to the object. Again. Again. Again. You keep coming back after the mind wanders. You do not make a religion out of the wandering. You do not write a memoir about it. You return.

Detachment is not apathy. It is not pretending you do not care. It is refusing to be owned by the thing that appears.

This is exactly how competent LLM usage feels.

You ask the model for a draft. It gives you a draft. You do not marry the draft.

You ask the agent to inspect the code. It gives you a theory. You do not confuse the theory with the bug.

You ask it to implement the fix. It edits files. You do not confuse changed files with solved behavior.

You return to the object.

The object is the actual post. The actual program. The actual reader. The actual user. The actual test. The actual thing in the world that either works or does not.

Everything else is mind movement.

This is why agentic harnesses matter.

The yogic answer is not to hate the mind.

The answer is to train it.

The agentic answer is not to hate the model.

The answer is to harness it.

That starts with restraint.

In yoga, the yamas and niyamas are the boring moral plumbing people skip because they want the fireworks. Non-harming. Truthfulness. Non-stealing. Discipline. Contentment. Study. Surrender.

For agents, the equivalent is permissions.

Do not let the agent write everywhere. Do not let it delete casually. Do not let it invent facts. Do not let it call network services because it feels lonely. Do not let it turn every task into a refactor. Do not let it publish without a human seeing the thing.

Truthfulness alone would fix half of AI usage.

Say what you know. Say what you inferred. Say what you did not verify. Say when the test did not run. Say when the source is weak. Say when the output is plausible but unsupported.

That is not just ethical. It is operational.
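Operational can be literal. A toy sketch of what labeled claims might look like, assuming made-up status tags rather than any real agent API:

```python
from dataclasses import dataclass

# Hypothetical epistemic labels; no real framework defines these.
VERIFIED = "verified"      # backed by a test run, a file read, a real source
INFERRED = "inferred"      # reasoned from context, not directly checked
UNVERIFIED = "unverified"  # plausible, but no evidence was gathered

@dataclass
class Claim:
    text: str
    status: str

def trustworthy(claims: list[Claim]) -> bool:
    # Truthfulness as an operational gate: the result is trusted
    # only when every load-bearing claim was actually verified.
    return all(c.status == VERIFIED for c in claims)

report = [
    Claim("All 42 tests passed", VERIFIED),
    Claim("The fix also covers the Windows path handling", INFERRED),
]
```

An inferred claim is not a lie. It just does not get to dress up as a verified one.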

Then comes withdrawal.

Yoga has pratyahara, usually described as withdrawal of the senses. For agents, this means refusing to let every possible input drag the work by the collar.

The model will offer paths. Many paths. More paths than you need. It will happily expand the surface area of the work until the original task is buried under helpfulness.

Only inspect the files relevant to this bug. Only browse when current facts matter. Only run the tests that prove the change. Only summarize the diff that exists. Only create the post file, not a new publishing philosophy manifesto unless that was the assignment.

Attention gets stronger when the gates are not wide open all the time.
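As a sketch, those gates can be a declared scope that everything else bounces off. The names and the scope format here are invented for illustration:

```python
from fnmatch import fnmatch

def in_scope(action: str, target: str, scope: dict) -> bool:
    """Narrow gates: an action is allowed only if the task declared it."""
    allowed = scope.get(action, [])
    return any(fnmatch(target, pattern) for pattern in allowed)

# One bug, one scope. Everything else stays outside the gate.
task_scope = {
    "read": ["src/parser.py", "tests/test_parser.py"],
    "edit": ["src/parser.py"],
    "run":  ["pytest tests/test_parser.py*"],
    # No "browse" key at all: current facts do not matter for this bug.
}
```

The absence of a key is the withdrawal. The agent does not argue with an empty list.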

Then concentration.

Yoga calls this dharana: fixing attention on one object.

This may be the single hardest thing in agentic coding, because the machine makes context switching feel productive.

It is not always productive. Sometimes it is just loss of nerve with a nicer interface.

The harness should keep the session pointed at one object.

What is the bug? What is the file? What behavior must change? What evidence will prove it?

If the agent cannot answer those questions, it should not be coding yet. It should be reading.
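That reading gate can be stated as a check. A minimal sketch, with field names I made up for the four questions:

```python
def ready_to_code(task: dict) -> bool:
    """The agent may start editing only once the object of attention is fixed.

    Hypothetical fields: the four questions as keys, each needing a
    non-empty answer before any file gets touched.
    """
    questions = ("bug", "file", "behavior_change", "evidence")
    return all(task.get(q) for q in questions)

half_formed = {"bug": "crash on empty input"}

fixed_object = {
    "bug": "crash on empty input",
    "file": "src/parser.py",
    "behavior_change": "empty input returns an empty list, not a traceback",
    "evidence": "test_empty_input passes",
}
```

One missing answer and the session stays in reading mode. That is dharana with a return value.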

This is where the human still matters.

I have watched Codex summarize a change with perfect confidence, then checked the diff and realized the real question had not moved an inch. The machine did work. The responsibility stayed exactly where it was.

The model can produce a hundred plausible next actions. The human has to sit with the work long enough to know which one is real.

This is taste.

Taste is not decoration. Taste is trained attention. It is knowing when the post has become fake. It is knowing when the code is too clever. It is knowing when the agent solved the local issue while creating a larger one three files away. It is knowing when the answer is fluent but dead.

The machine can accelerate labor.

It cannot care on your behalf.

Then non-attachment.

This is the one nobody wants because it ruins the dopamine.

Sometimes the model writes a beautiful paragraph that does not belong.

Delete it.

Sometimes the agent builds an elegant abstraction that makes the system worse.

Revert it.

Sometimes the answer you wanted is not in the context.

Admit it.

Sometimes the best use of AI is to ask one question, take one line, and throw the rest away.

That feels wasteful only if you identify with output volume.

Yoga would call that attachment.

Engineering would call it a bad merge.

The real harness is not just around the model. It is around the human impulse to believe that more generated material means more progress.

This is why the yoga analogy holds.

Yoga is not about becoming calm in the decorative sense. It is about seeing clearly enough that you stop being yanked around by every fluctuation.

Agentic work needs the same thing.

The model speaks. You watch.

The agent acts. You verify.

The system suggests. You choose.

The output appears. You do not bow.

That last part matters.

We are surrounded by machines that produce language with the posture of authority. They do not need to be conscious to distort consciousness. They only need to be fast, confident, and useful enough that we stop noticing the moment when our attention gives itself away.

A good agentic harness gives the attention back.

It slows the hand before the merge.

It demands evidence before belief.

It keeps the model inside a role.

It keeps the human inside responsibility.

That is not anti-AI. It is the only pro-AI position that does not turn into office cosplay for the apocalypse.

Use the machine.

Give it tools.

Let it draft, test, inspect, summarize, and automate the boring machinery.

Then stay awake enough to disagree with it.

If you are building an agentic harness, or if your AI workflow has already started to feel like a thought spiral with better typography, send me a note.

Email me at ev@evbogue.com or text 773-510-8601.