Last week I was in Sochi on a ski trip. Instead of skiing, I got sick.
So I spent a few days locked in a hotel room, doing what I always do when I can’t move much: working. Or at least what looks like work. In reality, it’s my hobby.
YouTube wasn’t working well there, so I downloaded a few episodes in advance. Most of them were about OpenClaw and its creator, Peter Steinberger — also known for building PSPDFKit.
What started as passive watching turned into one of those rare moments of clarity you only get when you’re forced to slow down.
Shipping Code You Don’t Read (In the Right Context)
In one of the interviews, Peter said something that immediately caught my attention: he ships code he doesn’t review.
At first that sounds reckless. But then I realized… I sometimes do the same.
However, context matters.
Most of my daily work is research and development. I build experimental systems, prototypes, and proofs of concept, either for internal use in our office or for exploring ideas with clients. A lot of what I write is not production software yet. It's exploratory. It's about testing possibilities.
In that environment, I don’t always need to read every line of generated code.
If the use case works and the tests pass, that’s often enough.
I work mainly with C#, ASP.NET, Entity Framework, and XAF from DevExpress. I know these ecosystems extremely well. So if something breaks later, I can go in and fix it myself. But most of the time, the goal isn’t to perfect the implementation — it’s to validate the idea.
That’s a crucial distinction.
When writing production code for a customer, quality and review absolutely matter. You must inspect, verify, and ensure maintainability. But when working on experimental R&D, the priority is different: speed of validation and clarity of results.
In research mode, not every line needs to be perfect. It just needs to prove whether the idea works.
Working “Without Hands”
My real goal is to operate as much as possible without hands.
By that I mean minimizing direct human interaction with implementation. I want to express intent clearly enough so agents can execute it.
If I can describe a system precisely — especially in domains I know deeply — then the agent should be able to build, test, and refine it. My role becomes guiding and validating rather than manually constructing everything.
This is where modern development is heading.
The Problem With Vibe Coding
Peter talked about something that resonated deeply: when you’re vibe coding, you produce a lot of AI slop.
You prompt. The AI generates. You run it. It fails. You tweak. You run again. Still wrong. You tweak again.
Eventually, the human gets tired.
Even when you feel close to a solution, it’s not done until it’s actually done. And manually pushing that process forward becomes exhausting.
This is where many AI workflows break down. Not because the AI can’t generate solutions — but because the loop still depends too heavily on human intervention.
Closing the Loop
The key idea is simple and powerful: agentic development works when the agent can test and correct itself.
You must close the loop.
Instead of: human → prompt → AI → human checks → repeat
You want: AI → builds → tests → detects errors → fixes → tests again → repeat
The agent needs tools to evaluate its own output.
When AI can run tests, detect failures, and iterate automatically, something shifts. The process stops being experimental prompting and starts becoming real engineering.
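To make that concrete, here is roughly what such a loop looks like in my C# world. This is a minimal sketch, not a real framework: GenerateStep and Apply are hypothetical placeholders for whatever agent and workspace you use. The only real tool in it is dotnet test, which plays the role of reality.

```csharp
// A minimal sketch of the closed loop, not a real framework. GenerateStep and
// Apply are hypothetical; the only real tool is "dotnet test", which acts as
// the agent's reality check.

using System.Diagnostics;

public static class ClosedLoop
{
    // Hypothetical: asks the model for code (or a patch) toward the goal,
    // feeding the last test failure back in so it can correct itself.
    public delegate string GenerateStep(string goal, string? lastFailure);

    public static bool Run(string goal, GenerateStep generate, int maxIterations = 10)
    {
        string? lastFailure = null;

        for (var i = 0; i < maxIterations; i++)
        {
            var patch = generate(goal, lastFailure);
            Apply(patch);

            var (passed, output) = RunTests();   // the reality check
            if (passed)
                return true;                     // done actually means done

            lastFailure = output;                // failure drives the next attempt
        }

        return false;                            // only now does a human step in
    }

    static void Apply(string patch)
    {
        // In a real setup: write files, commit to a working branch, etc.
    }

    static (bool Passed, string Output) RunTests()
    {
        var psi = new ProcessStartInfo("dotnet", "test")
        {
            RedirectStandardOutput = true
        };
        using var process = Process.Start(psi)!;
        var output = process.StandardOutput.ReadToEnd();
        process.WaitForExit();
        return (process.ExitCode == 0, output);
    }
}
```

The important line is the last one in the loop body: the failure output goes straight back into the next generation step, so no human has to relay it.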
Spec-Driven vs Self-Correcting Systems
Spec-driven development still matters. Some people dismiss it as too close to waterfall, but every methodology has flaws.
The real evolution is combining clear specifications with self-correcting loops.
The human defines:
- The specification
- The expected behavior
- The acceptance criteria
Then the AI executes, tests, and refines until those criteria are satisfied.
The human doesn’t need to babysit every iteration. The human validates the result once the loop is closed.
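In practice, the cleanest way I know to make acceptance criteria executable is to write them as tests before the agent starts. Here is a made-up example, assuming a spec like "orders above 100 get a 10% discount" (the names and the rule are invented for illustration):

```csharp
// Hypothetical spec: "orders above 100 get a 10% discount."
// Written as xUnit facts, the acceptance criteria become the loop's exit
// condition: the agent iterates on DiscountCalculator until both tests pass.

using Xunit;

public class DiscountAcceptanceTests
{
    [Fact]
    public void No_discount_at_or_below_the_threshold() =>
        Assert.Equal(100m, DiscountCalculator.Apply(100m));

    [Fact]
    public void Ten_percent_off_above_the_threshold() =>
        Assert.Equal(135m, DiscountCalculator.Apply(150m));
}

// One implementation that satisfies the spec; producing it is the agent's job.
public static class DiscountCalculator
{
    public static decimal Apply(decimal orderTotal) =>
        orderTotal > 100m ? orderTotal * 0.9m : orderTotal;
}
```

Once "done" is encoded like this, validating the result means reading two test names, not reading every line the agent wrote.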
Engineering vs Parasitic Ideas
There's a concept I picked up from a book about parasitic ideas.
In the social sciences, such ideas can spread because they're hard to disprove. In engineering, bad ideas fail quickly.
If you design a bridge incorrectly, it collapses. Reality provides immediate feedback.
Software — especially AI-generated software — needs the same grounding in reality. Without continuous testing and validation, generated code can drift into something that looks plausible but doesn’t work.
Closing the loop forces ideas to confront reality.
Tests are that reality.
Taking the Human Out of the Repetitive Loop
The goal isn’t removing humans entirely. It’s removing humans from repetitive validation.
The human should:
- Define the specification
- Define what “done” means
- Approve the final result
The AI should:
- Implement
- Test
- Detect issues
- Fix itself
- Repeat until success
When that happens, development becomes scalable in a new way. Not because AI writes code faster — but because AI can finish what it starts.
What I Realized in That Hotel Room
Getting sick in Sochi wasn’t part of the plan. But it forced me to slow down long enough to notice something important.
Most friction in modern development isn’t writing code. It’s closing loops.
We generate faster than we validate. We start more than we finish. We rely on humans to constantly re-check work that machines could verify themselves.
In research and experimental work, it’s fine not to inspect every line — as long as the system proves its behavior. In production work, deeper review is essential. Knowing when each approach applies is part of modern engineering maturity.
The future of agentic development isn’t just better models. It’s better loops.
Because in the end, nothing is finished until the loop is closed.