This week I went to the university every day to study Russian.
Learning a new language as an adult is a very humbling experience. One moment you are designing enterprise architectures, and the next moment you are struggling to say:
me siento bien ("I feel fine" in Spanish)
which in Russian is: я чувствую себя хорошо
So like any developer, I started cheating immediately.
I began using AI for everything:
ChatGPT to review my exercises
GitHub Copilot inside VS Code correcting my grammar
Sometimes both at the same time
It worked surprisingly well. Almost too well.
At some point during the week, while going back and forth between my Russian homework and my development work, I noticed something interesting.
I was using several AI tools, but the one I kept returning to the most — without even thinking about it — was GitHub Copilot inside Visual Studio Code.
Not in the browser. Not in a separate chat window. Right there in my editor.
That’s when something clicked.
Two favorite tools
XAF, the eXpressApp Framework from DevExpress, is my favorite application framework. I’ve built countless systems with it — ERPs, internal tools, experiments, prototypes.
GitHub Copilot has become my favorite AI agent.
I use it constantly:
writing code
reviewing ideas
fixing small mistakes
even correcting my Russian exercises
And while using Copilot so much inside Visual Studio Code, I started thinking:
What would it feel like to have Copilot inside my own applications?
Not next to them. Inside them.
That idea stayed in my head for a few days until curiosity won.
The innocent experiment
I discovered the GitHub Copilot SDK.
At first glance it looked simple: a .NET library that allows you to embed Copilot into your own applications.
My first thought:
“Nice. This should take 30 minutes.”
Developers should always be suspicious of that sentence.
Because it never takes 30 minutes.
First success (false confidence)
The initial integration was surprisingly easy.
I managed to get a basic response from Copilot inside a test environment. Seeing AI respond from inside my own application felt a bit surreal.
For a moment I thought:
Done. Easy win.
Then I tried to make it actually useful.
That’s when the adventure began.
The rabbit hole
I didn’t want just a chatbot.
I wanted an agent that could actually interact with the application.
Ask questions. Query data. Help create things.
That meant enabling tool calling and proper session handling.
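Stripped of any particular SDK, tool calling is a simple dispatch pattern: the model replies with the name of a function plus arguments, the host application runs it, and the result is fed back to the model. Here is a minimal sketch in Python with a hypothetical `get_customer_count` tool — this illustrates the general pattern, not the Copilot SDK’s actual API:

```python
# Generic tool-calling dispatch: the model's reply may be a "tool call"
# (a function name plus arguments); the host app executes it and returns
# the result. The tool name below is hypothetical, purely for illustration.

def get_customer_count() -> int:
    # Hypothetical application tool: in a real app, query the domain model.
    return 42

TOOLS = {"get_customer_count": get_customer_count}

def handle_model_reply(reply: dict) -> str:
    """Dispatch a model reply: run a requested tool, or pass text through."""
    if reply.get("type") == "tool_call":
        tool = TOOLS[reply["name"]]            # look up the registered tool
        result = tool(**reply.get("args", {}))  # execute it in the app
        return f"tool_result: {result}"         # this goes back to the model
    return reply.get("text", "")

# Simulated replies from the model:
print(handle_model_reply({"type": "tool_call", "name": "get_customer_count"}))
print(handle_model_reply({"type": "text", "text": "Hello"}))
```

The dispatch itself is trivial; the hard part, as I was about to learn, is everything around it — sessions, timeouts, and how individual models behave.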
And suddenly everything started failing.
Timeouts. Half responses. Random behavior depending on the model. Sessions hanging for no clear reason.
At first I blamed myself.
Then my integration. Then threading. Then configuration.
Three or four hours later, after trying everything I could think of, I finally discovered the real issue:
It wasn’t my code.
It was the model.
Some models were timing out during tool calls. Others worked perfectly.
The moment I switched models and everything suddenly worked was one of those small but deeply satisfying developer victories.
You know the moment.
You sit back. Look at the screen. And just smile.
The moment it worked
Once everything was connected properly, something changed.
Copilot stopped feeling like a coding assistant and started feeling like an agent living inside the application.
Not in the IDE. Not in a browser tab. Inside the system itself.
That changes the perspective completely.
Instead of building forms and navigation flows, you start thinking:
What if the user could just ask?
Instead of:
open this screen
filter this grid
generate this report
You imagine:
“Show me what matters.”
“Create what I need.”
“Explain this data.”
The interface becomes conversational.
And once you see that working inside your own application, it’s very hard to unsee it.
Why this experiment mattered to me
This wasn’t about building a feature for a client. It wasn’t even about shipping production code.
Most of my work is research and development. Prototypes. Ideas. Experiments.
And this experiment changed the way I see enterprise applications.
For decades we optimized screens, menus, and workflows.
But AI introduces a completely different interaction model.
One where the application is no longer just something you navigate.
It’s something you talk to.
Also… Russian homework
Ironically, this whole experiment started because I was trying to survive my Russian classes.
Using Copilot to correct grammar. Using AI to review exercises. Switching constantly between tools.
Eventually that daily workflow made me curious:
What happens if Copilot is not next to my application, but inside it?
Sometimes innovation doesn’t start with a big strategy.
Sometimes it starts with curiosity and a small personal frustration.
What comes next
This is just the beginning.
Now that AI can live inside applications:
conversations can become interfaces
tools can be invoked by language
workflows can become more flexible
We are moving from:
software you operate
to:
software you collaborate with
And honestly, that’s a very exciting direction.
Final thought
This entire journey started with a simple curiosity while studying Russian and writing code in the same week.
A few hours of experimentation later, Copilot was living inside my favorite framework.
And now I can’t imagine going back.
Note: The next article will go deep into the technical implementation — the architecture, the service layer, tool calling, and how I wired everything into XAF for both Blazor and WinForms.
Last week I was in Sochi on a ski trip. Instead of skiing, I got sick.
So I spent a few days locked in a hotel room, doing what I always do when I can’t move much: working. Or at least what looks like work. In reality, it’s my hobby.
YouTube wasn’t working well there, so I downloaded a few episodes in advance. Most of them were about OpenClaw and its creator, Peter Steinberger — also known for building PSPDFKit.
What started as passive watching turned into one of those rare moments of clarity you only get when you’re forced to slow down.
Shipping Code You Don’t Read (In the Right Context)
In one of the interviews, Peter said something that immediately caught my attention: he ships code he doesn’t review.
At first that sounds reckless. But then I realized… I sometimes do the same.
However, context matters.
Most of my daily work is research and development. I build experimental systems, prototypes, and proofs of concept — either for our internal office or for exploring ideas with clients. A lot of what I write is not production software yet. It’s exploratory. It’s about testing possibilities.
In that environment, I don’t always need to read every line of generated code.
If the use case works and the tests pass, that’s often enough.
I work mainly with C#, ASP.NET, Entity Framework, and XAF from DevExpress. I know these ecosystems extremely well. So if something breaks later, I can go in and fix it myself. But most of the time, the goal isn’t to perfect the implementation — it’s to validate the idea.
That’s a crucial distinction.
When writing production code for a customer, quality and review absolutely matter. You must inspect, verify, and ensure maintainability. But when working on experimental R&D, the priority is different: speed of validation and clarity of results.
In research mode, not every line needs to be perfect. It just needs to prove whether the idea works.
Working “Without Hands”
My real goal is to operate as much as possible without hands.
By that I mean minimizing direct human interaction with implementation. I want to express intent clearly enough so agents can execute it.
If I can describe a system precisely — especially in domains I know deeply — then the agent should be able to build, test, and refine it. My role becomes guiding and validating rather than manually constructing everything.
This is where modern development is heading.
The Problem With Vibe Coding
Peter talked about something that resonated deeply: when you’re vibe coding, you produce a lot of AI slop.
You prompt. The AI generates. You run it. It fails. You tweak. You run again. Still wrong. You tweak again.
Eventually, the human gets tired.
Even when you feel close to a solution, it’s not done until it’s actually done. And manually pushing that process forward becomes exhausting.
This is where many AI workflows break down. Not because the AI can’t generate solutions — but because the loop still depends too heavily on human intervention.
Closing the Loop
The key idea is simple and powerful: agentic development works when the agent can test and correct itself.
You must close the loop.
Instead of: human → prompt → AI → human checks → repeat
You want: AI → builds → tests → detects errors → fixes → tests again → repeat
The agent needs tools to evaluate its own output.
When AI can run tests, detect failures, and iterate automatically, something shifts. The process stops being experimental prompting and starts becoming real engineering.
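That closed loop can be sketched in a few lines. In this Python sketch, `build_attempt` is a stand-in for the AI producing a candidate, and `acceptance_test` is the human-defined “done”; the stub deliberately returns buggy drafts before a fixed one, so the loop has something to correct:

```python
# Minimal sketch of a closed agent loop: build -> test -> fix, repeated
# until the acceptance check passes, with no human in the repetitive cycle.
# Both functions are stand-ins for a real agent and a real test suite.

def acceptance_test(code: str) -> bool:
    # The human-defined "done": the generated code must contain a working add().
    ns: dict = {}
    try:
        exec(code, ns)
        return ns["add"](2, 3) == 5
    except Exception:
        return False

def build_attempt(attempt: int) -> str:
    # Stand-in for the AI: early attempts are buggy, later ones are fixed.
    if attempt < 2:
        return "def add(a, b): return a - b"   # buggy draft
    return "def add(a, b): return a + b"       # corrected draft

def closed_loop(max_iterations: int = 5) -> int:
    """Run build -> test -> fix until the acceptance criteria are satisfied."""
    for attempt in range(max_iterations):
        candidate = build_attempt(attempt)
        if acceptance_test(candidate):         # the loop closes itself here
            return attempt                     # number of retries it needed
    raise RuntimeError("loop did not converge")

print(closed_loop())  # the stub converges on the third attempt
```

The important design choice is that the feedback signal is executable: the agent doesn’t ask a human whether it is done, it runs the acceptance check and keeps iterating until reality agrees.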
Spec-Driven vs Self-Correcting Systems
Spec-driven development still matters. Some people dismiss it as too close to waterfall, but every methodology has flaws.
The real evolution is combining clear specifications with self-correcting loops.
The human defines:
The specification
The expected behavior
The acceptance criteria
Then the AI executes, tests, and refines until those criteria are satisfied.
The human doesn’t need to babysit every iteration. The human validates the result once the loop is closed.
Engineering vs Parasitic Ideas
There’s a concept from a book about parasitic ideas.
In social sciences, parasitic ideas can spread because they’re hard to disprove. In engineering, bad ideas fail quickly.
If you design a bridge incorrectly, it collapses. Reality provides immediate feedback.
Software — especially AI-generated software — needs the same grounding in reality. Without continuous testing and validation, generated code can drift into something that looks plausible but doesn’t work.
Closing the loop forces ideas to confront reality.
Tests are that reality.
Taking the Human Out of the Repetitive Loop
The goal isn’t removing humans entirely. It’s removing humans from repetitive validation.
The human should:
Define the specification
Define what “done” means
Approve the final result
The AI should:
Implement
Test
Detect issues
Fix itself
Repeat until success
When that happens, development becomes scalable in a new way. Not because AI writes code faster — but because AI can finish what it starts.
What I Realized in That Hotel Room
Getting sick in Sochi wasn’t part of the plan. But it forced me to slow down long enough to notice something important.
Most friction in modern development isn’t writing code. It’s closing loops.
We generate faster than we validate. We start more than we finish. We rely on humans to constantly re-check work that machines could verify themselves.
In research and experimental work, it’s fine not to inspect every line — as long as the system proves its behavior. In production work, deeper review is essential. Knowing when each approach applies is part of modern engineering maturity.
The future of agentic development isn’t just better models. It’s better loops.
Because in the end, nothing is finished until the loop is closed.
It’s Sunday — so maybe it’s time to write an article to break the flow I’ve been in lately. I’ve been deep into researching design patterns for Oqtane, the web application framework created by Shaun Walker.
Today I woke up really early, around 4:30 a.m. I went downstairs, made coffee, and decided to play around with some applications I had on my list. One of them was HotKey Typer by James Montemagno.
I ran it for the first time and instantly loved it. It’s super simple and useful — but I had a problem. I started using glasses a few years ago, and I generally have trouble with small UI elements on the computer. I usually work at 150% scaling. Unfortunately, James’s app has a fixed window size, so everything looked cut off.
Since I’ve been coding a lot lately, I figured it would be an easy fix. I tweaked it — and it worked! Everything looked better, but a bit too large, so I adjusted it again… and again… and again. Before I knew it, I had turned it into a totally different application.
I was vibe coding for four or five hours straight. In the end, I added a lot of new functionality because I genuinely loved the app and the idea behind it. I added sets (or collections) — basically groups of snippets you can assign to keys 1–9. Then I added autosave, a settings screen, and a reset option for the collections. Every time I finished one feature, I said, “Just one more thing.” Five minutes turned into five hours.
When I was done, I recorded a demo video. It was a lot of fun — and the result was genuinely useful. I even want to create an installer for myself so I can easily reinstall it if I ever reformat my computer. (I used to be that guy who formatted his PC every month. Not anymore… but you never know.)
Lessons From Vibe Coding
I learned a lot from this little experiment. I’ve been vibe coding nonstop for about three months now — I’ve even used up all my Copilot credits before the 25th of the month more than once! Vibe coding is a lot of fun, but it can easily spiral out of control and take you in the wrong direction.
Next week, I want to change my approach a bit — maybe follow a more structured pattern.
Another thing this reminded me of is how important it is to work in a team. My business partner, José Javier Columbie, has always helped me with that. We’ve been working together for about 10 years now. I’m the kind of developer who keeps rewriting, refactoring, optimizing, making things faster, reusable, turning them into plugins or frameworks — and sometimes the original task was actually quite small.
That’s where Javier comes in. He’s the one who says, “José, it’s done. This is what they asked for, and this is what we’re delivering.” He keeps me grounded. Every developer needs that — or at least needs to learn how to set that boundary for themselves.
Final Thoughts
So that’s my takeaway from today’s vibe coding session: have fun, but know when to stop.
I’ll include below the links to:
James Montemagno’s original HotKey Typer repository
A New Era of Computing: AI-Powered Devices Over Form Factor Innovations
In a recent Microsoft event, the spotlight was on a transformative innovation that highlights the power of AI over the constant pursuit of new device form factors. The unveiling of the new Surface computer, equipped with a Neural Processing Unit (NPU), demonstrates that enhancing existing devices with AI capabilities is more impactful than creating entirely new device types.
The Microsoft Event: Revolutionizing with AI
Microsoft showcased the new Surface computer, integrating an NPU that enhances performance by enabling real-time processing of AI algorithms on the device. This approach allows for advanced capabilities like enhanced voice recognition, real-time language translation, and sophisticated image processing, without relying on cloud services.
Why AI Integration Trumps New Form Factors
For years, the tech industry has focused on new device types, from tablets to foldable screens, often addressing problems that didn’t exist. However, the true advancement lies in making existing devices smarter. AI integration offers:
Enhanced Productivity: Automating repetitive tasks and providing intelligent suggestions, allowing users to focus on more complex and creative work.
Personalized Experience: Devices learn and adapt to user preferences, offering a highly customized experience.
Advanced Capabilities: NPUs enable local processing of complex AI models, reducing latency and dependency on the cloud.
Seamless Integration: AI creates a cohesive and efficient workflow across various applications and services.
Comparing to Humane Pin and Rabbit AI Devices
While devices like the Humane Pin and Rabbit AI offer innovative new form factors, they often rely heavily on cloud connectivity for AI functions. In contrast, the Surface’s NPU allows for faster, more secure local processing. This means tasks are completed more quickly and more securely, as data doesn’t need to be sent to the cloud.
Conclusion: Embracing AI-Driven Innovation
Microsoft’s AI-enhanced Surface computer signifies a shift towards intelligent augmentation rather than just physical redesign. By embedding AI within existing devices, we unlock new potentials for efficiency, personalization, and functionality, setting a new standard for future tech innovations. This approach not only makes interactions with technology smarter and more intuitive but also emphasizes the importance of on-device processing power for a faster and more secure user experience.