From Vibe Coding to Vibe Documenting: How I Turned 6 Hours of Chaos into 8 Minutes of Clarity

Most of us have fallen into the trap of what I like to call vibe coding. It’s that moment when you’re excited about an idea, you open your editor, call on your favorite AI assistant, and just… vibe. You throw half-baked requirements at the model, it spits out a lot of code, and for a while, it feels like progress.

The problem is, vibe coding usually leads to garbage code, wasted time, and mounting frustration. I know this because I recently spent six hours vibe coding a feature I could have completed in under ten minutes—once I stopped vibing and started documenting.

What Is Vibe Coding?

Vibe coding is coding without a plan. It’s asking an AI to build something from incomplete context, hoping it magically fills in the blanks.

It can look like:

  • Pasting vague prompts into an LLM: “Build me an activity stream module.”
  • Copy-pasting Stack Overflow snippets without really understanding them.
  • Letting AI hallucinate structures, dependencies, and business rules you never specified.

And it feels productive, because you see code flying across your screen. But what’s really happening is that the AI is guessing. It compiles imaginary versions of your system in its “head,” tries different routes, and produces lots of words that look like solutions but don’t actually fit your framework or needs. The result: chaos disguised as progress.

My Oqtane Activity Stream Story

Here’s a concrete example.

I wanted to build an activity stream—basically, a social-network-style feed—on top of Oqtane, a .NET-based CMS. Now, I know the domain of activity streams really well, but I decided to test how far I could get if I let AI build an Oqtane module for me as if I knew nothing about the framework.

For six hours, I vibe coded. I kept prompting the AI with fragments like:

  • “Make an Oqtane module for an activity feed.”
  • “Add a timeline of user events.”
  • “Hook this up to Oqtane’s structure.”

And the AI did what it does best: it generated code. Lots of it. But the code didn’t fit the Oqtane module lifecycle. It missed important patterns, created unnecessary complexity, and left me stuck in a trial-and-error spiral.

Six hours later, I had nothing usable. Just a pile of messy code and a headache.

The Switch to Vibe Documenting

Then I stepped back. Instead of continuing to let the AI guess, I wrote down what I already knew:

  • How an Oqtane module is structured.
  • What the activity stream needed to display.
  • The key integration points with the CMS.

In other words, I documented the requirements as if I were teaching someone new to Oqtane. Then, I fed that documentation to the AI.
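
To make that concrete, here is a minimal sketch of the kind of structural knowledge I wrote down: the manifest class every Oqtane module declares so the framework can discover and register it. The namespace, names, and version below are illustrative placeholders, not the actual module I built.

```csharp
using Oqtane.Models;
using Oqtane.Modules;

namespace MyCompany.Module.ActivityStream
{
    // Every Oqtane module declares a ModuleInfo class implementing IModule;
    // the framework discovers it at startup and registers the module.
    public class ModuleInfo : IModule
    {
        public ModuleDefinition ModuleDefinition => new ModuleDefinition
        {
            Name = "ActivityStream",
            Description = "A social-network-style activity feed",
            Version = "1.0.0",
            // The server-side manager that handles installation and
            // migrations; this assembly-qualified name is a placeholder.
            ServerManagerType = "MyCompany.Module.ActivityStream.Manager.ActivityStreamManager, MyCompany.Module.ActivityStream.Server.Oqtane",
            ReleaseVersions = "1.0.0"
        };
    }
}
```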

The result? In about eight minutes, I had a clean, working Oqtane module for my activity stream. No trial and error. No hallucinated patterns. Just code that fit perfectly into the framework.
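
For a flavor of what "fitting the framework" means, here is a simplified, hypothetical sketch of the kind of Blazor component an Oqtane module uses as its view. ModuleBase and ModuleState are real Oqtane building blocks; ActivityItem and the API route are illustrative names invented for this example.

```razor
@using System.Net.Http.Json
@using Oqtane.Modules
@namespace MyCompany.Module.ActivityStream
@inherits ModuleBase
@inject HttpClient Http

<div class="activity-stream">
    @foreach (var item in _items)
    {
        <div class="activity-item">
            <strong>@item.UserName</strong> @item.Action
            <span class="text-muted">@item.CreatedOn.ToShortTimeString()</span>
        </div>
    }
</div>

@code {
    // ActivityItem is an illustrative model for this sketch, not an Oqtane type.
    public class ActivityItem
    {
        public string UserName { get; set; } = "";
        public string Action { get; set; } = "";
        public DateTime CreatedOn { get; set; }
    }

    private List<ActivityItem> _items = new();

    protected override async Task OnInitializedAsync()
    {
        // ModuleState comes from Oqtane's ModuleBase and identifies the
        // specific module instance this component renders; scoping the
        // request to it was one of the integration points I documented.
        _items = await Http.GetFromJsonAsync<List<ActivityItem>>(
            $"api/ActivityStream?moduleid={ModuleState.ModuleId}") ?? new();
    }
}
```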

Why Documentation Beats Guesswork

The lesson was obvious: the AI is only as good as the clarity of its input. Documentation gives it structure, reducing the entropy of the problem. Without it, you’re effectively asking the AI to be psychic. With it, you’re giving the AI a blueprint it can execute on with precision.

Think about it this way:

  • Vibe coding = lots of code, little progress.
  • Vibe documenting = clear plan, fast progress.

The irony is that documentation often feels slower up front, but it repays that time many times over. In my case, it turned six wasted hours into eight minutes of actual productivity.

The Human Programmer’s Role

This experience reinforced something important: the human programmer isn’t going anywhere. Our role is to act as the bridge between vague ideas and structured requirements.

We’re the ones who take messy, half-formed thoughts and turn them into clear steps. That’s not just busywork—that’s the essence of engineering. Once those steps exist, the AI can handle the grunt work of coding far more effectively than it can guess at our intentions.

In other words: humans reduce chaos; AI executes clarity.

The Guru Lesson

I like to think of it as a guru’s journey. On one side, the vibe coder sits cross-legged in front of a retro computer, letting chaotic lines of code swirl around them. On the other, the vibe documenter floats serenely, armed with neat stacks of documentation, watching clean code flow effortlessly.

The wisdom is simple: don’t vibe code. Vibe document. It’s the difference between six hours of chaos and eight minutes of clarity.

Conclusion

AI coding assistants are incredible, but they’re not mind readers. If you skip documentation, you’ll spend hours wrestling with hallucinated code. If you take the time to document, you’ll unlock the real power of AI: rapid, reliable execution.

So the next time you feel the urge to vibe code, pause. Write down your requirements. Document your framework. Then let the AI do what it does best: build from clarity.

Because vibe coding wastes time—but vibe documenting saves it.

Comparing OpenAI’s ChatGPT and Microsoft’s Copilot mobile apps

OpenAI’s ChatGPT and Microsoft’s Copilot are two powerful AI tools that have revolutionized the way we interact with technology. While both are designed to assist users in various tasks, they each have unique features that set them apart.

OpenAI’s ChatGPT

ChatGPT, developed by OpenAI, is a large language model chatbot capable of communicating with users in a human-like way. It can answer questions, create recipes, write code, and offer advice. It uses a powerful generative AI model and has access to several tools it can use to complete tasks.

Key Features of ChatGPT

  • Chat with Images: Share images with ChatGPT and discuss them in a conversation.
  • Image Generation: Create images simply by describing them to ChatGPT.
  • Voice Chat: Use your voice to hold a back-and-forth conversation with ChatGPT.
  • Web Browsing: Let ChatGPT search the internet for additional, up-to-date information.
  • Advanced Data Analysis: Upload and interact with data files (Excel, CSV, JSON).

Microsoft’s Copilot

Microsoft’s Copilot is an AI companion that works everywhere you do and intelligently adapts to your needs. It can chat with text, voice, and images; summarize documents and web pages; create images; and use plugins and Copilot GPTs.

Key Features of Copilot

  • Chat with Text, Voice, and Images: Converse with Copilot by typing, speaking, or sharing images.
  • Summarization of Documents and Web Pages: Condense long documents and web pages into key points.
  • Image Creation: Generate images from text descriptions.
  • Web Grounding: Ground responses in current information from the web.
  • Plugins and Copilot GPTs: Extend Copilot with plugins and specialized Copilot GPTs.

Comparison of Mobile App Features

| Feature | OpenAI’s ChatGPT | Microsoft’s Copilot |
| --- | --- | --- |
| Chat with Text | Yes | Yes |
| Voice Input | Yes | Yes |
| Image Capabilities | Yes | Yes |
| Summarization | No | Yes |
| Image Creation | Yes | Yes |
| Web Grounding | No | Yes |

What Makes the Difference: The iPhone Action Button

The Action Button, available on the iPhone 15 Pro and later models, is a customizable button for quick tasks. By default, it toggles Silent mode, but users can configure it to perform various actions, including launching a specific app. When set to launch an app, pressing the Action Button instantly opens the chosen app, such as the ChatGPT voice interface. This integration is further enhanced by GPT-4o, which offers more accurate responses, better understanding of context, and faster processing. The result is smoother, more efficient voice interactions, letting users quickly and effectively communicate with the AI.

The ChatGPT voice interface is one of my favorite features, but there’s one thing missing for it to be perfect. Currently, you can’t send pictures or videos during a voice conversation. The workaround is to leave the voice interface, open the chat interface, find the voice conversation in the chat list, and upload the picture there. However, this brings another problem: you can’t return to the voice interface and continue the previous voice conversation.

Microsoft Copilot, if you are reading this, when will you add a voice interface? And when you finally do it, don’t forget to add the picture and video feature I want. That is all for my wishlist.