From Airport Chaos to Spec Clarity: How Writing Requirements Saved My Sanity

I thought vibe coding was chaotic at home. Try doing it while traveling halfway across the world.

Between layovers, hotel lobbies, and unpredictable Wi-Fi, I convinced myself I could keep momentum by letting AI carry the weight. Just toss it some prompts, let it generate code, and keep vibing in transit. Sounds good, right?

It wasn’t. Instead of progress, I found myself trapped in the same entropy loop as before—except now with added airport noise and bad coffee. It finally hit me: coding wasn’t the hard part anymore. The real challenge was distilling the chaos of my ideas into clear, executable requirements.

The Travel Chaos of Vibe Coding

While bouncing from Saint Petersburg to El Salvador, I leaned on vibe coding like a crutch. I threw half-formed prompts at the AI:

  • “Build me a service that works offline.”
  • “Hook this into a booking flow.”
  • “Make it sync when online again.”

And, of course, the AI delivered: endless snippets, scaffolds, and fragments. But none of it fit together. It was like watching a band jam without ever agreeing on the key. Six hours in, all I had was a disjointed mess—again.

Enter GitHub Spec Kit and New Perspectives

Somewhere between flights, I stumbled on GitHub Spec Kit, thanks to a Visual Studio Code podcast episode: Let it Cook – Introducing Spec Kit for Spec-Driven Development! (Episode 13).

Not long after, I tuned into the Merge Conflict podcast: All in on Spec-Driven Development (Episode 479), where James Montemagno and Frank Krueger broke down what spec-driven workflows really mean for developers.

Spec Kit showed me a different angle: instead of treating the AI like a mind reader, treat it like a contractor. Write clear specs, break them down into tasks, and then let the AI handle execution.

James and Frank went further. They contrasted waterfall (where everything is specified upfront) with agile (where progress is iterative and requirements evolve). Their point was simple but profound: no matter the methodology, you can’t skip requirements. Even agile depends on clarity at each iteration.

The Programmer’s True Role

That’s when it clicked: my job as a human programmer isn’t to crank out lines of code anymore. The AI can do that faster than I ever could. My job is to reduce entropy.

I take vague ideas, half-baked business rules, and chaotic travel thoughts—and refine them into something structured. That’s the blueprint AI thrives on. Without it, I’m asking the model to improvise a symphony from random notes. With it, I get clean, working solutions in minutes.

Why Requirements Are the Real Magic

Spec Kit and similar tools are amazing, but they don’t remove the hardest part—they expose it. Writing good requirements is the bottleneck. Once that’s done, the rest flows.

Think of it this way:

  • Vibe coding while traveling = chaos squared.
  • Spec-driven clarity = progress even in noisy, unpredictable environments.

It’s not about choosing waterfall or agile. It’s about embracing the timeless truth that clarity upfront—whether in a full spec or a tight user story—is what makes AI effective.


Conclusion

My journey from vibe coding on the road to spec-driven clarity taught me that code is no longer the hardest problem. The real magic lies in writing requirements that reduce chaos and give AI a fighting chance to deliver.

So next time you feel tempted to vibe code—whether at home or 30,000 feet in the air—pause. Write the requirement. Structure the idea. Then let the AI do what it does best: execute clarity at scale.

Because in the end, humans reduce entropy. AI executes clarity.

Related Articles 

From Vibe Coding to Vibe Documenting: How I Turned 6 Hours of Chaos into 8 Minutes of Clarity

Most of us have fallen into the trap of what I like to call vibe coding. It’s that moment when you’re excited about an idea, you open your editor, call on your favorite AI assistant, and just… vibe. You throw half-baked requirements at the model, it spits out a lot of code, and for a while, it feels like progress.

The problem is, vibe coding usually leads to garbage code, wasted time, and mounting frustration. I know this because I recently spent six hours vibe coding a feature I could have completed in under ten minutes—once I stopped vibing and started documenting.

What Is Vibe Coding?

Vibe coding is coding without a plan. It’s asking an AI to build something from incomplete context, hoping it magically fills in the blanks.

It can look like:

  • Pasting vague prompts into an LLM: “Build me an activity stream module.”
  • Copy-pasting Stack Overflow snippets without really understanding them.
  • Letting AI hallucinate structures, dependencies, and business rules you never specified.

And it feels productive, because you see code flying across your screen. But what’s really happening is that the AI is guessing. It compiles imaginary versions of your system in its “head,” tries different routes, and produces lots of words that look like solutions but don’t actually fit your framework or needs. The result: chaos disguised as progress.

My Oqtane Activity Stream Story

Here’s a concrete example.

I wanted to build an activity stream—basically, a social-network-style feed—on top of Oqtane, a .NET-based CMS. Now, I know the domain of activity streams really well, but I decided to test how far I could get if I let AI build an Oqtane module for me as if I knew nothing about the framework.

For six hours, I vibe coded. I kept prompting the AI with fragments like:

  • “Make an Oqtane module for an activity feed.”
  • “Add a timeline of user events.”
  • “Hook this up to Oqtane’s structure.”

And the AI did what it does best: it generated code. Lots of it. But the code didn’t fit the Oqtane module lifecycle. It missed important patterns, created unnecessary complexity, and left me stuck in a trial-and-error spiral.

Six hours later, I had nothing usable. Just a pile of messy code and a headache.

The Switch to Vibe Documenting

Then I stepped back. Instead of continuing to let the AI guess, I wrote down what I already knew:

  • How an Oqtane module is structured.
  • What the activity stream needed to display.
  • The key integration points with the CMS.

In other words, I documented the requirements as if I were teaching someone new to Oqtane. Then, I fed that documentation to the AI.

The result? In about eight minutes, I had a clean, working Oqtane module for my activity stream. No trial and error. No hallucinated patterns. Just code that fit perfectly into the framework.

Why Documentation Beats Guesswork

The lesson was obvious: the AI is only as good as the clarity of its input. Documentation gives it structure, reducing the entropy of the problem. Without it, you’re effectively asking the AI to be psychic. With it, you’re giving the AI a blueprint it can execute on with precision.

Think about it this way:

  • Vibe coding = lots of code, little progress.
  • Vibe documenting = clear plan, fast progress.

The irony is that documentation often feels slower up front—but it pays for itself many times over later. In my case, it turned six wasted hours into eight minutes of actual productivity.

The Human Programmer’s Role

This experience reinforced something important: the human programmer isn’t going anywhere. Our role is to act as the bridge between vague ideas and structured requirements.

We’re the ones who take messy, half-formed thoughts and turn them into clear steps. That’s not just busywork—that’s the essence of engineering. Once those steps exist, the AI can handle the grunt work of coding far more effectively than it can guess at our intentions.

In other words: humans reduce chaos; AI executes clarity.

The Guru Lesson

I like to think of it as a guru’s journey. On one side, the vibe coder sits cross-legged in front of a retro computer, letting chaotic lines of code swirl around them. On the other, the vibe documenter floats serenely, armed with neat stacks of documentation, watching clean code flow effortlessly.

The wisdom is simple: don’t vibe code. Vibe document. It’s the difference between six hours of chaos and eight minutes of clarity.

Conclusion

AI coding assistants are incredible, but they’re not mind readers. If you skip documentation, you’ll spend hours wrestling with hallucinated code. If you take the time to document, you’ll unlock the real power of AI: rapid, reliable execution.

So the next time you feel the urge to vibe code, pause. Write down your requirements. Document your framework. Then let the AI do what it does best: build from clarity.

Because vibe coding wastes time—but vibe documenting saves it.

DevExpress Documentation is now accessible as an MCP server

Great News for DevExpress and GitHub Copilot Users!

I have exciting news for developers using DevExpress and GitHub Copilot together.

Lately, I’ve been writing a lot of code and absolutely love using GitHub Copilot for this work. I initially used it in VS Code. While I wasn’t a big fan of VS Code before, it’s always been Microsoft’s favorite child and consistently gets the newest and shiniest functionality first. Now Visual Studio (for serious development work, haha) is also getting love from Microsoft, and both IDEs have implemented agent mode.

Following up on this good news, today (August 5, 2025) I saw a post from Dennis Garavsky of DevExpress in our Facebook group (https://www.facebook.com/groups/701851593510732). He shared instructions on how to use the DevExpress Documentation MCP server – it’s basically just 3 simple steps:

Setup Instructions

1. Enable Agent Mode on GitHub Copilot

First, enable agent mode on GitHub Copilot. You can find detailed instructions here:
https://learn.microsoft.com/en-us/visualstudio/ide/copilot-agent-mode?view=vs-2022

2. Create the MCP Configuration File

Create a .mcp.json file in your user profile directory. You can find your user directory by pasting %USERPROFILE% into Windows Explorer.

Here’s the content for the .mcp.json file:

{
  "servers": {
    "dxdocs": {
      "url": "https://api.devexpress.com/mcp/docs",
      "type": "http"
    },
    "msdocs": {
      "url": "https://learn.microsoft.com/api/mcp",
      "type": "http"
    }
  }
}

3. Enable MCP in Your Tool

Enable the MCP in your development environment (see the attached screenshot for reference).

How to Use It

Now you can add the phrase “Use dxdocs” to your prompts and voilà! The magic happens.

Example prompt:

“Create a domain object for a customer, add a code name and address property and validate for null using XAF validation rules. Use dxdocs”

Important Disclaimer from DevExpress

The DevExpress MCP Documentation Server is currently available as a preview. Certain DevExpress-related coding tasks may still need further server and prompt fine-tuning. Please share your experience in the comments – both with and without this MCP. Let us know what needs improvement, what additional steps you took to get better results with AI tools in general, and any other feedback you have.


Happy coding with your new AI-powered DevExpress development experience!

Understanding the N+1 Database Problem using Entity Framework Core

What is the N+1 Problem?

Imagine you’re running a blog website and want to display a list of all blogs along with how many posts each one has. The N+1 problem is a common database performance issue that happens when your application makes way too many database trips to get this simple information.

Our Test Database Setup

Our test suite creates a realistic blog scenario with:

  • 3 different blogs
  • Multiple posts for each blog
  • Comments on posts
  • Tags associated with blogs

This mirrors real-world applications where data is interconnected and needs to be loaded efficiently.
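
For reference, here is a minimal sketch of what the entity model behind these tests might look like. The classes are not shown in the original test suite; the property names are inferred from the SQL output later in the article, and the virtual navigation properties are what allow EF Core’s lazy-loading proxies to work in Test Case 1.

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; } = "";
    public string Description { get; set; } = "";
    public DateTime CreatedDate { get; set; }

    // Virtual navigation properties let lazy-loading proxies intercept access
    public virtual List<Post> Posts { get; set; } = new();
    public virtual List<Tag> Tags { get; set; } = new();
}

public class Post
{
    public int Id { get; set; }
    public int BlogId { get; set; }
    public string Title { get; set; } = "";
    public string Content { get; set; } = "";
    public DateTime PublishedDate { get; set; }

    public virtual Blog Blog { get; set; } = null!;
    public virtual List<Comment> Comments { get; set; } = new();
}

public class Comment
{
    public int Id { get; set; }
    public int PostId { get; set; }
    public string Author { get; set; } = "";
    public string Content { get; set; } = "";
    public DateTime CreatedDate { get; set; }
}

public class Tag
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public virtual List<Blog> Blogs { get; set; } = new();
}

Lazy loading additionally requires the Microsoft.EntityFrameworkCore.Proxies package and a call to UseLazyLoadingProxies() when configuring the DbContext.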

Test Case 1: The Classic N+1 Problem (Lazy Loading)

What it does: This test demonstrates how “lazy loading” can accidentally create the N+1 problem. Lazy loading sounds helpful – it automatically fetches related data when you need it. But this convenience comes with a hidden cost.

The Code:

[Test]
public void Test_N_Plus_One_Problem_With_Lazy_Loading()
{
    var blogs = _context.Blogs.ToList(); // Query 1: Load blogs
    
    foreach (var blog in blogs)
    {
        var postCount = blog.Posts.Count; // Each access triggers a query!
        TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {postCount}");
    }
}

The SQL Queries Generated:

-- Query 1: Load all blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"

-- Query 2: Load posts for Blog 1 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 1

-- Query 3: Load posts for Blog 2 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 2

-- Query 4: Load posts for Blog 3 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 3

The Problem: 4 total queries (1 + 3) – Each time you access blog.Posts.Count, lazy loading triggers a separate database trip.

Test Case 2: Alternative N+1 Demonstration

What it does: This test manually recreates the N+1 pattern to show exactly what’s happening, even if lazy loading isn’t working properly.

The Code:

[Test]
public void Test_N_Plus_One_Problem_Alternative_Approach()
{
    var blogs = _context.Blogs.ToList(); // Query 1
    
    foreach (var blog in blogs)
    {
        // This explicitly loads posts for THIS blog only (simulates lazy loading)
        var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList();
        TestLogger.WriteLine($"Loaded {posts.Count} posts for blog {blog.Id}");
    }
}

The Lesson: This explicitly demonstrates the N+1 pattern with manual queries. The result is identical to lazy loading – one query per blog plus the initial blogs query.

Test Case 3: N+1 vs Include() – Side by Side Comparison

What it does: This is the money shot – a direct comparison showing the dramatic difference between the problematic approach and the solution.

The Bad Code (N+1):

// BAD: N+1 Problem
var blogsN1 = _context.Blogs.ToList(); // Query 1
foreach (var blog in blogsN1)
{
    var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList(); // Queries 2,3,4...
}

The Good Code (Include):

// GOOD: Include() Solution  
var blogsInclude = _context.Blogs
    .Include(b => b.Posts)
    .ToList(); // Single query with JOIN

foreach (var blog in blogsInclude)
{
    // No additional queries needed - data is already loaded!
    var postCount = blog.Posts.Count;
}

The SQL Queries:

Bad Approach (Multiple Queries):

-- Same 4 separate queries as shown in Test Case 1

Good Approach (Single Query):

SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title", 
       "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"

Results from our test:

  • Bad approach: 4 total queries (1 + 3)
  • Good approach: 1 total query
  • Performance improvement: 75% fewer database round trips!

Test Case 4: Guaranteed N+1 Problem

What it does: This test removes any doubt by explicitly demonstrating the N+1 pattern with clear step-by-step output.

The Code:

[Test]
public void Test_Guaranteed_N_Plus_One_Problem()
{
    var blogs = _context.Blogs.ToList(); // Query 1
    int queryCount = 1;

    foreach (var blog in blogs)
    {
        queryCount++;
        // This explicitly demonstrates the N+1 pattern
        var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList();
        TestLogger.WriteLine($"Loading posts for blog '{blog.Title}' (Query #{queryCount})");
    }
}

Why it’s useful: This ensures we can always see the problem clearly by manually executing the problematic pattern, making it impossible to miss.

Test Case 5: Eager Loading with Include()

What it does: Shows the correct way to load related data upfront using Include().

The Code:

[Test]
public void Test_Eager_Loading_With_Include()
{
    var blogsWithPosts = _context.Blogs
        .Include(b => b.Posts)
        .ToList();

    foreach (var blog in blogsWithPosts)
    {
        // No additional queries - data already loaded!
        TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {blog.Posts.Count}");
    }
}

The SQL Query:

SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title", 
       "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"

The Benefit: One database trip loads everything. When you access blog.Posts.Count, the data is already there.

Test Case 6: Multiple Includes with ThenInclude()

What it does: Demonstrates loading deeply nested data – blogs → posts → comments – all in one query.

The Code:

[Test]
public void Test_Multiple_Includes_With_ThenInclude()
{
    var blogsWithPostsAndComments = _context.Blogs
        .Include(b => b.Posts)
            .ThenInclude(p => p.Comments)
        .ToList();

    foreach (var blog in blogsWithPostsAndComments)
    {
        foreach (var post in blog.Posts)
        {
            // All data loaded in one query!
            TestLogger.WriteLine($"Post: {post.Title} - Comments: {post.Comments.Count}");
        }
    }
}

The SQL Query:

SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
       "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title",
       "c"."Id", "c"."Author", "c"."Content", "c"."CreatedDate", "c"."PostId"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
LEFT JOIN "Comments" AS "c" ON "p"."Id" = "c"."PostId"
ORDER BY "b"."Id", "p"."Id"

The Benefit: Three levels of data load in one optimized query instead of potentially hundreds of separate queries.

Test Case 7: Projection with Select()

What it does: Shows how to load only the specific data you actually need instead of entire objects.

The Code:

[Test]
public void Test_Projection_With_Select()
{
    var blogData = _context.Blogs
        .Select(b => new
        {
            BlogTitle = b.Title,
            PostCount = b.Posts.Count(),
            RecentPosts = b.Posts
                .OrderByDescending(p => p.PublishedDate)
                .Take(2)
                .Select(p => new { p.Title, p.PublishedDate })
        })
        .ToList();
}

The SQL Query (from our test output):

SELECT "b"."Title", (
    SELECT COUNT(*)
    FROM "Posts" AS "p"
    WHERE "b"."Id" = "p"."BlogId"), "b"."Id", "t0"."Title", "t0"."PublishedDate", "t0"."Id"
FROM "Blogs" AS "b"
LEFT JOIN (
    SELECT "t"."Title", "t"."PublishedDate", "t"."Id", "t"."BlogId"
    FROM (
        SELECT "p0"."Title", "p0"."PublishedDate", "p0"."Id", "p0"."BlogId", 
               ROW_NUMBER() OVER(PARTITION BY "p0"."BlogId" ORDER BY "p0"."PublishedDate" DESC) AS "row"
        FROM "Posts" AS "p0"
    ) AS "t"
    WHERE "t"."row" <= 2
) AS "t0" ON "b"."Id" = "t0"."BlogId"
ORDER BY "b"."Id", "t0"."BlogId", "t0"."PublishedDate" DESC

Why it matters: This query only loads the specific fields needed, uses window functions for efficiency, and calculates counts in the database rather than loading full objects.

Test Case 8: Split Query Strategy

What it does: Demonstrates an alternative approach where one large JOIN is split into multiple optimized queries.

The Code:

[Test]
public void Test_Split_Query()
{
    var blogs = _context.Blogs
        .AsSplitQuery()
        .Include(b => b.Posts)
        .Include(b => b.Tags)
        .ToList();
}

The SQL Queries (from our test output):

-- Query 1: Load blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
ORDER BY "b"."Id"

-- Query 2: Load posts (automatically generated)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title", "b"."Id"
FROM "Blogs" AS "b"
INNER JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"

-- Query 3: Load tags (automatically generated)
SELECT "t"."Id", "t"."Name", "b"."Id"
FROM "Blogs" AS "b"
INNER JOIN "BlogTag" AS "bt" ON "b"."Id" = "bt"."BlogsId"
INNER JOIN "Tags" AS "t" ON "bt"."TagsId" = "t"."Id"
ORDER BY "b"."Id"

When to use it: When JOINing lots of related data creates one massive, slow query. Split queries break this into several smaller, faster queries.

Test Case 9: Filtered Include()

What it does: Shows how to load only specific related data – in this case, only recent posts from the last 15 days.

The Code:

[Test]
public void Test_Filtered_Include()
{
    var cutoffDate = DateTime.Now.AddDays(-15);
    var blogsWithRecentPosts = _context.Blogs
        .Include(b => b.Posts.Where(p => p.PublishedDate > cutoffDate))
        .ToList();
}

The SQL Query:

SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
       "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId" AND "p"."PublishedDate" > @cutoffDate
ORDER BY "b"."Id"

The Efficiency: Only loads posts that meet the criteria, reducing data transfer and memory usage.

Test Case 10: Explicit Loading

What it does: Demonstrates manually controlling when related data gets loaded.

The Code:

[Test]
public void Test_Explicit_Loading()
{
    var blogs = _context.Blogs.ToList(); // Load blogs only
    
    // Now explicitly load posts for all blogs
    foreach (var blog in blogs)
    {
        _context.Entry(blog)
            .Collection(b => b.Posts)
            .Load();
    }
}

The SQL Queries:

-- Query 1: Load blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"

-- Query 2: Explicitly load posts for blog 1
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 1

-- Query 3: Explicitly load posts for blog 2
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 2

-- ... and so on

When useful: When you sometimes need related data and sometimes don’t. You control exactly when the additional database trip happens.

Test Case 11: Batch Loading Pattern

What it does: Shows a clever technique to avoid N+1 by loading all related data in one query, then organizing it in memory.

The Code:

[Test]
public void Test_Batch_Loading_Pattern()
{
    var blogs = _context.Blogs.ToList(); // Query 1
    var blogIds = blogs.Select(b => b.Id).ToList();

    // Single query to get all posts for all blogs
    var posts = _context.Posts
        .Where(p => blogIds.Contains(p.BlogId))
        .ToList(); // Query 2

    // Group posts by blog in memory
    var postsByBlog = posts.GroupBy(p => p.BlogId).ToDictionary(g => g.Key, g => g.ToList());
}

The SQL Queries:

-- Query 1: Load all blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"

-- Query 2: Load ALL posts for ALL blogs in one query
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" IN (1, 2, 3)

The Result: Just 2 queries total, regardless of how many blogs you have. Data organization happens in memory.
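
Once the dictionary is built, per-blog access becomes a pure in-memory lookup with no further database trips. Continuing the test above, for example:

foreach (var blog in blogs)
{
    // No query here - fall back to an empty list for blogs that have no posts
    var blogPosts = postsByBlog.TryGetValue(blog.Id, out var value)
        ? value
        : new List<Post>();

    TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {blogPosts.Count}");
}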

Test Case 12: Performance Comparison

What it does: Puts all the approaches head-to-head to show their relative performance.

The Code:

[Test]
public void Test_Performance_Comparison()
{
    // N+1 Problem (Multiple Queries)
    var blogs1 = _context.Blogs.ToList();
    foreach (var blog in blogs1)
    {
        var count = blog.Posts.Count(); // Triggers separate query
    }

    // Eager Loading (Single Query)
    var blogs2 = _context.Blogs
        .Include(b => b.Posts)
        .ToList();

    // Projection (Minimal Data)
    var blogSummaries = _context.Blogs
        .Select(b => new { b.Title, PostCount = b.Posts.Count() })
        .ToList();
}

The SQL Queries Generated:

N+1 Problem: 4 separate queries (as shown in previous examples)

Eager Loading: 1 JOIN query (as shown in Test Case 5)

Projection: 1 optimized query with subquery:

SELECT "b"."Title", (
    SELECT COUNT(*)
    FROM "Posts" AS "p"
    WHERE "b"."Id" = "p"."BlogId") AS "PostCount"
FROM "Blogs" AS "b"

Real-World Performance Impact

Let’s scale this up to see why it matters:

Small Application (10 blogs):

  • N+1 approach: 11 queries (≈110ms)
  • Optimized approach: 1 query (≈10ms)
  • Time saved: 100ms

Medium Application (100 blogs):

  • N+1 approach: 101 queries (≈1,010ms)
  • Optimized approach: 1 query (≈10ms)
  • Time saved: 1 second

Large Application (1000 blogs):

  • N+1 approach: 1001 queries (≈10,010ms)
  • Optimized approach: 1 query (≈10ms)
  • Time saved: 10 seconds

Key Takeaways

  1. The N+1 problem gets dramatically worse as your data grows – every additional parent row adds another query
  2. Lazy loading is convenient but dangerous – it can hide performance problems
  3. Include() is your friend for loading related data efficiently
  4. Projection is powerful when you only need specific fields
  5. Different problems need different solutions – there’s no one-size-fits-all approach
  6. SQL query inspection is crucial – always check what queries your ORM generates
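
A minimal way to do that last step during development, assuming EF Core 5 or later (and the SQLite provider, which the quoted identifiers in the SQL above suggest), is to wire simple logging into the DbContext configuration:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlite("Data Source=blog.db")   // any provider works the same way
        .LogTo(Console.WriteLine,           // print every generated SQL statement
               Microsoft.Extensions.Logging.LogLevel.Information)
        .EnableSensitiveDataLogging();      // include parameter values (development only)
}

With this in place, an N+1 pattern is immediately visible in the output as a burst of near-identical SELECT statements, exactly like the ones shown throughout this article.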

The Bottom Line

This test suite shows that small changes in how you write database queries can transform a slow, database-heavy operation into a fast, efficient one. The difference between a frustrated user waiting 10 seconds for a page to load and a happy user getting instant results often comes down to understanding and avoiding the N+1 problem.

The beauty of these tests is that they use real database queries with actual SQL output, so you can see exactly what’s happening under the hood. Understanding these patterns will make you a more effective developer and help you build applications that stay fast as they grow.

You can find the source code for this article in my repository.

Day 4 (the missing day): Building Data Import/Export Services for Your ERP System

Welcome back to our ERP development series! In previous days, we’ve covered the foundational architecture, database design, and core entity structures for our accounting system. Today, we’re tackling an essential but often overlooked aspect of any enterprise software: data import and export capabilities.

Why is this important? Because no enterprise system exists in isolation. Companies need to move data between systems, migrate from legacy software, or simply handle batch data operations. In this article, we’ll build robust import/export services for the Chart of Accounts, demonstrating principles you can apply to any part of your ERP system.

The Importance of Data Exchange

Before diving into the code, let’s understand why dedicated import/export functionality matters:

  1. Data Migration – When companies adopt your ERP, they need to transfer existing data
  2. System Integration – ERPs need to exchange data with other business systems
  3. Batch Processing – Accountants often prepare data in spreadsheets before importing
  4. Backup & Transfer – Provides a simple way to backup or transfer configurations
  5. User Familiarity – Many users are comfortable working with CSV files

CSV (Comma-Separated Values) is our format of choice because it’s universally supported and easily edited in spreadsheet applications like Excel, which most business users are familiar with.

Our Implementation Approach

For our Chart of Accounts module, we’ll create:

  1. A service interface defining import/export operations
  2. A concrete implementation handling CSV parsing/generation
  3. Unit tests verifying all functionality

Our goal is to maintain clean separation of concerns, robust error handling, and clear validation rules.

Defining the Interface

First, we define a clear contract for our import/export service:

/// <summary>
/// Interface for chart of accounts import/export operations
/// </summary>
public interface IAccountImportExportService
{
    /// <summary>
    /// Imports accounts from a CSV file
    /// </summary>
    /// <param name="csvContent">Content of the CSV file as a string</param>
    /// <param name="userName">User performing the operation</param>
    /// <returns>Collection of imported accounts and any validation errors</returns>
    Task<(IEnumerable<IAccount> ImportedAccounts, IEnumerable<string> Errors)> ImportFromCsvAsync(string csvContent, string userName);

    /// <summary>
    /// Exports accounts to a CSV format
    /// </summary>
    /// <param name="accounts">Accounts to export</param>
    /// <returns>CSV content as a string</returns>
    Task<string> ExportToCsvAsync(IEnumerable<IAccount> accounts);
}

Notice how we use C# tuples to return both the imported accounts and any validation errors from the import operation. This gives callers full insight into the operation’s results.

Implementing CSV Import

The import method is the more complex of the two, requiring:

  1. Parsing and validating the CSV structure
  2. Converting CSV data to domain objects
  3. Validating the created objects
  4. Reporting any errors along the way

Here’s our implementation approach:

public async Task<(IEnumerable<IAccount> ImportedAccounts, IEnumerable<string> Errors)> ImportFromCsvAsync(string csvContent, string userName)
{
    List<AccountDto> importedAccounts = new List<AccountDto>();
    List<string> errors = new List<string>();

    if (string.IsNullOrEmpty(csvContent))
    {
        errors.Add("CSV content is empty");
        return (importedAccounts, errors);
    }

    try
    {
        // Split the CSV into lines
        string[] lines = csvContent.Split(new[] { "\r\n", "\r", "\n" }, StringSplitOptions.RemoveEmptyEntries);
        
        if (lines.Length <= 1)
        {
            errors.Add("CSV file contains no data rows");
            return (importedAccounts, errors);
        }

        // Assume first line is header
        string[] headers = ParseCsvLine(lines[0]);
        
        // Validate headers
        if (!ValidateHeaders(headers, errors))
        {
            return (importedAccounts, errors);
        }

        // Process data rows
        for (int i = 1; i < lines.Length; i++)
        {
            string[] fields = ParseCsvLine(lines[i]);
            
            if (fields.Length != headers.Length)
            {
                errors.Add($"Line {i + 1}: Column count mismatch. Expected {headers.Length}, got {fields.Length}");
                continue;
            }

            var account = CreateAccountFromCsvFields(headers, fields);
            
            // Validate account
            if (!_accountValidator.ValidateAccount(account))
            {
                errors.Add($"Line {i + 1}: Account validation failed for account {account.AccountName}");
                continue;
            }

            // Set audit information
            _auditService.SetCreationAudit(account, userName);
            
            importedAccounts.Add(account);
        }

        return (importedAccounts, errors);
    }
    catch (Exception ex)
    {
        errors.Add($"Error importing CSV: {ex.Message}");
        return (importedAccounts, errors);
    }
}

Key aspects of this implementation:

  1. Early validation – We quickly detect and report basic issues like empty input
  2. Row-by-row processing – Each line is processed independently, allowing partial success
  3. Detailed error reporting – We collect specific errors with line numbers
  4. Domain validation – We apply business rules from AccountValidator
  5. Audit trail – We set audit fields for each imported account

The ParseCsvLine method handles the complexities of CSV parsing, including quoted fields that may contain commas:

private string[] ParseCsvLine(string line)
{
    List<string> fields = new List<string>();
    bool inQuotes = false;
    int startIndex = 0;
    
    for (int i = 0; i < line.Length; i++)
    {
        if (line[i] == '"')
        {
            inQuotes = !inQuotes;
        }
        else if (line[i] == ',' && !inQuotes)
        {
            fields.Add(line.Substring(startIndex, i - startIndex).Trim().TrimStart('"').TrimEnd('"'));
            startIndex = i + 1;
        }
    }
    
    // Add the last field
    fields.Add(line.Substring(startIndex).Trim().TrimStart('"').TrimEnd('"'));
    
    return fields.ToArray();
}

Implementing CSV Export

The export method is simpler, converting domain objects to CSV format:

public Task<string> ExportToCsvAsync(IEnumerable<IAccount> accounts)
{
    if (accounts == null || !accounts.Any())
    {
        return Task.FromResult(GetCsvHeader());
    }

    StringBuilder csvBuilder = new StringBuilder();
    
    // Add header
    csvBuilder.AppendLine(GetCsvHeader());
    
    // Add data rows
    foreach (var account in accounts)
    {
        csvBuilder.AppendLine(GetCsvRow(account));
    }
    
    return Task.FromResult(csvBuilder.ToString());
}

We take special care to handle edge cases like null or empty collections, making the API robust against improper usage.
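
The GetCsvHeader and GetCsvRow helpers referenced above are not shown in this article. As a rough sketch of what they might look like (the exact column set is an assumption for illustration; it just has to match what ImportFromCsvAsync expects):

private string GetCsvHeader()
{
    // Column order must line up with the header validation on the import side
    return "AccountName,OfficialCode,AccountType";
}

private string GetCsvRow(IAccount account)
{
    // Quote each field so embedded commas survive ParseCsvLine on re-import
    return string.Join(",",
        $"\"{account.AccountName}\"",
        $"\"{account.OfficialCode}\"",
        $"\"{account.AccountType}\"");
}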

Testing the Implementation

Our test suite verifies both the happy paths and various error conditions:

  1. Import validation – Tests for empty content, missing headers, etc.
  2. Export formatting – Tests for proper CSV generation, handling of special characters
  3. Round-trip integrity – Tests exporting and re-importing preserves data integrity

For example, here’s a round-trip test to verify data integrity:

[Test]
public async Task RoundTrip_ExportThenImport_PreservesAccounts()
{
    // Arrange
    var originalAccounts = new List<IAccount>
    {
        new AccountDto
        {
            Id = Guid.NewGuid(),
            AccountName = "Cash",
            OfficialCode = "11000",
            AccountType = AccountType.Asset,
            // other properties...
        },
        new AccountDto
        {
            Id = Guid.NewGuid(),
            AccountName = "Accounts Receivable",
            OfficialCode = "12000",
            AccountType = AccountType.Asset,
            // other properties...
        }
    };

    // Act
    string csv = await _importExportService.ExportToCsvAsync(originalAccounts);
    var (importedAccounts, errors) = await _importExportService.ImportFromCsvAsync(csv, "Test User");

    // Assert
    Assert.That(errors, Is.Empty);
    Assert.That(importedAccounts.Count(), Is.EqualTo(originalAccounts.Count));
    
    // Check first account
    var firstOriginal = originalAccounts[0];
    var firstImported = importedAccounts.First();
    Assert.That(firstImported.AccountName, Is.EqualTo(firstOriginal.AccountName));
    Assert.That(firstImported.OfficialCode, Is.EqualTo(firstOriginal.OfficialCode));
    Assert.That(firstImported.AccountType, Is.EqualTo(firstOriginal.AccountType));
    
    // Check second account similarly...
}

Integration with the Broader System

This service isn’t meant to be used in isolation. In a complete ERP system, you’d typically:

  1. Add a controller to expose these operations via API endpoints (a minimal sketch follows this list)
  2. Create UI components for file upload/download
  3. Implement progress reporting for larger imports
  4. Add transaction support to make imports atomic
  5. Include validation rules specific to your business domain
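
For the first point, a minimal ASP.NET Core controller wrapping the service might look like the sketch below. The route, the file handling, and the IAccountRepository used for export are illustrative assumptions, not part of the implementation in this series.

[ApiController]
[Route("api/accounts")]
public class AccountImportExportController : ControllerBase
{
    private readonly IAccountImportExportService _importExportService;

    public AccountImportExportController(IAccountImportExportService importExportService)
    {
        _importExportService = importExportService;
    }

    [HttpPost("import")]
    public async Task<IActionResult> Import(IFormFile file)
    {
        using var reader = new StreamReader(file.OpenReadStream());
        string csvContent = await reader.ReadToEndAsync();

        var (importedAccounts, errors) = await _importExportService
            .ImportFromCsvAsync(csvContent, User.Identity?.Name ?? "anonymous");

        return Ok(new { Imported = importedAccounts.Count(), Errors = errors });
    }

    [HttpGet("export")]
    public async Task<IActionResult> Export([FromServices] IAccountRepository repository)
    {
        // IAccountRepository is a hypothetical data source for the accounts to export
        var accounts = await repository.GetAllAsync();
        string csv = await _importExportService.ExportToCsvAsync(accounts);
        return File(System.Text.Encoding.UTF8.GetBytes(csv), "text/csv", "chart-of-accounts.csv");
    }
}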

Design Patterns and Best Practices

Our implementation exemplifies several important patterns:

  1. Interface Segregation – The service has a focused, cohesive purpose
  2. Dependency Injection – We inject the IAuditService rather than creating it
  3. Early Validation – We validate input before processing
  4. Detailed Error Reporting – We collect and return specific errors
  5. Defensive Programming – We handle edge cases and exceptions gracefully

Future Extensions

This pattern can be extended to other parts of your ERP system:

  1. Customer/Vendor Data – Import/export contact information
  2. Inventory Items – Handle product catalog updates
  3. Journal Entries – Process batch financial transactions
  4. Reports – Export financial data for external analysis

Conclusion

Data import/export capabilities are a critical component of any enterprise system. They bridge the gap between systems, facilitate migration, and support batch operations. By implementing these services with careful error handling and validation, we’ve added significant value to our ERP system.

In the next article, we’ll explore building financial reporting services to generate balance sheets, income statements, and other critical financial reports from our accounting data.

Stay tuned, and happy coding!


About Us

YouTube

https://www.youtube.com/c/JocheOjedaXAFXAMARINC

Our sites

Let’s discuss your XAF

This call/zoom will give you the opportunity to define the roadblocks in your current XAF solution. We can talk about performance, deployment or custom implementations. Together we will review your pain points and leave you with recommendations to get your app back on track.

https://calendly.com/bitframeworks/bitframeworks-free-xaf-support-hour

Our free A.I courses on Udemy

 

Building a Comprehensive Accounting System Integration Test – Day 5

Integration testing is a critical phase in software development where individual modules are combined and tested as a group. In our accounting system, we’ve created a robust integration test that demonstrates how the Document module and Chart of Accounts module interact to form a functional accounting system. In this post, I’ll explain the components and workflow of our integration test.

The Architecture of Our Integration Test

Our integration test simulates a small retail business’s accounting operations. Let’s break down the key components:

Test Fixture Setup

The AccountingIntegrationTests class contains all our test methods and is decorated with the [TestFixture] attribute to identify it as a NUnit test fixture. The Setup method initializes our services and data structures:

[SetUp]
public async Task Setup()
{
    // Initialize services
    _auditService = new AuditService();
    _documentService = new DocumentService(_auditService);
    _transactionService = new TransactionService();
    _accountValidator = new AccountValidator();
    _accountBalanceCalculator = new AccountBalanceCalculator();
    
    // Initialize storage
    _accounts = new Dictionary<string, AccountDto>();
    _documents = new Dictionary<string, IDocument>();
    _transactions = new Dictionary<string, ITransaction>();
    
    // Create Chart of Accounts
    await SetupChartOfAccounts();
}

This method:

  1. Creates instances of our services
  2. Sets up in-memory storage for our entities
  3. Calls SetupChartOfAccounts() to create our initial chart of accounts

Chart of Accounts Setup

The SetupChartOfAccounts method creates a basic chart of accounts for our retail business:

private async Task SetupChartOfAccounts()
{
    // Clear accounts dictionary in case this method is called multiple times
    _accounts.Clear();
    
    // Assets (1xxxx)
    await CreateAccount("Cash", "10100", AccountType.Asset, "Cash on hand and in banks");
    await CreateAccount("Accounts Receivable", "11000", AccountType.Asset, "Amounts owed by customers");
    // ... more accounts
    
    // Verify all accounts are valid
    foreach (var account in _accounts.Values)
    {
        bool isValid = _accountValidator.ValidateAccount(account);
        Assert.That(isValid, Is.True, $"Account {account.AccountName} validation failed");
    }
    
    // Verify expected number of accounts
    Assert.That(_accounts.Count, Is.EqualTo(17), "Expected 17 accounts in chart of accounts");
}

This method:

  1. Creates accounts for each category (Assets, Liabilities, Equity, Revenue, and Expenses)
  2. Validates each account using our AccountValidator
  3. Ensures we have the expected number of accounts

Individual Transaction Tests

We have separate test methods for specific transaction types:

Purchase of Inventory

CanRecordPurchaseOfInventory demonstrates recording a supplier invoice:

[Test]
public async Task CanRecordPurchaseOfInventory()
{
    // Arrange - Create document
    var document = new DocumentDto { /* properties */ };
    
    // Act - Create document, transaction, and entries
    var createdDocument = await _documentService.CreateDocumentAsync(document, TEST_USER);
    // ... create transaction and entries
    
    // Validate transaction
    var isValid = await _transactionService.ValidateTransactionAsync(
        createdTransaction.Id, ledgerEntries);
        
    // Assert
    Assert.That(isValid, Is.True, "Transaction should be balanced");
}

This test:

  1. Creates a document for our inventory purchase
  2. Creates a transaction linked to that document
  3. Creates ledger entries (debiting Inventory, crediting Accounts Payable)
  4. Validates that the transaction is balanced (debits = credits)

Sale to Customer

CanRecordSaleToCustomer demonstrates recording a customer sale:

[Test]
public async Task CanRecordSaleToCustomer()
{
    // Similar pattern to inventory purchase, but with sale-specific entries
    // ...
    
    // Create ledger entries - a more complex transaction with multiple entries
    var ledgerEntries = new List<ILedgerEntry>
    {
        // Cash received
        // Sales revenue
        // Cost of goods sold
        // Reduce inventory
    };
    
    // Validate transaction
    // ...
}

This test is more complex, recording both the revenue side (debit Cash, credit Sales Revenue) and the cost side (debit Cost of Goods Sold, credit Inventory) of a sale.
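
As a rough sketch of what those entries might look like (the amounts, the account keys beyond "Cash", and the exact LedgerEntryDto properties used here are illustrative assumptions, since the real test elides them):

var ledgerEntries = new List<ILedgerEntry>
{
    // Revenue side: the customer pays 1,000 in cash
    new LedgerEntryDto { AccountId = _accounts["Cash"].Id, EntryType = EntryType.Debit, Amount = 1000m },
    new LedgerEntryDto { AccountId = _accounts["Sales Revenue"].Id, EntryType = EntryType.Credit, Amount = 1000m },

    // Cost side: the goods sold originally cost 600
    new LedgerEntryDto { AccountId = _accounts["Cost of Goods Sold"].Id, EntryType = EntryType.Debit, Amount = 600m },
    new LedgerEntryDto { AccountId = _accounts["Inventory"].Id, EntryType = EntryType.Credit, Amount = 600m }
};

// Total debits (1,600) equal total credits (1,600), so ValidateTransactionAsync reports a balanced transaction.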

Full Accounting Cycle Test

The CanExecuteFullAccountingCycle method ties everything together:

[Test]
public async Task CanExecuteFullAccountingCycle()
{
    // Run these in a defined order, with clean account setup first
    _accounts.Clear();
    _documents.Clear();
    _transactions.Clear();
    
    await SetupChartOfAccounts();
    
    // 1. Record inventory purchase
    await RecordPurchaseOfInventory();
    
    // 2. Record sale to customer
    await RecordSaleToCustomer();
    
    // 3. Record utility expense
    await RecordBusinessExpense();
    
    // 4. Create a payment to supplier
    await RecordPaymentToSupplier();
    
    // 5. Verify account balances
    await VerifyAccountBalances();
}

This test:

  1. Starts with a clean state
  2. Records a sequence of business operations
  3. Verifies the final account balances

Mock Account Balance Calculator

The MockAccountBalanceCalculator is a crucial part of our test that simulates how a real database would work:

public class MockAccountBalanceCalculator : AccountBalanceCalculator
{
    private readonly Dictionary<string, AccountDto> _accounts;
    private readonly Dictionary<Guid, List<LedgerEntryDto>> _ledgerEntriesByTransaction = new();
    private readonly Dictionary<Guid, decimal> _accountBalances = new();

    public MockAccountBalanceCalculator(
        Dictionary<string, AccountDto> accounts,
        Dictionary<string, ITransaction> transactions)
    {
        _accounts = accounts;
        
        // Create mock ledger entries for each transaction
        InitializeLedgerEntries(transactions);
        
        // Calculate account balances based on ledger entries
        CalculateAllBalances();
    }

    // Methods to initialize and calculate
    // ...
}

This class:

  1. Takes our accounts and transactions as inputs
  2. Creates a collection of ledger entries for each transaction
  3. Calculates account balances based on these entries
  4. Provides methods to query account balances and ledger entries

The InitializeLedgerEntries method creates a collection of ledger entries for each transaction:

private void InitializeLedgerEntries(Dictionary<string, ITransaction> transactions)
{
    // For inventory purchase
    if (transactions.TryGetValue("InventoryPurchase", out var inventoryPurchase))
    {
        var entries = new List<LedgerEntryDto>
        {
            // Create entries for this transaction
            // ...
        };
        _ledgerEntriesByTransaction[inventoryPurchase.Id] = entries;
    }
    
    // For other transactions
    // ...
}

The CalculateAllBalances method processes these entries to calculate account balances:

private void CalculateAllBalances()
{
    // Initialize all account balances to zero
    foreach (var account in _accounts.Values)
    {
        _accountBalances[account.Id] = 0m;
    }
    
    // Process each transaction's ledger entries
    foreach (var entries in _ledgerEntriesByTransaction.Values)
    {
        foreach (var entry in entries)
        {
            if (entry.EntryType == EntryType.Debit)
            {
                _accountBalances[entry.AccountId] += entry.Amount;
            }
            else // Credit
            {
                _accountBalances[entry.AccountId] -= entry.Amount;
            }
        }
    }
}

This approach closely mirrors how a real accounting system would work with a database:

  1. Ledger entries are stored in collections (similar to database tables)
  2. Account balances are calculated by processing all relevant entries
  3. The calculator provides methods to query this data (similar to a repository)
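
For example, the balance query used in the verification step below can be little more than a dictionary lookup. This is a sketch: the signature is inferred from the call site, and it assumes the mock ignores the as-of date because every test entry falls inside the verification window.

public decimal CalculateAccountBalance(Guid accountId, DateTime asOfDate)
{
    // All mock ledger entries predate the verification date, so no date filtering is needed here
    return _accountBalances.TryGetValue(accountId, out var balance) ? balance : 0m;
}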

Balance Verification

The VerifyAccountBalances method uses our mock calculator to verify account balances:

private async Task VerifyAccountBalances()
{
    // Create mock balance calculator
    var mockBalanceCalculator = new MockAccountBalanceCalculator(_accounts, _transactions);
    
    // Verify individual account balances
    decimal cashBalance = mockBalanceCalculator.CalculateAccountBalance(
        _accounts["Cash"].Id, 
        _testDate.AddDays(15)
    );
    Assert.That(cashBalance, Is.EqualTo(-2750m), "Cash balance is incorrect");
    
    // ... verify other account balances
    
    // Also verify the accounting equation
    // ...
}

The Benefits of Our Collection-Based Approach

Our redesigned MockAccountBalanceCalculator offers several advantages:

  1. Data-Driven: All calculations are based on collections of data, not hardcoded values.
  2. Flexible: New transactions can be added easily without changing calculation logic.
  3. Maintainable: If transaction amounts change, we only need to update them in one place.
  4. Realistic: This approach closely mirrors how a real database-backed accounting system would work.
  5. Extensible: We can add support for more complex queries like filtering by date range.

The Goals of Our Integration Test

Our integration test serves several important purposes:

  1. Verify Module Integration: Ensures that the Document module and Chart of Accounts module work correctly together.
  2. Validate Business Workflows: Confirms that standard accounting workflows (purchasing, sales, expenses, payments) function as expected.
  3. Ensure Data Integrity: Verifies that all transactions maintain balance (debits = credits) and that account balances are accurate.
  4. Test Double-Entry Accounting: Confirms that our system properly implements double-entry accounting principles where every transaction affects at least two accounts.
  5. Validate Accounting Equation: Ensures that the fundamental accounting equation (Assets = Liabilities + Equity + (Revenues – Expenses)) remains balanced.
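
One simple way to express that last check inside VerifyAccountBalances, sketched here under an assumption about the mock’s sign convention: because the mock computes every balance as debits minus credits, a fully balanced set of transactions means all account balances sum to zero, which is the accounting equation restated in that convention.

// With debit-minus-credit balances, a balanced ledger sums to zero across all accounts.
// This is equivalent to Assets = Liabilities + Equity + (Revenues - Expenses) once
// liability, equity and revenue balances are restated as positive credit balances.
decimal totalOfAllBalances = _accounts.Values
    .Sum(a => mockBalanceCalculator.CalculateAccountBalance(a.Id, _testDate.AddDays(15)));

Assert.That(totalOfAllBalances, Is.EqualTo(0m), "Accounting equation is not balanced");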

Conclusion

This integration test demonstrates the core functionality of our accounting system using a data-driven approach that closely mimics a real database. By simulating a retail business’s transactions and storing them in collections, we’ve created a realistic test environment for our double-entry accounting system.

The collection-based approach in our MockAccountBalanceCalculator allows us to test complex accounting logic without an actual database, while still ensuring that our calculations are accurate and our accounting principles are sound.

While this test uses in-memory collections rather than a database, it provides a strong foundation for testing the business logic of our accounting system in a way that would translate easily to a real-world implementation.

Repo

egarim/SivarErp: Open Source ERP
