This is a story about testing XAF applications — and why now is finally the right time to do it properly.
With Copilot agents and AI-assisted coding, writing code has become cheaper and faster than ever. Features that used to take days now take hours. Boilerplate is almost free.
And that changes something important.
For the first time, many of us actually have time to do the things we always postponed:
documenting the source code,
writing proper user manuals,
and — yes — writing tests.
But that immediately raises the real question:
What kind of tests should I even write?
Most developers use “unit tests” as a synonym for “tests”. But once you move beyond trivial libraries and into real application frameworks, that definition becomes fuzzy very quickly.
And nowhere is that more obvious than in XAF.
I’ve been working with XAF for something like 15–18 years (I’ve honestly lost count). It’s my preferred application framework, and it’s incredibly productive — but testing it “as-is” can feel like wrestling a framework-shaped octopus.
So let’s clarify something first.
You don’t test the framework. You test your logic.
XAF already gives you a lot for free:
CRUD
UI generation
validation plumbing
security system
object lifecycle
persistence
DevExpress has already tested those parts — thousands of times, probably millions by now.
So you do not need to write tests like:
“Can ObjectSpace save an object?”
“Does XAF load a View?”
“Does the security system work?”
You assume those things work.
Your responsibility is different.
You test the decisions your application makes.
That principle applies to XAF — and honestly, to any serious application framework.
The mental shift: what is a “unit”, really?
In classic theory, a unit is the smallest piece of code with a single responsibility — usually a method.
In real applications, that definition is often too small to be useful.
Sometimes the real “unit” is:
a workflow,
a business decision,
a state transition,
or a rule spanning multiple objects.
In XAF especially, the decision matters more than the method.
That’s why the right question is not “how do I unit test XAF?”
The right question is:
Which decisions in my app are important enough to protect?
The test pyramid for XAF
A practical, realistic test pyramid for XAF looks like this:
Fast unit tests for pure logic
Unit tests with thin seams around XAF-specific dependencies
Integration tests with a real ObjectSpace (confidence tests)
Minimal UI tests only for critical wiring
Let’s go layer by layer.
1) Push logic out of XAF into plain services (fast unit tests)
The adapter is not the brain.
The brain lives in services.
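A minimal sketch of that split, with hypothetical names: the decision lives in a plain class with no XAF references, so an ordinary fast test can cover it.
using Xunit;

public static class CreditDecisions
{
    // The decision is the unit: "may the credit limit be edited right now?"
    public static bool CanEditLimit(bool isApproved, decimal balance) =>
        !isApproved && balance >= 0m;
}

public class CreditDecisionsTests
{
    // Fast unit test: no ObjectSpace, no application, no database.
    [Fact]
    public void Approved_customers_cannot_edit_their_limit() =>
        Assert.False(CreditDecisions.CanEditLimit(isApproved: true, balance: 100m));
}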
2) Unit tests with thin seams around XAF-specific dependencies
What should you test here?
Appearance Rules
Test the decision behind the rule (e.g. “Is this field editable now?”).
Then confirm via integration tests that the rule is wired correctly (see the sketch after this list).
Validation Rules
Test the validation logic itself (conditions, edge cases).
Optionally verify that the XAF rule triggers when expected.
Calculated properties / non-trivial setters
Controller decision logic once extracted from the Controller
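Picking up the Appearance Rules point, here is a hedged sketch (hypothetical names, XPO persistence boilerplate omitted): the attribute only wires the decision into the UI, while the decision itself stays in plain, testable code.
using DevExpress.ExpressApp.ConditionalAppearance;

public class Customer /* : BaseObject, persistence boilerplate omitted */
{
    public bool IsApproved { get; set; }

    // The rule mirrors the decision "approved => limit locked".
    // Fast unit tests cover the decision; an integration test only confirms
    // that this attribute is wired up and actually fires.
    [Appearance("LockLimitWhenApproved", Criteria = "IsApproved", Enabled = false)]
    public decimal CreditLimit { get; set; }
}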
3) Integration tests with a real ObjectSpace (confidence tests)
Unit tests prove your logic is correct.
Integration tests prove your XAF wiring still behaves.
They answer questions like:
Does persistence work?
Do validation and appearance rules trigger?
Do lifecycle hooks behave?
Does security configuration work as expected?
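A minimal sketch of such a confidence test, assuming an in-memory XPO setup (MemoryDataStoreProvider and XPObjectSpaceProvider live in DevExpress.ExpressApp.Xpo; type registration details vary by project):
using DevExpress.ExpressApp;
using DevExpress.ExpressApp.Xpo;
using Xunit;

public class ObjectSpaceTests
{
    [Fact]
    public void Customer_round_trips_through_a_real_ObjectSpace()
    {
        // In-memory provider: real XAF persistence pipeline, no database file.
        var provider = new XPObjectSpaceProvider(new MemoryDataStoreProvider());
        using IObjectSpace objectSpace = provider.CreateObjectSpace();

        var customer = objectSpace.CreateObject<Customer>();
        customer.CreditLimit = 500m;
        objectSpace.CommitChanges();

        Assert.NotEmpty(objectSpace.GetObjects<Customer>());
    }
}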
4) Minimal UI tests (only for critical wiring)
UI automation is expensive and fragile.
Keep UI tests only for:
Critical actions
Essential navigation flows
Known production regressions
The key mental model
A rule is not the unit.
The decision behind the rule is the unit.
Test the decision directly.
Use integration tests to confirm the glue still works.
Closing thought
Test your app’s decisions, not the framework’s behavior.
That’s the difference between a test suite that helps you move faster
and one that quietly turns into a tax.
I recently listened to an episode of the Merge Conflict podcast by James Montemagno and Frank Krueger where a topic came up that, surprisingly, I had never explicitly framed before: greenfield vs brownfield projects.
That surprised me—not because the ideas were new, but because I’ve spent years deep in software architecture and AI, and yet I had never put a name to something I deal with almost daily.
Once I did a bit of research (and yes, asked ChatGPT too), everything clicked.
Greenfield and Brownfield, in Simple Terms
Greenfield projects are built from scratch. No legacy code, no historical baggage, no technical debt.
Brownfield projects already exist. They carry history: multiple teams, different styles, shortcuts, and decisions made under pressure.
If that sounds abstract, here’s the practical version:
Greenfield is what we want.
Brownfield is what we usually get.
Greenfield Projects: Architecture Paradise
In a greenfield project, everything feels right.
You can choose your architecture and actually stick to it. If you’re building a .NET MAUI application, you can start with proper MVVM, SOLID principles, clean boundaries, and consistent conventions from day one.
As developers, we know how things should be done. Greenfield projects give us permission to do exactly that.
They’re also extremely friendly to AI tools.
When the rules are clear and consistent, Copilot and AI agents perform beautifully. You can define specs, outline patterns, and let the tooling do a lot of the repetitive work for you.
That’s why I often use AI for greenfield projects as internal tools or side projects—things I’ve always known how to build, but never had the time to prioritize. Today, time is no longer the constraint. Tokens are.
Brownfield Projects: Welcome to Reality
Then there’s the real world.
At the office, we work with applications that have been touched by many hands over many years—sometimes 10 different teams, sometimes freelancers, sometimes “someone’s cousin who fixed it once.”
Each left behind a different style, different patterns, and different assumptions.
Customers often describe their systems like this:
“One team built it, another modified it, then my cousin fixed a bug, then my cousin got married and stopped helping, and then someone else took over.”
And yet—the system works.
That’s an important reminder.
The main job of software is not to be beautiful. It’s to do the job.
A lot of brownfield systems are ugly, fragile, and terrifying to touch—but they deliver real business value every single day.
Why AI Is Even More Powerful in Brownfield Projects
Here’s my honest opinion, based on experience:
AI is even more valuable in brownfield projects than in greenfield ones.
I’ve modernized six or seven legacy applications so far—codebases that everyone was afraid to touch. AI made that possible.
Legacy systems are mentally expensive. Reading spaghetti code drains energy. Understanding implicit behavior takes time. Humans get tired.
AI doesn’t.
It will patiently analyze a 2,000-line class without complaining.
Take Windows Forms applications as an example. It’s old technology, easy to forget, and full of quirks. Copilot can generate code that I know how to write—but much faster than I could after years away from WinForms.
Even more importantly, AI makes it far easier to introduce tests into systems that never had them (a short sketch follows this list):
Add tests class by class
Mock dependencies safely
Lock in existing behavior before refactoring
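That last point usually delivers the most value. A characterization test asserts whatever the legacy code returns today, so a later refactoring cannot silently change behavior. A tiny sketch (LegacyPriceCalculator and the expected value are hypothetical):
using Xunit;

public class LegacyPriceCalculatorTests
{
    [Fact]
    public void Freezes_todays_pricing_behavior()
    {
        var calculator = new LegacyPriceCalculator();

        // The expected value is simply what the code returns today,
        // not what anyone thinks it should return.
        Assert.Equal(118.00m, calculator.TotalWithTax(100.00m));
    }
}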
Historically, this was painful enough that many teams preferred a full rewrite.
But rewrites have a hidden cost: every rewritten line is a chance to introduce new bugs.
AI allows us to modernize in place—incrementally and safely.
Clean Code and Business Value
This is the real win.
With AI, we no longer have to choose between:
“The code works, but don’t touch it”
“The code is beautiful, but nothing works yet”
We can improve structure, readability, and testability without breaking what already delivers value.
Greenfield projects are still fun. They’re great for experimentation and clean design.
But brownfield projects? That’s where AI feels like a superpower.
Final Thoughts
Today, I happily use AI in both worlds:
Greenfield projects for fast experimentation and internal tooling
Brownfield projects for rescuing legacy systems, adding tests, and reducing technical debt
AI doesn’t replace experience—it amplifies it.
Especially when dealing with systems held together by history, habits, and just enough hope to keep running.
And honestly?
Those are the projects where the impact feels the most real.
Async/await in C# is often described as “non-blocking,” but that description hides an important detail:
await is not just about waiting — it is about where execution continues afterward.
Understanding that single idea explains:
why deadlocks happen,
why ConfigureAwait(false) exists,
and why it *reduces* damage without fixing the root cause.
This article is not just theory. It’s written because this exact class of problem showed up again in real production code during the first week of 2026 — and it took a context-level fix to resolve it.
The Hidden Mechanism: Context Capture
When you await a task, C# does two things:
It pauses the current method until the awaited task completes.
It captures the current execution context (if one exists) so the continuation can resume there.
That context might be:
a UI thread (WPF, WinForms, MAUI),
a request context (classic ASP.NET),
or no special context at all (ASP.NET Core, console apps).
This default behavior is intentional. It allows code like this to work safely:
var data = await LoadAsync();
MyLabel.Text = data.Name; // UI-safe continuation
But that same mechanism becomes dangerous when async code is blocked synchronously.
The Root Problem: Blocking on Async
Deadlocks typically appear when async code is forced into a synchronous shape:
var result = GetDataAsync().Result; // or .Wait()
What happens next:
The calling thread blocks, waiting for the async method to finish.
The async method completes its awaited operation.
The continuation tries to resume on the original context.
That context is blocked.
Nothing can proceed.
💥 Deadlock.
This is not an async bug. This is a context dependency cycle.
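Here is the whole cycle in one hypothetical WinForms handler:
private async Task<string> GetDataAsync()
{
    await Task.Delay(1000); // captures the UI context; the continuation wants the UI thread
    return "data";
}

private void LoadButton_Click(object sender, EventArgs e)
{
    // Blocks the UI thread while the continuation above is queued to that
    // same thread. Neither side can ever proceed.
    var result = GetDataAsync().Result; // 💥 deadlock
}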
The Blast Radius Concept
Blocking on async is the explosion.
The blast radius is how much of the system is taken down with it.
Full blast (default await)
Continuation *requires* the blocked context
The async operation cannot complete
The caller never unblocks
Everything stops
Reduced blast (ConfigureAwait(false))
Continuation does not require the original context
It resumes on a thread pool thread
The async operation completes
The blocking call unblocks
The original mistake still exists — but the damage is contained.
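In the sketch above, one change inside GetDataAsync contains the damage; blocking is still a mistake, but it no longer freezes the app:
private async Task<string> GetDataAsync()
{
    // The continuation resumes on a thread pool thread instead of the
    // captured UI context, so the blocked caller can eventually unblock.
    await Task.Delay(1000).ConfigureAwait(false);
    return "data";
}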
The real fix is “don’t block on async,”
but ConfigureAwait(false) reduces the blast radius when someone does.
What ConfigureAwait(false) Actually Does
await SomeAsyncOperation().ConfigureAwait(false);
This tells the runtime:
“I don’t need to resume on the captured context. Continue wherever it’s safe to do so.”
Important clarifications:
It does not make code faster by default
It does not make code parallel
It does not remove the need for proper async flow
It only removes context dependency
Why This Matters in Real Code
Async code rarely exists in isolation.
A method often awaits another method, which awaits another:
await AAsync();
await BAsync();
await CAsync();
If any method in that chain requires a specific context, the entire chain becomes context-bound.
That is why:
library code must be careful,
deep infrastructure layers must avoid context assumptions,
and UI layers must be explicit about where context is required.
When ConfigureAwait(false) Is the Right Tool
Use it when all of the following are true:
The method does not interact with UI state
The method does not depend on a request context
The method is infrastructure, library, or backend logic
The continuation does not care which thread resumes it
This is especially true for:
NuGet packages
shared libraries
data access layers
network and IO pipelines
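Typical library code following these rules looks like this hedged sketch:
using System.Net.Http;
using System.Threading.Tasks;

public static class Downloader
{
    public static async Task<byte[]> DownloadAsync(HttpClient client, string url)
    {
        // No UI or request state is touched anywhere in this chain,
        // so no await needs the original context.
        using var response = await client.GetAsync(url).ConfigureAwait(false);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
    }
}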
What It Is Not
ConfigureAwait(false) is not:
a fix for bad async usage
a substitute for proper async flow
a reason to block on tasks
something to blindly apply everywhere
It is a damage-control tool, not a cure.
A Real Incident: When None of the Usual Fixes Worked
First week of 2026.
The first task I had with the programmers in my office was to investigate a problem in a trading block. The symptoms looked like a classic async issue: timing bugs, inconsistent behavior, and freezes that felt “await-shaped.”
We did what experienced .NET teams typically do when async gets weird:
Reviewed the full async/await chain end-to-end
Double-checked the source code carefully (everything looked fine)
Tried the usual “tools people reach for” under pressure:
.Wait()
.GetAwaiter().GetResult()
wrapping in Task.Run(...)
adding ConfigureAwait(false)
mixing combinations of those approaches
None of it reliably fixed the problem.
At that point it stopped being a “missing await” story. It became a “the model is right but reality disagrees” story.
One of the programmers, Daniel, and I went deeper. I found myself mentally replaying every async pattern I know — especially because I’ve written async-heavy code myself, including library work like SyncFramework, where I synchronize databases and deal with long-running operations.
That’s the moment where this mental model matters: it forces you to stop treating await like syntax and start treating it like mechanics.
The Actual Root Cause: It Was the Context
In the end, the culprit wasn’t which pattern we used — it was where the continuation was allowed to run.
This application was built on DevExpress XAF. In this environment, the “correct” continuation behavior is often tied to XAF’s own scheduling and application lifecycle rules. XAF provides a mechanism to run code in its synchronization context — for example using BlazorApplication.InvokeAsync, which ensures that continuations run where the framework expects.
Once we executed the problematic pipeline through XAF’s synchronization context, the issue was solved.
No clever pattern. No magical await. No extra parallelism.
Just: the right context.
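A hedged sketch of the shape of the fix (names are illustrative, and the exact InvokeAsync overloads may differ between XAF versions, so verify against your version):
// Inside an XAF Blazor app, for example from a Controller:
if (Application is DevExpress.ExpressApp.Blazor.BlazorApplication blazorApp)
{
    // The delegate, and every continuation inside it, resumes in the
    // synchronization context XAF expects.
    await blazorApp.InvokeAsync(async () =>
    {
        await RunTradingPipelineAsync(); // hypothetical pipeline entry point
    });
}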
And this is not unique to XAF. Similar ideas exist in:
Windows Forms (UI thread affinity + SynchronizationContext)
WPF (Dispatcher context)
Any framework that requires work to resume on a specific thread/context
Why I’m Writing This
What I wanted from this experience is simple: don’t forget it.
Because what makes this kind of incident dangerous is that it looks like a normal async bug — and the internet is full of “four fixes” people cycle through:
add/restore missing await
use .Wait() / .Result
wrap in Task.Run()
use ConfigureAwait(false)
Sometimes those are relevant. Sometimes they’re harmful. And sometimes… they’re all beside the point.
In our case, the missing piece was framework context — and once you see that, you realize why the “blast radius” framing is so useful:
Blocking is the explosion.
ConfigureAwait(false) contains damage when someone blocks.
If a framework requires a specific synchronization context, the fix may be to supply the correct context explicitly.
That’s what happened here. And that’s why I’m capturing it as live knowledge, not just documentation.
The Mental Model to Keep
Async bugs are often context bugs
Blocking creates the explosion
Context capture determines the blast radius
ConfigureAwait(false) limits the damage
Proper async flow prevents the explosion entirely
Frameworks may require their own synchronization context
Correct async code can still fail in the wrong context
Async is not just about tasks. It’s about where your code is allowed to continue.
OK, I’ve been wanting to write this article for a few days now, but I’ve been vibing a lot — writing tons of prototypes and working on my Oqtane research. This morning I got blocked by GitHub Copilot because I hit the rate limit, so I can’t use it for a few hours. I figured that’s a sign to take a break and write some articles instead.
Actually, I’m not really “writing” — I’m using the Windows dictation feature (Windows key + H). So right now, I’m just having coffee and talking to my computer. I’m still in El Salvador with my family, and it’s like 5:00 AM here. My mom probably thinks I’ve gone crazy because I’ve been talking to my computer a lot lately. Even when I’m coding, I use dictation instead of typing, because sometimes it’s just easier to express yourself when you talk. When you type, you tend to shorten things, but when you talk, you can go on forever, right?
Anyway, this article is about Oqtane, specifically something that’s been super useful for me — how to set up a silent installation. Usually, when you download the Oqtane source or use the templates to create a new project or solution, and then run the server project, you’ll see the setup wizard first. That’s where you configure the database, email, host password, default theme, and all that.
Since I’ve been doing tons of prototypes, I’ve seen that setup screen thousands of times. So I downloaded the Oqtane source and started digging through it — using Copilot to generate guides whenever I got stuck. Honestly, the best way to learn is always by looking at the source code. I learned that the hard way years ago with XAF from DevExpress — there was no documentation back then, so I had to figure everything out manually and even assemble the projects myself because they weren’t in one solution. With Oqtane, it’s way simpler: everything’s in one place, just a few main projects.
Now, when I run into a problem, I just open the source code and tell Copilot, “OK, this is what I want to do. Help me figure it out.” Sometimes it goes completely wrong (as all AI tools do), but sometimes it nails it and produces a really good guide.
So the guide below was generated with Copilot, and it’s been super useful. I’ve been using it a lot lately, and I think it’ll save you a ton of time if you’re doing automated deployment with Oqtane.
I don’t want to take more of your time, so here it goes — I hope it helps you as much as it helped me.
Oqtane Installation Configuration Guide
This guide explains the configuration options available in the appsettings.json file under the Installation section for automated installation and default site settings.
Overview
The Installation section in appsettings.json controls the automated installation process and default settings for new sites in Oqtane. These settings are particularly useful for:
Automated installations – Deploy Oqtane without manual configuration
Development environments – Quickly spin up new instances
Multi-tenant deployments – Standardize new site creation
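As a rough shape of that section (the key names here are recalled from the Oqtane source and may differ between versions, so always verify against the appsettings.json shipped with your release):
{
  "Installation": {
    "DefaultAliasName": "localhost:44357",
    "HostPassword": "YourStrongPassword123!",
    "HostEmail": "host@example.com",
    "SiteTemplate": "Default Site Template",
    "DefaultTheme": "Oqtane.Themes.OqtaneTheme.Default, Oqtane.Client",
    "DefaultContainer": "Oqtane.Themes.OqtaneTheme.Container, Oqtane.Client"
  }
}
With values like these in place, the idea is that the setup wizard is skipped and the site is created automatically on first run.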
Mental notes on architecture, learning by reading source, and what’s next.
OK — so it’s time for a new article. Lately, I’ve been diving deep into the Oqtane framework, and it’s been a beautiful journey. It reminds me of my early days with XAF from Developer Express, when I learned to think in software architecture and modern design patterns by simply reading the code. Back then, documentation was scarce. The advice was: “Look at the code.” I did, and that shaped a big part of my software education. It taught me that good source code is often self-explanatory.
Even though XAF is still our main tool at the office (Xari & BIT Frameworks), we’re expanding. We’re researching new divisions for Flutter and React, since some projects already use those front ends with an XAF backend. I also wanted to explore building client-server apps with a single .NET codebase that includes mobile—another reason Oqtane caught my eye.
Why Oqtane Caught My Attention
The Oqtane team is very responsive on GitHub. You can open a discussion and get thoughtful replies quickly. The source code is clean and educational—perfect for learning by reading. There are plenty of talks and videos on architecture and module development; some are a bit dated, but if you cross-check with the code, you’ll be fine.
I’ve learned there are two steps to mastering a framework: (1) immerse yourself in material (videos, code, docs), and (2) explain it to someone else. These notes do both—part research, part knowledge sharing.
There’s one clip I couldn’t locate where Shaun Walker explains that .NET already provides the pieces for modern, multi-platform, server-and-client applications—but the ecosystem is fragmented. Oqtane unifies those pieces into a single .NET codebase. If I find it, I’ll make a highlight and share it.
On Learning and Time
I’m trying to publish as much as I can now because I’m about to start a new chapter: I’ll be joining the University of St. Petersburg to learn Russian as my second language. It’s a tough language—very different from Spanish or Italian—so I’ll likely have less time to write for a while. Better to document these experiments now than let them sit in my notes for months.