by Joche Ojeda | May 5, 2025 | Boring systems, ERP, SivarErp
Introduction
In financial accounting systems, the document module serves as the cornerstone upon which all other functionality is built. Just as physical documents form the basis of traditional accounting practices, the digital document module provides the foundation for recording, processing, and analyzing financial transactions. In this article, we’ll explore the structure and importance of the document module in a modern financial accounting system.
The Core Components
The document module consists of three essential components:
1. Documents
Documents represent the source records of financial events. These might include invoices, receipts, bank statements, journal entries, and various specialized financial documents like balance transfer statements and closing entries. Each document contains metadata such as:
- Date of the document
- Document number/reference
- Description and comments
- Document type classification
- Audit information (who created/modified it and when)
Documents serve as the legal proof of financial activities and provide an audit trail that can be followed to verify the accuracy and validity of financial records.
2. Transactions
Transactions represent the financial impact of documents in the general ledger. While a document captures the business event (e.g., an invoice), the transaction represents how that event affects the company’s financial position. A single document may generate one or more transactions depending on its complexity.
Each transaction is linked to its parent document and contains:
- Transaction date (which may differ from the document date)
- Description
- Reference to the parent document
Transactions bridge the gap between source documents and ledger entries, maintaining the relationship between business events and their financial representations.
3. Ledger Entries
Ledger entries are the individual debit and credit entries that make up a transaction. They represent the actual changes to account balances in the general ledger. Each ledger entry contains:
- Reference to the parent transaction
- Account identifier
- Entry type (debit or credit)
- Amount
- Optional references to persons and cost centers for analytical purposes
Ledger entries implement the double-entry accounting principle, ensuring that for every transaction, debits equal credits.
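To make this concrete, here is a minimal sketch (illustrative only, not the actual SivarErp types) showing how a transaction's ledger entries can be checked against the double-entry rule:

// Illustrative sketch; the real SivarErp entities and member names may differ
public enum EntryType { Debit, Credit }

public record LedgerEntry(Guid TransactionId, string AccountId, EntryType Type, decimal Amount);

public record Transaction(Guid Id, Guid DocumentId, DateOnly Date, string Description, List<LedgerEntry> Entries)
{
    // Double-entry rule: total debits must equal total credits
    public bool IsBalanced() =>
        Entries.Where(e => e.Type == EntryType.Debit).Sum(e => e.Amount) ==
        Entries.Where(e => e.Type == EntryType.Credit).Sum(e => e.Amount);
}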
Why This Modular Approach Matters
The document module’s structure offers several significant advantages:
1. Separation of Concerns
By separating documents, transactions, and ledger entries, the system maintains clear boundaries between:
- Business events (documents)
- Financial impacts (transactions)
- Specific account changes (ledger entries)
This separation allows each layer to focus on its specific responsibilities without being overly coupled to other components.
2. Flexibility and Extensibility
The modular design allows for adding new document types without changing the core accounting logic. Whether handling standard invoices or specialized financial instruments, the same underlying structure applies, making the system highly extensible.
3. Robust Audit Trail
With documents serving as the origin of all financial records, the system maintains a complete audit trail. Every ledger entry can be traced back to its transaction and originating document, providing accountability and transparency.
4. Compliance and Reporting
The document-centric approach aligns with legal and regulatory requirements that mandate keeping original document records. This structure facilitates regulatory compliance and simplifies financial reporting.
Implementation Considerations
When implementing a document module, several design principles should be considered:
Interface-Based Design
Using interfaces like IDocument, ITransaction, and ILedgerEntry promotes flexibility and testability. Services operate against these interfaces rather than concrete implementations, following the Dependency Inversion Principle.
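For illustration, a possible shape for these interfaces and a service that depends only on the abstractions is sketched below. The member and service names are my assumptions, not necessarily the ones used in the repository:

// Hypothetical interface shapes; the repository may define them differently
public interface IDocument
{
    Guid Id { get; }
    DateOnly DocumentDate { get; }
    string DocumentNumber { get; }
    string Comments { get; }
}

public interface ITransaction
{
    Guid Id { get; }
    Guid DocumentId { get; }          // reference to the parent document
    DateOnly TransactionDate { get; } // may differ from the document date
    string Description { get; }
}

public interface ILedgerEntry
{
    Guid TransactionId { get; }       // reference to the parent transaction
    string AccountId { get; }
    bool IsDebit { get; }
    decimal Amount { get; }
}

// Services depend on abstractions, not concrete classes (Dependency Inversion)
public interface IPostingService
{
    Task PostAsync(IDocument document, IEnumerable<ITransaction> transactions);
}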
Immutability of Processed Documents
Once a document has been processed and its transactions recorded, changes should be restricted to prevent inconsistencies. Any modifications should follow proper accounting procedures, such as creating correction entries.
Versioning and Historical Records
The system should maintain historical versions of documents, especially when they’re modified, to preserve the accurate history of financial events.
Conclusion
The document module serves as the backbone of a financial accounting system, providing the structure and organization needed to maintain accurate financial records. By properly implementing this foundation, accounting systems can ensure data integrity, regulatory compliance, and flexible business operations.
Understanding the document module’s architecture helps developers and accountants alike appreciate the careful design considerations that go into building robust financial systems capable of handling the complexities of modern business operations.
Repo
egarim/SivarErp: Open Source ERP
by Joche Ojeda | May 5, 2025 | Boring systems, ERP
After returning home from an extended journey through the United States, Greece, and Turkey, I found myself contemplating a common challenge over my morning coffee. There are numerous recurring problems in system design and ORM (Object-Relational Mapping) implementation that developers face repeatedly.
To address these challenges, I’ve decided to tackle a system that most professionals are familiar with—an ERP (Enterprise Resource Planning) system—and develop a design that achieves three critical goals:
- Performance Speed: The system must be fast and responsive
- Technology Agnosticism: The architecture should be platform-independent
- Consistent Performance: The system should maintain its performance over time
Design Decisions
To achieve these goals, I’m implementing the following key design decisions:
- Utilizing the SOLID design principles to ensure maintainability and extensibility
- Building with C# and .NET 9 to leverage its modern language features
- Creating an agnostic architecture that can be reimplemented in various technologies like DevExpress XAF or Entity Framework
Day 1: Foundational Structure
In this first article, I’ll propose an initial folder structure that may evolve as the system develops. I’ll also describe a set of base classes and interfaces that will form the foundation of our system.
You can find all the source code for this solution in the designated repository.
The Core Layer
Today we’re starting with the core layer—a set of interfaces that most entities will implement. The system design follows SOLID principles to ensure it can be easily reimplemented using different technologies.
Base Interfaces
Here’s the foundation of our interface hierarchy:
- IEntity: Core entity interface defining the Id property
- IAuditable: Interface for entities with audit information
- IArchivable: Interface for entities supporting soft delete
- IVersionable: Interface for entities with effective dating
- ITimeTrackable: Interface for entities requiring time tracking
Service Interfaces
To complement our entity interfaces, we’re also defining service interfaces (a sketch of both sets follows this list):
- IAuditService: Interface for audit-related operations
- IArchiveService: Interface for archiving operations
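To give a feel for where this is heading, here is a sketch of how some of these core interfaces might look. The member names are my assumptions; the exact definitions live in the repository:

// Sketch of the core layer; exact members may differ in the repository
public interface IEntity
{
    Guid Id { get; set; }
}

public interface IAuditable
{
    DateTime CreatedAt { get; set; }
    string CreatedBy { get; set; }
    DateTime? UpdatedAt { get; set; }
    string UpdatedBy { get; set; }
}

public interface IArchivable
{
    bool IsArchived { get; set; }        // soft delete flag
    DateTime? ArchivedAt { get; set; }
}

public interface IVersionable
{
    DateOnly EffectiveFrom { get; set; } // effective dating
    DateOnly? EffectiveTo { get; set; }
}

public interface IAuditService
{
    void StampCreation(IAuditable entity, string user);
    void StampUpdate(IAuditable entity, string user);
}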
Repo
egarim/SivarErp: Open Source ERP
Next Steps
In upcoming articles, I’ll expand on this foundation by implementing concrete classes, developing the domain layer, and demonstrating how this architecture can be applied to specific ERP modules.
The goal is to create a reference architecture that addresses the recurring challenges in system design while remaining adaptable to different technological implementations.
Stay tuned for the next installment where we’ll dive deeper into the implementation details of our core interfaces.
by Joche Ojeda | Apr 28, 2025 | dotnet, Uno Platform
It’s been almost a month since I left home to attend the Microsoft MVP Summit in Seattle. I’m still on the road, currently in Athens, Greece, with numerous notes for upcoming articles. While traveling makes writing challenging, I want to maintain the order of my Uno Platform series to ensure everything makes sense for readers.
In this article, we’ll dive into the structure of an Uno Platform solution. There’s some “black magic” happening behind the scenes, so understanding how everything works will make development significantly easier.
What is Uno Platform?
Before we dive into the anatomy, let’s briefly explain what Uno Platform is. Uno Platform is an open-source framework that enables developers to build cross-platform applications from a single codebase. Using C# and XAML, you can create applications that run on Windows, iOS, Android, macOS, Linux, and WebAssembly.
Root Solution Structure
An Uno Platform solution follows a specific structure that facilitates cross-platform development. Let’s break down the key components:
Main (and only) Project
The core of an Uno Platform solution is the main shared project (in our example, “UnoAnatomy”). This project contains cross-platform code shared across all target platforms and includes:
- Assets: Contains shared resources like images and icons used across all platforms. These assets may be adapted for different screen densities and platforms as needed.
- Serialization: This is where the JsonSerializerContext lives. Since .NET 6, the serialization context lets you control how objects are serialized through the JsonSerializerContext class. It provides ahead-of-time metadata generation for better performance and reduces reflection usage, which is particularly beneficial for AOT compilation scenarios like Blazor WebAssembly and native apps (see the sketch after this list).
- Models: Contains business model classes representing core domain entities in your application.
- Presentation: Holds UI components including pages, controls, and views. This typically includes files like Shell.xaml.cs and MainPage.xaml.cs that implement the application’s UI elements and layout.
- Platforms:
  - Android: Contains the Android-specific entry point (MainActivity.Android.cs) and any other Android-specific configuration or code.
  - iOS: Contains the iOS-specific entry point (Main.iOS.cs).
  - MacCatalyst: Contains the Mac Catalyst-specific entry point (Main.maccatalyst.cs).
  - BrowserWasm: Contains browser WebAssembly-specific configuration or code.
  - Desktop: Contains desktop-specific configuration or code.
- Services: Contains service classes implementing business logic, data access, and similar concerns.
- Strings: Stores the localized string resources for the application so it can be translated into multiple languages.
- Styles: Contains the styles and color configuration for the app.
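As mentioned in the Serialization item above, here is a small sketch of a source-generated JsonSerializerContext. The WeatherForecast type is just a placeholder model for the example:

using System.Text.Json;
using System.Text.Json.Serialization;

// Placeholder model for the sketch
public record WeatherForecast(DateTime Date, int TemperatureC, string? Summary);

// The source generator emits serialization metadata at compile time,
// avoiding runtime reflection (useful for AOT/WASM targets)
[JsonSerializable(typeof(WeatherForecast))]
public partial class AppJsonContext : JsonSerializerContext
{
}

// Usage:
// var json = JsonSerializer.Serialize(forecast, AppJsonContext.Default.WeatherForecast);
// var back = JsonSerializer.Deserialize(json, AppJsonContext.Default.WeatherForecast);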
Build Configuration Files
Several build configuration files in the root of the solution control the build process:
- Directory.Build.props: Contains global MSBuild properties applied to all projects.
- Directory.Build.targets: Contains global MSBuild targets for all projects.
- Directory.Packages.props: Centralizes package versions for dependency management.
- global.json: Specifies the Uno.SDK version and other .NET SDK configurations.
The Power of Uno.Sdk
One of the most important aspects of modern Uno Platform development is the Uno.Sdk, which significantly simplifies the development process.
What is Uno.Sdk?
Uno.Sdk is a specialized MSBuild SDK that streamlines Uno Platform development by providing:
- A cross-platform development experience that simplifies targeting multiple platforms from a single project
- Automatic management of platform-specific dependencies and configurations
- A simplified build process that handles the complexity of building for different target platforms
- Feature-based configuration that enables adding functionality through the UnoFeatures property
In your project file, you’ll see <Project Sdk="Uno.Sdk"> at the top, indicating that this project uses the Uno SDK rather than the standard .NET SDK.
Key Components of the Project File
TargetFrameworks
<TargetFrameworks>net9.0-android;net9.0-ios;net9.0-maccatalyst;net9.0-windows10.0.26100;net9.0-browserwasm;net9.0-desktop</TargetFrameworks>
This line specifies that your application targets:
- Android
- iOS
- macOS (via Mac Catalyst)
- Windows (Windows 10/11 with SDK version 10.0.26100)
- WebAssembly (for browser-based applications)
- Desktop (for cross-platform desktop applications)
All of these targets use .NET 9 as the base framework.
Single Project Configuration
<OutputType>Exe</OutputType>
<UnoSingleProject>true</UnoSingleProject>
OutputType: Specifies this project builds an executable application
UnoSingleProject: Enables Uno’s single-project approach, allowing you to maintain one codebase for all platforms
Application Metadata
<ApplicationTitle>UnoAnatomy</ApplicationTitle>
<ApplicationId>com.companyname.UnoAnatomy</ApplicationId>
<ApplicationDisplayVersion>1.0</ApplicationDisplayVersion>
<ApplicationVersion>1</ApplicationVersion>
<ApplicationPublisher>joche</ApplicationPublisher>
<Description>UnoAnatomy powered by Uno Platform.</Description>
These properties define your app’s identity and metadata used in app stores and installation packages.
UnoFeatures
The most powerful aspect of Uno.Sdk is the UnoFeatures property:
<UnoFeatures>
Material;
Dsp;
Hosting;
Toolkit;
Logging;
Mvvm;
Configuration;
Http;
Serialization;
Localization;
Navigation;
ThemeService;
</UnoFeatures>
This automatically adds relevant NuGet packages for each listed feature:
- Material: Material Design UI components
- Dsp: Design System Package (DSP) support for importing design-system color tokens used in theming
- Hosting: Dependency injection and host builder pattern
- Toolkit: Community Toolkit components
- Logging: Logging infrastructure
- Mvvm: Model-View-ViewModel pattern implementation
- Configuration: Application configuration framework
- Http: HTTP client capabilities
- Serialization: Data serialization/deserialization
- Localization: Multi-language support
- Navigation: Navigation services
- ThemeService: Dynamic theme support
The UnoFeatures property eliminates the need to manually add numerous NuGet packages and ensures compatibility between components.
Benefits of the Uno Platform Structure
This structured approach to cross-platform development offers several advantages:
- Code Sharing: Most code is shared across platforms, reducing duplication and maintenance overhead.
- Platform-Specific Adaptation: When needed, the structure allows for platform-specific implementations.
- Simplified Dependencies: The Uno.Sdk handles complex dependency management behind the scenes.
- Consistent Experience: Ensures a consistent development experience across all target platforms.
- Future-Proofing: The architecture makes it easier to add support for new platforms in the future.
Conclusion
Understanding the anatomy of an Uno Platform solution is crucial for effective cross-platform development. The combination of shared code, platform-specific heads, and the powerful Uno.Sdk creates a development experience that makes it much easier to build and maintain applications across multiple platforms from a single codebase.
By leveraging this structure and the features provided by the Uno Platform, you can focus on building your application’s functionality rather than dealing with the complexities of cross-platform development.
In my next article in this series, we’ll dive deeper into the practical aspects of developing with Uno Platform, exploring how to leverage these structural components to build robust cross-platform applications.
Related articles
Getting Started with Uno Platform: First Steps and Configuration Choices | Joche Ojeda
My Adventures Picking a UI Framework: Why I Chose Uno Platform | Joche Ojeda
Exploring the Uno Platform: Handling Unsafe Code in Multi-Target Applications | Joche Ojeda
by Joche Ojeda | Apr 20, 2025 | WSL
My Docker Adventure in Athens
Hello fellow tech enthusiasts!
I’m currently in Athens, Greece, enjoying a lovely Easter Sunday, when I decided to tackle a little tech project – getting Docker running on my Microsoft Surface with an ARM64 CPU. If you’ve ever tried to do this, you might know it’s not as straightforward as it sounds!
After some research, I discovered something important: there’s a difference between Docker Enterprise and Docker Community Edition (CE). While the enterprise version doesn’t support ARM64 yet, Docker CE does have versions for both ARM64 and x64 architectures. Perfect!
The WSL2 Solution
I initially tried to install Docker directly on Windows, but quickly ran into roadblocks. That’s when I decided to try the Windows Subsystem for Linux (WSL2) route instead. Spoiler alert: it worked like a charm!
While you won’t get the nice Docker Desktop UI that Windows users might be accustomed to, the command line interface through WSL2 works perfectly fine. After all, Docker was born on Linux, so running it in a Linux environment makes sense!
Step-by-Step Guide to Installing Docker CE on WSL2
Here’s how I got Docker CE up and running on my Surface using WSL2:
Step 1: Update Your Packages
First, make sure your WSL2 system is up to date:
sudo apt update && sudo apt upgrade -y
Step 2: Install Required Packages
Install the necessary packages to use HTTPS repositories:
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common gnupg lsb-release
Step 3: Add Docker’s Official GPG Key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Step 4: Set Up the Stable Docker Repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Step 5: Update APT with the New Repository
sudo apt update
Step 6: Install Docker CE
sudo apt install -y docker-ce docker-ce-cli containerd.io
Step 7: Start the Docker Service
sudo service docker start
Step 8: Add Your User to the Docker Group
This allows you to run Docker without sudo:
sudo usermod -aG docker $USER
Step 9: Apply the Group Changes
Either log out and back in, or run:
newgrp docker
Step 10: Verify Your Installation
docker --version
docker run hello-world
Pro Tip!
If you want Docker to start automatically when you launch WSL2, add the service start command to your .bashrc or .zshrc file:
echo "sudo service docker start" >> ~/.bashrc
Final Thoughts
What started as a potentially frustrating experience turned into a surprisingly smooth process. WSL2 continues to impress me with how well it bridges the Windows and Linux worlds. If you have a Surface or any other ARM64-based Windows device and need to run Docker, I highly recommend the WSL2 approach.
Have you tried running Docker on an ARM device? What was your experience like? Let me know in the comments below!
Happy containerizing! 🐳
by Joche Ojeda | Apr 2, 2025 | Testing
Over the last few days, I have been working on a chat prototype that uses SignalR. I’ve been trying to follow the test-driven development (TDD) approach, as I like this design pattern. I always try to find a way to test my code and define use cases so that when I’m refactoring or writing code, as long as the tests pass, I know everything is fine.
When working on ORM-related problems, testing is relatively easy because you can set up an in-memory provider, start with a clean database, perform your operations, and then reset everything afterwards. But when testing APIs, there are several approaches.
Some approaches are more like unit tests where you get a controller and directly pass values by mocking them. However, I prefer tests that are more like integration tests – for example, I want to test a scenario where I send a message to a chat and verify that the message send event was real. I want to show the complete set of moving parts and make sure they work together.
In this article, I want to explore how to do this type of test with REST APIs by creating a test host server. This test host creates two important things: a handler and an HTTP client. If you use the HTTP client, each HTTP operation (POST, GET, etc.) will be sent to the controllers that are set up for the test host. For the test host, you do the same configuration as you would for any other host – you can use a startup class or add the services you need and configure them.
I wanted to do the same for SignalR chat applications. In this case, you don’t need the HTTP client; you need the handler. This means that each request you make using that handler will be directed to the hub hosted on the HTTP test host.
Here’s the code that shows how to create the test host:
// ARRANGE
// Build a test server
var hostBuilder = new HostBuilder()
.ConfigureWebHost(webHost =>
{
webHost.UseTestServer();
webHost.UseStartup<Startup>();
});
var host = await hostBuilder.StartAsync();
//Create a test server
var server = host.GetTestServer();
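The Startup class passed to UseStartup<Startup>() isn’t shown in this article. A minimal sketch of what the test host needs, assuming a ChatHub with a SendMessage method that broadcasts ReceiveMessage (which is what the test below exercises), could look like this:

// Minimal Startup sketch for the test host (assumed shape, not the exact class from the repo)
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR(); // register SignalR services
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapHub<ChatHub>("/chathub"); // same path the client connects to below
        });
    }
}

// Hub assumed by the test: broadcasts every message to all connected clients
public class ChatHub : Hub
{
    public Task SendMessage(string user, string message) =>
        Clients.All.SendAsync("ReceiveMessage", user, message);
}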
And now the code for handling SignalR connections:
// Create SignalR connection
var connection = new HubConnectionBuilder()
.WithUrl("http://localhost/chathub", options =>
{
// Set up the connection to use the test server
options.HttpMessageHandlerFactory = _ => server.CreateHandler();
})
.Build();
If we take a closer look at the connection setup above, the key piece is the call to server.CreateHandler(): every request the SignalR client makes through that handler is routed to the hub hosted on the in-memory test server.
Now let’s open a SignalR connection and see if we can connect to our test server:
string receivedUser = null;
string receivedMessage = null;
// Set up a handler for received messages
connection.On<string, string>("ReceiveMessage", (user, message) =>
{
receivedUser = user;
receivedMessage = message;
});
// ACT
// Start the connection
await connection.StartAsync();
// Send a test message through the hub
await connection.InvokeAsync("SendMessage", "TestUser", "Hello SignalR");
// Wait a moment for the message to be processed
await Task.Delay(100);
// ASSERT
// Verify the message was received correctly
Assert.That(receivedUser, Is.EqualTo("TestUser"));
Assert.That(receivedMessage, Is.EqualTo("Hello SignalR"));
// Clean up
await connection.DisposeAsync();
You can find the complete source of this example here: https://github.com/egarim/TestingSignalR/blob/master/UnitTest1.cs
by Joche Ojeda | Apr 2, 2025 | Uncategorized
It’s been a week since the Microsoft MVP Summit, and now I finally sit at Javier’s home trying to write about my trip and experience there. So let’s start!
The Journey
First, I needed to fly via Istanbul. That meant waking up around 2:00 AM to go to the airport and catch my flight at 6:00 AM. In Istanbul, I was really lucky because I was in the new airport which is huge and it has a great business lounge to wait in, so I could get some rest between my flights from Istanbul to Seattle.
I tried to sleep a little. The main problem was that the business lounge was on one side of the airport and my gate was on the other side, about 1 kilometer away. It’s a really big airport! I had to walk all that distance, and they announced the gate really late, so I only had about 15 minutes to get there—a really short time.
After that, I took my flight to the States, from Istanbul to Seattle. The route goes through the Arctic (near the North Pole)—you go up and then a little bit to the right, and then you end up in Seattle. It was a strange route; I’d never used it before. The flight was long, around 15 hours, but it wasn’t bad. I enjoyed Turkish Airlines when they use the big airplanes.
Arrival Challenges
I landed in Seattle around 6:00 PM. Then I had to go through immigration control and collect my luggage, which took almost two hours. After that, I went to the Airbnb, which was super beautiful, but I couldn’t get in because the owners had left the gate closed from the inside, and there were no lights at all, so it was impossible to enter. I waited for two hours for Javier to contact them, and after a while, it started raining, so I decided to go to a hotel. I booked a hotel for the night and took a 30-minute taxi ride. I finally went to bed on Monday at 11:00 PM, which was really late.
Day One at the Summit
The next day, I needed to drop my bags at the Airbnb and go to the MVP Summit. It was a nice experience. Javier was flying in that day and arrived around 3:00 PM, so I went to the first part alone. I missed the keynote because I had to drop off my bags and do all that stuff, so I ended up arriving around 11:00 AM.
The first person I met was Veronica, and we talked for a bit. Then I went to one of the sessions—of course, it was a Copilot session. In the afternoon, I met up with Javier, we grabbed some swag, and went to the Hub. Then I met Pablo from Argentina, and by the end of the day, I got together with Michael Washington, who I always hang out with during the MVP Summits.
Time to go home—it was a long day. We went back to the Airbnb, but didn’t do much. We just watched a TV show that our friend Hector recommended on Netflix.
Day Two: Meeting Peers
For day two, the sessions were great, but what I recall most are my meetings with specific people. When you go to the MVP Summits, you get to meet your peers. Usually, it’s like you’re good at one thing—for example, Javier and I do AI courses, and most of what we write about is general development—but there are people who really specialize.
For instance, I met the people from the Uno team, amazing people. Jerome and his team are always on the bleeding edge of .NET. We talked about the “black magic” they’ve written for their multi-target single application for Uno. It’s always nice to meet the Uno team.

I met with Michael Washington again several times in the hallways of Microsoft, and we talked about how to redirect Microsoft AI extensions to use LLM Studio, which is kind of tricky. It’s not something you can do really easily, like with Semantic Kernel where you only need to replace the HTTP client and then you’re good to go. In LLM Studio, it’s a different trick, so I’ll write about it later.

In one of the sessions, Mads Kristensen sat by my side, and I was trying to get some information from him on how to create an extension. Long ago, there was an extension from Oliver Sturm called “Instant Program Gratification” or something similar that displayed a huge congratulation message on the screen every time your compile succeeded, and if it failed, it would display something like “Hey, you need more coffee!” on the screen. I asked Mads how to achieve that with the new extension toolkit, and he explained it to me—he’s the king of extensions for Visual Studio.
Then I met someone new, Jeremy Sinclair, whom Javier introduced me to. We had one of those deep technical conversations about how Windows runs on ARM CPUs and the problems this can bring or how easy some things can be. It’s ironic because the Android architecture is usually ARM, but it doesn’t run on ARM computers because ARM computers emulate x64. We talked a lot about the challenges you might encounter and how to address them. Jeremy has managed to do it; he’s written some articles about what to expect when moving to an ARM computer. He also talked about where MAUI stands at the moment and where it’s headed.
He was also wearing the Ray-Ban Meta glasses, and I asked him, “Hey, how are they?” He told me they’re nice, though the battery life isn’t great, but they’re kind of fun. So I ordered a pair of Meta Ray-Ban AI glasses, and I like them so far.

More Memorable Conversations
Another great conversation Javier and I had was with James Montemagno. We met him in the Hub, and we talked a lot about how we got started. I’ve been a long-term fan of Merge Conflict, their podcast, and Javier introduced me to it a long time ago when we met, around nine years ago. Back then, Javier would call me on his way to work and we’d talk mostly about development for about an hour, and then he told me, “Hey, I listen to this and this podcast, I listen to that and that podcast.” So I became a follower of Merge Conflict after that.
James explained all the adventures on the Xamarin team, how it went when Xamarin joined Microsoft, about the difference between Xamarin from Microsoft and Xamarin from Xamarin Forms, and how life is changing for him as more of a project manager than an advocate. So he’s kind of busy all the time, but we had this really long conversation, like 40 minutes or so. He was really open about talking about his adventure of joining Microsoft and eventually working in the MAUI team.

We also met David from the MAUI team, and he was so nice. A long time ago, he featured our company in the list of companies that have built apps with MAUI, and we were on the list they showed at one of the conferences. So we thanked him for that.




That’s everyone I met at the MVP Summit. I had a great time, and I can’t believe it’s been a year already. I’m looking forward to meeting everyone next year and seeing what we come up with during 2025!
by Joche Ojeda | Mar 13, 2025 | netcore, Uno Platform
For the past two weeks, I’ve been experimenting with the Uno Platform in two ways: creating small prototypes to explore features I’m curious about and downloading example applications from the Uno Gallery. In this article, I’ll explain the first steps you need to take when creating an Uno Platform application, the decisions you’ll face, and what I’ve found useful so far in my journey.
Step 1: Create a New Project
I’m using Visual Studio 2022, though the extensions and templates work well with previous versions too. I have both Visual Studio versions installed, and Uno Platform works well in both.

Step 2: Project Setup
After naming your project, it’s important to select “Place solution and project in the same directory” because of the solution layout requirements. You need the directory properties file to move forward. I’ll talk more about the solution structure in a future post, but for now, know that without checking this option, you won’t be able to proceed properly.

Step 3: The Configuration Wizard
The Uno Platform team has created a comprehensive wizard that guides you through various configuration options. It might seem overwhelming at first, but it’s better to have this guided approach where you can make one decision at a time.
Your first decision is which target framework to use. They recommend .NET 9, which I like, but in my test project, I’m working with .NET 8 because I’m primarily focused on WebAssembly output. Uno offers multi-threading in WebAssembly with .NET 8, which is why I chose it, but for new projects, .NET 9 is likely the better choice.

Step 4: Target Platforms
Next, you need to select which platforms you want to target. I always select all of them because the most beautiful aspect of the Uno Platform is true multi-targeting with a single codebase.
In the past (during the Xamarin era), you needed multiple projects with a complex directory structure. With Uno, it’s actually a single unified project, creating a clean solution layout. So while you can select just WebAssembly if that’s your only focus, I think you get the most out of Uno by multi-targeting.

Step 5: Presentation Pattern
The next question is which presentation pattern you want to use. I would suggest MVUX, though I still have some doubts as I haven’t tried MVVM with Uno yet. MVVM is the more common pattern that most programmers understand, while MVUX is the new approach.
One challenge is that when you check the official Uno sample repository, the examples come in every presentation pattern flavor. Sometimes you’ll find a solution for your task in one pattern but not another, so you may need to translate between them. You’ll likely find more examples using MVVM.

Step 6: Markup Language
For markup, I recommend selecting XAML. In my first project, I tried using C# markup, which worked well until I reached some roadblocks I couldn’t overcome. I didn’t want to get stuck trying to solve one specific layout issue, so I switched. For beginners, I suggest starting with XAML.

Step 7: Theming
For theming, you’ll need to select a UI theme. I don’t have a strong preference here and typically stick with the defaults: using Material Design, the theme service, and importing Uno DSP.

Step 8: Extensions
When selecting extensions to include, I recommend choosing almost all of them as they’re useful for modern application development. The only thing you might want to customize is the logging type (Console, Debug, or Serilog), depending on your previous experience. Generally, most applications will benefit from all the extensions offered.

Step 9: Features
Next, you’ll select which features to include in your application. For my tests, I include everything except the MAUI embedding and the media element. Most features can be useful, and I’ll show in a future post how to set them up when discussing the solution structure.

Step 10: Authentication
You can select “None” for authentication if you’re building test projects, but I chose “Custom” because I wanted to see how it works. In my case, I’m authenticating against DevExpress XAF REST API, but I’m also interested in connecting my test project to Azure B2C.

Step 11: Application ID
Next, you’ll need to provide an application ID. While I haven’t fully explored the purpose of this ID yet, I believe it’s needed when publishing applications to app stores like Google Play and the Apple App Store.

Step 12: Testing
I’m a big fan of testing, particularly integration tests. While unit tests are essential when developing components, for business applications, integration tests that verify the flow are often sufficient.
Uno also offers UI testing capabilities, which I haven’t tried yet but am looking forward to exploring. In cross-platform UI development, there aren’t many choices for UI testing, so having something built in is fantastic.
Testing might seem like a waste of time initially, but once you have tests in place, you’ll save time in the future. With each iteration or new release, you can run all your tests to ensure everything works correctly. The time invested in creating tests upfront pays off during maintenance and updates.

Step 13: CI Pipelines
The final step is about CI pipelines. If you’re building a test application, you don’t need to select anything. For production applications, you can choose Azure Pipelines or GitHub Actions based on your preferences. In my case, I’m not involved with CI pipeline configuration at my workplace, so I have limited experience in this area.

Conclusion
If you’ve made it this far, congratulations! You should now have a shiny new Uno Platform application in your IDE.
This post only covers the initial setup choices when creating a new Uno application. Your development path will differ based on the selections you’ve made, which can significantly impact how you write your code. Choose wisely and experiment with different combinations to see what works best for your needs.
During my learning journey with the Uno Platform, I’ve tried various settings—some worked well, others didn’t, but most will function if you understand what you’re doing. I’m still learning and taking a hands-on approach, relying on trial and error, occasional documentation checks, and GitHub Copilot assistance.
Thanks for reading and see you in the next post!
by Joche Ojeda | Mar 12, 2025 | dotnet, http, netcore, netframework, network, WebServers
Last week, I was diving into Uno Platform to understand its UI paradigms. What particularly caught my attention is Uno’s ability to render a webapp using WebAssembly (WASM). Having worked with WASM apps before, I’m all too familiar with the challenges of connecting to data sources and handling persistence within these applications.
My Previous WASM Struggles
About a year ago, I faced a significant challenge: connecting a desktop WebAssembly app to an old WCF webservice. Despite having the CORS settings correctly configured (or so I thought), I simply couldn’t establish a connection from the WASM app to the server. I spent days troubleshooting both the WCF service and another ASMX service, but both attempts failed. Eventually, I had to resort to webserver proxies to achieve my goal.
This experience left me somewhat traumatized by the mere mention of “connecting WASM with an API.” However, the time came to face this challenge again during my weekend experiments.
A Pleasant Surprise with Uno Platform
This weekend, I wanted to connect a XAF REST API to an Uno Platform client. To my surprise, it turned out to be incredibly straightforward. I successfully performed this procedure twice: once with a XAF REST API and once with the API included in the Uno app template. The ease of this integration was a refreshing change from my previous struggles.
Understanding CORS and Why It Matters for WASM Apps
To understand why my previous attempts failed and my recent ones succeeded, it’s important to grasp what CORS is and why it’s crucial for WebAssembly applications.
What is CORS?
CORS (Cross-Origin Resource Sharing) is a security feature implemented by web browsers that restricts web pages from making requests to a domain different from the one that served the original web page. It’s an HTTP-header based mechanism that allows a server to indicate which origins (domains, schemes, or ports) other than its own are permitted to load resources.
The Same-Origin Policy
Browsers enforce a security restriction called the “same-origin policy” which prevents a website from one origin from requesting resources from another origin. An origin consists of:
- Protocol (HTTP, HTTPS)
- Domain name
- Port number
For example, if your website is hosted at https://myapp.com, it cannot make AJAX requests to https://myapi.com without the server explicitly allowing it through CORS.
Why CORS is Required for Blazor WebAssembly
Blazor WebAssembly (which uses similar principles to Uno Platform’s WASM implementation) is fundamentally different from Blazor Server in how it operates:
- Separate Deployment: Blazor WebAssembly apps are fully downloaded to the client’s browser and run entirely in the browser using WebAssembly. They’re typically hosted on a different server or domain than your API.
- Client-Side Execution: Since all code runs in the browser, when your Blazor WebAssembly app makes HTTP requests to your API, they’re treated as cross-origin requests if the API is hosted on a different domain, port, or protocol.
- Browser Security: Modern browsers block these cross-origin requests by default unless the server (your API) explicitly permits them via CORS headers.
Implementing CORS in Startup.cs
The solution to these CORS issues lies in properly configuring your server. In your Startup.cs file, you can configure CORS as follows:
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddPolicy("AllowBlazorApp",
            builder =>
            {
                builder.WithOrigins("https://localhost:5000") // Replace with your Blazor app's URL
                       .AllowAnyHeader()
                       .AllowAnyMethod();
            });
    });

    // Other service configurations...
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Other middleware configurations...

    app.UseCors("AllowBlazorApp");

    // Other middleware configurations...
}
Conclusion
My journey with connecting WebAssembly applications to APIs has had its ups and downs. What once seemed like an insurmountable challenge has now become much more manageable, especially with platforms like Uno that simplify the process. Understanding CORS and implementing it correctly is crucial for successful WASM-to-API communication.
If you’re working with WebAssembly applications and facing similar challenges, I hope my experience helps you avoid some of the pitfalls I encountered along the way.
by Joche Ojeda | Mar 11, 2025 | network
DNS and Virtual Hosting: A Personal Journey
In this article, I’m going to talk about a topic I’ve been working on lately because I’m creating a course on how to host ASP.NET Core applications on Linux. This is a trick that I learned a really long time ago.
I was talking with one of my students, Lance, who asked me when I learned all this hosting and server stuff. It’s actually a nice story.
My Early Server Adventures
When I was around 16 years old, I got a book on networking and figured out how to find free public IPs on my Internet provider’s network. A few years later, when I was 19, I got a book on Windows 2000 Server and managed to get a copy of that Windows version.
I had a great combination of resources:
- Public IPs that I was “borrowing” from my Internet provider
- A copy of Windows Server
- An extra machine (at that time, we only had one computer at home, unlike now where I have about 5 computers)
I formatted the extra computer using Windows Server 2000 and set up DNS using a program called Simple DNS. I also set up the IIS web server. Finally, for the first time in my life, I could host a domain from a computer at home.
In El Salvador, .sv domains were free at that time—you just needed to fill out a form and you could get them for free for many years. Now they’re quite expensive, around $50, compared to normal domains.
The Magic of Virtual Hosting
What I learned was that you can host multiple websites or web applications sharing the same IP without having to change ports by using a hostname or domain name instead.
Here’s how DNS works: When you have an internet connection, it has several parts—the IP address, the subnet mask, the gateway, and the DNS servers. The DNS servers essentially house a simple file where they keep translations: this domain (like HotCoder.com) translates to this IP address. They make IP addresses human-readable.
Once requests go to the server side, the server checks which domain name is being requested and then picks from all the websites being hosted on that server and responds accordingly.
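Since this ties into hosting ASP.NET Core applications, here is a minimal sketch of what name-based virtual hosting looks like in code, using ASP.NET Core’s RequireHost to route by the requested host name. The host names are placeholders, and in practice a front-end web server or reverse proxy (Nginx, IIS, Apache) usually performs this routing before the request reaches your app:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Same idea a web server uses for virtual hosting: pick the site by the Host header
app.MapGet("/", () => "Main site").RequireHost("jocheojeda.com");
app.MapGet("/", () => "Blog").RequireHost("blog.jocheojeda.com");

app.Run();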
Creating DNS records was tricky the first time. I spent a lot of time reading about it. The internet wasn’t like it is now—we didn’t have AI to help us. I had to figure it out with books, and growing up in El Salvador, we didn’t always have the newest or most accurate books available.
The Hosts File: A Local DNS
In the most basic setup, you need a record which says “this domain goes to this IP,” and then maybe a CNAME record that does something similar. That’s what DNS servers do—they maintain these translation tables.
Each computer also has its own translation table, which is a text file. In Windows, it’s called the “hosts” file. If you’ve used computers for development, you probably know that there’s an IP address reserved for localhost: 127.0.0.1. When you type “localhost” in the browser, it translates to that IP address.
This translation doesn’t require an external network request. Instead, your computer checks the hosts file, where you can set up the same domain-to-IP translations locally. This is how you can test domains without actually buying them. You can say “google.com will be forwarded to this IP address” which can be on your own computer.
A Real-World Application
I used this principle just this morning. I have an old MSI computer from 2018—still a solid machine with an i7 processor and 64GB of RAM. I reformatted it last week and set up the Hyper-V server. Inside Hyper-V, I set up an Ubuntu machine to emulate hosting, and installed a virtual hosting manager called Webmin.
I know I could do everything via command line, but why write a lot of text when you can use a user interface?
Recently, we’ve been having problems with our servers. My business partner Javier (who’s like a brother to me) mentioned that we have many test servers without clear documentation of what’s inside each one. We decided to format some of them to make them clean test servers again.
One of our servers that was failing happens to host my blog—the very one you’re reading right now! Yesterday, Javier messaged me early in the morning (7 AM for me in Europe, around 9 PM for him in America) to tell me my blog was down. There seemed to be a problem with the server that I couldn’t immediately identify.
We decided to move to a bigger server. I created a backup of the virtual server (something I’ll discuss in a different post) and moved it to the Hyper-V virtual machine on my MSI computer. I didn’t want to redirect my real IP address and DNS servers to my home computer—that would be messy and prevent access to my blog temporarily.
Instead, I modified the hosts file on my computer to point to the private internal IP of that virtual server. This allowed me to test everything locally before making any public DNS changes.
Understanding DNS: A Practical Example
Let me explain how DNS actually works with a simple example using the domain jocheojeda.com and an IP address of 203.0.113.42.
How DNS Resolution Works with ISP DNS Servers
When you type jocheojeda.com in your browser, here’s what happens:

- Your browser asks your operating system to resolve jocheojeda.com
- Your OS checks its local DNS cache, doesn’t find it, and then asks your ISP’s DNS server
- If the ISP’s DNS server doesn’t know, it asks the root DNS servers, which direct it to the appropriate Top-Level Domain (TLD) servers for .com
- The TLD servers direct the ISP DNS to the authoritative DNS servers for jocheojeda.com
- The authoritative DNS server responds with the A record: jocheojeda.com -> 203.0.113.42
- Your ISP DNS server caches this information and passes it back to your computer
- Your browser can now connect directly to the web server at 203.0.113.42
DNS Records Explained
A Record (Address Record)
An A record maps a domain name directly to an IPv4 address:
jocheojeda.com. IN A 203.0.113.42
This tells DNS servers that when someone asks for jocheojeda.com, they should be directed to the server at 203.0.113.42.
CNAME Record (Canonical Name)
A CNAME record maps one domain name to another domain name:
www.jocheojeda.com. IN CNAME jocheojeda.com.
blog.jocheojeda.com. IN CNAME jocheojeda.com.
This means that www.jocheojeda.com and blog.jocheojeda.com are aliases for jocheojeda.com. When someone visits either of these subdomains, DNS will first resolve them to jocheojeda.com, and then resolve that to 203.0.113.42.
Using the Windows Hosts File
Now, let’s see what happens when you use the hosts file instead:

When using the hosts file:
- Your browser asks your operating system to resolve jocheojeda.com
- Your OS checks the hosts file first, before any external DNS servers
- It finds an entry: 192.168.1.10 jocheojeda.com
- The OS immediately returns the IP 192.168.1.10 to your browser
- Your browser connects to 192.168.1.10 instead of the actual public IP
- The external DNS servers are never consulted
The Windows hosts file is located at C:\Windows\System32\drivers\etc\hosts. A typical entry might look like:
# For local development
192.168.1.10 jocheojeda.com
192.168.1.10 www.jocheojeda.com
192.168.1.10 api.jocheojeda.com
This is incredibly useful for:
- Testing websites locally before going live
- Testing different server configurations without changing public DNS
- Redirecting domains during development or troubleshooting
- Blocking certain websites by pointing them to 127.0.0.1
Why This Matters for Development
By modifying your hosts file, you can work on multiple websites locally, all running on the same machine but accessible via different domain names. This perfectly mimics how virtual hosting works on real servers, but without needing to change any public DNS records.
This technique saved me when my blog server was failing. I could test everything locally using my actual domain name in the browser, making sure everything was working correctly before changing any public DNS settings.
Conclusion
Understanding DNS and how to manipulate it locally via the hosts file is a powerful skill for any developer or system administrator. It allows you to test complex multi-domain setups without affecting your live environment, and can be a lifesaver when troubleshooting server issues.
In future posts, I’ll dive deeper into server virtualization and how to efficiently manage multiple web applications on a single server.
by Joche Ojeda | Mar 11, 2025 | http, MAUI, Xamarin
When developing cross-platform mobile applications with .NET MAUI (or previously Xamarin), you may encounter situations where your app works perfectly with public APIs but fails when connecting to internal network services. These issues often stem from HTTP client implementation differences, certificate validation, and TLS compatibility. This article explores how to identify, troubleshoot, and resolve these common networking challenges.
Understanding HTTP Client Options in MAUI/Xamarin
In the MAUI/.NET ecosystem, developers have access to two primary HTTP client implementations:
1. Managed HttpClient (Microsoft’s implementation)
- Cross-platform implementation built into .NET
- Consistent behavior across different operating systems
- May handle SSL/TLS differently than platform-native implementations
- Uses the .NET certificate validation system
2. Native HttpClient (Android’s implementation)
- Leverages the platform’s native networking stack
- Typically offers better performance on the specific platform
- Uses the device’s system certificate trust store
- Follows platform-specific security policies and restrictions
Switching Between Native and Managed HttpClient
In MAUI Applications
MAUI provides a flexible handler registration system that lets you explicitly choose which implementation to use:
// In your MauiProgram.cs
public static MauiApp CreateMauiApp()
{
var builder = MauiApp.CreateBuilder();
builder
.UseMauiApp<App>()
.ConfigureMauiHandlers(handlers =>
{
// Use the managed implementation (Microsoft's .NET HttpClient)
handlers.AddHandler(typeof(HttpClient), typeof(ManagedHttpMessageHandler));
// OR use the native implementation (platform-specific)
// handlers.AddHandler(typeof(HttpClient), typeof(PlatformHttpMessageHandler));
});
return builder.Build();
}
In Xamarin.Forms Legacy Applications
For Xamarin.Forms applications, set this in your platform-specific initialization code:
// In MainActivity.cs (Android) or AppDelegate.cs (iOS)
HttpClientHandler.UseNativePlatformHandler = false; // Use managed handler
// OR
HttpClientHandler.UseNativePlatformHandler = true; // Use native handler
Creating Specific Client Instances
You can also explicitly create HttpClient instances with specific handlers when needed:
// Use the managed handler
var managedHandler = new HttpClientHandler();
var managedClient = new HttpClient(managedHandler);
// Use the native handler (with DependencyService in Xamarin)
var nativeHandler = DependencyService.Get<INativeHttpClientHandler>();
var nativeClient = new HttpClient(nativeHandler);
Using HttpClientFactory (Recommended for MAUI)
For better control, testability, and lifecycle management, consider using HttpClientFactory:
// In your MauiProgram.cs
builder.Services.AddHttpClient("ManagedClient", client => {
client.BaseAddress = new Uri("https://your.api.url/");
})
.ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler());
// Then inject and use it in your services
public class MyApiService
{
private readonly HttpClient _client;
public MyApiService(IHttpClientFactory clientFactory)
{
_client = clientFactory.CreateClient("ManagedClient");
}
}
Common Issues and Troubleshooting
1. Self-Signed Certificates
Internal APIs often use self-signed certificates that aren’t trusted by default. Here’s how to handle them:
// Option 1: Create a custom handler that bypasses certificate validation
// (ONLY for development/testing environments)
var handler = new HttpClientHandler
{
ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => true
};
var client = new HttpClient(handler);
For production environments, instead of bypassing validation (see the certificate-pinning sketch after this list):
- Add your self-signed certificate to the Android trust store
- Configure your app to trust specific certificates
- Generate proper certificates from a trusted Certificate Authority
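As a safer middle ground for production, you can keep validation on and explicitly trust only the one known self-signed certificate, for example by checking its thumbprint. This is a sketch under the assumption that you can obtain the certificate’s thumbprint ahead of time (the value below is a placeholder):

// Trust only a single known self-signed certificate instead of disabling validation
const string expectedThumbprint = "0000000000000000000000000000000000000000"; // placeholder

var handler = new HttpClientHandler
{
    ServerCertificateCustomValidationCallback = (message, cert, chain, errors) =>
    {
        // Accept anything that already passes normal validation
        if (errors == System.Net.Security.SslPolicyErrors.None)
            return true;

        // Otherwise accept only the pinned self-signed certificate
        return cert != null &&
               string.Equals(cert.GetCertHashString(), expectedThumbprint,
                             StringComparison.OrdinalIgnoreCase);
    }
};

var pinnedClient = new HttpClient(handler);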
2. TLS Version Mismatches
Different Android versions support different TLS versions by default:
- Android 4.1-4.4: TLS 1.0 by default
- Android 5.0+: TLS 1.0, 1.1, 1.2
- Android 10+: TLS 1.3 support
If your server requires a specific TLS version:
// Force specific TLS versions
System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls13;
3. Network Configuration
Ensure your app has the proper permissions in the AndroidManifest.xml:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
For Android 9+ (API level 28+), configure network security:
<!-- Create a network_security_config.xml file in Resources/xml -->
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
<domain-config cleartextTrafficPermitted="true">
<domain includeSubdomains="true">your.internal.domain</domain>
</domain-config>
</network-security-config>
Then reference it in your AndroidManifest.xml:
<application android:networkSecurityConfig="@xml/network_security_config">
Practical Troubleshooting Steps
- Test with both HTTP client implementations: Switch between native and managed implementations to isolate whether the issue is specific to one implementation
- Test the API endpoint outside your app: Use tools like Postman or curl on the same network
- Enable logging for network calls
// Add this before making requests
client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", "YourApp/1.0"); // 'client' is your HttpClient instance
- Capture and inspect network traffic: Use Charles Proxy or Fiddler to inspect the actual requests/responses
- Check certificate information
# On your development machine
openssl s_client -connect your.internal.server:443 -showcerts
- Verify which implementation you’re using
var client = new HttpClient();
var handlerType = client.GetType().GetField("_handler",
System.Reflection.BindingFlags.Instance |
System.Reflection.BindingFlags.NonPublic)?.GetValue(client);
Console.WriteLine($"Using handler: {handlerType?.GetType().FullName}");
- Debug specific errors
- For Java.IO.IOException: “Trust anchor for certification path not found” – this means your app doesn’t trust the certificate
- For HttpRequestException with “The SSL connection could not be established” – likely a TLS version mismatch
Conclusion
When your MAUI Android app connects successfully to public APIs but fails with internal network services, the issue often lies with HTTP client implementation differences, certificate validation, or TLS compatibility. By systematically switching between native and managed HTTP clients and applying the troubleshooting techniques outlined above, you can identify and resolve these networking challenges.
Remember that each implementation has its advantages – the native implementation typically offers better performance and follows platform-specific security policies, while the managed implementation provides more consistent cross-platform behavior. Choose the one that best fits your specific requirements and security considerations.