by Joche Ojeda | Mar 12, 2025 | dotnet, http, netcore, netframework, network, WebServers
Last week, I was diving into Uno Platform to understand its UI paradigms. What particularly caught my attention is Uno's ability to render a web app using WebAssembly (WASM). Having worked with WASM apps before, I'm all too familiar with the challenges of connecting to data sources and handling persistence within these applications.
My Previous WASM Struggles
About a year ago, I faced a significant challenge: connecting a desktop WebAssembly app to an old WCF web service. Despite having the CORS settings correctly configured (or so I thought), I simply couldn't establish a connection from the WASM app to the server. I spent days troubleshooting both the WCF service and another ASMX service, but both attempts failed. Eventually, I had to resort to web server proxies to achieve my goal.
This experience left me somewhat traumatized by the mere mention of “connecting WASM with an API.” However, the time came to face this challenge again during my weekend experiments.
A Pleasant Surprise with Uno Platform
This weekend, I wanted to connect a XAF REST API to an Uno Platform client. To my surprise, it turned out to be incredibly straightforward. I successfully performed this procedure twice: once with a XAF REST API and once with the API included in the Uno app template. The ease of this integration was a refreshing change from my previous struggles.
Understanding CORS and Why It Matters for WASM Apps
To understand why my previous attempts failed and my recent ones succeeded, it’s important to grasp what CORS is and why it’s crucial for WebAssembly applications.
What is CORS?
CORS (Cross-Origin Resource Sharing) is a security feature implemented by web browsers that restricts web pages from making requests to a domain different from the one that served the original web page. It’s an HTTP-header based mechanism that allows a server to indicate which origins (domains, schemes, or ports) other than its own are permitted to load resources.
The Same-Origin Policy
Browsers enforce a security restriction called the “same-origin policy” which prevents a website from one origin from requesting resources from another origin. An origin consists of:
- Protocol (HTTP, HTTPS)
- Domain name
- Port number
For example, if your website is hosted at https://myapp.com, it cannot make AJAX requests to https://myapi.com without the server explicitly allowing it through CORS.
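At the wire level, this permission is just a pair of headers. An illustrative exchange (domain names are placeholders):

Origin: https://myapp.com                         (sent automatically by the browser)
Access-Control-Allow-Origin: https://myapp.com    (the server's opt-in response header)

If the response header is missing or doesn't match, the browser discards the response and your app sees a failed request.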
Why CORS is Required for Blazor WebAssembly
Blazor WebAssembly (which uses similar principles to Uno Platform’s WASM implementation) is fundamentally different from Blazor Server in how it operates:
- Separate Deployment: Blazor WebAssembly apps are fully downloaded to the client’s browser and run entirely in the browser using WebAssembly. They’re typically hosted on a different server or domain than your API.
- Client-Side Execution: Since all code runs in the browser, when your Blazor WebAssembly app makes HTTP requests to your API, they’re treated as cross-origin requests if the API is hosted on a different domain, port, or protocol.
- Browser Security: Modern browsers block these cross-origin requests by default unless the server (your API) explicitly permits them via CORS headers.
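Notably, there is nothing CORS-specific to configure on the client; you simply point your HttpClient at the API and let the server grant permission. As a sketch, here's the typical registration in a Blazor WebAssembly Program.cs (the API URL is a placeholder):

// Program.cs of the WASM client
builder.Services.AddScoped(sp => new HttpClient
{
    BaseAddress = new Uri("https://myapi.com/")
});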
Implementing CORS in Startup.cs
The solution to these CORS issues lies in properly configuring your server. In your Startup.cs file, you can configure CORS as follows:
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddPolicy("AllowBlazorApp",
            builder =>
            {
                builder.WithOrigins("https://localhost:5000") // Replace with your Blazor app's URL
                       .AllowAnyHeader()
                       .AllowAnyMethod();
            });
    });

    // Other service configurations...
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Other middleware configurations...

    app.UseCors("AllowBlazorApp");

    // Other middleware configurations...
}
Conclusion
My journey with connecting WebAssembly applications to APIs has had its ups and downs. What once seemed like an insurmountable challenge has now become much more manageable, especially with platforms like Uno that simplify the process. Understanding CORS and implementing it correctly is crucial for successful WASM-to-API communication.
If you’re working with WebAssembly applications and facing similar challenges, I hope my experience helps you avoid some of the pitfalls I encountered along the way.
by Joche Ojeda | Mar 11, 2025 | http, MAUI, Xamarin
When developing cross-platform mobile applications with .NET MAUI (or previously Xamarin), you may encounter situations where your app works perfectly with public APIs but fails when connecting to internal network services. These issues often stem from HTTP client implementation differences, certificate validation, and TLS compatibility. This article explores how to identify, troubleshoot, and resolve these common networking challenges.
Understanding HTTP Client Options in MAUI/Xamarin
In the MAUI/.NET ecosystem, developers have access to two primary HTTP client implementations:
1. Managed HttpClient (Microsoft’s implementation)
- Cross-platform implementation built into .NET
- Consistent behavior across different operating systems
- May handle SSL/TLS differently than platform-native implementations
- Uses the .NET certificate validation system
2. Native HttpClient (the platform's implementation, e.g. Android's)
- Leverages the platform’s native networking stack
- Typically offers better performance on the specific platform
- Uses the device’s system certificate trust store
- Follows platform-specific security policies and restrictions
Switching Between Native and Managed HttpClient
In MAUI Applications
HttpClient isn't wired up through MAUI's UI handler registration; instead, the implementation behind a plain new HttpClient() is selected by a build property, and you can always construct a client with an explicit handler. A sketch (UseNativeHttpHandler is the property documented for .NET mobile targets):

<!-- In your .csproj: false = managed SocketsHttpHandler, true = the platform-native handler -->
<UseNativeHttpHandler>false</UseNativeHttpHandler>

Or pick the implementation explicitly, per client, in code:

// Use the managed implementation (Microsoft's .NET HttpClient stack)
var managedClient = new HttpClient(new SocketsHttpHandler());

#if ANDROID
// OR use the native implementation (Android's network stack)
var nativeClient = new HttpClient(new Xamarin.Android.Net.AndroidMessageHandler());
#endif
In Xamarin.Forms Legacy Applications
For Xamarin.Forms applications, the choice lives in each platform project: in the Android project properties (Android Options > HttpClient implementation) and the iOS build settings (HttpClient Implementation), or you can construct the handler explicitly:

// In MainActivity.cs (Android)
var nativeClient = new HttpClient(new Xamarin.Android.Net.AndroidClientHandler()); // native
// In AppDelegate.cs (iOS)
// var nativeClient = new HttpClient(new NSUrlSessionHandler()); // native
// On either platform, new HttpClient(new HttpClientHandler()) gives you the managed stack
Creating Specific Client Instances
You can also explicitly create HttpClient instances with specific handlers when needed:

// Use the managed handler
var managedHandler = new HttpClientHandler();
var managedClient = new HttpClient(managedHandler);

// Or resolve a platform-specific handler through DependencyService in Xamarin.
// Whatever you register must derive from HttpMessageHandler, since that is
// what the HttpClient constructor accepts.
var nativeHandler = DependencyService.Get<HttpMessageHandler>();
var nativeClient = new HttpClient(nativeHandler);
Using HttpClientFactory (Recommended for MAUI)
For better control, testability, and lifecycle management, consider using HttpClientFactory (the AddHttpClient extension ships in the Microsoft.Extensions.Http NuGet package):
// In your MauiProgram.cs
builder.Services.AddHttpClient("ManagedClient", client => {
client.BaseAddress = new Uri("https://your.api.url/");
})
.ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler());
// Then inject and use it in your services
public class MyApiService
{
private readonly HttpClient _client;
public MyApiService(IHttpClientFactory clientFactory)
{
_client = clientFactory.CreateClient("ManagedClient");
}
}
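For the constructor injection above to work, the consuming service has to be registered too. A minimal sketch, reusing the names from the example:

// Register MyApiService so IHttpClientFactory is injected into it
builder.Services.AddSingleton<MyApiService>();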
Common Issues and Troubleshooting
1. Self-Signed Certificates
Internal APIs often use self-signed certificates that aren’t trusted by default. Here’s how to handle them:
// Option 1: Create a custom handler that bypasses certificate validation
// (ONLY for development/testing environments)
var handler = new HttpClientHandler
{
ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => true
};
var client = new HttpClient(handler);
For production environments, instead of bypassing validation:
- Add your self-signed certificate to the Android trust store
- Configure your app to trust specific certificates (a sketch follows this list)
- Generate proper certificates from a trusted Certificate Authority
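As a middle ground between bypassing validation entirely and a full CA rollout, you can pin the one certificate you expect. A minimal sketch, assuming you have the server certificate's thumbprint at hand (the value below is a placeholder):

var pinnedHandler = new HttpClientHandler
{
    ServerCertificateCustomValidationCallback = (message, cert, chain, errors) =>
        // Accept certificates that pass normal validation, plus our known self-signed one
        errors == System.Net.Security.SslPolicyErrors.None ||
        cert?.Thumbprint == "0123456789ABCDEF0123456789ABCDEF01234567" // placeholder thumbprint
};
var pinnedClient = new HttpClient(pinnedHandler);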
2. TLS Version Mismatches
Different Android versions support different TLS versions by default:
- Android 4.1-4.4: TLS 1.0 by default
- Android 5.0+: TLS 1.0, 1.1, 1.2
- Android 10+: TLS 1.3 support
If your server requires a specific TLS version, restrict it on the handler (note that ServicePointManager.SecurityProtocol has no effect on HttpClient in .NET Core/.NET 5+):

// Force specific TLS versions
var tlsHandler = new HttpClientHandler { SslProtocols = System.Security.Authentication.SslProtocols.Tls12 | System.Security.Authentication.SslProtocols.Tls13 };
var tlsClient = new HttpClient(tlsHandler);
3. Network Configuration
Ensure your app has the proper permissions in the AndroidManifest.xml:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
For Android 9+ (API level 28+), configure network security:
<!-- Create a network_security_config.xml file in Resources/xml -->
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
<domain-config cleartextTrafficPermitted="true">
<domain includeSubdomains="true">your.internal.domain</domain>
</domain-config>
</network-security-config>
Then reference it in your AndroidManifest.xml:
<application android:networkSecurityConfig="@xml/network_security_config">
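The same file can also solve the self-signed certificate problem without touching validation code: bundle your internal CA certificate as a raw resource and declare it as a trust anchor for the domain (the resource name is a placeholder):

<domain-config>
    <domain includeSubdomains="true">your.internal.domain</domain>
    <trust-anchors>
        <!-- my_ca.pem placed under Resources/raw -->
        <certificates src="@raw/my_ca" />
    </trust-anchors>
</domain-config>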
Practical Troubleshooting Steps
- Test with both HTTP client implementations: switch between the native and managed implementations to isolate whether the issue is specific to one of them
- Test the API endpoint outside your app: use tools like Postman or curl on the same network
- Make your app's traffic easy to identify in logs
// Add this before making requests (DefaultRequestHeaders is an instance property, not static)
client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", "YourApp/1.0");
- Capture and inspect network traffic: use Charles Proxy or Fiddler to inspect the actual requests/responses
- Check certificate information
# On your development machine
openssl s_client -connect your.internal.server:443 -showcerts
- Verify which implementation you're using
// _handler is a private field of the HttpMessageInvoker base class
// (an implementation detail that may change between .NET versions)
var client = new HttpClient();
var handler = typeof(HttpMessageInvoker)
    .GetField("_handler",
        System.Reflection.BindingFlags.Instance |
        System.Reflection.BindingFlags.NonPublic)
    ?.GetValue(client);
Console.WriteLine($"Using handler: {handler?.GetType().FullName}");
- Debug specific errors
- For Java.IO.IOException: “Trust anchor for certification path not found” – this means your app doesn’t trust the certificate
- For HttpRequestException with “The SSL connection could not be established” – likely a TLS version mismatch
Conclusion
When your MAUI Android app connects successfully to public APIs but fails with internal network services, the issue often lies with HTTP client implementation differences, certificate validation, or TLS compatibility. By systematically switching between native and managed HTTP clients and applying the troubleshooting techniques outlined above, you can identify and resolve these networking challenges.
Remember that each implementation has its advantages – the native implementation typically offers better performance and follows platform-specific security policies, while the managed implementation provides more consistent cross-platform behavior. Choose the one that best fits your specific requirements and security considerations.
by Joche Ojeda | Mar 5, 2025 | C#, dotnet, Uno Platform
Exploring the Uno Platform: Handling Unsafe Code in Multi-Target Applications
This last weekend I wanted to do a technical experiment as I always do when I have some free time. I decided there was something new I needed to try and see if I could write about. The weekend turned out to be a beautiful surprise as I went back to test the Uno platform – a multi-OS, multi-target UI framework that generates mobile applications, desktop applications, web applications, and even Linux applications.
Uno is a beautiful concept, but for a long time, the tooling wasn't quite there. I had made it work several times in the past, but after an update or something in Visual Studio, the setup would break and applications would become basically impossible to compile. That seems to no longer be the case!
Last weekend, I set up Uno on two different computers: my new Surface laptop with an ARM type of processor (which can sometimes be tricky for some tools) and my old MSI with an x64 type of processor. I was thrilled that the setup was effortless on both machines.
After the successful setup, I decided to download the entire Uno demo repository and start trying out the demos. However, for some reason, they didn't compile. I eventually realized the culprit was code generated at compilation time that turned out to be unsafe code. Here are my findings about how to handle the unsafe code that is generated.
AllowUnsafeBlocks Setting in Project File
I discovered that this setting was commented out in the Navigation.csproj file:
<!--<AllowUnsafeBlocks>true</AllowUnsafeBlocks>-->
When uncommented, this setting allows the use of unsafe code blocks in your .NET 8 Uno Platform project. To enable unsafe code, you need to remove the comment markers from this line in your project file.
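For context, the uncommented setting sits inside a PropertyGroup like any other build property:

<PropertyGroup>
  <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
</PropertyGroup>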
Why It’s Needed
The <AllowUnsafeBlocks>true</AllowUnsafeBlocks> setting is required whenever you want to use “unsafe” code in C#. By default, C# is designed to be memory-safe, preventing direct memory manipulation that could lead to memory corruption, buffer overflows, or security vulnerabilities. When you add this setting to your project file, you’re explicitly telling the compiler to allow portions of code marked with the unsafe keyword.
Unsafe code lets you work with pointers and perform direct memory operations, which can be useful for:
- Performance-critical operations
- Interoperability with native code
- Direct memory manipulation
What Makes Code “Unsafe”
Code is considered “unsafe” when it bypasses .NET’s memory safety guarantees. Specifically, unsafe code includes:
- Pointer operations: Using the * and -> operators with memory addresses
- Fixed statements: Pinning managed objects in memory so their addresses don’t change during garbage collection
- Sizeof operator: Getting the size of a type in bytes
- Stackalloc keyword: Allocating memory on the stack instead of the heap
Example of Unsafe Code
Here’s an example of unsafe code that might be generated:
unsafe
{
int[] numbers = new int[] { 10, 20, 30, 40, 50 };
// UNSAFE: Pinning an array in memory and getting direct pointer
fixed (int* pNumbers = numbers)
{
// UNSAFE: Pointer declaration and manipulation
int* p = pNumbers;
// UNSAFE: Dereferencing pointers to modify memory directly
*p = *p + 5;
*(p + 1) = *(p + 1) + 5;
}
}
Why Use Unsafe Code?
There are several legitimate reasons to use unsafe code:
- Performance optimization: For extremely performance-critical sections where you need to eliminate overhead from bounds checking or other safety features.
- Interoperability: When interfacing with native libraries or system APIs that require pointers.
- Low-level operations: For systems programming tasks that require direct memory manipulation, like implementing custom memory managers.
- Hardware access: When working directly with device drivers or memory-mapped hardware.
- Algorithms requiring pointer arithmetic: Some specialized algorithms are most efficiently implemented using pointer operations.
Risks and Considerations
Using unsafe code comes with significant responsibilities:
- You bypass the runtime’s safety checks, so errors can cause application crashes or security vulnerabilities
- Memory leaks are possible if you allocate unmanaged memory and don’t free it properly
- Your code becomes less portable across different .NET implementations
- Debugging unsafe code is more challenging
In general, you should only use unsafe code when absolutely necessary and isolate it in small, well-tested sections of your application.
In conclusion, I’m happy to see that the Uno platform has matured significantly. While there are still some challenges like handling unsafe generated code, the setup process has become much more reliable. If you’re looking to develop truly cross-platform applications with a single codebase, Uno is worth exploring – just remember to uncomment that AllowUnsafeBlocks setting if you run into compilation issues!
by Joche Ojeda | Jan 20, 2025 | ADO, ADO.NET, Database, dotnet
When I first encountered the challenge of migrating hundreds of Visual Basic 6 reports to .NET, I never imagined it would lead me down a path of discovering specialized data analytics tools. Today, I want to share my experience with ADOMD.NET and how it could have transformed our reporting challenges, even though we couldn’t implement it due to our database constraints.
The Challenge: The Sales Gap Report
The story begins with a seemingly simple report called “Sales Gap.” Its purpose was critical: identify periods when regular customers stopped purchasing specific items. For instance, if a customer typically bought 10 units monthly from January to May, then suddenly stopped in June and July, sales representatives needed to understand why.
This report required complex queries across multiple transactional tables:
- Invoicing
- Sales
- Returns
- Debits
- Credits
Initially, the report took about a minute to run. As our data grew, so did the execution time—eventually reaching an unbearable 15 minutes. We were stuck with a requirement to use real-time transactional data, making traditional optimization techniques like data warehousing off-limits.
Enter ADOMD.NET: A Specialized Solution
ADOMD.NET (ActiveX Data Objects Multidimensional .NET) emerged as a potential solution. Here’s why it caught my attention:
Key Features:
1. Multidimensional Analysis
Unlike traditional SQL queries, ADOMD.NET uses MDX (Multidimensional Expressions), a query language designed specifically for analytical queries. Here's a basic example:
string mdxQuery = @"
SELECT
{[Measures].[Sales Amount]} ON COLUMNS,
{[Date].[Calendar Year].MEMBERS} ON ROWS
FROM [Sales Cube]
WHERE [Product].[Category].[Electronics]";
2. Performance Optimization
ADOMD.NET is built for analytical workloads, offering better performance for complex calculations and aggregations. It achieves this through:
- Specialized data structures for multidimensional analysis
- Efficient handling of hierarchical data
- Built-in support for complex calculations
3. Advanced Analytics Capabilities
The tool supports sophisticated analysis patterns like:
string mdxQuery = @"
WITH MEMBER [Measures].[GrowthVsPreviousYear] AS
([Measures].[Sales Amount] -
([Measures].[Sales Amount], [Date].[Calendar Year].PREVMEMBER)
)/([Measures].[Sales Amount], [Date].[Calendar Year].PREVMEMBER)
SELECT
{[Measures].[Sales Amount], [Measures].[GrowthVsPreviousYear]}
ON COLUMNS...";
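Executing such a query from C# follows the familiar ADO.NET connection/command pattern. A minimal sketch, assuming a reachable SSAS instance and catalog (the connection string values are placeholders):

using Microsoft.AnalysisServices.AdomdClient;

using (var connection = new AdomdConnection("Data Source=localhost;Catalog=SalesAnalysis"))
{
    connection.Open();
    using (var command = new AdomdCommand(mdxQuery, connection))
    using (var reader = command.ExecuteReader())
    {
        // Each row is a tuple from the ROWS axis plus the requested measure values
        while (reader.Read())
        {
            Console.WriteLine($"{reader.GetValue(0)}: {reader.GetValue(1)}");
        }
    }
}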
Lessons Learned
While we couldn’t implement ADOMD.NET due to our use of Pervasive Database instead of SQL Server, the investigation taught me valuable lessons about report optimization:
- The importance of choosing the right tools for analytical workloads
- The limitations of running complex analytics on transactional databases
- The value of specialized query languages for different types of data analysis
Modern Applications
Today, ADOMD.NET continues to be relevant for organizations using:
- SQL Server Analysis Services (SSAS)
- Azure Analysis Services
- Power BI Premium datasets
If I were facing the same challenge today with SQL Server, ADOMD.NET would be my go-to solution for:
- Complex sales analysis
- Customer behavior tracking
- Performance-intensive analytical reports
Conclusion
While our specific situation with Pervasive Database prevented us from using ADOMD.NET, it remains a powerful tool for organizations using Microsoft’s analytics stack. The experience taught me that sometimes the solution isn’t about optimizing existing queries, but about choosing the right specialized tools for analytical workloads.
Remember: Just because you can run analytics on your transactional database doesn’t mean you should. Tools like ADOMD.NET exist for a reason, and understanding when to use them can save countless hours of optimization work and provide better results for your users.
by Joche Ojeda | Jan 17, 2025 | DevExpress, dotnet
My mom used to say that fashion is cyclical – whatever you do will eventually come back around. I’ve come to realize the same principle applies to technology. Many technologies have come and gone, only to resurface again in new forms.
Take Command Line Interface (CLI) commands, for example. For years, the industry pushed to move away from CLI towards graphical interfaces, promising a more user-friendly experience. Yet here we are in 2025, witnessing a remarkable return to CLI-based tools, especially in software development.
As a programmer, efficiency is key – particularly when dealing with repetitive tasks. This became evident when my business partner Javier and I decided to create our own application templates for Visual Studio. The process was challenging, mainly because Visual Studio’s template infrastructure isn’t well maintained. Documentation was sparse, and the whole process felt cryptic.
Our first major project was creating a template for Xamarin.Forms (now .NET MAUI), aiming to build a multi-target application template that could work across Android, iOS, and Windows. We relied heavily on James Montemagno’s excellent resources and videos to navigate this complex territory.
The task became significantly easier with the introduction of the new SDK-style projects. Compared to the older MSBuild project types, which were notoriously complex to template, the new format makes creating custom project templates much more straightforward.
In today’s development landscape, most application templates are distributed as NuGet packages, making them easier to share and implement. Interestingly, these packages are primarily designed for CLI use rather than Visual Studio’s graphical interface – a perfect example of technology coming full circle.
Following this trend, DevExpress has developed a new set of application templates that work cross-platform using the CLI. These templates leverage SkiaSharp for UI rendering, enabling true multi-IDE and multi-OS compatibility. While they’re not yet compatible with Apple Silicon, that support is likely coming in future updates.
The templates utilize CLI under the hood to generate new project structures. When you install these templates in Visual Studio Code or Visual Studio, they become available through both the CLI and the graphical interface, offering developers the best of both worlds.
Here is the official DevExpress blog post for the new application templates:
https://www.devexpress.com/subscriptions/whats-new/#project-template-gallery-net8
Templates for Visual Studio
DevExpress Template Kit for Visual Studio – Visual Studio Marketplace
Templates for VS Code
DevExpress Template Kit for VS Code – Visual Studio Marketplace
If you want to see the list of the newly installed DevExpress templates, you can run the following command in a terminal:
dotnet new list dx
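Once you spot a template you like in that list, creating a project is the standard dotnet new invocation (the short name and project name below are placeholders):

dotnet new <template-short-name> -n MyDxApp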
I’d love to hear your thoughts on this technological cycle. Which approach do you prefer for creating new projects – CLI or graphical interface? Let me know in the comments below!
by Joche Ojeda | Jan 15, 2025 | C#, dotnet, Emit, MetaProgramming, Reflection
Every programmer encounters that one technology that draws them into the darker arts of software development. For some, it’s metaprogramming; for others, it’s assembly hacking. For me, it was the mysterious world of runtime code generation through Emit in the early 2000s, during my adventures with XPO and the enigmatic Sage Accpac ERP.
The Quest Begins: A Tale of Documentation and Dark Arts
Back in the early 2000s, when the first version of XPO was released, I found myself working alongside my cousin Carlitos in our startup. Fresh from his stint as an ERP consultant in the United States, Carlitos brought with him deep knowledge of Sage Accpac, setting us on a path to provide integration services for this complex system.
Our daily bread and butter were custom reports – starting with Crystal Reports before graduating to DevExpress’s XtraReports and XtraPivotGrid. But we faced an interesting challenge: Accpac’s database was intentionally designed to resist reverse engineering, with flat tables devoid of constraints or relationships. All we had was their HTML documentation, a labyrinth of interconnected pages holding the secrets of their entity relationships.
Genesis: When Documentation Meets Dark Magic
This challenge birthed Project Genesis, my ambitious attempt to create an XPO class generator that could parse Accpac’s documentation. The first hurdle was parsing HTML – a quest that led me to CodePlex (yes, I’m dating myself here) and the discovery of HTMLAgilityPack, a remarkable tool that still serves developers today.
But the real dark magic emerged when I faced the challenge of generating classes dynamically. Buried in our library’s .NET books, I discovered the arcane art of Emit – a powerful technique for runtime assembly and class generation that would forever change my perspective on what’s possible in .NET.
Diving into the Abyss: Understanding Emit
At its core, Emit is like having a magical forge where you can craft code at runtime. Imagine being able to write code that writes more code – not just as text to be compiled later, but as actual, executable IL instructions that the CLR can run immediately.
using System.Reflection;
using System.Reflection.Emit;

AssemblyName assemblyName = new AssemblyName("DynamicAssembly");
AssemblyBuilder assemblyBuilder = AssemblyBuilder.DefineDynamicAssembly(
    assemblyName,
    AssemblyBuilderAccess.Run
);
This seemingly simple code opens a portal to one of .NET’s most powerful capabilities: dynamic assembly generation. It’s the beginning of a spell that allows you to craft types and methods from pure thought (and some carefully crafted IL instructions).
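To give a feel for where this leads, here is a minimal sketch that continues the snippet above: it defines a module, a type, and a single static method whose body is written instruction by instruction (all names are illustrative):

// Continuing from assemblyBuilder above
ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule("MainModule");
TypeBuilder typeBuilder = moduleBuilder.DefineType("DynamicType", TypeAttributes.Public);

MethodBuilder methodBuilder = typeBuilder.DefineMethod(
    "GetAnswer",
    MethodAttributes.Public | MethodAttributes.Static,
    typeof(int),
    Type.EmptyTypes);

ILGenerator il = methodBuilder.GetILGenerator();
il.Emit(OpCodes.Ldc_I4, 42); // push the constant 42 onto the evaluation stack
il.Emit(OpCodes.Ret);        // return it

Type dynamicType = typeBuilder.CreateType();
object answer = dynamicType.GetMethod("GetAnswer").Invoke(null, null); // 42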
The Power and the Peril
Like all dark magic, Emit comes with its own dangers and responsibilities. When you’re generating IL directly, you’re dancing with the very fabric of .NET execution. One wrong move – one misplaced instruction – and your carefully crafted spell can backfire spectacularly.
The first rule of Emit Club is: don’t use Emit unless you absolutely have to. The second rule is: if you do use it, document everything meticulously. Your future self (and your team) will thank you.
Modern Alternatives and Evolution
Today, the .NET ecosystem offers alternatives like Source Generators that provide similar power with less risk. But understanding Emit remains valuable – it’s like knowing the fundamental laws of magic while using higher-level spells for daily work.
In my case, Project Genesis evolved beyond its original scope, teaching me crucial lessons about runtime code generation, performance optimization, and the delicate balance between power and maintainability.
Conclusion: The Magic Lives On
Twenty years later, Emit remains one of .NET’s most powerful and mysterious features. While modern development practices might steer us toward safer alternatives, understanding these fundamental building blocks of runtime code generation gives us deeper insight into the framework’s capabilities.
For those brave enough to venture into this realm, remember: with great power comes great responsibility – and the need for comprehensive unit tests. The dark magic of Emit might be seductive, but like all powerful tools, it demands respect and careful handling.