by Joche Ojeda | Aug 4, 2025 | Linux, Ubuntu, WSL
Email functionality is a critical component of most modern applications, from user authentication and password resets to notifications and marketing campaigns. However, testing email features during development can be challenging—you don’t want to accidentally send test emails to real users, and setting up a complete email server for testing is often overkill. This is where MailHog comes to the rescue.
What is MailHog?
MailHog is an open-source email testing tool designed specifically for development and testing environments. Think of it as a “fake” SMTP server that captures emails sent by your application instead of delivering them to real recipients. It provides a clean web interface where you can view, inspect, and manage all captured emails in real-time.
Built with Go and completely free, MailHog has become an indispensable tool for developers who need to test email functionality without the complexity and risks associated with real email delivery.
Why MailHog is Perfect for .NET Development
As a .NET developer, you’ve likely encountered scenarios where you need to test:
- User registration and email verification
- Password reset workflows
- Account activation processes
- Notification systems
- Email templates and formatting
MailHog integrates seamlessly with .NET applications through the standard SMTP libraries you're already familiar with. Whether you're using System.Net.Mail.SmtpClient or another SMTP library, MailHog works transparently as a drop-in replacement for your production SMTP server.
Key Features That Make MailHog Stand Out
SMTP Server Compliance
- Full RFC5321 ESMTP server implementation
- Support for SMTP AUTH (RFC4954) and PIPELINING (RFC2920)
- Works with any SMTP client library
Developer-Friendly Interface
- Clean web UI to view messages in plain text, HTML, or raw source
- Real-time updates using EventSource technology
- Support for RFC2047 encoded headers
- Multipart MIME support with downloadable individual parts
Testing and Development Features
- Chaos Monkey: Built-in failure testing to simulate email delivery issues
- Message Release: Forward captured emails to real SMTP servers when needed
- HTTP API: Programmatically list, retrieve, and delete messages (APIv1 and APIv2)
- Authentication: HTTP basic authentication for UI and API security
Storage Options
- In-memory storage: Lightweight and fast for development
- MongoDB persistence: For scenarios requiring message persistence
- File-based storage: Simple file system storage option
Deployment Benefits
- Lightweight and portable: Single binary with no dependencies
- No installation required: Download and run
- Cross-platform: Works on Windows, macOS, and Linux
Installing MailHog on WSL2
Setting up MailHog on Windows Subsystem for Linux (WSL2) is straightforward and provides excellent performance for .NET development workflows.
Option 1: Automated Installation with Script
If you don’t want to manually install MailHog, you can use my automated installation script for WSL:
# Download and run the installation script
curl -sSL https://raw.githubusercontent.com/egarim/MyWslScripts/master/install_mailhog.sh | bash
This script will automatically download MailHog, set it up, and configure it as a service. You can find the script at: https://github.com/egarim/MyWslScripts/blob/master/install_mailhog.sh
Option 2: Manual Installation
Step 1: Download MailHog
# Create a directory for MailHog
mkdir ~/mailhog
cd ~/mailhog
# Download the latest Linux binary
wget https://github.com/mailhog/MailHog/releases/download/v1.0.1/MailHog_linux_amd64
# Make it executable
chmod +x MailHog_linux_amd64
# Optional: Create a symlink for easier access
sudo ln -s ~/mailhog/MailHog_linux_amd64 /usr/local/bin/mailhog
Step 2: Start MailHog
# Start MailHog (runs on ports 1025 for SMTP and 8025 for web UI)
./MailHog_linux_amd64
# Or if you created the symlink:
mailhog
Step 3: Verify Installation
Open your browser and navigate to http://localhost:8025. You should see the MailHog web interface ready to capture emails.
Step 4: Configure as a Service (Optional)
For persistent use, create a systemd service:
# Create service file
sudo nano /etc/systemd/system/mailhog.service
Add the following content:
[Unit]
Description=MailHog Email Web Service
After=network.target
[Service]
Type=simple
User=your-username
ExecStart=/home/your-username/mailhog/MailHog_linux_amd64
Restart=always
[Install]
WantedBy=multi-user.target
Enable and start the service:
sudo systemctl enable mailhog
sudo systemctl start mailhog
Integrating MailHog with .NET Applications
Configuration in appsettings.json
{
"EmailSettings": {
"SmtpServer": "localhost",
"SmtpPort": 1025,
"FromEmail": "noreply@yourapp.com",
"FromName": "Your Application"
}
}
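If you prefer strongly typed access over raw IConfiguration lookups, you can bind this section to a settings class. A minimal sketch, assuming an ASP.NET Core host; the class simply mirrors the JSON above:

public class EmailSettings
{
    public string SmtpServer { get; set; } = "localhost";
    public int SmtpPort { get; set; } = 1025;
    public string FromEmail { get; set; } = string.Empty;
    public string FromName { get; set; } = string.Empty;
}

// In Program.cs: makes IOptions<EmailSettings> injectable anywhere
builder.Services.Configure<EmailSettings>(
    builder.Configuration.GetSection("EmailSettings"));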
Using with System.Net.Mail
public class EmailService
{
private readonly IConfiguration _configuration;
public EmailService(IConfiguration configuration)
{
_configuration = configuration;
}
public async Task SendEmailAsync(string to, string subject, string body)
{
using var smtpClient = new SmtpClient(_configuration["EmailSettings:SmtpServer"])
{
Port = int.Parse(_configuration["EmailSettings:SmtpPort"]),
EnableSsl = false, // MailHog doesn't require SSL
UseDefaultCredentials = true
};
using var mailMessage = new MailMessage
{
From = new MailAddress(_configuration["EmailSettings:FromEmail"],
_configuration["EmailSettings:FromName"]),
Subject = subject,
Body = body,
IsBodyHtml = true
};
mailMessage.To.Add(to);
await smtpClient.SendMailAsync(mailMessage);
}
}
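Wiring the service into the host is then a one-liner; a sketch assuming the ASP.NET Core builder pattern:

// In Program.cs
builder.Services.AddTransient<EmailService>();

// Example usage wherever EmailService is injected:
// await emailService.SendEmailAsync(
//     "user@example.com", "Welcome!", "<h1>Hello from MailHog</h1>");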
Real-World Testing Scenarios
Password Reset Testing
[Fact]
public async Task PasswordReset_ShouldSendEmail()
{
// Arrange
var userEmail = "test@example.com";
var resetToken = Guid.NewGuid().ToString();
// Act
await _authService.SendPasswordResetEmailAsync(userEmail, resetToken);
// Assert - Check MailHog API for sent email
var httpClient = new HttpClient();
var response = await httpClient.GetAsync("http://localhost:8025/api/v2/messages");
var json = await response.Content.ReadAsStringAsync();
// MailHog's JSON uses lower-case property names, so deserialize case-insensitively
var messages = JsonSerializer.Deserialize<MailHogResponse>(json,
new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
Assert.Single(messages.Items);
Assert.Contains(resetToken, messages.Items[0].Content.Body);
}
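Note that MailHogResponse is not a library type; it is a small set of DTOs you define yourself to match MailHog's APIv2 JSON. A minimal sketch that supports the assertions above (deserialized case-insensitively, as shown earlier):

public class MailHogResponse
{
    public int Total { get; set; }
    public List<MailHogMessage> Items { get; set; } = new();
}

public class MailHogMessage
{
    public MailHogContent Content { get; set; }
}

public class MailHogContent
{
    public Dictionary<string, List<string>> Headers { get; set; }
    public string Body { get; set; }
}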
Email Template Verification
With MailHog’s web interface, you can:
- Preview HTML email templates exactly as recipients would see them
- Test responsive design across different screen sizes
- Verify that images and styling render correctly
- Check for broken links or formatting issues
Advanced MailHog Usage
Environment-Specific Configuration
Use different MailHog instances for different environments:
# Development environment
mailhog -smtp-bind-addr 127.0.0.1:1025 -ui-bind-addr 127.0.0.1:8025
# Testing environment
mailhog -smtp-bind-addr 127.0.0.1:1026 -ui-bind-addr 127.0.0.1:8026
API Integration for Automated Tests
public class MailHogClient
{
private readonly HttpClient _httpClient;
public MailHogClient()
{
_httpClient = new HttpClient { BaseAddress = new Uri("http://localhost:8025/") };
}
public async Task<IEnumerable<Email>> GetEmailsAsync()
{
var response = await _httpClient.GetAsync("api/v2/messages");
var content = await response.Content.ReadAsStringAsync();
// Case-insensitive to match MailHog's lower-case JSON property names
var mailHogResponse = JsonSerializer.Deserialize<MailHogResponse>(content,
new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
return mailHogResponse.Items;
}
public async Task DeleteAllEmailsAsync()
{
await _httpClient.DeleteAsync("api/v1/messages");
}
}
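In test suites it pays to clear the inbox before each test so assertions only see emails from the current run. A sketch assuming xUnit, which the [Fact] attributes above suggest:

public class EmailTests : IAsyncLifetime
{
    private readonly MailHogClient _mailHog = new();

    // Runs before every test: start with an empty MailHog inbox
    public Task InitializeAsync() => _mailHog.DeleteAllEmailsAsync();

    public Task DisposeAsync() => Task.CompletedTask;
}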
Why I Use MailHog Daily
As someone who works extensively with .NET applications requiring email functionality, MailHog has become an essential part of my development toolkit. Here’s why:
Reliability: No more worrying about test emails reaching real users or bouncing back from invalid addresses.
Speed: Instant email capture and viewing without network delays or external dependencies.
Debugging: The ability to inspect raw email headers and content makes troubleshooting email issues much easier.
Team Collaboration: Developers can share MailHog URLs to demonstrate email functionality during code reviews or testing sessions.
CI/CD Integration: MailHog works perfectly in Docker containers and automated testing pipelines.
Conclusion
MailHog represents the perfect balance of simplicity and functionality for email testing in .NET development. Its open-source nature, zero-configuration setup, and comprehensive feature set make it an invaluable tool for any developer working with email functionality.
Whether you’re building a simple contact form or a complex multi-tenant application with sophisticated email workflows, MailHog provides the testing infrastructure you need without the complexity of traditional email servers.
Give MailHog a try in your next .NET project—you’ll wonder how you ever developed email features without it.
by Joche Ojeda | Jun 26, 2025 | EfCore
What is the N+1 Problem?
Imagine you’re running a blog website and want to display a list of all blogs along with how many posts each one has. The N+1 problem is a common database performance issue that happens when your application makes way too many database trips to get this simple information.
Our Test Database Setup
Our test suite creates a realistic blog scenario with:
- 3 different blogs
- Multiple posts for each blog
- Comments on posts
- Tags associated with blogs
This mirrors real-world applications where data is interconnected and needs to be loaded efficiently.
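For reference, here is a minimal sketch of the entity classes these tests appear to use; the property names are inferred from the SQL output shown throughout the article, and the navigations are virtual so the lazy-loading tests can work:

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public DateTime CreatedDate { get; set; }
    public virtual List<Post> Posts { get; set; } = new();
    public virtual List<Tag> Tags { get; set; } = new();
}

public class Post
{
    public int Id { get; set; }
    public int BlogId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTime PublishedDate { get; set; }
    public virtual List<Comment> Comments { get; set; } = new();
}

public class Comment
{
    public int Id { get; set; }
    public int PostId { get; set; }
    public string Author { get; set; }
    public string Content { get; set; }
    public DateTime CreatedDate { get; set; }
}

public class Tag
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual List<Blog> Blogs { get; set; } = new();
}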
Test Case 1: The Classic N+1 Problem (Lazy Loading)
What it does: This test demonstrates how “lazy loading” can accidentally create the N+1 problem. Lazy loading sounds helpful – it automatically fetches related data when you need it. But this convenience comes with a hidden cost.
The Code:
[Test]
public void Test_N_Plus_One_Problem_With_Lazy_Loading()
{
var blogs = _context.Blogs.ToList(); // Query 1: Load blogs
foreach (var blog in blogs)
{
var postCount = blog.Posts.Count; // Each access triggers a query!
TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {postCount}");
}
}
The SQL Queries Generated:
-- Query 1: Load all blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
-- Query 2: Load posts for Blog 1 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 1
-- Query 3: Load posts for Blog 2 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 2
-- Query 4: Load posts for Blog 3 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 3
The Problem: 4 total queries (1 + 3) – each time you access blog.Posts.Count, lazy loading triggers a separate database trip.
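Keep in mind that lazy loading is opt-in in EF Core: navigation properties must be virtual and proxies have to be enabled. A sketch using the Microsoft.EntityFrameworkCore.Proxies package (the SQLite provider and connection string are assumptions, not shown in the article):

public class BloggingContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options
            .UseLazyLoadingProxies()            // generates proxies that intercept virtual navigations
            .UseSqlite("Data Source=blog.db");  // provider/connection assumed
}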
Test Case 2: Alternative N+1 Demonstration
What it does: This test manually recreates the N+1 pattern to show exactly what’s happening, even if lazy loading isn’t working properly.
The Code:
[Test]
public void Test_N_Plus_One_Problem_Alternative_Approach()
{
var blogs = _context.Blogs.ToList(); // Query 1
foreach (var blog in blogs)
{
// This explicitly loads posts for THIS blog only (simulates lazy loading)
var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList();
TestLogger.WriteLine($"Loaded {posts.Count} posts for blog {blog.Id}");
}
}
The Lesson: This explicitly demonstrates the N+1 pattern with manual queries. The result is identical to lazy loading – one query per blog plus the initial blogs query.
Test Case 3: N+1 vs Include() – Side by Side Comparison
What it does: This is the money shot – a direct comparison showing the dramatic difference between the problematic approach and the solution.
The Bad Code (N+1):
// BAD: N+1 Problem
var blogsN1 = _context.Blogs.ToList(); // Query 1
foreach (var blog in blogsN1)
{
var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList(); // Queries 2,3,4...
}
The Good Code (Include):
// GOOD: Include() Solution
var blogsInclude = _context.Blogs
.Include(b => b.Posts)
.ToList(); // Single query with JOIN
foreach (var blog in blogsInclude)
{
// No additional queries needed - data is already loaded!
var postCount = blog.Posts.Count;
}
The SQL Queries:
Bad Approach (Multiple Queries):
-- Same 4 separate queries as shown in Test Case 1
Good Approach (Single Query):
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
"p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"
Results from our test:
- Bad approach: 4 total queries (1 + 3)
- Good approach: 1 total query
- Performance improvement: 75% fewer database round trips!
Test Case 4: Guaranteed N+1 Problem
What it does: This test removes any doubt by explicitly demonstrating the N+1 pattern with clear step-by-step output.
The Code:
[Test]
public void Test_Guaranteed_N_Plus_One_Problem()
{
var blogs = _context.Blogs.ToList(); // Query 1
int queryCount = 1;
foreach (var blog in blogs)
{
queryCount++;
// This explicitly demonstrates the N+1 pattern
var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList();
TestLogger.WriteLine($"Loading posts for blog '{blog.Title}' (Query #{queryCount})");
}
}
Why it’s useful: This ensures we can always see the problem clearly by manually executing the problematic pattern, making it impossible to miss.
Test Case 5: Eager Loading with Include()
What it does: Shows the correct way to load related data upfront using Include().
The Code:
[Test]
public void Test_Eager_Loading_With_Include()
{
var blogsWithPosts = _context.Blogs
.Include(b => b.Posts)
.ToList();
foreach (var blog in blogsWithPosts)
{
// No additional queries - data already loaded!
TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {blog.Posts.Count}");
}
}
The SQL Query:
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
"p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"
The Benefit: One database trip loads everything. When you access blog.Posts.Count, the data is already there.
Test Case 6: Multiple Includes with ThenInclude()
What it does: Demonstrates loading deeply nested data – blogs → posts → comments – all in one query.
The Code:
[Test]
public void Test_Multiple_Includes_With_ThenInclude()
{
var blogsWithPostsAndComments = _context.Blogs
.Include(b => b.Posts)
.ThenInclude(p => p.Comments)
.ToList();
foreach (var blog in blogsWithPostsAndComments)
{
foreach (var post in blog.Posts)
{
// All data loaded in one query!
TestLogger.WriteLine($"Post: {post.Title} - Comments: {post.Comments.Count}");
}
}
}
The SQL Query:
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
"p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title",
"c"."Id", "c"."Author", "c"."Content", "c"."CreatedDate", "c"."PostId"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
LEFT JOIN "Comments" AS "c" ON "p"."Id" = "c"."PostId"
ORDER BY "b"."Id", "p"."Id"
The Challenge: Loading three levels of data in one optimized query instead of potentially hundreds of separate queries.
Test Case 7: Projection with Select()
What it does: Shows how to load only the specific data you actually need instead of entire objects.
The Code:
[Test]
public void Test_Projection_With_Select()
{
var blogData = _context.Blogs
.Select(b => new
{
BlogTitle = b.Title,
PostCount = b.Posts.Count(),
RecentPosts = b.Posts
.OrderByDescending(p => p.PublishedDate)
.Take(2)
.Select(p => new { p.Title, p.PublishedDate })
})
.ToList();
}
The SQL Query (from our test output):
SELECT "b"."Title", (
SELECT COUNT(*)
FROM "Posts" AS "p"
WHERE "b"."Id" = "p"."BlogId"), "b"."Id", "t0"."Title", "t0"."PublishedDate", "t0"."Id"
FROM "Blogs" AS "b"
LEFT JOIN (
SELECT "t"."Title", "t"."PublishedDate", "t"."Id", "t"."BlogId"
FROM (
SELECT "p0"."Title", "p0"."PublishedDate", "p0"."Id", "p0"."BlogId",
ROW_NUMBER() OVER(PARTITION BY "p0"."BlogId" ORDER BY "p0"."PublishedDate" DESC) AS "row"
FROM "Posts" AS "p0"
) AS "t"
WHERE "t"."row" <= 2
) AS "t0" ON "b"."Id" = "t0"."BlogId"
ORDER BY "b"."Id", "t0"."BlogId", "t0"."PublishedDate" DESC
Why it matters: This query only loads the specific fields needed, uses window functions for efficiency, and calculates counts in the database rather than loading full objects.
Test Case 8: Split Query Strategy
What it does: Demonstrates an alternative approach where one large JOIN is split into multiple optimized queries.
The Code:
[Test]
public void Test_Split_Query()
{
var blogs = _context.Blogs
.AsSplitQuery()
.Include(b => b.Posts)
.Include(b => b.Tags)
.ToList();
}
The SQL Queries (from our test output):
-- Query 1: Load blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
ORDER BY "b"."Id"
-- Query 2: Load posts (automatically generated)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title", "b"."Id"
FROM "Blogs" AS "b"
INNER JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"
-- Query 3: Load tags (automatically generated)
SELECT "t"."Id", "t"."Name", "b"."Id"
FROM "Blogs" AS "b"
INNER JOIN "BlogTag" AS "bt" ON "b"."Id" = "bt"."BlogsId"
INNER JOIN "Tags" AS "t" ON "bt"."TagsId" = "t"."Id"
ORDER BY "b"."Id"
When to use it: When JOINing lots of related data creates one massive, slow query. Split queries break this into several smaller, faster queries.
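If most of your queries benefit from splitting, you can also make it the context-wide default and opt out per query with AsSingleQuery(). A sketch (provider and connection string assumed, as before):

protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options.UseSqlite(
        "Data Source=blog.db",
        sqlite => sqlite.UseQuerySplittingBehavior(QuerySplittingBehavior.SplitQuery));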
Test Case 9: Filtered Include()
What it does: Shows how to load only specific related data – in this case, only recent posts from the last 15 days.
The Code:
[Test]
public void Test_Filtered_Include()
{
var cutoffDate = DateTime.Now.AddDays(-15);
var blogsWithRecentPosts = _context.Blogs
.Include(b => b.Posts.Where(p => p.PublishedDate > cutoffDate))
.ToList();
}
The SQL Query:
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
"p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId" AND "p"."PublishedDate" > @cutoffDate
ORDER BY "b"."Id"
The Efficiency: Only loads posts that meet the criteria, reducing data transfer and memory usage.
Test Case 10: Explicit Loading
What it does: Demonstrates manually controlling when related data gets loaded.
The Code:
[Test]
public void Test_Explicit_Loading()
{
var blogs = _context.Blogs.ToList(); // Load blogs only
// Now explicitly load posts for all blogs
foreach (var blog in blogs)
{
_context.Entry(blog)
.Collection(b => b.Posts)
.Load();
}
}
The SQL Queries:
-- Query 1: Load blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
-- Query 2: Explicitly load posts for blog 1
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 1
-- Query 3: Explicitly load posts for blog 2
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 2
-- ... and so on
When useful: When you sometimes need related data and sometimes don’t. You control exactly when the additional database trip happens.
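A nice bonus of explicit loading is that Collection(...).Query() returns an IQueryable, so you can filter or aggregate in the database before materializing anything:

// Load only recent posts for this one blog, on demand
_context.Entry(blog)
    .Collection(b => b.Posts)
    .Query()
    .Where(p => p.PublishedDate > DateTime.Now.AddDays(-30))
    .Load();

// Or count posts without loading any entities at all
var postCount = _context.Entry(blog)
    .Collection(b => b.Posts)
    .Query()
    .Count();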
Test Case 11: Batch Loading Pattern
What it does: Shows a clever technique to avoid N+1 by loading all related data in one query, then organizing it in memory.
The Code:
[Test]
public void Test_Batch_Loading_Pattern()
{
var blogs = _context.Blogs.ToList(); // Query 1
var blogIds = blogs.Select(b => b.Id).ToList();
// Single query to get all posts for all blogs
var posts = _context.Posts
.Where(p => blogIds.Contains(p.BlogId))
.ToList(); // Query 2
// Group posts by blog in memory
var postsByBlog = posts.GroupBy(p => p.BlogId).ToDictionary(g => g.Key, g => g.ToList());
}
The SQL Queries:
-- Query 1: Load all blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
-- Query 2: Load ALL posts for ALL blogs in one query
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" IN (1, 2, 3)
The Result: Just 2 queries total, regardless of how many blogs you have. Data organization happens in memory.
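From there, looking up any blog's posts is an in-memory dictionary hit rather than another query:

foreach (var blog in blogs)
{
    // TryGetValue covers blogs that have no posts at all
    var blogPosts = postsByBlog.TryGetValue(blog.Id, out var list)
        ? list
        : new List<Post>();
    TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {blogPosts.Count}");
}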
Test Case 12: Performance Comparison
What it does: Puts all the approaches head-to-head to show their relative performance.
The Code:
[Test]
public void Test_Performance_Comparison()
{
// N+1 Problem (Multiple Queries)
var blogs1 = _context.Blogs.ToList();
foreach (var blog in blogs1)
{
var count = blog.Posts.Count(); // Triggers separate query
}
// Eager Loading (Single Query)
var blogs2 = _context.Blogs
.Include(b => b.Posts)
.ToList();
// Projection (Minimal Data)
var blogSummaries = _context.Blogs
.Select(b => new { b.Title, PostCount = b.Posts.Count() })
.ToList();
}
The SQL Queries Generated:
N+1 Problem: 4 separate queries (as shown in previous examples)
Eager Loading: 1 JOIN query (as shown in Test Case 5)
Projection: 1 optimized query with subquery:
SELECT "b"."Title", (
SELECT COUNT(*)
FROM "Posts" AS "p"
WHERE "b"."Id" = "p"."BlogId") AS "PostCount"
FROM "Blogs" AS "b"
Real-World Performance Impact
Let’s scale this up to see why it matters (the timings below assume roughly 10 ms per database round trip):
Small Application (10 blogs):
- N+1 approach: 11 queries (≈110ms)
- Optimized approach: 1 query (≈10ms)
- Time saved: 100ms
Medium Application (100 blogs):
- N+1 approach: 101 queries (≈1,010ms)
- Optimized approach: 1 query (≈10ms)
- Time saved: 1 second
Large Application (1000 blogs):
- N+1 approach: 1001 queries (≈10,010ms)
- Optimized approach: 1 query (≈10ms)
- Time saved: 10 seconds
Key Takeaways
- The N+1 problem scales linearly with your data – every additional parent row adds another query, so performance degrades quickly as tables grow
- Lazy loading is convenient but dangerous – it can hide performance problems
- Include() is your friend for loading related data efficiently
- Projection is powerful when you only need specific fields
- Different problems need different solutions – there’s no one-size-fits-all approach
- SQL query inspection is crucial – always check what queries your ORM generates (a logging sketch follows this list)
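The quickest way to see those queries during development is EF Core's built-in logging hook. A minimal sketch (provider and connection string assumed; LogLevel comes from Microsoft.Extensions.Logging):

public class BloggingContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options
            .UseSqlite("Data Source=blog.db")
            .LogTo(Console.WriteLine, LogLevel.Information)   // prints the generated SQL
            .EnableSensitiveDataLogging();                    // includes parameter values – dev only!
}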
The Bottom Line
This test suite shows that small changes in how you write database queries can transform a slow, database-heavy operation into a fast, efficient one. The difference between a frustrated user waiting 10 seconds for a page to load and a happy user getting instant results often comes down to understanding and avoiding the N+1 problem.
The beauty of these tests is that they use real database queries with actual SQL output, so you can see exactly what’s happening under the hood. Understanding these patterns will make you a more effective developer and help you build applications that stay fast as they grow.
You can find the source code for this article in my GitHub repository.
by Joche Ojeda | Oct 11, 2024 | A.I, DevExpress, XAF
Today is Friday, so I decided to take it easy with my integration research. When I woke up, I decided that I just wanted to read the source code of DevExpress AI integrations to get inspired. I began by reading the official blog post about AI and reporting (DevExpress Blog Post). Then, as usual, I proceeded to fork the repository to make my own modifications.
After completing the typical cloning procedure in Visual Studio, I realized that to use the AI functionalities of XtraReport, you don’t need any special version of the report viewer.
The only requirement is to have the NuGet reference as shown below:
<ItemGroup>
<PackageReference Include="DevExpress.AIIntegration.Blazor.Reporting.Viewer" Version="24.2.1-alpha-24260" />
</ItemGroup>
Then, add the report integration as shown below:
config.AddBlazorReportingAIIntegration(config =>
{
config.SummarizeBehavior = SummarizeBehavior.Abstractive;
config.AvailableLanguages = new List<LanguageItem>
{
new LanguageItem { Key = "de", Text = "German" },
new LanguageItem { Key = "es", Text = "Spanish" },
new LanguageItem { Key = "en", Text = "English" },
new LanguageItem { Key = "ru", Text = "Russian" },
new LanguageItem { Key = "it", Text = "Italian" }
};
});
After completing these steps, your report viewer will display a little star in the options menu, where you can invoke the AI operations.
You can find the source code for this example in my GitHub repository: https://github.com/egarim/XafSmartEditors
Till next time, XAF out!!!
by Joche Ojeda | Oct 10, 2024 | A.I, PropertyEditors, XAF
The New Era of Smart Editors: Developer Express and AI Integration
The new era of smart editors is already here. Developer Express has introduced AI functionality in many of their controls for .NET (Windows Forms, Blazor, WPF, MAUI).
This advancement will eventually come to XAF, but in the meantime, here at XARI, we are experimenting with XAF integrations to add value to our customers.
In this article, we are going to integrate the new chat component into an XAF application, and our first use case will be RAG (Retrieval-Augmented Generation). RAG is a system that combines external data sources with AI-generated responses, improving accuracy and relevance in answers by retrieving information from a document set or knowledge base and using it in conjunction with AI predictions.
To achieve this integration, we will follow the steps outlined in this tutorial:
Implement a Property Editor Based on Custom Components (Blazor)
Implementing the Property Editor
When I implement my own property editor, I usually avoid doing so for primitive types because, in most cases, my property editor will need more information than a simple primitive value. For this implementation, I want to handle a custom value in my property editor. I typically create an interface to represent the type, ensuring compatibility with both XPO and EF Core.
namespace XafSmartEditors.Razor.RagChat
{
public interface IRagData
{
Stream FileContent { get; set; }
string Prompt { get; set; }
string FileName { get; set; }
}
}
Non-Persistent Implementation
After defining the type for my editor, I need to create a non-persistent implementation:
namespace XafSmartEditors.Razor.RagChat
{
[DomainComponent]
public class IRagDataImp : IRagData, IXafEntityObject, INotifyPropertyChanged
{
private void OnPropertyChanged([CallerMemberName] string propertyName = null)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
public IRagDataImp()
{
Oid = Guid.NewGuid();
}
[DevExpress.ExpressApp.Data.Key]
[Browsable(false)]
public Guid Oid { get; set; }
private string prompt;
private string fileName;
private Stream fileContent;
public Stream FileContent
{
get => fileContent;
set
{
if (fileContent == value) return;
fileContent = value;
OnPropertyChanged();
}
}
public string FileName
{
get => fileName;
set
{
if (fileName == value) return;
fileName = value;
OnPropertyChanged();
}
}
public string Prompt
{
get => prompt;
set
{
if (prompt == value) return;
prompt = value;
OnPropertyChanged();
}
}
// IXafEntityObject members
void IXafEntityObject.OnCreated() { }
void IXafEntityObject.OnLoaded() { }
void IXafEntityObject.OnSaving() { }
public event PropertyChangedEventHandler PropertyChanged;
}
}
Creating the Blazor Chat Component
Now, it’s time to create our Blazor component and add the new DevExpress chat component for Blazor:
<DxAIChat CssClass="my-chat" Initialized="Initialized"
RenderMode="AnswerRenderMode.Markdown"
UseStreaming="true"
SizeMode="SizeMode.Medium">
<EmptyMessageAreaTemplate>
<div class="my-chat-ui-description">
<span style="font-weight: bold; color: #008000;">Rag Chat</span> Assistant is ready to answer your questions.
</div>
</EmptyMessageAreaTemplate>
<MessageContentTemplate>
<div class="my-chat-content">
@ToHtml(context.Content)
</div>
</MessageContentTemplate>
</DxAIChat>
@code {
IRagData _value;
[Parameter]
public IRagData Value
{
get => _value;
set => _value = value;
}
async Task Initialized(IAIChat chat)
{
await chat.UseAssistantAsync(new OpenAIAssistantOptions(
this.Value.FileName,
this.Value.FileContent,
this.Value.Prompt
));
}
MarkupString ToHtml(string text)
{
return (MarkupString)Markdown.ToHtml(text);
}
}
The main takeaway from this component is that it receives a parameter named Value of type IRagData, and we use this value to initialize the IAIChat service in the Initialized method.
Creating the Component Model
With the interface and domain component in place, we can now create the component model to communicate the value of our domain object with the Blazor component:
namespace XafSmartEditors.Razor.RagChat
{
public class RagDataComponentModel : ComponentModelBase
{
public IRagData Value
{
get => GetPropertyValue<IRagData>();
set => SetPropertyValue(value);
}
public EventCallback<IRagData> ValueChanged
{
get => GetPropertyValue<EventCallback<IRagData>>();
set => SetPropertyValue(value);
}
public override Type ComponentType => typeof(RagChat);
}
}
Creating the Property Editor
Finally, let’s create the property editor class that serves as a bridge between XAF and the new component:
namespace XafSmartEditors.Blazor.Server.Editors
{
[PropertyEditor(typeof(IRagData), true)]
public class IRagDataPropertyEditor : BlazorPropertyEditorBase, IComplexViewItem
{
private IObjectSpace _objectSpace;
private XafApplication _application;
public IRagDataPropertyEditor(Type objectType, IModelMemberViewItem model) : base(objectType, model) { }
public void Setup(IObjectSpace objectSpace, XafApplication application)
{
_objectSpace = objectSpace;
_application = application;
}
public override RagDataComponentModel ComponentModel => (RagDataComponentModel)base.ComponentModel;
protected override IComponentModel CreateComponentModel()
{
var model = new RagDataComponentModel();
model.ValueChanged = EventCallback.Factory.Create<IRagData>(this, value =>
{
model.Value = value;
OnControlValueChanged();
WriteValue();
});
return model;
}
protected override void ReadValueCore()
{
base.ReadValueCore();
ComponentModel.Value = (IRagData)PropertyValue;
}
protected override object GetControlValueCore() => ComponentModel.Value;
protected override void ApplyReadOnly()
{
base.ApplyReadOnly();
ComponentModel?.SetAttribute("readonly", !AllowEdit);
}
}
}
Bringing It All Together
Now, let’s create a domain object that can feed the content of a file to our chat component:
namespace XafSmartEditors.Module.BusinessObjects
{
[DefaultClassOptions]
public class PdfFile : BaseObject
{
public PdfFile(Session session) : base(session) { }
string prompt;
string name;
FileData file;
public FileData File
{
get => file;
set => SetPropertyValue(nameof(File), ref file, value);
}
public string Name
{
get => name;
set => SetPropertyValue(nameof(Name), ref name, value);
}
public string Prompt
{
get => prompt;
set => SetPropertyValue(nameof(Prompt), ref prompt, value);
}
}
}
Creating the Controller
We are almost done! Now, we need to create a controller with a popup action:
namespace XafSmartEditors.Module.Controllers
{
public class OpenChatController : ViewController
{
PopupWindowShowAction Chat;
public OpenChatController()
{
this.TargetObjectType = typeof(PdfFile);
Chat = new PopupWindowShowAction(this, "ChatAction", "View");
Chat.Caption = "Chat";
Chat.ImageName = "artificial_intelligence";
Chat.Execute += Chat_Execute;
Chat.CustomizePopupWindowParams += Chat_CustomizePopupWindowParams;
}
private void Chat_Execute(object sender, PopupWindowShowActionExecuteEventArgs e) { }
private void Chat_CustomizePopupWindowParams(object sender, CustomizePopupWindowParamsEventArgs e)
{
PdfFile pdfFile = this.View.CurrentObject as PdfFile;
var os = this.Application.CreateObjectSpace(typeof(ChatView));
var chatView = os.CreateObject<ChatView>();
MemoryStream memoryStream = new MemoryStream();
pdfFile.File.SaveToStream(memoryStream);
memoryStream.Seek(0, SeekOrigin.Begin);
chatView.RagData = os.CreateObject<IRagDataImp>();
chatView.RagData.FileName = pdfFile.File.FileName;
chatView.RagData.Prompt = !string.IsNullOrEmpty(pdfFile.Prompt) ? pdfFile.Prompt : DefaultPrompt;
chatView.RagData.FileContent = memoryStream;
DetailView detailView = this.Application.CreateDetailView(os, chatView);
detailView.Caption = $"Chat with Document | {pdfFile.File.FileName.Trim()}";
e.View = detailView;
}
}
}
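One piece the controller references but the article doesn't show is ChatView, the non-persistent object whose detail view hosts the chat editor. Based purely on how it is used above, a minimal version might look like this (my sketch, following the same INotifyPropertyChanged pattern as IRagDataImp, not the original code):

[DomainComponent]
public class ChatView : INotifyPropertyChanged
{
    private IRagData ragData;
    public IRagData RagData
    {
        get => ragData;
        set
        {
            if (ragData == value) return;
            ragData = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(RagData)));
        }
    }
    public event PropertyChangedEventHandler PropertyChanged;
}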
Conclusion
That’s everything we need to create a RAG system using XAF and the new DevExpress Chat component. You can find the complete source code here: GitHub Repository.
If you want to meet and discuss AI, XAF, and .NET, feel free to schedule a meeting: Schedule a Meeting.
Until next time, XAF out!
by Joche Ojeda | May 23, 2024 | CPU
The ARM, x86, and Itanium CPU architectures each have unique characteristics that impact .NET developers. Understanding how these architectures affect your code, along with the importance of using appropriate NuGet packages, is crucial for developing efficient and compatible applications.
ARM Architecture and .NET Development
1. Performance and Optimization:
- Energy Efficiency: ARM processors are known for their power efficiency, benefiting .NET applications on devices like mobile phones and tablets with longer battery life and reduced thermal output.
- Performance: ARM processors may exhibit different performance characteristics compared to x86 processors. Developers need to optimize their code to ensure efficient execution on ARM architecture.
2. Cross-Platform Development:
- .NET Core and .NET 5+: These versions support cross-platform development, allowing code to run on Windows, macOS, and Linux, including ARM-based versions.
- Compatibility: Ensuring .NET applications are compatible with ARM devices may require testing and modifications to address architecture-specific issues.
3. Tooling and Development Environment:
- Visual Studio and Visual Studio Code: Both provide support for ARM development, though there may be differences in features and performance compared to x86 environments.
- Emulators and Physical Devices: Testing on actual ARM hardware or using emulators helps identify performance bottlenecks and compatibility issues.
x86 Architecture and .NET Development
1. Performance and Optimization:
- Processing Power: x86 processors are known for high performance and are widely used in desktops, servers, and high-end gaming.
- Instruction Set Complexity: The complex instruction set of x86 (CISC) allows for efficient execution of certain tasks, which can differ from ARM’s RISC approach.
2. Compatibility:
- Legacy Applications: x86’s extensive history means many enterprise and legacy applications are optimized for this architecture.
- NuGet Packages: Ensuring that NuGet packages target x86 or are architecture-agnostic is crucial for maintaining compatibility and performance.
3. Development Tools:
- Comprehensive Support: x86 development benefits from mature tools and extensive resources available in Visual Studio and other IDEs.
Itanium Architecture and .NET Development
1. Performance and Optimization:
- High-End Computing: Itanium processors were designed for high-end computing tasks, such as large-scale data processing and enterprise servers.
- EPIC Architecture: Itanium uses Explicitly Parallel Instruction Computing (EPIC), which requires different optimization strategies compared to x86 and ARM.
2. Limited Support:
- Niche Market: Itanium has a smaller market presence, primarily in enterprise environments.
- .NET Support: .NET support for Itanium is limited, requiring careful consideration of architecture-specific issues.
CPU Architecture and Code Impact
1. Instruction Sets and Performance:
- Differences: x86 (CISC), ARM (RISC), and Itanium (EPIC) have different instruction sets, affecting code efficiency. Optimizations effective on one architecture might not work well on another.
- Compiler Optimizations: .NET compilers optimize code for specific architectures, but understanding the underlying architecture helps write more efficient code.
2. Multi-Platform Development:
- Conditional Compilation: .NET supports conditional compilation for architecture-specific code optimizations (a runtime alternative is sketched after this list).
// Note: ARM, x86, and Itanium are custom compilation symbols that you must
// define yourself (for example via <DefineConstants> in the project file);
// the .NET SDK does not predefine them.
#if ARM
// ARM-specific code
#elif x86
// x86-specific code
#elif Itanium
// Itanium-specific code
#endif
- Libraries and Dependencies: Ensure all libraries and dependencies in your .NET project are compatible with the target CPU architecture. Use NuGet packages that are either architecture-agnostic or specifically target your architecture.
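Conditional compilation happens at build time; if you need to branch at run time instead, RuntimeInformation reports the current process architecture. Notably, the Architecture enum has no Itanium value, which reflects its absence from modern .NET:

using System;
using System.Runtime.InteropServices;

class ArchitectureDemo
{
    static void Main()
    {
        // ProcessArchitecture describes the architecture of the running process
        Architecture arch = RuntimeInformation.ProcessArchitecture;
        Console.WriteLine(arch switch
        {
            Architecture.X86 => "Running as 32-bit x86",
            Architecture.X64 => "Running as 64-bit x86",
            Architecture.Arm => "Running as 32-bit ARM",
            Architecture.Arm64 => "Running as 64-bit ARM",
            _ => $"Running as {arch}"
        });
    }
}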
3. Debugging and Testing:
- Architecture-Specific Bugs: Bugs may manifest differently across ARM, x86, and Itanium. Rigorous testing on all target architectures is essential.
- Performance Testing: Conduct performance testing on each architecture to identify and resolve any specific issues.
Supported CPU Architectures in .NET
1. .NET Core and .NET 5+:
- x86 and x64: Full support for 32-bit and 64-bit x86 architectures across all major operating systems.
- ARM32 and ARM64: Support for 32-bit and 64-bit ARM architectures, including Windows on ARM, Linux on ARM, and macOS on ARM (Apple Silicon).
- Itanium: Not supported – .NET Core and .NET 5+ never shipped Itanium builds; Itanium support ended with the .NET Framework era.
2. .NET Framework:
- x86 and x64: Primarily designed for Windows, the .NET Framework supports both 32-bit and 64-bit x86 architectures.
- Limited ARM and Itanium Support: The traditional .NET Framework has limited support for ARM and Itanium, mainly for older devices and specific enterprise applications.
3. .NET MAUI and Xamarin:
- Mobile Development: .NET MAUI (Multi-platform App UI) and Xamarin provide extensive support for ARM architectures, targeting Android and iOS devices which predominantly use ARM processors.
Using NuGet Packages
1. Architecture-Agnostic Packages:
- Compatibility: Use NuGet packages that are agnostic to CPU architecture whenever possible. These packages are designed to work across different architectures without modification.
- Example: Common libraries like Newtonsoft.Json, which work across ARM, x86, and Itanium.
2. Architecture-Specific Packages:
- Performance: For performance-critical applications, use NuGet packages optimized for the target architecture.
- Example: Graphics processing libraries optimized for x86 may need alternatives for ARM or Itanium.
Conclusion
For .NET developers, understanding the impact of ARM, x86, and Itanium architectures is essential for creating efficient, cross-platform applications. The differences in CPU architectures affect performance, compatibility, and optimization strategies. By leveraging cross-platform capabilities of .NET, using appropriate NuGet packages, and testing thoroughly on all target architectures, developers can ensure their applications run smoothly across ARM, x86, and Itanium devices.