by Joche Ojeda | Oct 8, 2025 | Oqtane, Tenants
OK — it’s time for today’s Oqtane blog post!
Yesterday, I wrote about changing the runtime mode in Oqtane and how that allows you to switch between
Blazor Server and Blazor WebAssembly functionality.
Today, I decided to explore how multi-tenancy works — specifically, how Oqtane manages multiple sites within the same
installation.
Originally, I wanted to cover the entire administrative panel and all of its modules in one post, but that would’ve been too big.
So, I’m breaking it down into smaller topics. This post focuses only on site functionality and how
multi-tenancy works from the administrative side — basically, how to set up tenants in Oqtane.
Setting Up a Multi-Tenant Oqtane Installation
To make these experiments easy to replicate, I decided to use SQLite as my database.
I created a new .NET Oqtane application using the official templates and added it to a
GitHub repository.
Here’s what I did:
- Set up the host configuration directly in appsettings.json.
- Ran the app, went to the admin panel, and created two additional sites.
You can see the screenshots below showing the settings for each site.
At first, it was a bit confusing — I thought I could simply use different ports for each site (like 8081,
8082, etc.), but that’s not how Oqtane works. Everything runs in the same process, so all tenants
share the same port.
Instead of changing ports, you configure different URL paths or folders. For example:
http://localhost:8080/ → the main host
http://localhost:8080/mycompany1 → first tenant
http://localhost:8080/mycompany2 → second tenant
Site Management

Site MyCompany1

Site MyCompany2

Each tenant can:
- Use a separate database or share the same one as the host
- Have its own theme
- Maintain independent site settings

In the short GIF animation I attached to the repository, you can see how each site has its
own unique visual theme — it’s really neat to watch.
When you add a new site, its connection string is also stored automatically in the application settings. So, if you download
the repository and run it locally, you’ll be able to access all the sites and see how the URLs and configurations work.
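To give you an idea, here is a sketch of what the ConnectionStrings section can end up looking like after the two sites are added. The key names and file names are illustrative, not copied from the repository:
{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=Oqtane.db;",
    "MyCompany1": "Data Source=MyCompany1.db;",
    "MyCompany2": "Data Source=MyCompany2.db;"
  }
}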
Here is the repository: egarim/OqtaneMultiTenant, an example of how to use multi-tenancy in Oqtane.
Why I’m Doing These Posts
These blog entries are like my personal research notes — documenting what I discover while working with Oqtane.
I’m keeping each experiment small and reproducible so I can:
- Share them with others easily
- Download them later and reproduce the same setup, including data and configuration
What’s Next
In the next post, I’ll cover virtual hosting — how to use domain names that forward to specific
URLs or tenants. I’ve already done some research on that, but I don’t want to overload this post with too many topics.
For now, I’ll just attach the screenshots showing the different site configurations and URLs, along with a link to the GitHub
repository so you can try it yourself.
If you have any questions, feel free to reach out! I’ll keep documenting everything as I go.
One of the great things about Oqtane is that it’s open source — you can always dive into the code, or if you’re
stuck, open a GitHub issue. Shaun Walker and the community are incredibly helpful, so don’t hesitate to ask.
Thanks again to the Oqtane team for building such an amazing framework.
by Joche Ojeda | Oct 5, 2025 | Oqtane, ORM
In this article, I’ll show you what to do after you’ve obtained and opened an Oqtane solution. Specifically, we’ll go through two different ways to set up your database for the first time.
- Using the setup wizard — this option appears automatically the first time you run the application.
- Configuring it manually, by directly editing the appsettings.json file to skip the wizard.
Both methods achieve the same result. The only difference is that, if you configure the database manually, you won’t see the setup wizard during startup.
Step 1: Running the Application for the First Time
Once your solution is open in Visual Studio, set the Server project as the startup project. Then run it just as you would with any ASP.NET Core application.
You’ll notice several run options — I recommend using the HTTPS version instead of IIS Express (I stopped using IIS Express because it doesn’t work well on ARM-based computers).

When you run the application for the first time and your settings file is still empty, you’ll see the Database Setup Wizard. As shown in the image, the wizard allows you to select a database provider and configure it through a form.

There’s also an option to paste your connection string directly. Make sure it’s a valid Entity Framework Core connection string.
After that, fill in the admin user’s details — username, email, and password — and you’re done. Once this process completes, you’ll have a working Oqtane installation.
Step 2: Setting Up the Database Manually
If you prefer to skip the wizard, you can configure the database manually. To do this, open the appsettings.json file and add the following parameters:
{
  "DefaultDBType": "Oqtane.Database.Sqlite.SqliteDatabase, Oqtane.Server",
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=Oqtane-202510052045.db;"
  },
  "Installation": {
    "DefaultAlias": "https://localhost:44388",
    "HostPassword": "MyPasswor25!",
    "HostEmail": "joche@myemail.com",
    "SiteTemplate": "",
    "DefaultTheme": "",
    "DefaultContainer": ""
  }
}
Here you need to specify:
- The database provider type (e.g., SQLite, SQL Server, PostgreSQL, etc.)
- The connection string
- The admin email and password for the first user — known as the host user (essentially the root or super admin).
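For example, pointing the same file at SQL Server should only require changing the provider type and connection string. The type name below follows the same pattern as the SQLite entry above, but I haven’t verified it here, so treat it as an assumption and check it against your Oqtane version:
{
  "DefaultDBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Server",
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost;Database=Oqtane;Trusted_Connection=True;TrustServerCertificate=True;"
  }
}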
This is the method I usually use now since I’ve set up Oqtane so many times recently that I’ve grown tired of the wizard. However, if you’re new to Oqtane, the wizard is a great way to get started.
Wrapping Up
That’s it for this setup guide! By now, you should have a running Oqtane installation configured either through the setup wizard or manually via the configuration file. Both methods give you a solid foundation to start exploring what Oqtane can do.
In the next article, we’ll dive into the Oqtane backend, exploring how the framework handles modules, data, and the underlying architecture that makes it flexible and powerful. Stay tuned — things are about to get interesting!
by Joche Ojeda | Jun 26, 2025 | EfCore
What is the N+1 Problem?
Imagine you’re running a blog website and want to display a list of all blogs along with how many posts each one has. The N+1 problem is a common database performance issue that happens when your application makes way too many database trips to get this simple information.
Our Test Database Setup
Our test suite creates a realistic blog scenario with:
- 3 different blogs
- Multiple posts for each blog
- Comments on posts
- Tags associated with blogs
This mirrors real-world applications where data is interconnected and needs to be loaded efficiently.
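For reference, the entity model behind these tests looks roughly like this. This is a simplified sketch inferred from the SQL output below, not the exact test code; note the virtual navigation properties, which lazy loading proxies require:
public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public DateTime CreatedDate { get; set; }
    public virtual List<Post> Posts { get; set; } = new();
    public virtual List<Tag> Tags { get; set; } = new();
}

public class Post
{
    public int Id { get; set; }
    public int BlogId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTime PublishedDate { get; set; }
    public virtual List<Comment> Comments { get; set; } = new();
}

// Comment (Id, PostId, Author, Content, CreatedDate) and Tag (Id, Name)
// follow the same shape.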
Test Case 1: The Classic N+1 Problem (Lazy Loading)
What it does: This test demonstrates how “lazy loading” can accidentally create the N+1 problem. Lazy loading sounds helpful – it automatically fetches related data when you need it. But this convenience comes with a hidden cost.
The Code:
[Test]
public void Test_N_Plus_One_Problem_With_Lazy_Loading()
{
    var blogs = _context.Blogs.ToList(); // Query 1: Load blogs

    foreach (var blog in blogs)
    {
        var postCount = blog.Posts.Count; // Each access triggers a query!
        TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {postCount}");
    }
}
The SQL Queries Generated:
-- Query 1: Load all blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
-- Query 2: Load posts for Blog 1 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 1
-- Query 3: Load posts for Blog 2 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 2
-- Query 4: Load posts for Blog 3 (triggered by lazy loading)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 3
The Problem: 4 total queries (1 + 3): one query to load the blogs, then one additional query per blog, because each access to blog.Posts.Count triggers a separate database trip.
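For lazy loading to behave like this, the context has to opt in. A minimal sketch, assuming the Microsoft.EntityFrameworkCore.Proxies package is installed and the navigation properties are virtual:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseLazyLoadingProxies()           // from Microsoft.EntityFrameworkCore.Proxies
        .UseSqlite("Data Source=blog.db"); // SQLite, matching the quoted SQL dialect
}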
Test Case 2: Alternative N+1 Demonstration
What it does: This test manually recreates the N+1 pattern to show exactly what’s happening, even if lazy loading isn’t working properly.
The Code:
[Test]
public void Test_N_Plus_One_Problem_Alternative_Approach()
{
    var blogs = _context.Blogs.ToList(); // Query 1

    foreach (var blog in blogs)
    {
        // This explicitly loads posts for THIS blog only (simulates lazy loading)
        var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList();
        TestLogger.WriteLine($"Loaded {posts.Count} posts for blog {blog.Id}");
    }
}
The Lesson: This explicitly demonstrates the N+1 pattern with manual queries. The result is identical to lazy loading – one query per blog plus the initial blogs query.
Test Case 3: N+1 vs Include() – Side by Side Comparison
What it does: This is the money shot – a direct comparison showing the dramatic difference between the problematic approach and the solution.
The Bad Code (N+1):
// BAD: N+1 Problem
var blogsN1 = _context.Blogs.ToList(); // Query 1

foreach (var blog in blogsN1)
{
    var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList(); // Queries 2, 3, 4...
}
The Good Code (Include):
// GOOD: Include() Solution
var blogsInclude = _context.Blogs
    .Include(b => b.Posts)
    .ToList(); // Single query with JOIN

foreach (var blog in blogsInclude)
{
    // No additional queries needed - data is already loaded!
    var postCount = blog.Posts.Count;
}
The SQL Queries:
Bad Approach (Multiple Queries):
-- Same 4 separate queries as shown in Test Case 1
Good Approach (Single Query):
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
"p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"
Results from our test:
- Bad approach: 4 total queries (1 + 3)
- Good approach: 1 total query
- Performance improvement: 75% fewer database round trips!
Test Case 4: Guaranteed N+1 Problem
What it does: This test removes any doubt by explicitly demonstrating the N+1 pattern with clear step-by-step output.
The Code:
[Test]
public void Test_Guaranteed_N_Plus_One_Problem()
{
    var blogs = _context.Blogs.ToList(); // Query 1
    int queryCount = 1;

    foreach (var blog in blogs)
    {
        queryCount++;
        // This explicitly demonstrates the N+1 pattern
        var posts = _context.Posts.Where(p => p.BlogId == blog.Id).ToList();
        TestLogger.WriteLine($"Loading posts for blog '{blog.Title}' (Query #{queryCount})");
    }
}
Why it’s useful: This ensures we can always see the problem clearly by manually executing the problematic pattern, making it impossible to miss.
Test Case 5: Eager Loading with Include()
What it does: Shows the correct way to load related data upfront using Include().
The Code:
[Test]
public void Test_Eager_Loading_With_Include()
{
    var blogsWithPosts = _context.Blogs
        .Include(b => b.Posts)
        .ToList();

    foreach (var blog in blogsWithPosts)
    {
        // No additional queries - data already loaded!
        TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {blog.Posts.Count}");
    }
}
The SQL Query:
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
"p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"
The Benefit: One database trip loads everything. When you access blog.Posts.Count, the data is already there.
Test Case 6: Multiple Includes with ThenInclude()
What it does: Demonstrates loading deeply nested data – blogs → posts → comments – all in one query.
The Code:
[Test]
public void Test_Multiple_Includes_With_ThenInclude()
{
    var blogsWithPostsAndComments = _context.Blogs
        .Include(b => b.Posts)
        .ThenInclude(p => p.Comments)
        .ToList();

    foreach (var blog in blogsWithPostsAndComments)
    {
        foreach (var post in blog.Posts)
        {
            // All data loaded in one query!
            TestLogger.WriteLine($"Post: {post.Title} - Comments: {post.Comments.Count}");
        }
    }
}
The SQL Query:
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
"p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title",
"c"."Id", "c"."Author", "c"."Content", "c"."CreatedDate", "c"."PostId"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
LEFT JOIN "Comments" AS "c" ON "p"."Id" = "c"."PostId"
ORDER BY "b"."Id", "p"."Id"
The Payoff: Three levels of data are loaded in one optimized query instead of potentially hundreds of separate queries.
Test Case 7: Projection with Select()
What it does: Shows how to load only the specific data you actually need instead of entire objects.
The Code:
[Test]
public void Test_Projection_With_Select()
{
    var blogData = _context.Blogs
        .Select(b => new
        {
            BlogTitle = b.Title,
            PostCount = b.Posts.Count(),
            RecentPosts = b.Posts
                .OrderByDescending(p => p.PublishedDate)
                .Take(2)
                .Select(p => new { p.Title, p.PublishedDate })
        })
        .ToList();
}
The SQL Query (from our test output):
SELECT "b"."Title", (
SELECT COUNT(*)
FROM "Posts" AS "p"
WHERE "b"."Id" = "p"."BlogId"), "b"."Id", "t0"."Title", "t0"."PublishedDate", "t0"."Id"
FROM "Blogs" AS "b"
LEFT JOIN (
SELECT "t"."Title", "t"."PublishedDate", "t"."Id", "t"."BlogId"
FROM (
SELECT "p0"."Title", "p0"."PublishedDate", "p0"."Id", "p0"."BlogId",
ROW_NUMBER() OVER(PARTITION BY "p0"."BlogId" ORDER BY "p0"."PublishedDate" DESC) AS "row"
FROM "Posts" AS "p0"
) AS "t"
WHERE "t"."row" <= 2
) AS "t0" ON "b"."Id" = "t0"."BlogId"
ORDER BY "b"."Id", "t0"."BlogId", "t0"."PublishedDate" DESC
Why it matters: This query only loads the specific fields needed, uses window functions for efficiency, and calculates counts in the database rather than loading full objects.
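One way to consume the projected result; the member names come straight from the anonymous type in the test above:
foreach (var blog in blogData)
{
    TestLogger.WriteLine($"{blog.BlogTitle}: {blog.PostCount} posts");
    foreach (var post in blog.RecentPosts)
    {
        TestLogger.WriteLine($"  {post.Title} ({post.PublishedDate:d})");
    }
}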
Test Case 8: Split Query Strategy
What it does: Demonstrates an alternative approach where one large JOIN is split into multiple optimized queries.
The Code:
[Test]
public void Test_Split_Query()
{
    var blogs = _context.Blogs
        .AsSplitQuery()
        .Include(b => b.Posts)
        .Include(b => b.Tags)
        .ToList();
}
The SQL Queries (from our test output):
-- Query 1: Load blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
ORDER BY "b"."Id"
-- Query 2: Load posts (automatically generated)
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title", "b"."Id"
FROM "Blogs" AS "b"
INNER JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId"
ORDER BY "b"."Id"
-- Query 3: Load tags (automatically generated)
SELECT "t"."Id", "t"."Name", "b"."Id"
FROM "Blogs" AS "b"
INNER JOIN "BlogTag" AS "bt" ON "b"."Id" = "bt"."BlogsId"
INNER JOIN "Tags" AS "t" ON "bt"."TagsId" = "t"."Id"
ORDER BY "b"."Id"
When to use it: When JOINing lots of related data creates one massive, slow query. Split queries break this into several smaller, faster queries.
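If split queries suit your model in general, you don’t have to sprinkle AsSplitQuery() everywhere. Since EF Core 5.0 you can make it the per-context default; a sketch, assuming the SQLite provider used in these tests:
// Make split queries the default for every query on this context
optionsBuilder.UseSqlite(
    "Data Source=blog.db",
    o => o.UseQuerySplittingBehavior(QuerySplittingBehavior.SplitQuery));
Individual queries can then opt back into a single JOIN with AsSingleQuery().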
Test Case 9: Filtered Include()
What it does: Shows how to load only specific related data – in this case, only recent posts from the last 15 days.
The Code:
[Test]
public void Test_Filtered_Include()
{
    var cutoffDate = DateTime.Now.AddDays(-15);

    var blogsWithRecentPosts = _context.Blogs
        .Include(b => b.Posts.Where(p => p.PublishedDate > cutoffDate))
        .ToList();
}
The SQL Query:
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title",
"p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Blogs" AS "b"
LEFT JOIN "Posts" AS "p" ON "b"."Id" = "p"."BlogId" AND "p"."PublishedDate" > @cutoffDate
ORDER BY "b"."Id"
The Efficiency: Only posts that meet the filter are loaded, reducing data transfer and memory usage. (Filtered Include requires EF Core 5.0 or later.)
Test Case 10: Explicit Loading
What it does: Demonstrates manually controlling when related data gets loaded.
The Code:
[Test]
public void Test_Explicit_Loading()
{
    var blogs = _context.Blogs.ToList(); // Load blogs only

    // Now explicitly load posts for all blogs
    foreach (var blog in blogs)
    {
        _context.Entry(blog)
            .Collection(b => b.Posts)
            .Load();
    }
}
The SQL Queries:
-- Query 1: Load blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
-- Query 2: Explicitly load posts for blog 1
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 1
-- Query 3: Explicitly load posts for blog 2
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" = 2
-- ... and so on
When useful: When you sometimes need related data and sometimes don’t. You control exactly when the additional database trip happens.
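Explicit loading can also filter before hitting the database, via Query(). A small sketch building on the loop above (cutoffDate is assumed to be defined as in Test Case 9):
// Load only recent posts for this blog, on demand
_context.Entry(blog)
    .Collection(b => b.Posts)
    .Query()
    .Where(p => p.PublishedDate > cutoffDate)
    .Load();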
Test Case 11: Batch Loading Pattern
What it does: Shows a clever technique to avoid N+1 by loading all related data in one query, then organizing it in memory.
The Code:
[Test]
public void Test_Batch_Loading_Pattern()
{
    var blogs = _context.Blogs.ToList(); // Query 1
    var blogIds = blogs.Select(b => b.Id).ToList();

    // Single query to get all posts for all blogs
    var posts = _context.Posts
        .Where(p => blogIds.Contains(p.BlogId))
        .ToList(); // Query 2

    // Group posts by blog in memory
    var postsByBlog = posts
        .GroupBy(p => p.BlogId)
        .ToDictionary(g => g.Key, g => g.ToList());
}
The SQL Queries:
-- Query 1: Load all blogs
SELECT "b"."Id", "b"."CreatedDate", "b"."Description", "b"."Title"
FROM "Blogs" AS "b"
-- Query 2: Load ALL posts for ALL blogs in one query
SELECT "p"."Id", "p"."BlogId", "p"."Content", "p"."PublishedDate", "p"."Title"
FROM "Posts" AS "p"
WHERE "p"."BlogId" IN (1, 2, 3)
The Result: Just 2 queries total, regardless of how many blogs you have. Data organization happens in memory.
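Reading the grouped data afterwards is a pure in-memory operation; a usage sketch:
foreach (var blog in blogs)
{
    // TryGetValue covers blogs that have no posts at all
    var count = postsByBlog.TryGetValue(blog.Id, out var blogPosts)
        ? blogPosts.Count
        : 0;
    TestLogger.WriteLine($"Blog: {blog.Title} - Posts: {count}");
}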
Test Case 12: Performance Comparison
What it does: Puts all the approaches head-to-head to show their relative performance.
The Code:
[Test]
public void Test_Performance_Comparison()
{
    // N+1 Problem (multiple queries)
    var blogs1 = _context.Blogs.ToList();
    foreach (var blog in blogs1)
    {
        var count = blog.Posts.Count(); // Triggers a separate query
    }

    // Eager Loading (single query)
    var blogs2 = _context.Blogs
        .Include(b => b.Posts)
        .ToList();

    // Projection (minimal data)
    var blogSummaries = _context.Blogs
        .Select(b => new { b.Title, PostCount = b.Posts.Count() })
        .ToList();
}
The SQL Queries Generated:
N+1 Problem: 4 separate queries (as shown in previous examples)
Eager Loading: 1 JOIN query (as shown in Test Case 5)
Projection: 1 optimized query with subquery:
SELECT "b"."Title", (
SELECT COUNT(*)
FROM "Posts" AS "p"
WHERE "b"."Id" = "p"."BlogId") AS "PostCount"
FROM "Blogs" AS "b"
Real-World Performance Impact
Let’s scale this up to see why it matters, assuming roughly 10 ms per database round trip:
Small Application (10 blogs):
- N+1 approach: 11 queries (≈110ms)
- Optimized approach: 1 query (≈10ms)
- Time saved: 100ms
Medium Application (100 blogs):
- N+1 approach: 101 queries (≈1,010ms)
- Optimized approach: 1 query (≈10ms)
- Time saved: 1 second
Large Application (1000 blogs):
- N+1 approach: 1001 queries (≈10,010ms)
- Optimized approach: 1 query (≈10ms)
- Time saved: 10 seconds
Key Takeaways
- The N+1 problem grows linearly with your data: every additional parent row adds one more query
- Lazy loading is convenient but dangerous – it can hide performance problems
- Include() is your friend for loading related data efficiently
- Projection is powerful when you only need specific fields
- Different problems need different solutions – there’s no one-size-fits-all approach
- SQL query inspection is crucial – always check what queries your ORM generates
The Bottom Line
This test suite shows that small changes in how you write database queries can transform a slow, database-heavy operation into a fast, efficient one. The difference between a frustrated user waiting 10 seconds for a page to load and a happy user getting instant results often comes down to understanding and avoiding the N+1 problem.
The beauty of these tests is that they use real database queries with actual SQL output, so you can see exactly what’s happening under the hood. Understanding these patterns will make you a more effective developer and help you build applications that stay fast as they grow.
You can find the source code for this article in my repository here.
by Joche Ojeda | Jan 22, 2024 | Sqlite
SQLite and Its Journal Modes: Understanding the Differences and Advantages
SQLite, an acclaimed lightweight database engine, is widely used in various applications due to its simplicity, reliability, and open-source nature. One of the critical aspects of SQLite that ensures data integrity and supports various use-cases is its “journal mode.” This mode is a part of SQLite’s transaction mechanism, which is vital for maintaining database consistency. In this article, we’ll explore the different journal modes available in SQLite and their respective advantages.
Understanding Journal Modes in SQLite
Journal modes in SQLite are methods used to handle transactions and rollbacks. They dictate how the database engine logs changes and how it recovers in case of failures or rollbacks. There are several journal modes available in SQLite, each with unique characteristics suited for different scenarios.
1. Delete Mode
Description:
The default mode in SQLite, Delete mode, creates a rollback journal file alongside the database file. This file records a copy of the original unchanged data before any modifications.
Advantages:
- Simplicity: Easy to understand and use, making it ideal for basic applications.
- Reliability: It ensures data integrity by preserving original data until the transaction is committed.
2. Truncate Mode
Description:
Truncate mode operates similarly to Delete mode, but instead of deleting the journal file at the end of a transaction, it truncates it to zero length.
Advantages:
- Faster Commit: Reduces the time to commit transactions, as truncating is generally quicker than deleting.
- Reduced Disk Space Usage: By truncating the file, it avoids leaving large, unused files on the disk.
3. Persist Mode
Description:
In Persist mode, the journal file is not deleted or truncated but is left on the disk with its header marked as inactive.
Advantages:
- Reduced File Operations: This mode minimizes file system operations, which can be beneficial in environments where these operations are expensive.
- Quick Restart: It allows for faster restarts of transactions in busy systems.
4. Memory Mode
Description:
Memory mode stores the rollback journal in volatile memory (RAM) instead of the disk.
Advantages:
- High Performance: It offers the fastest possible transaction times since memory operations are quicker than disk operations.
- Ideal for Temporary Databases: Best suited for databases that don’t require data persistence, like temporary caches.
5. Write-Ahead Logging (WAL) Mode
Description:
WAL mode is a significant departure from the traditional rollback journal. It writes changes to a separate WAL file without changing the original database file until a checkpoint occurs.
Advantages:
- Concurrency: It allows read operations to proceed concurrently with write operations, enhancing performance in multi-user environments.
- Consistency and Durability: Ensures data integrity and durability without locking the entire database.
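To try WAL from .NET, you set it with a PRAGMA on an open connection. A minimal sketch using Microsoft.Data.Sqlite; the file name is illustrative:
using Microsoft.Data.Sqlite;

using var connection = new SqliteConnection("Data Source=app.db");
connection.Open();

using var command = connection.CreateCommand();
command.CommandText = "PRAGMA journal_mode=WAL;";

// SQLite answers with the journal mode now in effect, e.g. "wal"
var activeMode = (string)command.ExecuteScalar();
Unlike the other modes, WAL is persistent: once enabled, the database file stays in WAL mode for every subsequent connection until it is explicitly changed.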
6. Off Mode
Description:
This mode disables the rollback journal entirely. Transactions are not atomic in this mode.
Advantages:
- Maximum Speed: It can be faster since there’s no overhead of maintaining a journal.
- Use Case Specific: Useful for scenarios where speed is critical and data integrity is not a concern, like intermediate calculations or disposable data.
Conclusion
Choosing the right journal mode in SQLite depends on the specific requirements of the application. While Delete and Truncate modes are suitable for most general purposes, Persist and Memory modes serve niche use-cases. WAL mode stands out for applications requiring high concurrency and performance. Understanding these modes helps developers and database administrators optimize SQLite databases for their particular needs, balancing between data integrity, speed, and resource utilization.
In summary, SQLite’s flexibility in journal modes is a testament to its adaptability, making it a preferred choice for a wide range of applications, from embedded systems to web applications.