SyncFramework Update: Now Supporting .NET 9 and EF Core 9!

SyncFramework is a C# library that simplifies data synchronization using delta encoding technology. Instead of transferring entire datasets, it efficiently synchronizes by tracking and transmitting only the changes between data versions, significantly reducing bandwidth and processing overhead.

What’s New

  • All packages now target .NET 9
  • BIT.Data.Sync packages updated to support the latest framework
  • Entity Framework Core packages upgraded to EF Core 9
  • Various minor fixes and improvements
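
If you are upgrading an existing project, the main change is retargeting the project file and bumping your package references. A minimal sketch (the version wildcard is illustrative; pick the latest release from NuGet):

<PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
</PropertyGroup>

<ItemGroup>
    <!-- Illustrative: use the latest BIT.Data.Sync.* releases from NuGet -->
    <PackageReference Include="BIT.Data.Sync.EfCore" Version="*" />
</ItemGroup>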

Available Implementations

  • SyncFramework for XPO: For DevExpress XPO users
  • SyncFramework for Entity Framework Core: For EF Core users

Package Statistics

Our packages have been serving the community well, with steady adoption:

  • BIT.Data.Sync: 2,142 downloads
  • BIT.Data.Sync.AspNetCore: 1,064 downloads
  • BIT.Data.Sync.AspNetCore.Xpo: 521 downloads
  • BIT.Data.Sync.EfCore: 1,691 downloads
  • BIT.Data.Sync.EfCore.Npgsql: 1,120 downloads
  • BIT.Data.Sync.EfCore.Pomelo.MySql: 1,172 downloads
  • BIT.Data.Sync.EfCore.Sqlite: 887 downloads
  • BIT.Data.Sync.EfCore.SqlServer: 982 downloads

Resources

NuGet Packages
Source Code

As always, you can compile the source code yourself from our GitHub repository. The framework continues to provide reliable data synchronization across different platforms and databases.

Happy Delta Encoding!

The AnyCPU Illusion: Native Dependencies in .NET Applications

Introduction

In the .NET ecosystem, “AnyCPU” is often considered a silver bullet for cross-platform deployment. However, this assumption can lead to significant problems when your application depends on native libraries. In this post, I want to share a personal story that highlights how I discovered these limitations and how native dependencies affect the true portability of AnyCPU applications, especially for database access through ADO.NET and popular ORMs.

My Journey to Understanding AnyCPU’s Limitations

Every year, around Thanksgiving or Christmas, I visit my friend, brother, and business partner Javier. Two years ago, during one of these visits, I made a decision that would lead me to a pivotal realization about AnyCPU architecture.

At the time, I was tired of traveling with my bulky MSI GE72 Apache Pro-24 gaming laptop. According to MSI’s official specifications, it weighed 5.95 pounds—but that number didn’t include the hefty charger, which brought the total to around 12 pounds. Later, I upgraded to an MSI GF63 Thin, which was lighter at 4.10 pounds—but with the charger, it was still around 7.5 pounds. Lugging these laptops through airports felt like a workout.

Determined to travel lighter, I purchased a MacBook Air with the M2 chip. At just 2.7 pounds, including the charger, the MacBook Air felt like a breath of fresh air. The Apple Silicon chip was incredibly fast, and I immediately fell in love with the machine.

Having used a MacBook Pro with Bootcamp and Windows 7 years ago, I thought I could recreate that experience by running a Windows virtual machine on my MacBook Air to check projects and do some light development while traveling.

The Virtualization Experiment

As someone who loves virtualization, I eagerly set up a Windows virtual machine on my MacBook Air. I grabbed my trusty Windows x64 ISO, set up the virtual machine, and attempted to boot it—but it failed. I quickly realized the issue was related to CPU architecture. My x64 ISO wasn’t compatible with the ARM-based M2 chip.

Undeterred, I downloaded a Windows 11 ISO for ARM architecture and created the VM. Success! Windows was up and running, and I installed Visual Studio along with my essential development tools, including DevExpress XPO (my favorite ORM).

The Demo Disaster

The real test came during a trip to Dubai, where I was scheduled to give a live demo showcasing how quickly you can develop Line-of-Business (LOB) apps with XAF. Everything started smoothly until I tried to connect my XAF app to the database. Despite my best efforts, the connection failed.

In the middle of the demo, I switched to an in-memory data provider to salvage the presentation. After the demo, I dug into the issue and realized the root cause was related to the CPU architecture. The native database drivers I was using weren’t compatible with the ARM architecture.

A Familiar Problem

This situation reminded me of the transition from x86 to x64 years ago. Back then, I encountered similar issues where native drivers wouldn’t load unless they matched the process architecture.

The Native Dependency Challenge

Platform-Specific Loading Requirements

Native DLLs must exactly match the CPU architecture of your application:

  • If your app runs as x86, it can only load x86 native DLLs.
  • If running as x64, it requires x64 native DLLs.
  • ARM requires ARM-specific binaries.
  • ARM64 requires ARM64-specific binaries.

There is no flexibility—attempting to load a DLL compiled for a different architecture results in an immediate failure.

How Native Libraries are Loaded

When your application loads a native DLL, the operating system follows a specific search pattern:

  1. The application’s directory
  2. System directories (System32/SysWOW64)
  3. Directories listed in the PATH environment variable

Crucially, these native libraries must match the exact architecture of the running process.

// This seemingly simple code
[DllImport("native.dll")]
static extern void NativeMethod();

// Actually requires:
// - native.dll compiled for x86 when running as 32-bit
// - native.dll compiled for x64 when running as 64-bit
// - native.dll compiled for ARM64 when running on ARM64

The SQL Server Example

Let’s look at SQL Server connectivity, a common scenario where the AnyCPU illusion breaks down:

// Traditional ADO.NET connection
using (var connection = new SqlConnection(connectionString))
{
    // This requires SQL Native Client
    // Which must match the process architecture
    await connection.OpenAsync();
}

Even though your application is compiled as AnyCPU, the SQL Native Client must match the process architecture. This becomes particularly problematic on newer architectures like ARM64, where native drivers may not be available.

Impact on ORMs

Entity Framework Core

Entity Framework Core, despite its modern design, still relies on database providers that may have native dependencies:

public class MyDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // This configuration depends on:
        // 1. SQL Native Client
        // 2. Microsoft.Data.SqlClient native components
        optionsBuilder.UseSqlServer(connectionString);
    }
}

DevExpress XPO

DevExpress XPO faces similar challenges:

// XPO configuration
string connectionString = MSSqlConnectionProvider.GetConnectionString("server", "database");
XpoDefault.DataLayer = XpoDefault.GetDataLayer(connectionString, AutoCreateOption.DatabaseAndSchema);

// The MSSqlConnectionProvider relies on the same native SQL Server components

Solutions and Best Practices

1. Architecture-Specific Deployment

Instead of relying on AnyCPU, consider creating architecture-specific builds:

<PropertyGroup>
    <Platforms>x86;x64;arm64</Platforms>
    <RuntimeIdentifiers>win-x86;win-x64;win-arm64</RuntimeIdentifiers>
</PropertyGroup>
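
With those runtime identifiers declared, you can produce one deployment per architecture. For example, a self-contained ARM64 build (a typical publish invocation, shown as a sketch):

dotnet publish -c Release -r win-arm64 --self-contained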

2. Runtime Provider Selection

Implement smart provider selection based on the current architecture:

public static class DatabaseProviderFactory
{
    public static IDbConnection GetProvider()
    {
        return RuntimeInformation.ProcessArchitecture switch
        {
            Architecture.X86 => new SqlConnection(), // x86 native provider
            Architecture.X64 => new SqlConnection(), // x64 native provider
            Architecture.Arm64 => new Microsoft.Data.SqlClient.SqlConnection(), // ARM64 support
            _ => throw new PlatformNotSupportedException()
        };
    }
}

3. Managed Fallbacks

Implement fallback strategies when native providers aren’t available:

public class DatabaseConnection
{
    private readonly string _connectionString;

    public DatabaseConnection(string connectionString) => _connectionString = connectionString;

    public async Task<IDbConnection> CreateConnectionAsync()
    {
        try
        {
            // Try the default provider first; it may depend on native components.
            var connection = new SqlConnection(_connectionString);
            await connection.OpenAsync();
            return connection;
        }
        catch (DllNotFoundException)
        {
            // Fall back to Microsoft.Data.SqlClient when a native dependency is missing.
            var managedConnection = new Microsoft.Data.SqlClient.SqlConnection(_connectionString);
            await managedConnection.OpenAsync();
            return managedConnection;
        }
    }
}

4. Deployment Considerations

  • Include all necessary native dependencies for each target architecture.
  • Use architecture-specific directories in your deployment.
  • Consider self-contained deployment to include the correct runtime.
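
One way to implement architecture-specific probing is the NativeLibrary.SetDllImportResolver hook, available since .NET Core 3.0. The sketch below assumes the conventional runtimes/<rid>/native folder layout; names like ArchitectureAwareLoader are my own:

using System;
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;

static class ArchitectureAwareLoader
{
    public static void Register() =>
        NativeLibrary.SetDllImportResolver(Assembly.GetExecutingAssembly(), Resolve);

    private static IntPtr Resolve(string libraryName, Assembly assembly, DllImportSearchPath? searchPath)
    {
        // Probe runtimes/win-<arch>/native next to the app, e.g. runtimes/win-arm64/native.
        string rid = $"win-{RuntimeInformation.ProcessArchitecture.ToString().ToLowerInvariant()}";
        string candidate = Path.Combine(AppContext.BaseDirectory, "runtimes", rid, "native", libraryName + ".dll");

        if (File.Exists(candidate) && NativeLibrary.TryLoad(candidate, out IntPtr handle))
            return handle;

        return IntPtr.Zero; // defer to the default search order
    }
}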

Real-World Implications

This experience taught me that while AnyCPU provides excellent flexibility for managed code, it has limitations when dealing with native dependencies. These limitations become more apparent in scenarios like cloud deployments, ARM64 devices, and live demos.

Conclusion

The transition to ARM architecture is accelerating, and understanding the nuances of AnyCPU and native dependencies is more important than ever. By planning for architecture-specific deployments and implementing fallback strategies, you can build more resilient applications that can thrive in a multi-architecture world.

PostgreSQL Full-text search using “text search vectors”

Postgres Vector Search

Full-text search in PostgreSQL is implemented using a concept called “text search vectors” (or just “tsvector”). Let’s dive into how it works:

  1. Text Search Vectors (tsvector):
    • A tsvector is a sorted list of distinct lexemes, which are words that have been normalized to merge different forms of the same word (e.g., “run” and “running”).
    • PostgreSQL provides functions to convert plain text into tsvector format, which typically involves:
      • Parsing the text into tokens.
      • Converting tokens to lexemes.
      • Removing stop words (common words like “and” or “the” that are typically ignored in searches).
    • Example: The text “The quick brown fox” might be represented in tsvector as 'brown':3 'fox':4 'quick':2.
  2. Text Search Queries (tsquery):
    • A tsquery represents a text search query, which includes lexemes and optional operators.
    • Operators can be used to combine lexemes in different ways (e.g., AND, OR, NOT).
    • Example: The query “quick & fox” would match any tsvector containing both “quick” and “fox”.
  3. Searching:
    • PostgreSQL provides the @@ operator to search a tsvector column with a tsquery.
    • Example: WHERE column @@ to_tsquery('english', 'quick & fox').
  4. Ranking:
    • Once you’ve found matches using the @@ operator, you often want to rank them by relevance.
    • PostgreSQL provides the ts_rank function to rank results. It returns a number indicating how relevant a tsvector is to a tsquery.
    • The ranking is based on various factors, including the frequency of lexemes and their proximity to each other in the text.
  5. Indexes:
    • One of the significant advantages of tsvector is that you can create a GiST or GIN index on it.
    • These indexes significantly speed up full-text search queries.
    • GIN indexes, in particular, are optimized for tsvector and provide very fast lookups.
  6. Normalization and Configuration:
    • PostgreSQL supports multiple configurations (e.g., “english”, “french”) that determine how text is tokenized and which stop words are used.
    • This allows you to tailor your full-text search to specific languages or requirements.
  7. Highlighting and Snippets:
    • In addition to just searching, PostgreSQL provides functions like ts_headline to return snippets of the original text with search terms highlighted.

In summary, PostgreSQL’s full-text search works by converting regular text into a normalized format (tsvector) that is optimized for searching. This combined with powerful query capabilities (tsquery) and indexing options makes it a robust solution for many full-text search needs.
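
As a concrete illustration, here is a minimal SQL sketch of that flow (assuming PostgreSQL 12+ for the generated column; the documents table is illustrative):

-- Table with a stored, generated tsvector column
CREATE TABLE documents (
    id serial PRIMARY KEY,
    body text,
    search_vector tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
);

-- GIN indexes are optimized for tsvector and make lookups fast
CREATE INDEX idx_documents_search ON documents USING GIN (search_vector);

-- Search, rank, and highlight in one query
SELECT id,
       ts_rank(search_vector, to_tsquery('english', 'quick & fox')) AS rank,
       ts_headline('english', body, to_tsquery('english', 'quick & fox')) AS snippet
FROM documents
WHERE search_vector @@ to_tsquery('english', 'quick & fox')
ORDER BY rank DESC;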

Implementing vector search using EF Core and PostgreSQL

Here are the steps to implement vector search in your .NET project:

Step 1: Add the required NuGet packages

<PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="7.0.11" />
<PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.Design" Version="1.1.0" />
<PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.NetTopologySuite" Version="7.0.11" />

Step 2: Add a property of type NpgsqlTsVector to your entities, as shown below

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public NpgsqlTsVector SearchVector { get; set; }
}

Step 3: Add a computed column in your DbContext

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Blog>()
        .Property(b => b.SearchVector)
        .HasComputedColumnSql("to_tsvector('english', \"Blogs\".\"Title\")", stored: true);
}

In this case, the vector is computed from the value of the Title column of the Blogs table; you can calculate the vector from a single column or from a combination of columns.
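
To take advantage of the indexing described earlier, you can also ask the Npgsql provider to create a GIN index over the vector column inside the same OnModelCreating (a sketch using the provider's HasMethod extension):

modelBuilder.Entity<Blog>()
    .HasIndex(b => b.SearchVector)
    .HasMethod("GIN"); // GIN indexes are optimized for tsvector lookups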

Now you are ready to use vector search in your queries; see the example below:

var searchTerm = "Jungle"; // example search term

var blogs = context.Blogs
    .Where(b => b.SearchVector.Matches(searchTerm))
    .OrderByDescending(b => b.SearchVector.Rank(EF.Functions.ToTsQuery(searchTerm)))
    .ToList();

In real-world scenarios, it’s better to create a vector by joining the values of several columns and weighting them according to their relevance to your business case. You can check the test project I have created here: https://github.com/egarim/PostgresVectorSearch
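
For instance, a weighted vector over two columns could be configured like this (a sketch; the Body column is hypothetical, while setweight and the || tsvector concatenation are standard PostgreSQL):

modelBuilder.Entity<Blog>()
    .Property(b => b.SearchVector)
    .HasComputedColumnSql(
        // Title is weighted 'A' (most relevant), the hypothetical Body column 'B'
        "setweight(to_tsvector('english', coalesce(\"Title\", '')), 'A') || " +
        "setweight(to_tsvector('english', coalesce(\"Body\", '')), 'B')",
        stored: true);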

And that’s it for this post. Until next time, happy coding!


Alchemy Framework: 1 – Creating a framework for importing data

Last Friday, I received a message from a dear friend and colleague, Pedro Hernandez. He asked me if I had the latest compiled version of the XPO import framework we created in our office. As it turned out, I did not have it readily available and had to search extensively for it.

While conducting this search across my computers, repositories, and virtual machines, I was inspired to create another import framework — yes, another piece of code to maintain.

Most of the time, my research projects begin in the same manner, typically after a conversation with my close friend Jose Javier Columbie. As always, he would say something like this: “Jose, do you think this can be possible? Why don’t you give it a try? If we succeed, that will be el batazo (like hitting a home run).”

In this specific case, that conversation didn’t happen. However, I could hear his words echoing in my mind.

Furthermore, I want this project to be community-driven, not just a technical experiment I have to maintain alone. I truly believe in the power of a community.

First Step

So, let’s begin. I’ll start with my favorite part: naming the project. I spent all weekend pondering this, attempting to condense the concept and associate it with a literary term or a Latin word (these are my preferred methods for naming a project).

Let’s define what the ultimate goal is. In an import process, the aim is to take information from a source ‘A’, translate or transform the information, and then store it in a target ‘B’.

Growing up, I was an avid reader – and I mean, I read a lot. (Now, I’ve switched to audiobooks.) Therefore, after defining what this project is about, naming it became incredibly easy.

noun

noun: alchemy

  1. the medieval forerunner of chemistry, concerned with the transmutation of matter, in particular with attempts to convert base metals into gold or find a universal elixir.

“occult sciences, such as alchemy and astrology”

Origin

late Middle English: via Old French and medieval Latin from Arabic al-kīmiyā’, from al ‘the’ + kīmiyā’ (from Greek khēmia, khēmeia ‘art of transmuting metals’).

The Alchemy Framework

Alchemy is a framework created for .NET, designed to import data from a data source to a data target. These sources and targets can be anything: a text file, a CSV file, a database, an ORM, and so on.

The framework consists of a set of contracts, interfaces, and base classes. When implemented, these allow you to import and transform data between various sources and targets.

The requirement in a few words

As stated in the framework’s description, the requirement is only to define the contracts that represent the sources and targets, as well as a job configuration that describes how the information flows from one source to a target. The concrete implementations are not important at this point and should be discussed individually for each case.

The design patterns

For this project, we will use the SOLID design principles and dependency injection. This will enable us to easily replace small functionalities, allowing us to mix and match different implementations depending on the data source and data target.
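
To make that concrete, here is a minimal sketch of what such contracts could look like; the interface and class names are hypothetical illustrations, not the framework's actual API:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical contracts for illustration only; the real framework's API may differ.
public interface IDataSource<T>
{
    IAsyncEnumerable<T> ReadAsync(CancellationToken ct = default);
}

public interface IDataTarget<T>
{
    Task WriteAsync(T item, CancellationToken ct = default);
}

// A job configuration describes how records flow from a source to a target.
public class ImportJob<TSource, TTarget>
{
    private readonly IDataSource<TSource> _source;
    private readonly IDataTarget<TTarget> _target;
    private readonly Func<TSource, TTarget> _transform;

    public ImportJob(IDataSource<TSource> source, IDataTarget<TTarget> target, Func<TSource, TTarget> transform)
    {
        _source = source;
        _target = target;
        _transform = transform;
    }

    public async Task RunAsync(CancellationToken ct = default)
    {
        // Stream records from the source, transform each one, and write it to the target.
        await foreach (var record in _source.ReadAsync(ct))
            await _target.WriteAsync(_transform(record), ct);
    }
}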


Querying external data sources in XAF

For the sake of consistency in this article, let me briefly explain what XAF is. XAF is a low-code application framework for line-of-business applications that runs on the .NET Framework (Windows Forms and Web Forms) and on .NET (Windows Forms, Blazor, and Web API).

XAF is laser-focused on productivity: the DevExpress team has created several modules that encapsulate design patterns and common tasks needed in LOB apps.

The starting point in XAF is to provide a domain model using an ORM like XPO or Entity Framework; XAF will then create an application for you using the target platform of your choice.

It’s a common misunderstanding that you need to use an ORM to provide a domain model to XAF.

The DevExpress team has created the ObjectSpace abstraction so XAF can be extended to use different data access technologies (you can read more about it here: https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.BaseObjectSpace).

Out of the box, XAF provides three branches of object spaces, described below.

XPObjectSpace: this is the object space that allows you to use XPO as a data access technology.

EfCoreObjectSpace: this is the object space that allows you to use Entity Framework Core as a data access technology.

NonPersistentObjectSpace: this object space is interesting because it provides the domain model XAF needs to generate views and interact with data without being attached to an ORM technology, so it is up to us to provide the data. This type of object space can also be used in combination with XPObjectSpace and EfCoreObjectSpace.


When querying external data sources, you also need to solve filtering and sorting in order to provide a full solution. For that reason, the DevExpress team provides the DynamicCollection class, a proxy collection that allows you to filter and sort an original collection without changing it; you can see it in action in the adapter code below.

Now that we know the parts involved in presenting data in a XAF application, we can define the required flow:

  • Declare a non-persistent domain object.

    [DefaultClassOptions]
    [DefaultProperty(nameof(Article.Title))]
    [DevExpress.ExpressApp.ConditionalAppearance.Appearance("", Enabled = false, TargetItems = "*")]
    [DevExpress.ExpressApp.DC.DomainComponent]
    public class Article : NonPersistentObjectBase {
        internal Article() { }

        public string Title { get; set; }
        // ...remaining properties elided in the original post
    }

  • Override the module’s Setup method to subscribe to the application’s SetupComplete event.

    public override void Setup(XafApplication application) {
        base.Setup(application);
        // Manage various aspects of the application UI and behavior at the module level.
        application.SetupComplete += Application_SetupComplete;
    }

  • Wire the application object space created event.
private void Application_SetupComplete(object sender, EventArgs e) {
    Application.ObjectSpaceCreated += Application_ObjectSpaceCreated;
}
private void Application_ObjectSpaceCreated(object sender, ObjectSpaceCreatedEventArgs e) {
    var npos = e.ObjectSpace as NonPersistentObjectSpace;
    if (npos != null) {
        new ArticleAdapter(npos);
        new ContactAdapter(npos);
    }
}

  • Create an adapter that subscribes to the ObjectsGetting event of the non-persistent object space.

// Inside the ArticleAdapter class: keep a reference to the object space and subscribe to ObjectsGetting.
public ArticleAdapter(NonPersistentObjectSpace npos) {
    this.objectSpace = npos;
    objectSpace.ObjectsGetting += ObjectSpace_ObjectsGetting;
}

  • Handle the ObjectsGetting event and serve the data through a DynamicCollection.

private void ObjectSpace_ObjectsGetting(object sender, ObjectsGettingEventArgs e) {
    if(e.ObjectType == typeof(Article)) {
        // Let DynamicCollection handle filtering and sorting over the external data.
        var collection = new DynamicCollection(objectSpace, e.ObjectType, e.Criteria, e.Sorting, e.InTransaction);
        collection.FetchObjects += DynamicCollection_FetchObjects;
        e.Objects = collection;
    }
}

private void DynamicCollection_FetchObjects(object sender, FetchObjectsEventArgs e) {
    if(e.ObjectType == typeof(Article)) {
        e.Objects = articles; // the in-memory data previously fetched from the external source
        e.ShapeData = true;
    }
}

Full source code here

In conclusion, the ObjectSpace abstraction ensures that different data access technologies can be employed, while the DynamicCollection class allows for seamless filtering and sorting of data from external sources. By following the outlined steps, developers can create robust, adaptable, and efficient applications with XAF, ultimately saving time and effort while maximizing application performance.