Day 4 (the missing day): Building Data Import/Export Services for Your ERP System

Welcome back to our ERP development series! In previous days, we’ve covered the foundational architecture, database design, and core entity structures for our accounting system. Today, we’re tackling an essential but often overlooked aspect of any enterprise software: data import and export capabilities.

Why is this important? Because no enterprise system exists in isolation. Companies need to move data between systems, migrate from legacy software, or simply handle batch data operations. In this article, we’ll build robust import/export services for the Chart of Accounts, demonstrating principles you can apply to any part of your ERP system.

The Importance of Data Exchange

Before diving into the code, let’s understand why dedicated import/export functionality matters:

  1. Data Migration – When companies adopt your ERP, they need to transfer existing data
  2. System Integration – ERPs need to exchange data with other business systems
  3. Batch Processing – Accountants often prepare data in spreadsheets before importing
  4. Backup & Transfer – Provides a simple way to backup or transfer configurations
  5. User Familiarity – Many users are comfortable working with CSV files

CSV (Comma-Separated Values) is our format of choice because it’s universally supported and easily edited in spreadsheet applications like Excel, which most business users are familiar with.

Our Implementation Approach

For our Chart of Accounts module, we’ll create:

  1. A service interface defining import/export operations
  2. A concrete implementation handling CSV parsing/generation
  3. Unit tests verifying all functionality

Our goal is to maintain clean separation of concerns, robust error handling, and clear validation rules.

Defining the Interface

First, we define a clear contract for our import/export service:

/// <summary>
/// Interface for chart of accounts import/export operations
/// </summary>
public interface IAccountImportExportService
{
    /// <summary>
    /// Imports accounts from a CSV file
    /// </summary>
    /// <param name="csvContent">Content of the CSV file as a string</param>
    /// <param name="userName">User performing the operation</param>
    /// <returns>Collection of imported accounts and any validation errors</returns>
    Task<(IEnumerable<IAccount> ImportedAccounts, IEnumerable<string> Errors)> ImportFromCsvAsync(string csvContent, string userName);

    /// <summary>
    /// Exports accounts to a CSV format
    /// </summary>
    /// <param name="accounts">Accounts to export</param>
    /// <returns>CSV content as a string</returns>
    Task<string> ExportToCsvAsync(IEnumerable<IAccount> accounts);
}

Notice how we use C# tuples to return both the imported accounts and any validation errors from the import operation. This gives callers full insight into the operation’s results.
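
For example, a caller (a hypothetical application service or controller; importExportService, logger, accountRepository, and currentUserName are placeholder names, not types we have defined in this series) might consume the tuple like this:

var (importedAccounts, errors) = await importExportService.ImportFromCsvAsync(csvContent, currentUserName);

if (errors.Any())
{
    // Surface validation problems instead of failing silently
    foreach (var error in errors)
    {
        logger.LogWarning("Import issue: {Error}", error);
    }
}

// Persist whatever rows passed validation (the service allows partial success)
await accountRepository.AddRangeAsync(importedAccounts);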

Implementing CSV Import

The import method is the more complex of the two, requiring:

  1. Parsing and validating the CSV structure
  2. Converting CSV data to domain objects
  3. Validating the created objects
  4. Reporting any errors along the way

Here’s our implementation approach:

public async Task<(IEnumerable<IAccount> ImportedAccounts, IEnumerable<string> Errors)> ImportFromCsvAsync(string csvContent, string userName)
{
    List<AccountDto> importedAccounts = new List<AccountDto>();
    List<string> errors = new List<string>();

    if (string.IsNullOrEmpty(csvContent))
    {
        errors.Add("CSV content is empty");
        return (importedAccounts, errors);
    }

    try
    {
        // Split the CSV into lines
        string[] lines = csvContent.Split(new[] { "\r\n", "\r", "\n" }, StringSplitOptions.RemoveEmptyEntries);
        
        if (lines.Length <= 1)
        {
            errors.Add("CSV file contains no data rows");
            return (importedAccounts, errors);
        }

        // Assume first line is header
        string[] headers = ParseCsvLine(lines[0]);
        
        // Validate headers
        if (!ValidateHeaders(headers, errors))
        {
            return (importedAccounts, errors);
        }

        // Process data rows
        for (int i = 1; i < lines.Length; i++)
        {
            string[] fields = ParseCsvLine(lines[i]);
            
            if (fields.Length != headers.Length)
            {
                errors.Add($"Line {i + 1}: Column count mismatch. Expected {headers.Length}, got {fields.Length}");
                continue;
            }

            var account = CreateAccountFromCsvFields(headers, fields);
            
            // Validate account
            if (!_accountValidator.ValidateAccount(account))
            {
                errors.Add($"Line {i + 1}: Account validation failed for account {account.AccountName}");
                continue;
            }

            // Set audit information
            _auditService.SetCreationAudit(account, userName);
            
            importedAccounts.Add(account);
        }

        return (importedAccounts, errors);
    }
    catch (Exception ex)
    {
        errors.Add($"Error importing CSV: {ex.Message}");
        return (importedAccounts, errors);
    }
}

Key aspects of this implementation:

  1. Early validation – We quickly detect and report basic issues like empty input
  2. Row-by-row processing – Each line is processed independently, allowing partial success
  3. Detailed error reporting – We collect specific errors with line numbers
  4. Domain validation – We apply business rules from AccountValidator
  5. Audit trail – We set audit fields for each imported account

The ParseCsvLine method handles the complexities of CSV parsing, including quoted fields that may contain commas:

private string[] ParseCsvLine(string line)
{
    List<string> fields = new List<string>();
    bool inQuotes = false;
    int startIndex = 0;
    
    for (int i = 0; i < line.Length; i++)
    {
        if (line[i] == '"')
        {
            inQuotes = !inQuotes;
        }
        else if (line[i] == ',' && !inQuotes)
        {
            fields.Add(line.Substring(startIndex, i - startIndex).Trim().TrimStart('"').TrimEnd('"'));
            startIndex = i + 1;
        }
    }
    
    // Add the last field
    fields.Add(line.Substring(startIndex).Trim().TrimStart('"').TrimEnd('"'));
    
    return fields.ToArray();
}
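
The import flow also relies on two helpers we haven’t shown: ValidateHeaders and CreateAccountFromCsvFields. Their exact shape depends on your account model; here is a minimal sketch, assuming the same three columns used in the tests later in this article (AccountName, OfficialCode, AccountType):

// Hypothetical sketch of the helpers referenced above; adjust the column set to your model
private static readonly string[] RequiredHeaders =
    { "AccountName", "OfficialCode", "AccountType" };

private bool ValidateHeaders(string[] headers, List<string> errors)
{
    bool valid = true;
    foreach (var required in RequiredHeaders)
    {
        if (!headers.Contains(required, StringComparer.OrdinalIgnoreCase))
        {
            errors.Add($"Missing required column: {required}");
            valid = false;
        }
    }
    return valid;
}

private AccountDto CreateAccountFromCsvFields(string[] headers, string[] fields)
{
    var account = new AccountDto { Id = Guid.NewGuid() };

    // Map each field to the property named by its header
    for (int i = 0; i < headers.Length; i++)
    {
        switch (headers[i].Trim().ToLowerInvariant())
        {
            case "accountname":
                account.AccountName = fields[i];
                break;
            case "officialcode":
                account.OfficialCode = fields[i];
                break;
            case "accounttype":
                account.AccountType = Enum.Parse<AccountType>(fields[i], ignoreCase: true);
                break;
        }
    }
    return account;
}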

Implementing CSV Export

The export method is simpler, converting domain objects to CSV format:

public Task<string> ExportToCsvAsync(IEnumerable<IAccount> accounts)
{
    if (accounts == null || !accounts.Any())
    {
        return Task.FromResult(GetCsvHeader());
    }

    StringBuilder csvBuilder = new StringBuilder();
    
    // Add header
    csvBuilder.AppendLine(GetCsvHeader());
    
    // Add data rows
    foreach (var account in accounts)
    {
        csvBuilder.AppendLine(GetCsvRow(account));
    }
    
    return Task.FromResult(csvBuilder.ToString());
}

We take special care to handle edge cases like null or empty collections, making the API robust against improper usage.
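
The GetCsvHeader and GetCsvRow helpers aren’t shown above either; a minimal sketch, assuming the same three columns as the import side, could look like this (embedded double quotes would need RFC 4180-style doubling, which the simple parser above doesn’t attempt):

private string GetCsvHeader() => "AccountName,OfficialCode,AccountType";

private string GetCsvRow(IAccount account) =>
    string.Join(",",
        EscapeCsvField(account.AccountName),
        EscapeCsvField(account.OfficialCode),
        EscapeCsvField(account.AccountType.ToString()));

private static string EscapeCsvField(string value)
{
    if (string.IsNullOrEmpty(value))
    {
        return string.Empty;
    }

    // Wrap fields containing commas in quotes so ParseCsvLine keeps them intact
    return value.Contains(',') ? $"\"{value}\"" : value;
}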

Testing the Implementation

Our test suite verifies both the happy paths and various error conditions:

  1. Import validation – Tests for empty content, missing headers, etc.
  2. Export formatting – Tests for proper CSV generation, handling of special characters
  3. Round-trip integrity – Tests that exporting and then re-importing preserves data

For example, here’s a round-trip test to verify data integrity:

[Test]
public async Task RoundTrip_ExportThenImport_PreservesAccounts()
{
    // Arrange
    var originalAccounts = new List<IAccount>
    {
        new AccountDto
        {
            Id = Guid.NewGuid(),
            AccountName = "Cash",
            OfficialCode = "11000",
            AccountType = AccountType.Asset,
            // other properties...
        },
        new AccountDto
        {
            Id = Guid.NewGuid(),
            AccountName = "Accounts Receivable",
            OfficialCode = "12000",
            AccountType = AccountType.Asset,
            // other properties...
        }
    };

    // Act
    string csv = await _importExportService.ExportToCsvAsync(originalAccounts);
    var (importedAccounts, errors) = await _importExportService.ImportFromCsvAsync(csv, "Test User");

    // Assert
    Assert.That(errors, Is.Empty);
    Assert.That(importedAccounts.Count(), Is.EqualTo(originalAccounts.Count));
    
    // Check first account
    var firstOriginal = originalAccounts[0];
    var firstImported = importedAccounts.First();
    Assert.That(firstImported.AccountName, Is.EqualTo(firstOriginal.AccountName));
    Assert.That(firstImported.OfficialCode, Is.EqualTo(firstOriginal.OfficialCode));
    Assert.That(firstImported.AccountType, Is.EqualTo(firstOriginal.AccountType));
    
    // Check second account similarly...
}
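
And here is a sketch of one of the import-validation tests from the first category, asserting the exact error message the service returns for empty input:

[Test]
public async Task Import_EmptyContent_ReturnsError()
{
    // Act
    var (importedAccounts, errors) = await _importExportService.ImportFromCsvAsync(string.Empty, "Test User");

    // Assert
    Assert.That(importedAccounts, Is.Empty);
    Assert.That(errors, Does.Contain("CSV content is empty"));
}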

Integration with the Broader System

This service isn’t meant to be used in isolation. In a complete ERP system, you’d typically:

  1. Add a controller to expose these operations via API endpoints (a minimal sketch follows this list)
  2. Create UI components for file upload/download
  3. Implement progress reporting for larger imports
  4. Add transaction support to make imports atomic
  5. Include validation rules specific to your business domain
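
As a sketch of the first point, a minimal ASP.NET Core controller could look like the following (the route, the IAccountRepository dependency, and its GetAllAsync method are illustrative assumptions, not code from earlier in the series):

[ApiController]
[Route("api/accounts")]
public class AccountImportExportController : ControllerBase
{
    private readonly IAccountImportExportService _importExportService;

    public AccountImportExportController(IAccountImportExportService importExportService)
    {
        _importExportService = importExportService;
    }

    [HttpPost("import")]
    public async Task<IActionResult> Import(IFormFile file)
    {
        // Read the uploaded CSV into a string and delegate to the service
        using var reader = new StreamReader(file.OpenReadStream());
        string csvContent = await reader.ReadToEndAsync();

        var (importedAccounts, errors) = await _importExportService.ImportFromCsvAsync(
            csvContent, User.Identity?.Name ?? "anonymous");

        return Ok(new { Imported = importedAccounts.Count(), Errors = errors });
    }

    [HttpGet("export")]
    public async Task<IActionResult> Export([FromServices] IAccountRepository repository)
    {
        var accounts = await repository.GetAllAsync(); // hypothetical repository method
        string csv = await _importExportService.ExportToCsvAsync(accounts);
        return File(Encoding.UTF8.GetBytes(csv), "text/csv", "chart-of-accounts.csv");
    }
}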

Design Patterns and Best Practices

Our implementation exemplifies several important patterns:

  1. Interface Segregation – The service has a focused, cohesive purpose
  2. Dependency Injection – We inject the IAuditService rather than creating it
  3. Early Validation – We validate input before processing
  4. Detailed Error Reporting – We collect and return specific errors
  5. Defensive Programming – We handle edge cases and exceptions gracefully

Future Extensions

This pattern can be extended to other parts of your ERP system:

  1. Customer/Vendor Data – Import/export contact information
  2. Inventory Items – Handle product catalog updates
  3. Journal Entries – Process batch financial transactions
  4. Reports – Export financial data for external analysis

Conclusion

Data import/export capabilities are a critical component of any enterprise system. They bridge the gap between systems, facilitate migration, and support batch operations. By implementing these services with careful error handling and validation, we’ve added significant value to our ERP system.

In the next article, we’ll explore building financial reporting services to generate balance sheets, income statements, and other critical financial reports from our accounting data.

Stay tuned, and happy coding!


Exploring the Uno Platform: Handling Unsafe Code in Multi-Target Applications

This last weekend I wanted to do a technical experiment as I always do when I have some free time. I decided there was something new I needed to try and see if I could write about. The weekend turned out to be a beautiful surprise as I went back to test the Uno platform – a multi-OS, multi-target UI framework that generates mobile applications, desktop applications, web applications, and even Linux applications.

The idea of Uno is a beautiful concept, but for a long time, the tooling wasn’t quite there. I had made it work several times in the past, but after an update or something in Visual Studio, the setup would break and applications would become basically impossible to compile. That seems to no longer be the case!

Last weekend, I set up Uno on two different computers: my new Surface laptop with an ARM processor (which can sometimes be tricky for some tools) and my old MSI with an x64 processor. I was thrilled that the setup was effortless on both machines.

After the successful setup, I decided to download the entire Uno demo repository and start trying out the demos. However, for some reason, they didn’t compile. I eventually realized the problem was code generated at compile time that turned out to be unsafe. Here are my findings on how to handle that generated unsafe code.

AllowUnsafeBlocks Setting in Project File

I discovered that this setting was commented out in the Navigation.csproj file:

<!--<AllowUnsafeBlocks>true</AllowUnsafeBlocks>-->

When uncommented, this setting allows the use of unsafe code blocks in your .NET 8 Uno Platform project. To enable unsafe code, you need to remove the comment markers from this line in your project file.

Why It’s Needed

The <AllowUnsafeBlocks>true</AllowUnsafeBlocks> setting is required whenever you want to use “unsafe” code in C#. By default, C# is designed to be memory-safe, preventing direct memory manipulation that could lead to memory corruption, buffer overflows, or security vulnerabilities. When you add this setting to your project file, you’re explicitly telling the compiler to allow portions of code marked with the unsafe keyword.

Unsafe code lets you work with pointers and perform direct memory operations, which can be useful for:

  • Performance-critical operations
  • Interoperability with native code
  • Direct memory manipulation

What Makes Code “Unsafe”

Code is considered “unsafe” when it bypasses .NET’s memory safety guarantees. Specifically, unsafe code includes:

  1. Pointer operations: Using the * and -> operators with memory addresses
  2. The fixed statement: Pinning managed objects in memory so their addresses don’t change during garbage collection
  3. The sizeof operator: Getting the size of a type in bytes
  4. The stackalloc keyword: Allocating memory on the stack instead of the heap

Example of Unsafe Code

Here’s an example of unsafe code that might be generated:

unsafe
{
    int[] numbers = new int[] { 10, 20, 30, 40, 50 };
    
    // UNSAFE: Pinning an array in memory and getting direct pointer
    fixed (int* pNumbers = numbers)
    {
        // UNSAFE: Pointer declaration and manipulation
        int* p = pNumbers;
        
        // UNSAFE: Dereferencing pointers to modify memory directly
        *p = *p + 5;
        *(p + 1) = *(p + 1) + 5;
    }
}
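
The example above covers pointers and fixed blocks; for completeness, here is a small illustration of the other two constructs from the list, stackalloc and sizeof (my own example, not code generated by Uno):

unsafe
{
    // UNSAFE: stackalloc allocates a small buffer on the stack instead of the heap
    int* buffer = stackalloc int[4];

    for (int i = 0; i < 4; i++)
    {
        // sizeof(int) is a constant; sizeof on arbitrary structs requires an unsafe context
        buffer[i] = i * sizeof(int);
    }
}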

Why Use Unsafe Code?

There are several legitimate reasons to use unsafe code:

  1. Performance optimization: For extremely performance-critical sections where you need to eliminate overhead from bounds checking or other safety features.
  2. Interoperability: When interfacing with native libraries or system APIs that require pointers.
  3. Low-level operations: For systems programming tasks that require direct memory manipulation, like implementing custom memory managers.
  4. Hardware access: When working directly with device drivers or memory-mapped hardware.
  5. Algorithms requiring pointer arithmetic: Some specialized algorithms are most efficiently implemented using pointer operations.

Risks and Considerations

Using unsafe code comes with significant responsibilities:

  • You bypass the runtime’s safety checks, so errors can cause application crashes or security vulnerabilities
  • Memory leaks are possible if you allocate unmanaged memory and don’t free it properly
  • Your code becomes less portable across different .NET implementations
  • Debugging unsafe code is more challenging

In general, you should only use unsafe code when absolutely necessary and isolate it in small, well-tested sections of your application.

In conclusion, I’m happy to see that the Uno platform has matured significantly. While there are still some challenges like handling unsafe generated code, the setup process has become much more reliable. If you’re looking to develop truly cross-platform applications with a single codebase, Uno is worth exploring – just remember to uncomment that AllowUnsafeBlocks setting if you run into compilation issues!

State Machines and Wizard Components: A Clean Implementation Approach

This past week, I have been working on a prototype for a wizard component. As you might know, in computer interfaces, wizard components (or multi-step forms) allow users to navigate through a finite number of steps or pages until they reach the end. Wizards are particularly useful because they don’t overwhelm users with too many choices at once, effectively minimizing the number of decisions a user needs to make at any specific moment.

The current prototype is created using XAF from DevExpress. If you follow this blog, you probably know that I’m a DevExpress MVP, and I wanted to use their tools to create this prototype.

I’ve built wizard components before, but mostly in a rush. Those previous implementations had the wizard logic hardcoded directly inside the UI components, with no separation between the UI and the underlying logic. While they worked, they were quite messy. This time, I wanted to take a more structured approach to creating a wizard component, so here are a few of my findings. Most of this might seem obvious, but sometimes it’s hard to see the forest for the trees when you’re sitting in front of the computer writing code.

Understanding the Core Concept: State Machines

To create an effective wizard component, you need to understand several underlying concepts. The idea of a wizard is actually rooted in system theory and computer science—it’s essentially an implementation of what’s called a state machine or finite state machine.

Theory of a State Machine

A state machine is the same as a finite state machine (FSM). Both terms refer to a computational model that describes a system existing in one of a finite number of states at any given time.

A state machine (or FSM) consists of:

  • States: Distinct conditions the system can be in
  • Transitions: Rules for moving between states
  • Events/Inputs: Triggers that cause transitions
  • Actions: Operations performed when entering/exiting states or during transitions

The term “finite” emphasizes that there’s a limited, countable number of possible states. This finite nature is crucial as it makes the system predictable and analyzable.

State machines come in several variants:

  • Deterministic FSMs (one transition per input)
  • Non-deterministic FSMs (multiple possible transitions per input)
  • Mealy machines (outputs depend on state and input)
  • Moore machines (outputs depend only on state)

They’re widely used in software development, hardware design, linguistics, and many other fields because they make complex behavior easier to visualize, implement, and debug. Common examples include traffic lights, UI workflows, network protocols, and parsers.

In practical usage, when someone refers to a “state machine,” they’re almost always talking about a finite state machine.

Implementing a Wizard State Machine

Here’s an implementation of a wizard state machine that separates the logic from the UI:

public class WizardStateMachineBase
{
    readonly List<WizardPage> _pages;
    int _currentIndex;

    public WizardStateMachineBase(IEnumerable<WizardPage> pages)
    {
        _pages = pages.OrderBy(p => p.Index).ToList();
        _currentIndex = 0;
    }

    public event EventHandler<StateTransitionEventArgs> StateTransition;

    public WizardPage CurrentPage => _pages[_currentIndex];

    public virtual bool MoveNext()
    {
        if (_currentIndex < _pages.Count - 1)
        {
            var args = new StateTransitionEventArgs(CurrentPage, _pages[_currentIndex + 1]);
            OnStateTransition(args);

            if (!args.Cancel)
            {
                _currentIndex++;
                return true;
            }
        }
        return false;
    }

    public virtual bool MovePrevious()
    {
        if (_currentIndex > 0)
        {
            var args = new StateTransitionEventArgs(CurrentPage, _pages[_currentIndex - 1]);
            OnStateTransition(args);

            if (!args.Cancel)
            {
                _currentIndex--;
                return true;
            }
        }
        return false;
    }

    protected virtual void OnStateTransition(StateTransitionEventArgs e)
    {
        StateTransition?.Invoke(this, e);
    }
}

public class StateTransitionEventArgs : EventArgs
{
    public WizardPage CurrentPage { get; }
    public WizardPage NextPage { get; }
    public bool Cancel { get; set; }

    public StateTransitionEventArgs(WizardPage currentPage, WizardPage nextPage)
    {
        CurrentPage = currentPage;
        NextPage = nextPage;
        Cancel = false;
    }
}

public class WizardPage
{
    public int Index { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public bool IsRequired { get; set; } = true;
    public bool IsCompleted { get; set; }
    
    // Additional properties specific to your wizard implementation
    public object Content { get; set; }
    
    public WizardPage(int index, string title)
    {
        Index = index;
        Title = title;
    }
    
    public virtual bool Validate()
    {
        // Default implementation assumes page is valid
        // Override this method in derived classes to provide specific validation logic
        return true;
    }
}

Benefits of This Approach

As you can see, by defining a state machine, you significantly narrow down the implementation possibilities. You solve the problem of “too many parts to consider” – questions like “How do I start?”, “How do I control the state?”, “Should the state be in the UI or a separate class?”, and so on. These problems can become really complicated, especially if you don’t centralize the state control.

This simple implementation of a wizard state machine shows how to centralize control of the component’s state. By separating the state management from the UI components, we create a cleaner, more maintainable architecture.

The WizardStateMachineBase class manages the collection of pages and handles navigation between them, while the StateTransitionEventArgs class provides a mechanism to cancel transitions if needed (for example, if validation fails). The newly added WizardPage class encapsulates all the information needed for each step in the wizard.
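
To make that concrete, here is a minimal usage sketch (the page titles are made up for illustration):

var wizard = new WizardStateMachineBase(new[]
{
    new WizardPage(0, "Customer Details"),
    new WizardPage(1, "Payment Information"),
    new WizardPage(2, "Confirmation")
});

// Cancel the transition when the current page fails its own validation
wizard.StateTransition += (sender, e) =>
{
    if (!e.CurrentPage.Validate())
    {
        e.Cancel = true;
    }
};

if (wizard.MoveNext())
{
    Console.WriteLine($"Now on: {wizard.CurrentPage.Title}");
}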

What’s Next?

The next step will be to control how the visual components react to the state of the machine – essentially connecting our state machine to the UI layer. This will include handling the display of the current page content, updating navigation buttons (previous/next/finish), and possibly showing progress indicators. I’ll cover this UI integration in my next post.

By following this pattern, you can create wizard interfaces that are not only user-friendly but also maintainable and extensible from a development perspective.

Source Code

egarim/WizardStateMachineTest


Hard to Kill: Why Auto-Increment Primary Keys Can Make Data Sync Die Harder

Working with the SyncFramework, I’ve noticed a recurring pattern when discussing schema design with customers. One crucial question that often surprises them is about their choice of primary keys: “Are you using auto-incremental integers or unique identifiers (like GUIDs)?”

Approximately 90% of users rely on auto-incremental integer primary keys. While this seems like a straightforward choice, it can create significant challenges for data synchronization. Let’s dive deep into how different database engines handle auto-increment values and why this matters for synchronization scenarios.

Database Implementation Deep Dive

SQL Server

SQL Server uses the IDENTITY property, storing current values in system tables (sys.identity_columns) and caching them in memory for performance. During restarts, it reads the last used value from these system tables. The values are managed as 8-byte numbers internally, with new ranges allocated when the cache is exhausted.

MySQL

MySQL’s InnoDB engine maintains auto-increment counters in memory and persists them to the system tablespace or table’s .frm file. After a restart, it scans the table to find the maximum used value. Each table has its own counter stored in the metadata.

PostgreSQL

PostgreSQL takes a different approach, using separate sequence objects stored in the pg_class catalog. These sequences maintain their own relation files containing crucial metadata like last value, increment, and min/max values. The sequence data is periodically checkpointed to disk for durability.

Oracle

Oracle traditionally uses sequences and triggers, with modern versions (12c+) supporting identity columns. The sequence information is stored in the SEQ$ system table, tracking the last number used, cache size, and increment values.

The Synchronization Challenge

This diversity in implementation creates several challenges for data synchronization:

  1. Unpredictable Sequence Generation: Even within the same database engine, gaps can occur due to rolled-back transactions or server restarts.
  2. Infrastructure Dependencies: The mechanisms for generating next values are deeply embedded within each database engine and aren’t easily accessible to frameworks like Entity Framework or XPO.
  3. Cross-Database Complexity: When synchronizing across different database instances, coordinating auto-increment values becomes even more complex.

The GUID Alternative

Using GUIDs (Globally Unique Identifiers) as primary keys offers a solution to these synchronization challenges. While GUIDs come with their own set of considerations, they provide guaranteed uniqueness across distributed systems without requiring centralized coordination.

Traditional GUID Concerns

  • Index fragmentation
  • Storage size
  • Performance impact

Modern Solutions

These concerns have been addressed through:

  • Sequential GUID generation techniques (see the example after this list)
  • Improved indexing in modern databases
  • Optimizations in .NET 9
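
For example, .NET 9 adds Guid.CreateVersion7(), which produces time-ordered (UUID v7) values, so new rows land in roughly sequential order in the clustered index while remaining globally unique across replicas (the Invoice class below is just an illustration):

public class Invoice
{
    // Time-ordered GUID: sortable by creation time, unique without central coordination
    public Guid Id { get; set; } = Guid.CreateVersion7();

    public string Number { get; set; } = string.Empty;
}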

Recommendations

When designing systems that require data synchronization:

  1. Consider using GUIDs instead of auto-increment integers for primary keys
  2. Evaluate sequential GUID generation for better performance
  3. Understand that auto-increment values, while simple, can complicate synchronization scenarios
  4. Plan for the infrastructure needed to maintain consistent primary key generation across your distributed system

Conclusion

The choice of primary key strategy significantly impacts your system’s ability to handle data synchronization effectively. While auto-increment integers might seem simpler at first, understanding their implementation details across different databases reveals why GUIDs often provide a more robust solution for distributed systems.

Remember: Data synchronization is not a trivial problem, and your primary key strategy plays a crucial role in its success. Take the time to evaluate your requirements and choose the appropriate approach for your specific use case.

Till next time, happy delta encoding.

 
SyncFramework for XPO: Updated for .NET 8 & 9 and DevExpress 24.2.3!

SyncFramework for XPO is a specialized implementation of our delta encoding synchronization library, designed specifically for DevExpress XPO users. It enables efficient data synchronization by tracking and transmitting only the changes between data versions, optimizing both bandwidth usage and processing time.

What’s New

  • Base target framework updated to .NET 8.0
  • Added compatibility with .NET 9.0
  • Updated DevExpress XPO dependencies to 24.2.3
  • Continued support for delta encoding synchronization
  • Various performance improvements and bug fixes

Framework Compatibility

  • Primary Target: .NET 8.0
  • Additional Support: .NET 9.0

Our XPO implementation continues to serve the DevExpress community.

Key Features

  • Seamless integration with DevExpress XPO
  • Efficient delta-based synchronization
  • Support for multiple database providers
  • Cross-platform compatibility
  • Easy integration with existing XPO and XAF applications

As always, if you own a license, you can compile the source code yourself from our GitHub repository. The framework maintains its commitment to providing reliable data synchronization for XPO applications.

Happy Delta Encoding!

 

SyncFramework Update: Now Supporting .NET 9 and EF Core 9!

SyncFramework is a C# library that simplifies data synchronization using delta encoding technology. Instead of transferring entire datasets, it efficiently synchronizes by tracking and transmitting only the changes between data versions, significantly reducing bandwidth and processing overhead.

What’s New

  • All packages now target .NET 9
  • BIT.Data.Sync packages updated to support the latest framework
  • Entity Framework Core packages upgraded to EF Core 9
  • Various minor fixes and improvements

Available Implementations

  • SyncFramework for XPO: For DevExpress XPO users
  • SyncFramework for Entity Framework Core: For EF Core users

Package Statistics

Our packages have been serving the community well, with steady adoption:

  • BIT.Data.Sync: 2,142 downloads
  • BIT.Data.Sync.AspNetCore: 1,064 downloads
  • BIT.Data.Sync.AspNetCore.Xpo: 521 downloads
  • BIT.Data.Sync.EfCore: 1,691 downloads
  • BIT.Data.Sync.EfCore.Npgsql: 1,120 downloads
  • BIT.Data.Sync.EfCore.Pomelo.MySql: 1,172 downloads
  • BIT.Data.Sync.EfCore.Sqlite: 887 downloads
  • BIT.Data.Sync.EfCore.SqlServer: 982 downloads

Resources

NuGet Packages
Source Code

As always, you can compile the source code yourself from our GitHub repository. The framework continues to provide reliable data synchronization across different platforms and databases.

Happy Delta Encoding!