Day 4 (the missing day): Building Data Import/Export Services for Your ERP System

Welcome back to our ERP development series! In previous days, we’ve covered the foundational architecture, database design, and core entity structures for our accounting system. Today, we’re tackling an essential but often overlooked aspect of any enterprise software: data import and export capabilities.

Why is this important? Because no enterprise system exists in isolation. Companies need to move data between systems, migrate from legacy software, or simply handle batch data operations. In this article, we’ll build robust import/export services for the Chart of Accounts, demonstrating principles you can apply to any part of your ERP system.

The Importance of Data Exchange

Before diving into the code, let’s understand why dedicated import/export functionality matters:

  1. Data Migration – When companies adopt your ERP, they need to transfer existing data
  2. System Integration – ERPs need to exchange data with other business systems
  3. Batch Processing – Accountants often prepare data in spreadsheets before importing
  4. Backup & Transfer – Provides a simple way to backup or transfer configurations
  5. User Familiarity – Many users are comfortable working with CSV files

CSV (Comma-Separated Values) is our format of choice because it’s universally supported and easily edited in spreadsheet applications like Excel, which most business users are familiar with.

Our Implementation Approach

For our Chart of Accounts module, we’ll create:

  1. A service interface defining import/export operations
  2. A concrete implementation handling CSV parsing/generation
  3. Unit tests verifying all functionality

Our goal is to maintain clean separation of concerns, robust error handling, and clear validation rules.

Defining the Interface

First, we define a clear contract for our import/export service:

/// <summary>
/// Interface for chart of accounts import/export operations
/// </summary>
public interface IAccountImportExportService
{
    /// <summary>
    /// Imports accounts from a CSV file
    /// </summary>
    /// <param name="csvContent">Content of the CSV file as a string</param>
    /// <param name="userName">User performing the operation</param>
    /// <returns>Collection of imported accounts and any validation errors</returns>
    Task<(IEnumerable<IAccount> ImportedAccounts, IEnumerable<string> Errors)> ImportFromCsvAsync(string csvContent, string userName);

    /// <summary>
    /// Exports accounts to a CSV format
    /// </summary>
    /// <param name="accounts">Accounts to export</param>
    /// <returns>CSV content as a string</returns>
    Task<string> ExportToCsvAsync(IEnumerable<IAccount> accounts);
}

Notice how we use C# tuples to return both the imported accounts and any validation errors from the import operation. This gives callers full insight into the operation’s results.
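
A caller might consume the result like this (the service instance and user name here are placeholders, not code from the series):

// Hypothetical caller; the service is typically obtained via dependency injection
var (accounts, errors) = await importExportService.ImportFromCsvAsync(csvContent, "jane.doe");

if (errors.Any())
{
    // Partial success is possible: valid rows import even when others fail
    foreach (var error in errors)
    {
        Console.WriteLine(error);
    }
}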

Implementing CSV Import

The import method is the more complex of the two, requiring:

  1. Parsing and validating the CSV structure
  2. Converting CSV data to domain objects
  3. Validating the created objects
  4. Reporting any errors along the way

Here’s our implementation approach:

public async Task<(IEnumerable<IAccount> ImportedAccounts, IEnumerable<string> Errors)> ImportFromCsvAsync(string csvContent, string userName)
{
    List<AccountDto> importedAccounts = new List<AccountDto>();
    List<string> errors = new List<string>();

    if (string.IsNullOrEmpty(csvContent))
    {
        errors.Add("CSV content is empty");
        return (importedAccounts, errors);
    }

    try
    {
        // Split the CSV into lines
        string[] lines = csvContent.Split(new[] { "\r\n", "\r", "\n" }, StringSplitOptions.RemoveEmptyEntries);
        
        if (lines.Length <= 1)
        {
            errors.Add("CSV file contains no data rows");
            return (importedAccounts, errors);
        }

        // Assume first line is header
        string[] headers = ParseCsvLine(lines[0]);
        
        // Validate headers
        if (!ValidateHeaders(headers, errors))
        {
            return (importedAccounts, errors);
        }

        // Process data rows
        for (int i = 1; i < lines.Length; i++)
        {
            string[] fields = ParseCsvLine(lines[i]);
            
            if (fields.Length != headers.Length)
            {
                errors.Add($"Line {i + 1}: Column count mismatch. Expected {headers.Length}, got {fields.Length}");
                continue;
            }

            var account = CreateAccountFromCsvFields(headers, fields);
            
            // Validate account
            if (!_accountValidator.ValidateAccount(account))
            {
                errors.Add($"Line {i + 1}: Account validation failed for account {account.AccountName}");
                continue;
            }

            // Set audit information
            _auditService.SetCreationAudit(account, userName);
            
            importedAccounts.Add(account);
        }

        return (importedAccounts, errors);
    }
    catch (Exception ex)
    {
        errors.Add($"Error importing CSV: {ex.Message}");
        return (importedAccounts, errors);
    }
}

Key aspects of this implementation:

  1. Early validation – We quickly detect and report basic issues like empty input
  2. Row-by-row processing – Each line is processed independently, allowing partial success
  3. Detailed error reporting – We collect specific errors with line numbers
  4. Domain validation – We apply business rules from AccountValidator
  5. Audit trail – We set audit fields for each imported account
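
Before looking at ParseCsvLine, note the two other helpers the import method calls. Here is a minimal sketch of ValidateHeaders; the required column names are assumptions for illustration:

// Sketch only: the real required columns depend on your CSV schema
private static readonly string[] RequiredHeaders = { "AccountName", "OfficialCode", "AccountType" };

private bool ValidateHeaders(string[] headers, List<string> errors)
{
    foreach (var required in RequiredHeaders)
    {
        if (!headers.Contains(required, StringComparer.OrdinalIgnoreCase))
        {
            errors.Add($"Missing required column: {required}");
        }
    }

    return errors.Count == 0;
}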

The ParseCsvLine method handles the complexities of CSV parsing, including quoted fields that may contain commas and escaped (doubled) quotes:

private string[] ParseCsvLine(string line)
{
    List<string> fields = new List<string>();
    StringBuilder current = new StringBuilder();
    bool inQuotes = false;

    for (int i = 0; i < line.Length; i++)
    {
        char c = line[i];

        if (c == '"')
        {
            // A doubled quote inside a quoted field is an escaped quote
            if (inQuotes && i + 1 < line.Length && line[i + 1] == '"')
            {
                current.Append('"');
                i++;
            }
            else
            {
                inQuotes = !inQuotes;
            }
        }
        else if (c == ',' && !inQuotes)
        {
            fields.Add(current.ToString().Trim());
            current.Clear();
        }
        else
        {
            current.Append(c);
        }
    }

    // Add the last field
    fields.Add(current.ToString().Trim());

    return fields.ToArray();
}
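
The remaining helper, CreateAccountFromCsvFields, maps a row onto a DTO by header name. A minimal sketch (the Guid generation and enum parsing are assumptions; adapt the mapping to your actual columns):

private AccountDto CreateAccountFromCsvFields(string[] headers, string[] fields)
{
    // Build a header -> value lookup so column order doesn't matter
    var row = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
    for (int i = 0; i < headers.Length; i++)
    {
        row[headers[i]] = fields[i];
    }

    return new AccountDto
    {
        Id = Guid.NewGuid(),
        AccountName = row["AccountName"],
        OfficialCode = row["OfficialCode"],
        AccountType = Enum.Parse<AccountType>(row["AccountType"], ignoreCase: true)
        // Map any remaining columns the same way
    };
}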

Implementing CSV Export

The export method is simpler, converting domain objects to CSV format:

public Task<string> ExportToCsvAsync(IEnumerable<IAccount> accounts)
{
    if (accounts == null || !accounts.Any())
    {
        return Task.FromResult(GetCsvHeader());
    }

    StringBuilder csvBuilder = new StringBuilder();
    
    // Add header
    csvBuilder.AppendLine(GetCsvHeader());
    
    // Add data rows
    foreach (var account in accounts)
    {
        csvBuilder.AppendLine(GetCsvRow(account));
    }
    
    return Task.FromResult(csvBuilder.ToString());
}

We take special care to handle edge cases like null or empty collections, making the API robust against improper usage.
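
The GetCsvHeader and GetCsvRow helpers aren't shown above. A minimal sketch, assuming the same columns used on import and RFC 4180-style quoting:

private static string GetCsvHeader()
{
    return "AccountName,OfficialCode,AccountType";
}

private static string GetCsvRow(IAccount account)
{
    return string.Join(",",
        EscapeCsvField(account.AccountName),
        EscapeCsvField(account.OfficialCode),
        EscapeCsvField(account.AccountType.ToString()));
}

private static string EscapeCsvField(string value)
{
    if (string.IsNullOrEmpty(value))
    {
        return string.Empty;
    }

    // Quote the field if it contains a comma, quote, or newline; double any embedded quotes
    if (value.Contains(',') || value.Contains('"') || value.Contains('\n'))
    {
        return $"\"{value.Replace("\"", "\"\"")}\"";
    }

    return value;
}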

Testing the Implementation

Our test suite verifies both the happy paths and various error conditions:

  1. Import validation – Tests for empty content, missing headers, etc.
  2. Export formatting – Tests for proper CSV generation, handling of special characters
  3. Round-trip integrity – Tests exporting and re-importing preserves data integrity

For example, here’s a round-trip test to verify data integrity:

[Test]
public async Task RoundTrip_ExportThenImport_PreservesAccounts()
{
    // Arrange
    var originalAccounts = new List<IAccount>
    {
        new AccountDto
        {
            Id = Guid.NewGuid(),
            AccountName = "Cash",
            OfficialCode = "11000",
            AccountType = AccountType.Asset,
            // other properties...
        },
        new AccountDto
        {
            Id = Guid.NewGuid(),
            AccountName = "Accounts Receivable",
            OfficialCode = "12000",
            AccountType = AccountType.Asset,
            // other properties...
        }
    };

    // Act
    string csv = await _importExportService.ExportToCsvAsync(originalAccounts);
    var (importedAccounts, errors) = await _importExportService.ImportFromCsvAsync(csv, "Test User");

    // Assert
    Assert.That(errors, Is.Empty);
    Assert.That(importedAccounts.Count(), Is.EqualTo(originalAccounts.Count));
    
    // Check first account
    var firstOriginal = originalAccounts[0];
    var firstImported = importedAccounts.First();
    Assert.That(firstImported.AccountName, Is.EqualTo(firstOriginal.AccountName));
    Assert.That(firstImported.OfficialCode, Is.EqualTo(firstOriginal.OfficialCode));
    Assert.That(firstImported.AccountType, Is.EqualTo(firstOriginal.AccountType));
    
    // Check second account similarly...
}
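
Special-character handling deserves its own test. A sketch, assuming the export quotes fields that contain commas:

[Test]
public async Task Export_FieldWithComma_IsQuoted()
{
    var accounts = new List<IAccount>
    {
        new AccountDto
        {
            Id = Guid.NewGuid(),
            AccountName = "Property, Plant & Equipment",
            OfficialCode = "15000",
            AccountType = AccountType.Asset
        }
    };

    string csv = await _importExportService.ExportToCsvAsync(accounts);

    // The comma-bearing name must be quoted so it survives a re-import
    Assert.That(csv, Does.Contain("\"Property, Plant & Equipment\""));
}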

Integration with the Broader System

This service isn’t meant to be used in isolation. In a complete ERP system, you’d typically:

  1. Add a controller to expose these operations via API endpoints (a minimal sketch follows this list)
  2. Create UI components for file upload/download
  3. Implement progress reporting for larger imports
  4. Add transaction support to make imports atomic
  5. Include validation rules specific to your business domain
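
As an illustration of the first point, here is a minimal ASP.NET Core controller sketch. The routes, the file handling, and IAccountRepository are assumptions for illustration, not part of the series' actual code:

[ApiController]
[Route("api/accounts")]
public class AccountImportExportController : ControllerBase
{
    private readonly IAccountImportExportService _importExportService;

    public AccountImportExportController(IAccountImportExportService importExportService)
    {
        _importExportService = importExportService;
    }

    [HttpPost("import")]
    public async Task<IActionResult> Import(IFormFile file)
    {
        using var reader = new StreamReader(file.OpenReadStream());
        string csvContent = await reader.ReadToEndAsync();

        var (accounts, errors) = await _importExportService.ImportFromCsvAsync(
            csvContent, User.Identity?.Name ?? "anonymous");

        return Ok(new { Imported = accounts.Count(), Errors = errors });
    }

    [HttpGet("export")]
    public async Task<IActionResult> Export([FromServices] IAccountRepository repository)
    {
        // IAccountRepository is hypothetical; fetch accounts however your system stores them
        var accounts = await repository.GetAllAsync();
        string csv = await _importExportService.ExportToCsvAsync(accounts);
        return File(Encoding.UTF8.GetBytes(csv), "text/csv", "chart-of-accounts.csv");
    }
}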

Design Patterns and Best Practices

Our implementation exemplifies several important patterns:

  1. Interface Segregation – The service has a focused, cohesive purpose
  2. Dependency Injection – We inject the IAuditService rather than creating it
  3. Early Validation – We validate input before processing
  4. Detailed Error Reporting – We collect and return specific errors
  5. Defensive Programming – We handle edge cases and exceptions gracefully

Future Extensions

This pattern can be extended to other parts of your ERP system:

  1. Customer/Vendor Data – Import/export contact information
  2. Inventory Items – Handle product catalog updates
  3. Journal Entries – Process batch financial transactions
  4. Reports – Export financial data for external analysis

Conclusion

Data import/export capabilities are a critical component of any enterprise system. They bridge the gap between systems, facilitate migration, and support batch operations. By implementing these services with careful error handling and validation, we’ve added significant value to our ERP system.

In the next article, we’ll explore building financial reporting services to generate balance sheets, income statements, and other critical financial reports from our accounting data.

Stay tuned, and happy coding!


Building an Agnostic ERP System: Day 1 – Core Architecture

After returning home from an extended journey through the United States, Greece, and Turkey, I found myself contemplating a common challenge over my morning coffee: the recurring problems in system design and ORM (Object-Relational Mapping) implementation that developers face again and again.

To address these challenges, I’ve decided to tackle a system that most professionals are familiar with—an ERP (Enterprise Resource Planning) system—and develop a design that achieves three critical goals:

  1. Performance Speed: The system must be fast and responsive
  2. Technology Agnosticism: The architecture should be platform-independent
  3. Consistent Performance: The system should maintain its performance over time

Design Decisions

To achieve these goals, I’m implementing the following key design decisions:

  • Utilizing the SOLID design principles to ensure maintainability and extensibility
  • Building with C# and .NET 9 to leverage modern language features
  • Creating an agnostic architecture that can be reimplemented in various technologies like DevExpress XAF or Entity Framework

Day 1: Foundational Structure

In this first article, I’ll propose an initial folder structure that may evolve as the system develops. I’ll also describe a set of base classes and interfaces that will form the foundation of our system.

You can find all the source code for this solution in the repository linked at the end of this article.

The Core Layer

Today we’re starting with the core layer—a set of interfaces that most entities will implement. The system design follows SOLID principles to ensure it can be easily reimplemented using different technologies.

Base Interfaces

Here’s the foundation of our interface hierarchy (sketched in code after the list):

  • IEntity: Core entity interface defining the Id property
  • IAuditable: Interface for entities with audit information
  • IArchivable: Interface for entities supporting soft delete
  • IVersionable: Interface for entities with effective dating
  • ITimeTrackable: Interface for entities requiring time tracking
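
A minimal sketch of what these contracts might look like; member names beyond Id are assumptions based on the descriptions above:

public interface IEntity
{
    Guid Id { get; set; }
}

public interface IAuditable
{
    DateTime CreatedAt { get; set; }
    string CreatedBy { get; set; }
    DateTime? UpdatedAt { get; set; }
    string UpdatedBy { get; set; }
}

public interface IArchivable
{
    // Soft delete: the row stays in the database but is hidden from normal queries
    bool IsArchived { get; set; }
    DateTime? ArchivedAt { get; set; }
}

public interface IVersionable
{
    // Effective dating: when this version of the record applies
    DateTime EffectiveFrom { get; set; }
    DateTime? EffectiveTo { get; set; }
}

public interface ITimeTrackable
{
    TimeSpan TimeSpent { get; set; }
}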

Service Interfaces

To complement our entity interfaces, we’re also defining service interfaces:

  • IAuditService: Interface for audit-related operations
  • IArchiveService: Interface for archiving operations
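
A sketch of the service contracts; SetCreationAudit mirrors its usage in the import service shown earlier, while the remaining members are assumptions:

public interface IAuditService
{
    // Stamps creation audit fields on a new entity (as used during CSV import)
    void SetCreationAudit(IAuditable entity, string userName);
    void SetUpdateAudit(IAuditable entity, string userName);
}

public interface IArchiveService
{
    void Archive(IArchivable entity);
    void Restore(IArchivable entity);
}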

Repo

egarim/SivarErp: Open Source ERP

Next Steps

In upcoming articles, I’ll expand on this foundation by implementing concrete classes, developing the domain layer, and demonstrating how this architecture can be applied to specific ERP modules.

The goal is to create a reference architecture that addresses the recurring challenges in system design while remaining adaptable to different technological implementations.

Stay tuned for the next installment where we’ll dive deeper into the implementation details of our core interfaces.

Guide to Blazor Component Design and Implementation for backend devs

Over time, I transitioned to using the first versions of my beloved framework, XAF. As you might know, XAF generates a polished and functional UI out of the box. Using XAF made me more of a backend developer since most of the development work wasn’t visual—especially in the early versions, where the model designer was rudimentary (it’s much better now).

Eventually, I moved on to developing .NET libraries and NuGet packages, diving deep into SOLID design principles. Fun fact: I actually learned about SOLID from DevExpress TV. Yes, there was a time before YouTube when DevExpress posted videos on technical tasks!

Nowadays, I feel confident creating and publishing my own libraries as NuGet packages. However, my “old monster” was still lurking in the shadows: UI components. I finally decided it was time to conquer it, but first, I needed to choose a platform. Here were my options:

  1. Windows Forms: A robust and mature platform but limited to desktop applications.
  2. WPF: A great option with some excellent UI frameworks that I love, but it still feels a bit “Windows Forms-ish” to me.
  3. Xamarin/Maui: I’m a big fan of Xamarin Forms and Xamarin/Maui XAML, but they’re primarily focused on device-specific applications.
  4. Blazor: This was the clear winner because it allows me to create desktop applications using Electron, embed components into Windows Forms, or even integrate with MAUI.

Recently, I’ve been helping my brother with a project in Blazor. (He’s not a programmer, but I am.) This gave me an opportunity to experiment with design patterns to get the most out of my components, which started as plain HTML5 pages.

Without further ado, here are the key insights I’ve gained so far.

Building high-quality Blazor components requires attention to both the C# implementation and Razor markup patterns. This guide combines architectural best practices with practical implementation patterns to create robust, reusable components.

1. Component Architecture and Organization

Parameter Organization

Start by organizing parameters into logical groups for better maintainability:

public class CustomForm : ComponentBase
{
    // Layout Parameters
    [Parameter] public string Width { get; set; }
    [Parameter] public string Margin { get; set; }
    [Parameter] public string Padding { get; set; }
    
    // Validation Parameters
    [Parameter] public bool EnableValidation { get; set; }
    [Parameter] public string ValidationMessage { get; set; }
    
    // Event Callbacks
    [Parameter] public EventCallback<bool> OnValidationComplete { get; set; }
    [Parameter] public EventCallback<string> OnSubmit { get; set; }

    // Content rendered inside the form (referenced as @ChildContent in the template below)
    [Parameter] public RenderFragment ChildContent { get; set; }

    // Invoked by the template's @onsubmit; the payload passed to OnSubmit is application-specific
    protected Task HandleSubmit() => OnSubmit.InvokeAsync(string.Empty);
}

Corresponding Razor Template

<div class="form-container" style="width: @Width; margin: @Margin; padding: @Padding">
    <form @onsubmit="HandleSubmit">
        @if (EnableValidation)
        {
            <div class="validation-message">
                @ValidationMessage
            </div>
        }
        @ChildContent
    </form>
</div>

2. Smart Default Values and Template Composition

Component Implementation

public class DataTable<T> : ComponentBase
{
    [Parameter] public int PageSize { get; set; } = 10;
    [Parameter] public bool ShowPagination { get; set; } = true;
    [Parameter] public string EmptyMessage { get; set; } = "No data available";
    [Parameter] public IEnumerable<T> Items { get; set; } = Array.Empty<T>();
    [Parameter] public RenderFragment HeaderTemplate { get; set; }
    [Parameter] public RenderFragment<T> RowTemplate { get; set; }
    [Parameter] public RenderFragment FooterTemplate { get; set; }
}

Razor Implementation

<div class="table-container">
    @if (HeaderTemplate != null)
    {
        <header class="table-header">
            @HeaderTemplate
        </header>
    }
    
    <div class="table-content">
        @if (!Items.Any())
        {
            <div class="empty-state">@EmptyMessage</div>
        }
        else
        {
            @foreach (var item in Items)
            {
                @RowTemplate(item)
            }
        }
    </div>
    
    @if (ShowPagination)
    {
        <div class="pagination">
            <!-- Pagination implementation -->
        </div>
    }
</div>
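
Usage might look like this (the Account type, the accounts variable, and the markup are placeholders):

<DataTable T="Account" Items="@accounts" PageSize="25">
    <HeaderTemplate>
        <h3>Chart of Accounts</h3>
    </HeaderTemplate>
    <RowTemplate Context="account">
        <div class="table-row">@account.AccountName</div>
    </RowTemplate>
</DataTable>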

3. Accessibility and Unique IDs

Component Implementation

public class FormField : ComponentBase
{
    private string fieldId = $"field-{Guid.NewGuid():N}";
    private string labelId = $"label-{Guid.NewGuid():N}";
    private string errorId = $"error-{Guid.NewGuid():N}";
    
    [Parameter] public string Label { get; set; }
    [Parameter] public string Error { get; set; }
    [Parameter] public bool Required { get; set; }
}

Razor Implementation

<div class="form-field">
    <label id="@labelId" for="@fieldId">
        @Label
        @if (Required)
        {
            <span class="required" aria-label="required">*</span>
        }
    </label>
    
    <input id="@fieldId" 
           aria-labelledby="@labelId"
           aria-describedby="@errorId"
           aria-required="@Required" />
    
    @if (!string.IsNullOrEmpty(Error))
    {
        <div id="@errorId" class="error-message" role="alert">
            @Error
        </div>
    }
</div>

4. Virtualization and Performance

Component Implementation

public class VirtualizedList<T> : ComponentBase
{
    [Parameter] public IEnumerable<T> Items { get; set; }
    [Parameter] public RenderFragment<T> ItemTemplate { get; set; }
    [Parameter] public int ItemHeight { get; set; } = 50;
    [Parameter] public Func<ItemsProviderRequest, ValueTask<ItemsProviderResult<T>>> ItemsProvider { get; set; }
}

Razor Implementation

<div class="virtualized-container" style="height: 500px; overflow-y: auto;">
    <Virtualize Items="@Items"
                ItemSize="@ItemHeight"
                ItemsProvider="@ItemsProvider"
                Context="item">
        <ItemContent>
            <div class="list-item" style="height: @(ItemHeight)px">
                @ItemTemplate(item)
            </div>
        </ItemContent>
        <Placeholder>
            <div class="loading-placeholder" style="height: @(ItemHeight)px">
                <div class="loading-animation"></div>
            </div>
        </Placeholder>
    </Virtualize>
</div>

Best Practices Summary

1. Parameter Organization

  • Group related parameters with clear comments
  • Provide meaningful default values
  • Use parameter validation where appropriate (see the sketch after this list)
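
For example, a component can fail fast on bad configuration in OnParametersSet. A minimal sketch using the DataTable's PageSize parameter:

protected override void OnParametersSet()
{
    // Fail fast on invalid configuration instead of rendering a broken table
    if (PageSize <= 0)
    {
        throw new ArgumentOutOfRangeException(nameof(PageSize), "PageSize must be positive.");
    }
}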

2. Template Composition

  • Use RenderFragment for customizable sections
  • Provide default templates when needed
  • Enable granular control over component appearance

3. Accessibility

  • Generate unique IDs for form elements
  • Include proper ARIA attributes
  • Support keyboard navigation

4. Performance

  • Implement virtualization for large datasets
  • Use loading states and placeholders
  • Optimize rendering with appropriate conditions

Conclusion

Building effective Blazor components requires attention to both the C# implementation and Razor markup. By following these patterns and practices, you can create components that are:

  • Highly reusable
  • Performant
  • Accessible
  • Easy to maintain
  • Flexible for different use cases

Remember to adapt these practices to your specific needs while maintaining clean component design principles.

Solid Nirvana: The Ephemeral State of SOLID Code

The Ephemeral State of SOLID Code: Capturing the Perfect Snapshot

In the world of software development, the SOLID principles are often upheld as the gold standard for designing maintainable and scalable code. These principles — Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion — form the bedrock of robust object-oriented design. However, achieving a state where code fully adheres to these principles is a fleeting moment, much like capturing a perfect snapshot in time.

What Does It Mean for Code to Be in a SOLID State?

A SOLID state in source code is a condition where the code perfectly aligns with all five SOLID principles. This means:

  • Single Responsibility Principle (SRP): Every class has one, and only one, reason to change.
  • Open/Closed Principle (OCP): Software entities should be open for extension but closed for modification.
  • Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types.
  • Interface Segregation Principle (ISP): No client should be forced to depend on methods it does not use.
  • Dependency Inversion Principle (DIP): Depend on abstractions, not concretions.

In this state, the codebase is a model of clarity, flexibility, and robustness. But this state is inherently transient.

The Moment of SOLID Perfection

The reality of software development is that code is in a constant state of flux. New features are added, bugs are fixed, and refactoring is a continuous process. During these periods of active development, maintaining perfect adherence to SOLID principles is challenging. The code may temporarily violate one or more principles as developers refactor or introduce new functionality.

The truly SOLID state can thus be seen as a snapshot — a moment frozen in time when the code perfectly adheres to all five principles. This moment typically occurs:

  • Post-Refactoring: After a significant refactoring effort, where the focus has been on aligning the code with SOLID principles.
  • Before Major Changes: Just before starting a new major feature or overhaul, the existing codebase might be in a perfect SOLID state.
  • Code Reviews: Following a rigorous code review process, where adherence to SOLID principles is explicitly checked and enforced.
  • Milestone Deliveries: Before delivering a major milestone or release, when the code is thoroughly tested and cleaned up.

The Nature of Active Development

Active development is a chaotic process. As new requirements emerge and priorities shift, developers might temporarily sacrifice adherence to SOLID principles for the sake of rapid progress or to meet deadlines. This is a natural part of the development cycle. The key is to recognize that while the code may deviate from these principles during active development, the goal is to continually steer it back towards a SOLID state.

The SOLID State as Nirvana

Achieving a perfect SOLID state can be likened to reaching nirvana — an ideal that is almost impossible to fully attain. Just as nirvana represents a state of ultimate peace and enlightenment, a perfectly SOLID codebase represents the pinnacle of software design. However, this state is incredibly difficult to reach and even harder to maintain. Therefore, it is more practical to view adherence to SOLID principles as a spectrum rather than a binary state.

Measuring SOLID Adherence

Instead of aiming for an elusive perfect state, it’s more pragmatic to measure adherence to SOLID principles using metrics. Tools and techniques can help quantify how well your code aligns with each principle, providing a percentage that reflects its current state. These metrics can include:

  • Class Responsibility: Assessing the number of responsibilities each class has to evaluate adherence to SRP.
  • Change Impact Analysis: Measuring the extent to which modifications to the code require changes in other parts of the system, reflecting adherence to OCP.
  • Subtype Tests: Ensuring subclasses can replace their base classes without altering the correctness of the program, in line with LSP.
  • Interface Utilization: Analyzing the usage of interfaces to ensure they are not overly broad, adhering to ISP.
  • Dependency Metrics: Evaluating dependencies between high-level and low-level modules, supporting DIP.

By regularly measuring these metrics, developers can maintain a clear view of how their code is evolving in relation to SOLID principles. This approach allows for continuous improvement and helps teams prioritize refactoring efforts where they are most needed.

Embracing the Snapshot

Understanding that a perfectly SOLID state is a temporary snapshot can help developers maintain a healthy perspective. It’s crucial to strive for SOLID principles as a guiding star but also to accept that deviations are part of the journey. Regular refactoring sessions, continuous integration practices, and diligent code reviews are essential practices to frequently bring the code back to a SOLID state.

Conclusion

In conclusion, a SOLID state of source code is a valuable but ephemeral achievement, akin to reaching nirvana in the realm of software development. It represents a moment of perfection in the ongoing evolution of a software project. By recognizing this, developers can better manage their expectations and maintain a balance between striving for perfection and the practical realities of software development. Embrace the snapshot of SOLID perfection when it occurs, but also understand that the true measure of a healthy codebase is its ability to evolve while frequently realigning with these timeless principles, using metrics and percentages to guide the way.