Extending Interfaces in the Sync Framework: Best Practices and Trade-offs

In modern software development, extending the functionality of a framework while maintaining its integrity and usability can be a complex task. One common scenario involves extending interfaces to add new events or methods. In this post, we’ll explore the impact of extending interfaces within the Sync Framework, specifically looking at extending the IDeltaStore and IDeltaProcessor interfaces to include SavingDelta and SavedDelta events, as well as ProcessingDelta and ProcessedDelta events. We’ll discuss the options available—extending existing interfaces versus adding new interfaces—and examine the side effects of each approach.

Background

The Sync Framework is designed to synchronize data across different data stores, ensuring consistency and integrity. The IDeltaStore interface typically handles delta storage operations, while the IDeltaProcessor interface manages delta (change) processing. To enhance the functionality, you might want to add events such as SavingDelta, SavedDelta, ProcessingDelta, and ProcessedDelta to these interfaces.

Extending Existing Interfaces

Extending existing interfaces involves directly adding new events or methods to the current interface definitions. Here’s an example:

public interface IDeltaStore {
    void SaveData(Data data);
    // New events
    event EventHandler<DeltaEventArgs> SavingDelta;
    event EventHandler<DeltaEventArgs> SavedDelta;
}

public interface IDeltaProcessor {
    void ProcessDelta(Delta delta);
    // New events
    event EventHandler<DeltaEventArgs> ProcessingDelta;
    event EventHandler<DeltaEventArgs> ProcessedDelta;
}

Pros of Extending Existing Interfaces

  • Simplicity: All the new functionality lives directly in the existing interfaces, keeping the overall design simple, with no extra types to create or discover.
  • Direct Integration: The new events are directly available in the existing interface, making them easy to use and understand within the current framework.

Cons of Extending Existing Interfaces

  • Breaks Existing Implementations: All existing classes implementing these interfaces must be updated to handle the new events, as the sketch after this list shows. This can lead to significant refactoring, especially in large codebases.
  • Violates SOLID Principles: Adding new responsibilities to existing interfaces can violate the Single Responsibility Principle (SRP) and Interface Segregation Principle (ISP), leading to bloated interfaces.
  • Potential for Bugs: The necessity to modify all implementing classes increases the risk of introducing bugs and inconsistencies.
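
To make the breaking change concrete, here is a minimal sketch of a hypothetical pre-existing implementation (FileDeltaStore is an invented name; Data and DeltaEventArgs come from the snippets above). Once the events are added to IDeltaStore, this class no longer compiles until it declares them too:

public class FileDeltaStore : IDeltaStore
{
    public void SaveData(Data data)
    {
        // ... persist the data to disk ...
    }

    // Members forced on every implementer by the extended interface,
    // even if this store never actually raises them:
    public event EventHandler<DeltaEventArgs> SavingDelta;
    public event EventHandler<DeltaEventArgs> SavedDelta;
}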

Adding New Interfaces

An alternative approach is to create new interfaces that extend the existing ones, encapsulating the new events. Here’s how you can do it:

public interface IDeltaStore {
    void SaveData(Data data);
}

public interface IDeltaStoreWithEvents : IDeltaStore {
    event EventHandler<DeltaEventArgs> SavingDelta;
    event EventHandler<DeltaEventArgs> SavedDelta;
}

public interface IDeltaProcessor {
    void ProcessDelta(Delta delta);
}

public interface IDeltaProcessorWithEvents : IDeltaProcessor {
    event EventHandler<DeltaEventArgs> ProcessingDelta;
    event EventHandler<DeltaEventArgs> ProcessedDelta;
}
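
For illustration, here is a minimal sketch of a class that opts into the richer contract while plain IDeltaStore consumers remain untouched (EventAwareDeltaStore is an invented name, and DeltaEventArgs is assumed to have a parameterless constructor):

public class EventAwareDeltaStore : IDeltaStoreWithEvents
{
    public event EventHandler<DeltaEventArgs> SavingDelta;
    public event EventHandler<DeltaEventArgs> SavedDelta;

    public void SaveData(Data data)
    {
        var args = new DeltaEventArgs();
        SavingDelta?.Invoke(this, args);  // raised before the write
        // ... persist the data ...
        SavedDelta?.Invoke(this, args);   // raised after the write
    }
}

Code that only needs IDeltaStore keeps asking for IDeltaStore; code that cares about the events asks for IDeltaStoreWithEvents instead.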

Pros of Adding New Interfaces

  • Adheres to SOLID Principles: This approach keeps the existing interfaces clean and focused, adhering to the SRP and ISP.
  • Backward Compatibility: Existing implementations remain functional without modification, ensuring backward compatibility.
  • Flexibility: New functionality can be selectively adopted by implementing the new interfaces where needed.

Cons of Adding New Interfaces

  • Complexity: Introducing new interfaces can increase the complexity of the codebase, as developers need to understand and manage multiple interfaces.
  • Redundancy: There can be redundancy in code, where some classes might need to implement both the original and new interfaces.
  • Learning Curve: Developers need to be aware of and understand the new interfaces, which might require additional documentation and training.

Conclusion

Deciding between extending existing interfaces and adding new ones depends on your specific context and priorities. Extending interfaces can simplify the design but at the cost of violating SOLID principles and potentially breaking existing code. On the other hand, adding new interfaces preserves existing functionality and adheres to best practices but can introduce additional complexity.

In general, if maintaining backward compatibility and adhering to SOLID principles are high priorities, adding new interfaces is the preferred approach. However, if you are working within a controlled environment where updating existing implementations is manageable, extending the interfaces might be a viable option.

By carefully considering the trade-offs and understanding the implications of each approach, you can make an informed decision that best suits your project’s needs.

Design Patterns for Library Creators in Dotnet

Hello there! Today, we’re going to delve into the fascinating world of design patterns. Don’t worry if you’re not a tech whiz – we’ll keep things simple and relatable. We’ll use the SyncFramework as an example, but our main focus will be on the design patterns themselves. So, let’s get started!

What are Design Patterns?

Design patterns are like blueprints – they provide solutions to common problems that occur in software design. They’re not ready-made code that you can directly insert into your program. Instead, they’re guidelines you can follow to solve a particular problem in a specific context.

SOLID Design Principles

One of the most popular sets of design principles is SOLID. It’s an acronym that stands for five principles that help make software designs more understandable, flexible, and maintainable. Let’s break it down:

  1. Single Responsibility Principle: A class should have only one reason to change. In other words, it should have only one job.
  2. Open-Closed Principle: Software entities should be open for extension but closed for modification. This means we should be able to add new features or functionality without changing the existing code.
  3. Liskov Substitution Principle: Subtypes must be substitutable for their base types. This principle is about creating new derived classes that can replace the functionality of the base class without breaking the application.
  4. Interface Segregation Principle: Clients should not be forced to depend on interfaces they do not use. This principle is about reducing the side effects and frequency of required changes by splitting the software into multiple, independent parts.
  5. Dependency Inversion Principle: High-level modules should not depend on low-level modules. Both should depend on abstractions. This principle allows for decoupling; the sketch after this list shows the idea in code.
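
To ground that last principle, here is a tiny, framework-agnostic sketch (INotifier, SyncReporter, and ConsoleNotifier are invented names for illustration, not SyncFramework types):

// The high-level class depends only on an abstraction...
public interface INotifier
{
    void Notify(string message);
}

public class SyncReporter
{
    private readonly INotifier _notifier;

    // ...which is injected, so any implementation can be swapped in.
    public SyncReporter(INotifier notifier) => _notifier = notifier;

    public void ReportCompleted() => _notifier.Notify("Sync completed");
}

// A low-level detail that also depends only on the abstraction.
public class ConsoleNotifier : INotifier
{
    public void Notify(string message) => Console.WriteLine(message);
}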

Applying SOLID Principles in SyncFramework

The SyncFramework is a great example of how these principles can be applied. Here’s how:

  • Single Responsibility Principle: Each component of the SyncFramework has a specific role. For instance, one component is responsible for tracking changes, while another handles conflict resolution.
  • Open-Closed Principle: The SyncFramework is designed to be extensible. You can add new data sources or change the way data is synchronized without modifying the core framework.
  • Liskov Substitution Principle: The SyncFramework uses base classes and interfaces that allow for substitutable components. This means you can replace or modify components without affecting the overall functionality.
  • Interface Segregation Principle: The SyncFramework provides a range of interfaces, allowing you to choose the ones you need and ignore the ones you don’t.
  • Dependency Inversion Principle: The SyncFramework depends on abstractions, not on concrete classes. This makes it more flexible and adaptable to changes.

And that’s a wrap for today! But don’t worry, this is just the beginning. In the upcoming series of articles, we’ll dive deeper into each of these principles. We’ll explore how they’re applied in the source code of the SyncFramework, providing real-world examples to help you understand these concepts better. So, stay tuned for more exciting insights into the world of design patterns! See you in the next article!

Related articles

If you want to learn more about data synchronization, you can check out the following blog posts:

  1. Data synchronization in a few words – https://www.jocheojeda.com/2021/10/10/data-synchronization-in-a-few-words/
  2. Parts of a Synchronization Framework – https://www.jocheojeda.com/2021/10/10/parts-of-a-synchronization-framework/
  3. Let’s write a Synchronization Framework in C# – https://www.jocheojeda.com/2021/10/11/lets-write-a-synchronization-framework-in-c/
  4. Synchronization Framework Base Classes – https://www.jocheojeda.com/2021/10/12/synchronization-framework-base-classes/
  5. Planning the first implementation – https://www.jocheojeda.com/2021/10/12/planning-the-first-implementation/
  6. Testing the first implementation – https://youtu.be/l2-yPlExSrg
  7. Adding network support – https://www.jocheojeda.com/2021/10/17/syncframework-adding-network-support/

Fake it until you make it: using custom HttpClientHandler to emulate a client server architecture

Last week, I decided to create a playground for the SyncFramework to demonstrate how synchronization works. The sync framework itself is not designed as a client-server architecture, but as a set of APIs that you can use to synchronize data.

Synchronization scenarios usually involve a client-server architecture, but when I created the SyncFramework, I decided that network communication was something outside the scope and not directly related to data synchronization. So, instead of embedding the client-server concept in the SyncFramework, I decided to create a set of extensions to handle these scenarios. If you want to take a look at the network extensions, you can see them here.

Now, let’s return to the playground. The main requirement for me, besides showing how the synchronization process works, was not having to maintain an infrastructure for it. You know, a Sync Server and a few databases that I would have to constantly delete. So, I decided to use Blazor WebAssembly and SQLite databases running in the browser. If you want to know more about how SQLite databases can run in the browser, take a look at this article.

Now, there’s still a problem: how do I run a server in the browser? I know it’s somehow possible, but I did not have the time to do the research. So, I decided to create my own HttpClientHandler.

How the HttpClientHandler works

HttpClientHandler offers a number of properties and methods for controlling HTTP requests and responses. It is the fundamental mechanism behind HttpClient’s ability to send and receive HTTP requests and responses.

The HttpClientHandler manages aspects like the maximum number of redirects, redirection policies, cookie handling, and automatic decompression of HTTP traffic. It can be configured and supplied to HttpClient to control the HTTP requests that HttpClient makes.

HttpClientHandler can be helpful in testing situations where it’s necessary to imitate or mock HTTP requests and responses. The SendAsync method of HttpMessageHandler, from which HttpClientHandler descends, can be overridden in a new class to deliver any response you require for your test.

Here is a basic example:

public class TestHandler : HttpMessageHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // You can check the request details and return different responses based on that.
        // For simplicity, we're always returning the same response here.
        var responseMessage = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("Test response.")
        };
        return await Task.FromResult(responseMessage);
    }
}

And here’s how you’d use this handler in a test:

[Test]
public async Task TestHttpClient()
{
    var handler = new TestHandler();
    var client = new HttpClient(handler);

    var response = await client.GetAsync("http://example.com");
    var responseContent = await response.Content.ReadAsStringAsync();

    Assert.AreEqual("Test response.", responseContent);
}

The TestHandler in this example always sends back an HTTP 200 response with the body “Test response.” In a real test, you might give SendAsync more sophisticated logic and return different responses depending on the specifics of the request. By doing so, you can properly test how your code handles different responses without actually sending HTTP requests.

Going back to our main story

Now that we know we can catch the HTTP request and handle it locally, we can write an HttpClientHandler that takes the requests from the client nodes and processes them locally. With that, we have all the pieces to make the playground work without a real server. You can take a look at the implementation of the custom handler for the playground here.
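
The idea, as a minimal sketch, looks something like this (ProcessRequestLocally stands in for whatever server-side pipeline the playground actually invokes; this is not the real implementation):

public class LocalServerHandler : HttpMessageHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Instead of sending the request over the network, hand its body
        // to the in-process "server" logic and wrap the result in a response.
        var body = request.Content == null
            ? string.Empty
            : await request.Content.ReadAsStringAsync();

        var result = ProcessRequestLocally(request.RequestUri, body);

        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(result)
        };
    }

    // Placeholder for the playground's server-side sync logic.
    private string ProcessRequestLocally(Uri uri, string body)
    {
        return "processed " + uri + " locally";
    }
}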

Until next time, happy coding )))))

Entity Framework Core & lazy loading

In Entity Framework 7, lazy loading is a technique used to delay the loading of related entities until they are actually needed. This can help to improve the performance of an application by reducing the amount of data that is retrieved from the database upfront.

To implement lazy loading in EF7, the “virtual” modifier is used on the navigation properties of an entity class. Navigation properties are used to represent relationships between entities, such as one-to-many or many-to-many.

For example, consider the following code snippet for a “Course” entity class and a “Student” entity class in EF7, with a one-to-many relationship between them:

public class Course
{
    public int CourseId { get; set; }
    public string CourseName { get; set; }
    public virtual ICollection<Student> Students { get; set; }
}

public class Student
{
    public int StudentId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int CourseId { get; set; }
    public virtual Course Course { get; set; }
}

 

In this example, the “Students” navigation property in the “Course” class and the “Course” navigation property in the “Student” class are both marked as “virtual”. This allows EF7 to override these properties with a proxy at runtime, enabling lazy loading for the related entities.

To use lazy loading in an EF Core application, lazy loading proxies must be enabled; this typically means referencing the Microsoft.EntityFrameworkCore.Proxies package and calling UseLazyLoadingProxies when configuring the context (the ChangeTracker.LazyLoadingEnabled property can then toggle the behavior at runtime). When lazy loading is enabled, related entities will not be loaded from the database until they are actually accessed.
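
A minimal configuration sketch, assuming the Microsoft.EntityFrameworkCore.Proxies package is referenced (SchoolContext, the SQLite provider, and the connection string are illustrative):

public class SchoolContext : DbContext
{
    public DbSet<Course> Courses { get; set; }
    public DbSet<Student> Students { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseLazyLoadingProxies()               // enables proxy-based lazy loading
            .UseSqlite("Data Source=school.db");   // illustrative provider and connection string
    }
}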

For example, the following code demonstrates how lazy loading can be used to retrieve a list of courses and their students:

using (var context = new SchoolContext())
{
    context.ChangeTracker.LazyLoadingEnabled = true;
    var courses = context.Courses.ToList();
    foreach (var course in courses)
    {
        Console.WriteLine("Course: " + course.CourseName);
        foreach (var student in course.Students)
        {
            Console.WriteLine("Student: " + student.FirstName + " " + student.LastName);
        }
    }
}

In this code, the list of courses is retrieved from the database and stored in the “courses” variable. The students for each course are not retrieved until they are accessed in the inner loop. This allows the application to retrieve only the data that is needed, improving performance and reducing the amount of data transferred from the database.

Lazy loading can be a useful tool for optimizing the performance of an EF7 application, but it is important to consider the trade-offs and use it appropriately. Lazy loading can increase the number of database queries (the loop above issues one query per course, the classic N+1 pattern) and the overall complexity of an application, and it may not always be the most efficient approach.
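
When you know up front that the related data will be needed, eager loading with Include (from Microsoft.EntityFrameworkCore) is often a better fit. A brief sketch:

// One round trip: courses and their students are loaded together,
// avoiding the one-query-per-course pattern of the lazy loading loop above.
var courses = context.Courses
    .Include(c => c.Students)
    .ToList();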

Implementing database synchronization with entity framework core

Ok, so far, our synchronization framework is only implemented for an in-memory database that we use for testing purposes.

Now let’s implement a different use case: let’s add synchronization functionality to an Entity Framework Core DbContext.

As I explained before, the key part of synchronizing data using delta encoding is to be able to track the differences that happen to a data object, in this case, a relational database.

These are the tasks we need to complete to accomplish our goal:

  1. Find out how entity framework converts the changes that happen to the objects to SQL commands
  2. Decide what information we need to track and save as a delta
  3. Create the infrastructure to save deltas (IDeltaStore)
  4. Create the infrastructure to process deltas (IDeltaProcessor)
  5. Implement the synchronization node functionality in an Entity Framework DbContext (ISyncClientNode)
  6. Create a test scenario

1 Find out how entity framework converts the changes that happen to the objects to SQL commands

In our companies (BitFrameworks & Xari) we have been working on data synchronization for a while, but all this work has been done in the XPO realm.

We know that in most ORM frameworks there is a layer in charge of translating the changes made to objects into SQL commands; the trick is to locate this layer. So while I was trapped in Mexico waiting for a flight back to Phoenix, I decided to dig into Entity Framework Core’s GitHub repository, and this is what I found: https://github.com/dotnet/efcore/blob/b18a7efa7c418e43184db08c6d1488d6600054cb/src/EFCore.Relational/Update/Internal/BatchExecutor.cs#L161

public virtual async Task<int> ExecuteAsync(
    IEnumerable<ModificationCommandBatch> commandBatches,
    IRelationalConnection connection,
    CancellationToken cancellationToken = default)

As you can see, one of the parameters is an IEnumerable of ModificationCommandBatch (https://github.com/dotnet/efcore/blob/main/src/EFCore.Relational/Update/ModificationCommandBatch.cs). This command batch exposes a read-only list of modification commands (ModificationCommand):

https://github.com/dotnet/efcore/blob/cc53b3e80755e5d882bb21ef10e0e0e33194d9bd/src/EFCore.Relational/Update/ModificationCommandBatch.cs#L30

public abstract class ModificationCommandBatch
{
    /// <summary>
    ///     The list of conceptual insert/update/delete <see cref="ModificationCommands" />s in the batch.
    /// </summary>
    public abstract IReadOnlyList<IReadOnlyModificationCommand> ModificationCommands { get; }

    // ...
}

Now let’s take a look at ModificationCommand (https://github.com/dotnet/efcore/blob/main/src/EFCore.Relational/Update/ModificationCommand.cs). This class provides all the information about the changes that will be converted into SQL commands, which means that if we serialize this object and save it as a delta, we can send it to another node and replicate the changes… VOILA!!!

Now here is a stone in our path: the class https://github.com/dotnet/efcore/blob/main/src/EFCore.Relational/Update/ModificationCommand.cs is not serializable, or to put it better, NOT easily serializable. So let’s stop here for a moment and move to a different task.

So now we know where the changes we need to keep track of live. Next, let’s try to understand how those changes are converted into SQL commands and then executed against the database.

2 Decide what information we need to track and save as a delta

Entity Framework Core uses dependency injection to be able to handle different database engines. The idea is that the framework is composed of many small services that can be replaced in order to create a different implementation, for example for SQLite, SQL Server, Postgres, and so on.

After a lot of digging, I found that the service in charge of generating the update commands (insert, update, and delete) is UpdateSqlGenerator:

https://github.com/dotnet/efcore/blob/main/src/EFCore.Relational/Update/UpdateSqlGenerator.cs

This class implements IUpdateSqlGenerator (https://github.com/dotnet/efcore/blob/main/src/EFCore.Relational/Update/IUpdateSqlGenerator.cs). As you can see, all of its methods receive a StringBuilder and a ModificationCommand, so this is the service in charge of translating a ModificationCommand into SQL commands. And SQL commands are easy to serialize because they are just text, so this is what we are going to serialize and save as a delta:

    public interface IUpdateSqlGenerator
    {
        /// <summary>
        ///     Generates SQL that will obtain the next value in the given sequence.
        /// </summary>
        /// <param name="name">The name of the sequence.</param>
        /// <param name="schema">The schema that contains the sequence, or <see langword="null" /> to use the default schema.</param>
        /// <returns>The SQL.</returns>
        string GenerateNextSequenceValueOperation(string name, string? schema);

        /// <summary>
        ///     Generates a SQL fragment that will get the next value from the given sequence and appends it to
        ///     the full command being built by the given <see cref="StringBuilder" />.
        /// </summary>
        /// <param name="commandStringBuilder">The builder to which the SQL fragment should be appended.</param>
        /// <param name="name">The name of the sequence.</param>
        /// <param name="schema">The schema that contains the sequence, or <see langword="null" /> to use the default schema.</param>
        void AppendNextSequenceValueOperation(
            StringBuilder commandStringBuilder,
            string name,
            string? schema);

        /// <summary>
        ///     Appends a SQL fragment for the start of a batch to
        ///     the full command being built by the given <see cref="StringBuilder" />.
        /// </summary>
        /// <param name="commandStringBuilder">The builder to which the SQL fragment should be appended.</param>
        void AppendBatchHeader(StringBuilder commandStringBuilder);

        /// <summary>
        ///     Appends a SQL command for deleting a row to the commands being built.
        /// </summary>
        /// <param name="commandStringBuilder">The builder to which the SQL should be appended.</param>
        /// <param name="command">The command that represents the delete operation.</param>
        /// <param name="commandPosition">The ordinal of this command in the batch.</param>
        /// <returns>The <see cref="ResultSetMapping" /> for the command.</returns>
        ResultSetMapping AppendDeleteOperation(
            StringBuilder commandStringBuilder,
            IReadOnlyModificationCommand command,
            int commandPosition);

        /// <summary>
        ///     Appends a SQL command for inserting a row to the commands being built.
        /// </summary>
        /// <param name="commandStringBuilder">The builder to which the SQL should be appended.</param>
        /// <param name="command">The command that represents the delete operation.</param>
        /// <param name="commandPosition">The ordinal of this command in the batch.</param>
        /// <returns>The <see cref="ResultSetMapping" /> for the command.</returns>
        ResultSetMapping AppendInsertOperation(
            StringBuilder commandStringBuilder,
            IReadOnlyModificationCommand command,
            int commandPosition);

        /// <summary>
        ///     Appends a SQL command for updating a row to the commands being built.
        /// </summary>
        /// <param name="commandStringBuilder">The builder to which the SQL should be appended.</param>
        /// <param name="command">The command that represents the delete operation.</param>
        /// <param name="commandPosition">The ordinal of this command in the batch.</param>
        /// <returns>The <see cref="ResultSetMapping" /> for the command.</returns>
        ResultSetMapping AppendUpdateOperation(
            StringBuilder commandStringBuilder,
            IReadOnlyModificationCommand command,
            int commandPosition);
    }
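
To make the idea concrete, here is a hedged sketch of how the generator could be used to turn modification commands into serializable SQL text (how the IUpdateSqlGenerator and the commands are obtained from the DbContext’s service provider is glossed over, and this is not the framework’s actual code):

string ToDeltaSql(IUpdateSqlGenerator sqlGenerator, IReadOnlyList<IReadOnlyModificationCommand> commands)
{
    var commandStringBuilder = new StringBuilder();
    for (int i = 0; i < commands.Count; i++)
    {
        var command = commands[i];
        switch (command.EntityState)
        {
            case EntityState.Added:
                sqlGenerator.AppendInsertOperation(commandStringBuilder, command, i);
                break;
            case EntityState.Modified:
                sqlGenerator.AppendUpdateOperation(commandStringBuilder, command, i);
                break;
            case EntityState.Deleted:
                sqlGenerator.AppendDeleteOperation(commandStringBuilder, command, i);
                break;
        }
    }
    // Plain text: trivial to serialize and save as a delta.
    return commandStringBuilder.ToString();
}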

3 Create the infrastructure to save deltas (Implementing IDeltaStore)

Now it is time to create a delta store. This is an easy one, since we only need to inherit from our delta store base class and save the information in an Entity Framework DbContext. Here is the implementation:

https://github.com/egarim/SyncFramework/blob/main/src/EntityFrameworkCore/BIT.Data.Sync.EfCore/EFDeltaStore.cs

If you want to compare it with other delta store implementations, you can take a look at the in-memory version here:

https://github.com/egarim/SyncFramework/blob/main/src/BIT.Data.Sync/Imp/MemoryDeltaStore.cs
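
As a rough sketch of the shape such a store can take (the names below are illustrative and deliberately simplified; they are not the actual BIT.Data.Sync API):

public class DeltaRecord
{
    public Guid Id { get; set; }
    public string Identity { get; set; }
    public byte[] Content { get; set; }   // the serialized delta payload
}

public class DeltaDbContext : DbContext
{
    public DbSet<DeltaRecord> Deltas { get; set; }
}

public class SketchEfDeltaStore
{
    private readonly DeltaDbContext _context;

    public SketchEfDeltaStore(DeltaDbContext context) => _context = context;

    public void SaveDelta(DeltaRecord delta)
    {
        // Persisting a delta is just another EF Core insert.
        _context.Deltas.Add(delta);
        _context.SaveChanges();
    }
}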

4 Create the infrastructure to process deltas (implementing IDeltaProcessor)

So far, we know that what we need to store in the deltas is basically SQL commands and their parameters. That means that, to process those SQL commands, our delta processor needs to create a database connection and execute the commands:

https://github.com/egarim/SyncFramework/blob/main/src/EntityFrameworkCore/BIT.Data.Sync.EfCore/EFDeltaProcessor.cs

public EFDeltaProcessor(DbContext dBContext) 
{
    _dBContext = dBContext;
  
}
public EFDeltaProcessor(string connectionstring, string DbEngineAlias, string ProviderInvariantName)
{

    this.CurrentDbEngine = DbEngineAlias;
    this.connectionString = connectionstring;

    try
    {
        factory = DbProviderFactories.GetFactory(ProviderInvariantName);
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
        throw new Exception("There was a problem creating the database connection using DbProviderFactories.GetFactory. Please make sure the DbProviderFactory for your database is registered https://docs.microsoft.com/en-us/dotnet/api/system.data.common.dbproviderfactories.registerfactory?view=net-5.0", ex);
    }
    //TODO check provider registration later

    //DbProviderFactories.RegisterFactory("Microsoft.Data.SqlClient", SqlClientFactory.Instance);
}

There are a few things to notice in this class. First, it has two constructors, because we need two different ways to create the connection to the database: one using the Entity Framework DbContext, and one using an ADO.NET DbProviderFactory.

All the magic happens in the ProcessDeltas method. This method is in charge of extracting the content of the deltas, transforming it into SQL commands and parameters, and then executing those commands.

Please notice that the content of each delta is an instance of ModificationCommandData:

https://github.com/egarim/SyncFramework/blob/main/src/EntityFrameworkCore/BIT.Data.Sync.EfCore/Data/ModificationCommandData.cs

ModificationCommandData is a class that allows us to store multiple SQL commands (for different database engines) and their parameters.
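
As a rough sketch of what that processing loop could look like (GetSqlFor and the Parameters shape are assumptions for illustration, not the real implementation):

void ProcessDeltas(DbConnection connection, IEnumerable<ModificationCommandData> deltaContents)
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        foreach (var commandData in deltaContents)
        {
            using (var command = connection.CreateCommand())
            {
                command.Transaction = transaction;
                // Pick the SQL text generated for the current database engine.
                command.CommandText = commandData.GetSqlFor(CurrentDbEngine);   // assumed helper
                foreach (var parameter in commandData.Parameters)               // assumed shape
                {
                    var dbParameter = command.CreateParameter();
                    dbParameter.ParameterName = parameter.Name;
                    dbParameter.Value = parameter.Value;
                    command.Parameters.Add(dbParameter);
                }
                command.ExecuteNonQuery();
            }
        }
        transaction.Commit();
    }
}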

5 Implement the synchronization node functionality in an Entity Framework DbContext (ISyncClientNode)

At this point we are able to produce and process deltas for Entity Framework Core (relational), so the next step is to implement the synchronization client node functionality by implementing the following interface:

https://github.com/egarim/SyncFramework/blob/main/src/BIT.Data.Sync/Client/ISyncClientNode.cs

namespace BIT.Data.Sync.Client
{
    public interface ISyncClientNode
    {
        IDeltaProcessor DeltaProcessor { get; }
        IDeltaStore DeltaStore { get; }
        ISyncFrameworkClient SyncFrameworkClient { get; }
        string Identity { get; }
    }
}

https://github.com/egarim/SyncFramework/blob/main/src/EntityFrameworkCore/BIT.Data.Sync.EfCore/SyncFrameworkDbContext.cs
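
Schematically, the linked SyncFrameworkDbContext wires these pieces together; it could look something like this (a sketch of the shape, not the actual source):

public class SketchSyncDbContext : DbContext, ISyncClientNode
{
    public IDeltaStore DeltaStore { get; }
    public IDeltaProcessor DeltaProcessor { get; }
    public ISyncFrameworkClient SyncFrameworkClient { get; }
    public string Identity { get; }

    public SketchSyncDbContext(DbContextOptions options, IDeltaStore deltaStore,
        IDeltaProcessor deltaProcessor, ISyncFrameworkClient syncFrameworkClient, string identity)
        : base(options)
    {
        DeltaStore = deltaStore;
        DeltaProcessor = deltaProcessor;
        SyncFrameworkClient = syncFrameworkClient;
        Identity = identity;
    }
}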

The server-side

I’m not going to show the implementation of the server, since that implementation is generic and uses the same delta store and delta processor that we created at the beginning of this article. For more information, check the following links:

Adding network support

https://www.jocheojeda.com/2021/10/17/syncframework-adding-network-support/

Testing network support

https://www.youtube.com/watch?v=mSl0n0O5QIg&t=4s

The next post is going to be a video testing a simple synchronization scenario. See you in the next post!!!