by Joche Ojeda | Jul 4, 2023 | Uncategorized
Last week I gave two presentations, and in both I demonstrated data synchronization using the open-source framework we developed in our office, Xari/BitFrameworks. You can read more about the framework here: https://github.com/egarim/SyncFramework
I practiced the demo several times and felt confident about it. After all, I was the one who made the last commit to the source. When the conference began, it was my time to shine. I eagerly spoke about the merits of the framework, then it was time for the technical demo.
Everything started well, but after a while, the framework stopped synchronizing data. I was baffled; I had practiced my demo multiple times. What could have gone wrong?
At the end of the demo, I did what everyone does when their presentation fails—I blamed the demo gods.
After resting for a few hours, I reset my computer and cleaned the Visual Studio solution. I practiced several more times and was ready for round two.
I repeated my presentation and reached the technical demo. There was nothing to fear, right? The previous failure was just a fluke… or so I thought. The demo failed even earlier this time. I cursed the demo gods and finished my presentation.
It was a hard week. I didn’t get the dopamine rush I usually got from presentations. It was a rainy Saturday morning, perfect weather to contemplate my failure. I decided not to give up and wrote some integration tests to determine what went wrong.
So why did my demo fail? The demo was about database synchronization using delta encoding theory. For a successful synchronization, I needed to process the deltas in the exact order they were sent to the server. I used a GUID column as the index for sorting the deltas, which seemed correct, because every time I ran the integration test, it worked fine.
Still puzzled by why the demo failed, I decided to go deeper and wrote an integration test for the GUID generation process. I even wrote an article about it, which you can read here.
On my GUID, common problems using GUID identifiers
Now I was even more puzzled. The tests passed sometimes and failed other times. After a while, I realized there was a random element in how GUIDs are generated that introduced a little-known factor: probabilistic errors.
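To make the failure concrete, here is a minimal sketch (the delta shape is invented for illustration) showing why sorting by a random GUID does not preserve creation order:

using System;
using System.Linq;

// Minimal sketch: five deltas created in order 0..4, each keyed by Guid.NewGuid().
// Sorting by the GUID "index" shuffles them, so a server that applies deltas
// in that order processes them out of sequence.
var deltas = Enumerable.Range(0, 5)
    .Select(i => (Index: Guid.NewGuid(), CreationOrder: i))
    .ToList();

var sorted = deltas.OrderBy(d => d.Index).ToList();

// This check fails with high probability: a random GUID carries
// no information about when it was generated.
bool orderPreserved = sorted.Select(d => d.CreationOrder)
    .SequenceEqual(deltas.Select(d => d.CreationOrder));
Console.WriteLine($"Creation order preserved after sorting: {orderPreserved}");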
Probabilistic errors in software refer to a category of errors that occur due to the inherent randomness and uncertainty in some algorithms or systems. These errors are not deterministic, i.e., they may not happen every time the same operation is performed. Instead, they occur with a certain probability.
Here are a few examples of situations that could lead to probabilistic errors:
1. **Concurrency and Race Conditions**: In concurrent systems, the order in which operations are executed can be unpredictable and can lead to different outcomes. If the software is not designed to handle all possible execution orders, it can lead to probabilistic errors, such as race conditions, where the result depends on the timing of certain operations.
2. **Network Failures and Distributed Systems**: In distributed systems or systems that rely on network communications, network failures can lead to probabilistic errors. These can include lost messages, delays, and partial failures. As these events are inherently unpredictable, they can lead to errors that occur with a certain probability.
3. **Randomized Algorithms**: Some algorithms, such as those used in machine learning or optimization, involve an element of randomness. These algorithms can sometimes produce incorrect results due to this randomness, leading to probabilistic errors.
4. **Use of Unreliable Hardware**: Hardware failures can lead to probabilistic errors. For example, memory corruption, disk failures, or unreliable network hardware can cause unpredictable and probabilistic errors in the software that uses them.
5. **Birthday Paradox**: In probability theory, the birthday problem or birthday paradox concerns the probability that, in a set of n randomly chosen people, some pair of them will have the same birthday. Similarly, when generating random identifiers (like GUIDs), there is a non-zero chance that two of them might collide, even if the chance is extremely small.
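As a back-of-the-envelope illustration of that last point, the birthday approximation p ≈ 1 − e^(−n²/2N) can be evaluated for random version-4 GUIDs, which carry 122 random bits (the numbers are estimates under that assumption, not guarantees):

using System;

// Sketch: birthday-problem collision estimate for random (version 4) GUIDs.
// For tiny exponents, 1 - exp(-x) ≈ x, so we compute n^2 / (2N) directly
// to avoid floating-point underflow.
double N = Math.Pow(2, 122); // possible v4 GUIDs (122 random bits)
foreach (double n in new[] { 1e9, 1e12, 1e15 })
{
    double p = (n * n) / (2 * N);
    Console.WriteLine($"{n:E0} GUIDs -> collision probability ≈ {p:E2}");
}

Even at a quadrillion GUIDs, the estimated collision probability stays below one in ten million, which is why collisions are a theoretical rather than a practical concern for most systems.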
Probabilistic errors can be difficult to diagnose and fix, as they often cannot be reproduced consistently. They require careful design and robust error handling to mitigate. Techniques for dealing with probabilistic errors can include redundancy, error detection and correction codes, robust software design principles, and extensive testing.
So tell me, has it ever happened to you? How did you detect the error, and how did you fix it?
Until next time, happy coding!!!!
by Joche Ojeda | Jul 3, 2023 | Programming Situations
A GUID (Globally Unique Identifier) is a unique reference number used as an identifier in computer systems. GUIDs are usually 128-bit numbers and are created using specific algorithms that are designed to make each GUID unique.
Characteristics of GUIDs:
- Uniqueness: The primary characteristic of a GUID is its uniqueness. The intent of a GUID is to be able to uniquely identify a resource, entity, or a record in any context, across multiple systems and databases, without overlap.
- Size: A GUID is typically a 128-bit number, which means there are a huge number of possible GUIDs (2 to the power of 128, or about 3.4 × 10^38).
- Format: GUIDs are usually expressed as 32 hexadecimal digits, grouped in a specific way and separated by hyphens, e.g., 3F2504E0-4F89-11D3-9A0C-0305E82C3301.
- No Semantic Meaning: A GUID itself does not carry any information about the data it represents. It is simply a unique identifier.
Why are GUIDs useful?
- Distributed Systems: GUIDs are especially useful in distributed systems for ensuring unique identifiers across different parts of the system without needing to communicate with a central authority.
- No Central Authority Needed: With GUIDs, there’s no need for a central authority to manage the issuance of unique identifiers. Any part of your system can generate a GUID and be fairly certain of its uniqueness.
- Database Operations: GUIDs are also used as unique identifiers in databases. They are useful for primary keys, and they help avoid collisions that might occur during database operations like replication.
- Safer Data Exposure: Because a GUID does not disclose information about the data it represents, it is safer to expose in public-facing operations, such as in URLs for individual data records, compared to an identifier that might be incrementing or otherwise guessable.
However, using GUIDs also has its trade-offs. Their large size compared to a simple integer ID means they take up more storage space and index memory, and are slower to compare. Also, because they are typically generated randomly, they can lead to fragmentation in the database, which can negatively impact performance.
Disadvantages
While GUIDs (Globally Unique Identifiers) have several advantages, such as providing a unique identifier across different systems and domains without the need for a central authority, they do come with their own set of problems and limitations. Here are some common issues associated with using GUIDs:
- Size: A GUID is 128 bits, much larger than an integer or long. This increased size can lead to more memory and storage usage, especially in large systems. They also take longer to compare than integers.
- Non-sequential: GUIDs are typically generated randomly, so they are not sequential. This can lead to fragmentation in databases, where data that is frequently accessed together is stored in scattered locations. This can slow down database performance.
- Readability: GUIDs are not human-friendly, which makes them difficult to debug or troubleshoot. For example, if you’re using GUIDs in URLs, it’s hard for users to manually enter them or understand them.
- Collision Risk: While the risk of generating a duplicate GUID is incredibly small, it is not zero. Especially in systems that generate a very large number of GUIDs, the probability of collision (though still extremely small) is greater than zero. This is known as the “birthday problem” in probability theory.
- No Information Content: GUIDs don’t provide any information about the data they represent. Unlike structured identifiers, you can’t encode any information in a GUID.
- Sorting Inconsistencies: GUIDs can have different sort orders depending on whether they are compared as strings or as raw byte arrays, and different platforms apply different rules; this can lead to confusion and subtle bugs.
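That sorting pitfall is easy to demonstrate in .NET, where Guid.ToByteArray() stores the first three fields little-endian, so lexicographic byte order can disagree with the order of the canonical string form. A small sketch with two contrived values:

using System;
using System.Linq;

// Sketch: the same two GUIDs order differently as strings vs. raw bytes.
var a = Guid.Parse("00000001-0000-0000-0000-000000000000");
var b = Guid.Parse("01000000-0000-0000-0000-000000000000");

int CompareBytes(Guid x, Guid y) =>
    x.ToByteArray().Zip(y.ToByteArray(), (l, r) => l.CompareTo(r))
     .FirstOrDefault(c => c != 0);

Console.WriteLine(string.CompareOrdinal(a.ToString(), b.ToString())); // negative: a < b as strings
Console.WriteLine(CompareBytes(a, b));                                // positive: a > b as raw bytes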
To mitigate some of these problems, some systems use GUIDs in a modified or non-standard way. For example, COMB (combined GUID/timestamp) GUIDs or sequential GUIDs add a sequential element to reduce database fragmentation, but these come with their own trade-offs and complexities.
COMB GUIDS
A COMB (Combined Guid/Timestamp) GUID is a strategy that combines the uniqueness of GUIDs with the sequential nature of timestamps to mitigate some of the issues associated with GUIDs, particularly the database fragmentation issue.
A typical GUID is 128 bits and is usually displayed as 32 hexadecimal characters. When generating a COMB GUID, part of the GUID is replaced with the current timestamp. The portion of the GUID that is replaced and the format of the timestamp can vary depending on the specific implementation, but a common strategy is to replace the least significant bits of the GUID with the current timestamp.
Here’s a broad overview of how it works:
- Generate a regular GUID.
- Get the current timestamp (in a specific format, such as the number of milliseconds since a particular epoch).
- Replace a part of the GUID with the timestamp.
Because the timestamp is always increasing, the resulting COMB GUIDs will also increase over time, meaning that when new rows are inserted into a database, they are likely to be added to the end of the table or index, thus minimizing fragmentation.
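Here is a minimal sketch of the idea, assuming a simplified layout that overwrites the last six bytes of a random GUID with a millisecond timestamp (real generators, such as NHibernate's comb, use their own byte layout):

using System;

// Sketch: a simplified COMB GUID. The last 6 bytes of a random GUID are
// replaced with the milliseconds since the Unix epoch. SQL Server gives
// those final bytes the highest sort precedence for uniqueidentifier
// values, so rows keyed this way insert in roughly chronological order.
static class CombGuid
{
    public static Guid NewComb()
    {
        byte[] guidBytes = Guid.NewGuid().ToByteArray();
        byte[] timeBytes = BitConverter.GetBytes(DateTimeOffset.UtcNow.ToUnixTimeMilliseconds());
        if (BitConverter.IsLittleEndian)
            Array.Reverse(timeBytes); // big-endian so later timestamps compare higher

        Array.Copy(timeBytes, 2, guidBytes, 10, 6); // 6 least-significant timestamp bytes -> last 6 GUID bytes
        return new Guid(guidBytes);
    }
}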
However, there are some trade-offs to consider:
- Uniqueness: Because part of the GUID is being replaced with a timestamp, there is a slightly higher chance of collision compared to a regular GUID. However, as long as the non-timestamp portion of the GUID is sufficiently large and random, the chance of collision is still extremely small.
- Size: A COMB GUID is the same size as a regular GUID, so it doesn’t mitigate the issue of GUIDs being larger than simpler identifiers like integers.
- Readability and information content: Like regular GUIDs, COMB GUIDs are not human-friendly and don’t provide information about the data they represent.
- Sorting dependence: In most cases, COMB GUIDs are generated by a custom algorithm that embeds a timestamp. This means you also need an algorithm that understands that layout to extract the timestamp and sort by it, and you may have to implement it twice: once for the client side and once for the database engine.
It’s worth noting that different systems and languages may have their own libraries or functions for generating COMB GUIDs. For example, in .NET, the NHibernate ORM has a comb identifier generator that generates COMB GUIDs.
And before I say goodbye to this article, here is a test project to prove my point:
https://github.com/egarim/OhMyGuid
And here are the test results (screenshot omitted; you can reproduce them by running the test project above).
Until the next time, happy coding ))
by Joche Ojeda | May 30, 2023 | Uncategorized
Let’s do a quick overview of the main features of XPO before we dive into details.
XPO (eXpress Persistent Objects) is a full-featured Object-Relational Mapping (ORM) framework developed by DevExpress. It is used to provide a mapping between the relational database tables and the objects used in a .NET application.
XPO allows developers to interact with a database in a more natural, object-oriented manner and eliminates the need to write complex SQL statements. It provides a high-level API for working with databases, automating common tasks such as connecting to a database, querying data, and committing changes.
With XPO, developers can focus on the business logic of the application rather than worrying about the low-level details of working with databases.
Throughout the book, we will explore each feature of XPO in depth, providing a comprehensive understanding of its capabilities. However, here are some of the most notable features of XPO that we will highlight:
- Object-Relational Mapping (ORM): XPO provides a mapping between the relational database tables and the objects used in a .NET application. This makes it easier for developers to interact with databases in a more natural, object-oriented manner.
- High-Level API: XPO provides a high-level API for working with databases, automating common tasks such as connecting to a database, querying data, and committing changes.
- Data Types Mapping: XPO automatically maps .NET data types to their corresponding SQL data types and vice versa, eliminating the need for manual data type conversion.
- LINQ Support: XPO includes built-in LINQ (Language Integrated Query) support, making it easier to write complex, fine-tuned queries.
- Customizable SQL Generation: XPO allows developers to customize the SQL generated by the framework, providing greater control over the database operations.
- Lazy Loading: XPO supports lazy loading, meaning that related objects can be loaded on demand, reducing the amount of data that needs to be loaded at once and improving performance.
- Change Tracking: XPO tracks changes made to objects and automatically updates the database as needed, eliminating the need to write manual update statements.
- Validation: XPO provides built-in support for validating objects, making it easier to enforce business rules and ensure data integrity.
- Caching: XPO provides caching capabilities, allowing frequently used data to be cached and reducing the number of database queries required.
- Support for Multiple Databases: XPO supports a wide range of relational databases, including SQL Server, Oracle, MySQL, PostgreSQL, and more.
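As a quick taste of the API before we go deeper, here is a hedged sketch of a typical XPO unit of work (it assumes a persistent Customer class and a configured default data layer):

using System.Linq;
using DevExpress.Xpo;

// Sketch: querying and updating data through XPO's UnitOfWork.
using (var uow = new UnitOfWork())
{
    var customers = uow.Query<Customer>()
        .Where(c => c.Name.StartsWith("A"))
        .ToList();

    foreach (var customer in customers)
        customer.Name = customer.Name.Trim();

    uow.CommitChanges(); // persists all tracked changes in one transaction
}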
Enhancing XPO with Metadata: Annotations
In C#, annotations are attributes that can be applied to classes, properties, methods, and other program elements to add metadata or configuration information.
In XPO, annotations are used extensively to configure the behavior of persistent classes.
XPO provides a number of built-in annotations that can be used to customize the behavior of persistent classes, making it easier to work with relational data using object-oriented programming techniques.
Some of the most commonly used annotations include:
- Persistent: Specifies the name of the database table a persistent class is mapped to (or, when applied to a property, the column name).
- PersistentAlias: Defines a read-only calculated property whose value is evaluated from a criteria expression rather than stored in its own column.
- Size: Specifies the maximum size of a database column that a property is mapped to.
- Key: Marks a property as a key property for the persistent class.
- NonPersistent: Marks a property as not persistent, meaning it is not mapped to a database column.
- Association: Marks a property as one end of a one-to-many or many-to-many relationship; XPO uses it to maintain the foreign-key link between the related tables (see the sketch below).
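To ground the list above, here is a hedged sketch of two persistent classes decorated with several of these annotations (all class and member names are invented for the example):

using DevExpress.Xpo;

[Persistent("Customers")] // mapped to the "Customers" table
public class Customer : XPObject // XPObject supplies an auto-generated Oid key
{
    public Customer(Session session) : base(session) { }

    string name;
    [Size(100)] // column limited to 100 characters
    public string Name
    {
        get => name;
        set => SetPropertyValue(nameof(Name), ref name, value);
    }

    [NonPersistent] // computed in memory, no database column
    public string Label => $"Customer: {Name}";

    [Association("Customer-Orders")] // one-to-many: one customer, many orders
    public XPCollection<Order> Orders => GetCollection<Order>(nameof(Orders));
}

public class Order : XPObject
{
    public Order(Session session) : base(session) { }

    decimal unitPrice;
    public decimal UnitPrice
    {
        get => unitPrice;
        set => SetPropertyValue(nameof(UnitPrice), ref unitPrice, value);
    }

    int quantity;
    public int Quantity
    {
        get => quantity;
        set => SetPropertyValue(nameof(Quantity), ref quantity, value);
    }

    [PersistentAlias("UnitPrice * Quantity")] // calculated from a criteria expression
    public decimal Total => (decimal)EvaluateAlias(nameof(Total));

    Customer customer;
    [Association("Customer-Orders")] // the many side of the relationship
    public Customer Customer
    {
        get => customer;
        set => SetPropertyValue(nameof(Customer), ref customer, value);
    }
}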
In addition to the built-in annotations, XPO also provides a mechanism for defining custom annotations: you define a class that inherits from the Attribute class (a minimal sketch follows below).
Annotations are applied directly in code, as .NET attributes decorating classes, properties, and other members.
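For example, a minimal custom annotation is just an ordinary .NET attribute class (the AuditedAttribute name is invented here); XPO-aware infrastructure code can then discover it via reflection:

using System;

// Sketch: a custom annotation for persistent properties.
[AttributeUsage(AttributeTargets.Property)]
public sealed class AuditedAttribute : Attribute
{
    public string Category { get; }
    public AuditedAttribute(string category) => Category = category;
}

// Usage on a persistent property:
// [Audited("PII")]
// public string Email { ... }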
We will witness the practical implementation of annotations throughout the book, including our own custom annotations. However, we wanted to introduce them early on as they play a crucial role in the efficient mapping and management of data within XPO.
Now that we have refreshed our understanding of XPO’s main goal and features, let’s delve into the “O”, the “R”, and the “M” of an ORM.
And that’s all for this post, until next time ))
We are excited to announce that we are currently in the process of writing a comprehensive book about DevExpress XPO. As we work on this project, we believe it is essential to involve our readers and gather their valuable feedback. Therefore, we have decided to share articles from the book as we complete them, giving you an opportunity to provide input and suggestions that we can consider for inclusion in the final release. Keep in mind that the content presented is subject to change. We greatly appreciate your participation in this collaborative effort.
by Joche Ojeda | May 29, 2023 | Uncategorized
SOLID is an acronym that stands for five fundamental principles of object-oriented programming and design. These principles were first introduced by Robert C. Martin (also known as Uncle Bob) and have since become a cornerstone of software development best practices. Each letter in SOLID represents a principle that, when applied correctly, leads to more maintainable and modular code.
Let’s dive into each of the SOLID principles and understand how they contribute to building high-quality software systems:
Single Responsibility Principle (SRP):
The SRP states that a class should have only one reason to change. In other words, a class should have a single responsibility or a single job. This principle encourages developers to break down complex systems into smaller, cohesive modules. By ensuring that each class has a focused responsibility, it becomes easier to understand, test, and modify the code without affecting other parts of the system.
Open-Closed Principle (OCP):
The OCP promotes the idea that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification. This principle emphasizes the importance of designing systems that can be easily extended with new functionality without modifying existing code. By relying on abstractions, interfaces, and inheritance, developers can add new features by writing new code rather than changing the existing one. This approach reduces the risk of introducing bugs or unintended side effects.
Liskov Substitution Principle (LSP):
The LSP states that objects of a superclass should be replaceable with objects of its subclasses without affecting the correctness of the program. In simpler terms, if a class is a subtype of another class, it should be able to be used interchangeably with its parent class without causing any issues. This principle ensures that inheritance hierarchies are well-designed, avoiding situations where subclass behavior contradicts or breaks the functionality defined by the superclass. Adhering to the LSP leads to more flexible and reusable code.
Interface Segregation Principle (ISP):
The ISP advises developers to design interfaces that are specific to the needs of the clients that use them. It suggests that clients should not be forced to depend on interfaces they don’t use. By creating small and focused interfaces, rather than large and monolithic ones, the ISP enables clients to be decoupled from unnecessary dependencies. This principle enhances modularity, testability, and maintainability, as changes to one part of the system are less likely to impact other parts.
Dependency Inversion Principle (DIP):
The DIP encourages high-level modules to depend on abstractions rather than concrete implementations. It states that high-level modules should not depend on low-level modules; both should depend on abstractions. This principle promotes loose coupling between components, making it easier to substitute or modify dependencies without affecting the overall system. By relying on interfaces or abstract classes, the DIP facilitates extensibility, testability, and the ability to adapt to changing requirements.
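A compact illustration of the DIP in C# (all types invented for the example): the high-level service depends only on an abstraction, so storage implementations can be swapped without touching it.

// Sketch: Dependency Inversion. ReportService depends on the IReportStore
// abstraction, never on a concrete storage class.
public interface IReportStore
{
    void Save(string reportName, string content);
}

public class FileReportStore : IReportStore
{
    public void Save(string reportName, string content) =>
        System.IO.File.WriteAllText($"{reportName}.txt", content);
}

public class ReportService
{
    private readonly IReportStore store;

    public ReportService(IReportStore store) => this.store = store; // injected dependency

    public void Publish(string name) => store.Save(name, $"Report: {name}");
}

// Swapping storage means passing a different IReportStore;
// ReportService itself never changes:
// var service = new ReportService(new FileReportStore());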
By applying the SOLID principles, software engineers can create codebases that are modular, flexible, and easy to maintain. These principles provide a roadmap for designing systems that are resilient to change, promote code reusability, and improve collaboration among development teams. SOLID principles are not strict rules but rather guidelines that help developers make informed design decisions and create high-quality software systems.
It’s worth mentioning that the SOLID principles should not be applied blindly in all situations. Context matters, and there may be scenarios where strict adherence to one principle might not be the best approach. However, understanding and incorporating these principles into the software design process can significantly improve the overall quality of the codebase.
SOLID and XPO
XPO is designed with SOLID design principles in mind. Here’s how XPO applies each of the SOLID principles:
Single Responsibility Principle (SRP)
XPO uses separate classes for each major concern such as mapping, persistence, connection providers, and data access. Each class has a clearly defined purpose and responsibility.
Open-Closed Principle (OCP)
XPO is extensible and customizable, allowing you to create your own classes and derive them from the XPO base classes. XPO also provides a range of extension points and hooks to allow for customization and extension without modifying the core XPO code.
Liskov Substitution Principle (LSP)
XPO follows this principle by providing a uniform API that works with all persistent objects, regardless of their concrete implementation. This allows you to write code that works with any persistent object, without having to worry about the specific implementation details.
Interface Segregation Principle (ISP)
XPO provides a number of interfaces that define specific aspects of its behavior, allowing clients to use only the interfaces they need. This reduces the coupling between the clients and the XPO library.
Dependency Inversion Principle (DIP)
XPO was developed prior to the widespread popularity of Dependency Injection (DI) patterns and frameworks.
As a result, XPO does not incorporate DI as a built-in feature. However, this does not mean that XPO cannot be used in conjunction with DI. While XPO itself does not provide direct support for DI, you can still integrate it with popular DI containers or frameworks, such as the .NET Core DI container or any other one.
By integrating XPO with a DI container, you can leverage the benefits of DI principles in your application while utilizing XPO’s capabilities for database access and mapping. The DI container can handle the creation and management of XPO-related objects, facilitating loose coupling, improved modularity, and simplified testing.
A clear example is the XPO Extensions for ASP.NET Core DI:
using DevExpress.Xpo.DB;
using Microsoft.Extensions.DependencyInjection;
// ...
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddXpoDefaultUnitOfWork(true, options => options
    .UseConnectionString(builder.Configuration.GetConnectionString("MSSqlServer"))
    .UseAutoCreationOption(AutoCreateOption.DatabaseAndSchema)
    .UseEntityTypes(new Type[] { typeof(User) }));
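Once registered, the unit of work can be constructor-injected like any other service. A hedged sketch (the controller is invented for the example; the registration above provides a UnitOfWork scoped to the current request):

using System.Linq;
using DevExpress.Xpo;
using Microsoft.AspNetCore.Mvc;

// Sketch: consuming the DI-registered UnitOfWork in an ASP.NET Core controller.
[ApiController]
[Route("api/[controller]")]
public class UsersController : ControllerBase
{
    private readonly UnitOfWork unitOfWork;

    public UsersController(UnitOfWork unitOfWork) => this.unitOfWork = unitOfWork;

    [HttpGet]
    public IActionResult Get() =>
        Ok(unitOfWork.Query<User>().Select(u => u.Oid).ToList());
}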
More information about XPO and DI here: https://docs.devexpress.com/XPO/403009/connect-to-a-data-store/xpo-extensions-for-aspnet-core-di
And that’s all for this post, until next time ))
by Joche Ojeda | May 26, 2023 | ADO, ADO.NET, ORM
ORM is a technique for converting data between incompatible type systems using object-oriented programming languages. In the context of database management, an ORM provides a way to interact with a database using objects and methods, rather than writing raw SQL queries. This allows for a higher level of abstraction, reducing the amount of repetitive and error-prone code that needs to be written, and making it easier to maintain the application.
ORM and RDBMS
Object-Relational Mapping (ORM) tools and RDBMS have different approaches to querying data.
An RDBMS uses SQL, a declarative query language for communicating with relational databases. When using SQL, developers write raw statements that are executed directly against the database. The results of the query are returned as a table or set of tables (result sets), which must be manually mapped to objects in code.
On the other hand, an ORM tool abstracts the underlying SQL code, allowing developers to query data using an object-oriented syntax that is similar to the programming language used in the application.
The ORM tool automatically generates the necessary SQL statements and executes them against the database, and then maps the resulting data to objects in code.
One of the main advantages of using an ORM tool is that it eliminates the need for manual data mapping, making it easier for developers to write data-access code and reducing the chances of errors. Additionally, because ORM tools use a higher-level syntax, they can also be more intuitive and easier to use than raw SQL.
However, ORM tools can also introduce performance overhead and complexity, as they must manage the mapping between the database and objects in code. In some cases, raw SQL may still be necessary to achieve the desired performance or to perform complex data manipulation that is not supported by the ORM tool.
As a simple example we will use the database table “customer”. It has four columns:
- id: A unique identifier for each customer, usually an auto-incrementing integer.
- name: The name of the customer, represented as a string.
- email: The email address of the customer, represented as a string.
- address: The address of the customer, represented as a string.
Each row in the table represents a single customer, and each column represents a piece of information about that customer. The table is used to store the data for the customer objects created in the code. The ORM maps the table to a class, allowing you to interact with the data using objects and methods.
Here is an example of how an ORM can map a database table named “customer” to a class representation in C#:
class Customer
{
public int Id { get; set; }
public string Name { get; set; }
public string Email { get; set; }
public string Address { get; set; }
}
// The ORM maps the customer table in the database to the Customer class.
// The columns of the table become properties of the class, and each row in the table becomes an instance of the class.
In this example, the ORM abstracts away the underlying database and provides a high-level, object-oriented interface for working with the data.
The main advantage of using an ORM (Object-Relational Mapping) instead of using the direct results from the database in C# is increased abstraction and convenience. This provides several benefits:
- Improved readability and maintainability: Code written using an ORM is easier to understand and maintain, as it uses a higher-level, object-oriented interface, rather than raw SQL.
- Reduced code duplication: With an ORM, you can encapsulate database-related logic in reusable classes and methods, reducing the amount of repetitive code that needs to be written.
- Increased safety: An ORM can help prevent SQL injection attacks by parameterizing queries, ensuring that user input is passed to the database safely.
- Increased flexibility: An ORM provides a layer of abstraction between the application code and the database, allowing you to switch between different database systems without changing the application code.
Direct database access
using System.Collections.Generic;
using Microsoft.Data.SqlClient; // System.Data.SqlClient on .NET Framework

var connection = new SqlConnection("connection string");
connection.Open();
var command = new SqlCommand("SELECT id, name, email, address FROM customers", connection);
var reader = command.ExecuteReader();
var customers = new List<Customer>();
while (reader.Read())
{
    // Map each row to a Customer object by hand, column by column.
    var customer = new Customer
    {
        Id = reader.GetInt32(0),
        Name = reader.GetString(1),
        Email = reader.GetString(2),
        Address = reader.GetString(3)
    };
    customers.Add(customer);
}
reader.Close();
connection.Close();
ORM access
// Illustrative pseudo-API; the exact call depends on the ORM
// (e.g., uow.Query<Customer>() in XPO or context.Customers in EF).
var customers = ORM.Query<Customer>().ToList();
As you can see, using an ORM simplifies the code and makes it more readable, while also providing additional safety and flexibility benefits.
Different types of O.R.M
Micro O.R.M
A micro ORM is a type of Object-Relational Mapping (ORM) library that is designed to be lightweight and simple, offering a minimalistic and easy-to-use interface for interacting with a relational database.
Micro ORMs are typically faster and require less configuration and setup than full-featured ORMs, but also have fewer features and less advanced capabilities.
Like full-featured ORMs, micro ORMs provide a higher-level, object-oriented interface for working with databases, letting you interact with data through classes, properties, and methods rather than raw SQL. However, they deliberately offer a smaller, more focused feature set: support for basic CRUD (Create, Read, Update, Delete) operations, plus basic querying and data-mapping capabilities.
Micro ORMs are often used in small projects or in environments where performance is critical. They are also commonly used in microservices and other architectures that prioritize simplicity, scalability, and ease of deployment.
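For instance, with Dapper, a popular .NET micro ORM (connection string and table name are illustrative), a query is one line of SQL mapped straight onto objects by column name:

using System.Collections.Generic;
using Dapper; // micro ORM: adds Query<T> extension methods to IDbConnection
using Microsoft.Data.SqlClient;

// Sketch: Dapper handles the object mapping, but the SQL stays in your hands.
using var connection = new SqlConnection("connection string");
IEnumerable<Customer> customers =
    connection.Query<Customer>("SELECT id, name, email, address FROM customer");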
Full-fledged Object-Relational Mapping (ORM)
A full-fledged Object-Relational Mapping (ORM) is a software library that provides a convenient and efficient way of working with relational databases.
A full-fledged ORM typically has the following characteristics:
- Object Mapping: The ORM maps database tables to object-oriented classes, allowing developers to interact with the database using objects instead of tables.
- Query Generation: The ORM generates SQL statements based on the object-oriented query syntax, reducing the need for developers to write and manage raw SQL.
- Data Validation: The ORM provides built-in data validation and type checking, reducing the risk of incorrect data being written to the database.
- Caching: The ORM may provide caching of data to improve performance, reducing the number of database round trips.
- Transactions: The ORM supports transactions, making it easier to ensure that database operations are atomic and consistent.
- Database Independence: The ORM is typically designed to be database agnostic, allowing developers to switch between different database platforms without changing the application code.
- Lazy Loading: The ORM supports lazy loading, which can improve performance by loading related objects only when they are needed.
- Performance: A full-featured ORM is designed to deliver good performance, and most aspects of its behavior can be tuned. However, a solid understanding of how the ORM operates is necessary to make those customizations effective.
One of the disadvantages of using an Object-Relational Mapping (ORM) tool is that you may not be able to create specialized, fine-tuned queries with the same level of precision as you can with raw SQL. This is because ORMs typically abstract the underlying database and expose a higher-level, more object-oriented API for querying data.
For example, it may not be possible to use specific SQL constructs, such as subqueries or window functions, in your ORM queries. Additionally, some ORMs may not provide a way to optimize query performance, such as by specifying index hints or controlling the execution plan.
As a result, developers who need to write specialized or fine-tuned queries may find it easier and more efficient to write raw SQL statements. However, this comes at the cost of having to manually manage the mapping between the database and the application, which is one of the main benefits of using an ORM in the first place.
ORM and Querying data
Before the existence of LINQ, Object-Relational Mapping (ORM) tools used a proprietary query language to query data. This meant that each ORM had to invent its own criteria or query language, and developers had to learn the specific syntax of each ORM in order to write queries.
For example, in XPO, developers used the Criteria API to write queries. The Criteria API allowed developers to write queries in a type-safe and intuitive manner, but the syntax was specific to XPO and not easily transferable to other ORMs.
Similarly, in NHibernate, developers used the Hibernate Query Language (HQL) to write queries. Like the Criteria API in XPO, HQL was a proprietary query language specific to NHibernate, and developers had to learn the syntax in order to write queries.
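For a flavor of those proprietary dialects, here is a hedged sketch of an XPO criteria-based lookup, with the rough NHibernate HQL equivalent in a comment (the Customer class is the one used throughout this article, and the session is assumed to be configured):

using DevExpress.Data.Filtering;
using DevExpress.Xpo;

// Sketch: XPO's Criteria API. The criteria string is XPO-specific syntax.
using (var session = new Session())
{
    var customer = session.FindObject<Customer>(
        CriteriaOperator.Parse("Name == ?", "John"));
}

// The NHibernate equivalent was an HQL string, another proprietary dialect:
// "from Customer c where c.Name = :name"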
This lack of a standardized query language was one of the main limitations of ORMs before the introduction of LINQ. LINQ provided a common, language-agnostic query language that could be used with a variety of data sources, including relational databases. This made it easier for developers to write data-access code, as they only had to learn one query language, regardless of the ORM or data source being used.
The introduction of LINQ: a new era for ORMs in .NET
Language Integrated Query (LINQ) is a Microsoft .NET Framework component that was introduced in .NET Framework 3.5. It is a set of language and library features that enables developers to write querying and manipulation operations over data in a more intuitive and readable way.
LINQ was created to address the need for a more unified and flexible approach to querying and manipulating data in .NET applications. Prior to LINQ, developers had to write separate code to access and manipulate data in different data sources, such as databases, XML documents, and in-memory collections. This resulted in a fragmented and difficult-to-maintain codebase, with different syntax and approaches for different data sources.
With LINQ, developers can now write a single querying language that works with a wide range of data sources, including databases, XML, in-memory collections, and more.
In addition, LINQ also provides a number of other benefits, such as improved performance, better type safety, and integrated support for data processing operations, such as filtering, grouping, and aggregating.
using (var context = new DbContext())
{
var selectedCustomers = context.Customers
.Where(c => c.Address.Contains("San Salvador"))
.Select(c => new
{
c.Id,
c.Name,
c.Email,
c.Address
})
.ToList();
foreach (var customer in selectedCustomers)
{
    Console.WriteLine($"Id: {customer.Id}, Name: {customer.Name}, Email: {customer.Email}, Address: {customer.Address}");
}
}
And that’s all for this post, until next time ))