by Joche Ojeda | May 5, 2025 | Boring systems, ERP
After returning home from an extended journey through the United States, Greece, and Turkey, I found myself contemplating, over my morning coffee, a familiar challenge: the recurring problems in system design and ORM (Object-Relational Mapping) implementation that developers face again and again.
To address these challenges, I’ve decided to tackle a system that most professionals are familiar with—an ERP (Enterprise Resource Planning) system—and develop a design that achieves three critical goals:
- Performance Speed: The system must be fast and responsive
- Technology Agnosticism: The architecture should be platform-independent
- Consistent Performance: The system should maintain its performance over time
Design Decisions
To achieve these goals, I’m implementing the following key design decisions:
- Utilizing the SOLID design principles to ensure maintainability and extensibility
- Building with C# and .NET 9 to leverage their modern language and runtime features
- Creating an agnostic architecture that can be reimplemented in various technologies like DevExpress XAF or Entity Framework
Day 1: Foundational Structure
In this first article, I’ll propose an initial folder structure that may evolve as the system develops. I’ll also describe a set of base classes and interfaces that will form the foundation of our system.
You can find all the source code for this solution in the egarim/SivarErp repository, linked below.
The Core Layer
Today we’re starting with the core layer—a set of interfaces that most entities will implement. The system design follows SOLID principles to ensure it can be easily reimplemented using different technologies.
Base Interfaces
Here’s the foundation of our interface hierarchy (a short C# sketch follows the list):
- IEntity: Core entity interface defining the Id property
- IAuditable: Interface for entities with audit information
- IArchivable: Interface for entities supporting soft delete
- IVersionable: Interface for entities with effective dating
- ITimeTrackable: Interface for entities requiring time tracking
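To make the hierarchy concrete, here is a minimal C# sketch of how these interfaces might look. Only the interface names are taken from the list above; the member shapes (Guid keys, UTC timestamps) are my assumptions and may differ from the actual repository:

```csharp
using System;

// Sketch only: names come from the article, member shapes are assumptions.
public interface IEntity
{
    Guid Id { get; set; }
}

public interface IAuditable
{
    DateTime CreatedAt { get; set; }
    string CreatedBy { get; set; }
    DateTime? UpdatedAt { get; set; }
    string? UpdatedBy { get; set; }
}

public interface IArchivable
{
    bool IsArchived { get; set; }      // soft delete flag
    DateTime? ArchivedAt { get; set; }
}

public interface IVersionable
{
    DateTime EffectiveFrom { get; set; }  // effective dating
    DateTime? EffectiveTo { get; set; }
}

public interface ITimeTrackable
{
    TimeSpan TimeSpent { get; set; }
}
```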
Service Interfaces
To complement our entity interfaces, we’re also defining service interfaces (sketched in code after the list):
- IAuditService: Interface for audit-related operations
- IArchiveService: Interface for archiving operations
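Again, a hedged sketch: only the interface names come from the design above, and the method signatures are illustrative assumptions of how such services might operate on the entity interfaces:

```csharp
using System;

// Sketch only: method signatures are assumptions, not the repository's API.
public interface IAuditService
{
    void StampCreation(IAuditable entity, string user, DateTime utcNow);
    void StampUpdate(IAuditable entity, string user, DateTime utcNow);
}

public interface IArchiveService
{
    void Archive(IArchivable entity, DateTime utcNow); // soft delete
    void Restore(IArchivable entity);                  // reverse it
}
```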
Repository
egarim/SivarErp: Open Source ERP
Next Steps
In upcoming articles, I’ll expand on this foundation by implementing concrete classes, developing the domain layer, and demonstrating how this architecture can be applied to specific ERP modules.
The goal is to create a reference architecture that addresses the recurring challenges in system design while remaining adaptable to different technological implementations.
Stay tuned for the next installment where we’ll dive deeper into the implementation details of our core interfaces.
by Joche Ojeda | Feb 23, 2025 | A.I
I’ve been thinking about this topic for a while and have collected numerous notes and ideas about how to present abstractions that allow large language models (LLMs) to interact with various systems – whether that’s your database, operating system, Word documents, or other applications.
Before diving deeper, let’s review some fundamental concepts:
Key Concepts
First, let’s talk about APIs (Application Programming Interfaces). In simple terms, an API is a way to expose methods, functions, and procedures from your application, independent of the programming language being used.
Next is the REST API concept, which is a method of exposing your API using HTTP verbs. As IT professionals, we hear these terms – HTTP, REST, API – almost daily, but we might not fully grasp their core concepts. Let me explain how they relate to software automation using AI.
HTTP (Hypertext Transfer Protocol) is fundamentally a way for two applications to communicate using text. This is its beauty – text serves as the basic layer of understanding between systems, meaning almost any system or programming language can produce a client or server that can interact via HTTP.
REST (Representational State Transfer) is an architectural style in which one system reads or changes the state of another, typically over HTTP.
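To ground these ideas, here is a minimal C# sketch that reads state (GET) and changes state (POST) over HTTP. The endpoint and payload are placeholders for illustration, not a real service:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class RestDemo
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Read state: GET returns a textual representation (JSON here).
        string customer = await http.GetStringAsync(
            "https://example.com/api/customers/42");
        Console.WriteLine(customer);

        // Change state: POST sends a textual representation back.
        var response = await http.PostAsync(
            "https://example.com/api/customers",
            new StringContent("{\"name\":\"Ada\"}",
                System.Text.Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}
```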
Levels of System Interaction
When implementing LLMs for system automation, we first need to determine our desired level of interaction. Here are several approaches:
- Human-like Interaction: An LLM can interact with your operating system using mouse and keyboard inputs, effectively mimicking human behavior.
- REST API Integration: Your application can communicate using HTTP verbs and the REST protocol.
- SDK Implementation: You can create a software development kit that describes your application’s functionality and expose it to the LLM.
The connection method will vary depending on your chosen technology. For instance:
- Microsoft Semantic Kernel allows you to create plugins that interact with your system through a REST API, database, or SDK (see the sketch after this list).
- Microsoft AI extensions require you to decide on your preferred interaction level before implementation.
- The Model Context Protocol is a newer approach that lets applications expose their capabilities to LLM agents, with Claude from Anthropic being a notable example.
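As an illustration of the Semantic Kernel route, here is a minimal plugin sketch assuming the Microsoft.SemanticKernel 1.x conventions. The InventoryPlugin class and its logic are hypothetical, invented for this example; the attributes are what let an LLM discover and call the method:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class InventoryPlugin
{
    // [KernelFunction] plus Description metadata make this callable by an LLM.
    [KernelFunction, Description("Returns the units in stock for a product.")]
    public int GetStock([Description("The product code")] string productCode)
    {
        // In a real system this would query your database or SDK.
        return productCode == "ABC-123" ? 42 : 0;
    }
}

// Registering the plugin so an agent can use it:
// var builder = Kernel.CreateBuilder();
// builder.Plugins.AddFromType<InventoryPlugin>();
// Kernel kernel = builder.Build();
```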
Implementation Considerations
When automating your system, you need to consider:
- Available Integration Options: Not all systems provide an SDK or API, which can limit automation possibilities.
- Interaction Protocol Choice: You’ll need to decide among a REST API, a lower-level HTTP integration, or the Model Context Protocol.
This overview should help you understand the various levels of interaction available when automating your application. What’s your preferred method for integrating LLMs with your applications? I’d love to hear your thoughts and experiences.
by Joche Ojeda | Jan 21, 2025 | Uncategorized
During my recent AI research break, I found myself taking a walk down memory lane, reflecting on my early career in data analysis and ETL operations. This journey brought me back to an interesting aspect of software development that has evolved significantly over the years: the management of shared libraries.
The VB6 Era: COM Components and DLL Hell
My journey began with Visual Basic 6, where shared libraries were managed through COM components. The concept seemed straightforward: store shared DLLs in the Windows System directory (typically C:\Windows\System32) and register them using regsvr32.exe. The Windows Registry kept track of these components under HKEY_CLASSES_ROOT.
However, this system had a significant flaw that we now famously know as “DLL Hell.” Let me share a practical example: Imagine you have two systems, A and B, both using Crystal Reports 7. If you uninstall either system, the other would break because the shared DLL would be removed. Version control was primarily managed by location, making it a precarious system at best.
Enter .NET Framework: The GAC Revolution
When Microsoft introduced the .NET Framework, it brought a sophisticated solution to these problems: the Global Assembly Cache (GAC). Located at C:\Windows\Microsoft.NET\assembly\ (for .NET 4.0 and later), the GAC represented a significant improvement in shared library management.
The most revolutionary aspect was the introduction of assembly identity. Instead of relying solely on filenames and locations, each assembly now had a unique identity consisting of:
- Simple name (e.g., “MyCompany.MyLibrary”)
- Version number (e.g., “1.0.0.0”)
- Culture information
- Public key token
A typical assembly full name would look like this:
MyCompany.MyLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
This robust identification system meant that multiple versions of the same assembly could coexist peacefully, solving many of the versioning nightmares that plagued the VB6 era.
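At runtime, .NET can parse such a full name into its identity parts. A quick sketch, using the example name from the text:

```csharp
using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        // AssemblyName parses the four parts of an assembly identity.
        var name = new AssemblyName(
            "MyCompany.MyLibrary, Version=1.0.0.0, Culture=neutral, " +
            "PublicKeyToken=b77a5c561934e089");

        Console.WriteLine(name.Name);        // MyCompany.MyLibrary
        Console.WriteLine(name.Version);     // 1.0.0.0
        Console.WriteLine(name.CultureName); // "" (neutral)
        Console.WriteLine(BitConverter.ToString(
            name.GetPublicKeyToken() ?? Array.Empty<byte>()));
    }
}
```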
The Modern Approach: Private Dependencies
Fast forward to 2025, and we’re living in what I call the “brave new world” of .NET across multiple operating systems. The landscape has changed dramatically. Storage is no longer the premium resource it once was, and the trend has shifted away from shared libraries toward application-local deployment.
Modern applications often ship with their own private version of the .NET runtime and dependencies. This approach eliminates the risks associated with shared components and gives applications complete control over their runtime environment.
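This deployment model can be seen in a single command: a self-contained publish bundles a private copy of the runtime with the application (the runtime identifier below is just an example):

```
dotnet publish -c Release -r win-x64 --self-contained true
```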
Reflection on Technology Evolution
While researching Blazor’s future and seeing discussions about Microsoft’s technology choices, I’m reminded that technology evolution is a constant journey. Organizations move slowly in production environments, and that’s often for good reason. The shift from COM components to GAC to private dependencies wasn’t just a technical evolution – it was a response to real-world problems and changing resources.
This journey from VB6 to modern .NET reveals an interesting pattern: sometimes the best solution isn’t sharing resources but giving each application its own isolated environment. It’s fascinating how the decreasing cost of storage and the increasing need for reliability have transformed our approach to dependency management.
As I return to my AI research, this trip down memory lane serves as a reminder that while technology constantly evolves, understanding its history helps us appreciate the solutions we have today and better prepare for the challenges of tomorrow.