by Joche Ojeda | Jan 21, 2025 | Uncategorized
During my recent AI research break, I found myself taking a walk down memory lane, reflecting on my early career in data analysis and ETL operations. This journey brought me back to an interesting aspect of software development that has evolved significantly over the years: the management of shared libraries.
The VB6 Era: COM Components and DLL Hell
My journey began with Visual Basic 6, where shared libraries were managed through COM components. The concept seemed straightforward: store shared DLLs in the Windows System directory (typically C:\Windows\System32) and register them using regsvr32.exe. The Windows Registry kept track of these components under HKEY_CLASSES_ROOT.
However, this system had a significant flaw that we now famously know as “DLL Hell.” Let me share a practical example: imagine two systems, A and B, both using Crystal Reports 7. If you uninstall either system, the other breaks, because the shared DLL is removed along with it. Versioning was managed primarily by file location, making it a precarious system at best.
Enter .NET Framework: The GAC Revolution
When Microsoft introduced the .NET Framework, it brought a sophisticated solution to these problems: the Global Assembly Cache (GAC). Located at C:\Windows\Microsoft.NET\assembly\ (for .NET 4.0 and later), the GAC represented a significant improvement in shared library management.
The most revolutionary aspect was the introduction of assembly identity. Instead of relying solely on filenames and locations, each assembly now had a unique identity consisting of:
- Simple name (e.g., “MyCompany.MyLibrary”)
- Version number (e.g., “1.0.0.0”)
- Culture information
- Public key token
A typical assembly full name would look like this:
MyCompany.MyLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
This robust identification system meant that multiple versions of the same assembly could coexist peacefully, solving many of the versioning nightmares that plagued the VB6 era.
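To see those four parts in action, here’s a small C# sketch that takes the example identity above apart (the library name is fictional, and loading from the GAC itself only applies on the .NET Framework):

using System;
using System.Reflection;

// Parse the full display name into its identity components.
var name = new AssemblyName(
    "MyCompany.MyLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");

Console.WriteLine(name.Name);    // MyCompany.MyLibrary
Console.WriteLine(name.Version); // 1.0.0.0
Console.WriteLine(BitConverter.ToString(name.GetPublicKeyToken())); // B7-7A-5C-56-19-34-E0-89

// On the .NET Framework, Assembly.Load(name) probes the GAC using this
// identity, which is what lets two versions of one library coexist.

Because binding goes through the identity rather than a file path, replacing a DLL on disk could no longer silently break an unrelated application.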
The Modern Approach: Private Dependencies
Fast forward to 2025, and we’re living in what I call the “brave new world” of .NET across multiple operating systems. The landscape has changed dramatically. Storage is no longer the premium resource it once was, and the trend has shifted away from shared libraries toward application-local deployment.
Modern applications often ship with their own private version of the .NET runtime and dependencies. This approach eliminates the risks associated with shared components and gives applications complete control over their runtime environment.
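In practice this takes little more than a couple of project properties. Here’s a minimal sketch of a self-contained publish configuration (the target framework and runtime identifier are examples, not prescriptions):

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>net9.0</TargetFramework>
  <!-- Ship the runtime and all dependencies with the app itself -->
  <SelfContained>true</SelfContained>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>

Running dotnet publish with this configuration produces output that carries its own copy of the runtime, so installing or removing another application can never break this one: exactly the failure mode DLL Hell used to cause.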
Reflection on Technology Evolution
While researching Blazor’s future and seeing discussions about Microsoft’s technology choices, I’m reminded that technology evolution is a constant journey. Organizations move slowly in production environments, and that’s often for good reason. The shift from COM components to GAC to private dependencies wasn’t just a technical evolution – it was a response to real-world problems and changing resources.
This journey from VB6 to modern .NET reveals an interesting pattern: sometimes the best solution isn’t sharing resources but giving each application its own isolated environment. It’s fascinating how the decreasing cost of storage and increasing need for reliability has transformed our approach to dependency management.
As I return to my AI research, this trip down memory lane serves as a reminder that while technology constantly evolves, understanding its history helps us appreciate the solutions we have today and better prepare for the challenges of tomorrow.
by Joche Ojeda | Jan 9, 2025 | dotnet
While researching useful features in .NET 9 that could benefit XAF/XPO developers, I discovered something particularly interesting: Version 7 GUIDs (RFC 9562 specification). These new GUIDs offer a crucial feature – they’re sortable.
This discovery brought me back to an issue I encountered two years ago while working on the SyncFramework. We faced a peculiar problem: Deltas were correctly generated but processed in the wrong order in production environments. The occurrences seemed random, and no clear pattern emerged.

Initially, I thought sorting the Deltas by their primary keys (GUIDs) would ensure they were processed in their generation order. That assumption proved incorrect. Through testing, I discovered that GUID generation couldn’t be trusted to be sequential: whether the GUIDs were generated in C# or at the database level, there was no ordering guarantee, and different database engines could even sort GUIDs differently. To address this, I implemented a sequence service as a workaround.

Enter .NET 9 with its Version 7 GUIDs (conforming to the RFC 9562 specification). These new GUIDs are genuinely sequential, making them reliable for sorting operations.
To demonstrate this improvement, I created a test solution for XAF with a custom base object. The key implementation occurs in the OnSaving method:
protected override void OnSaving()
{
    base.OnSaving();
    // Assign a key only to brand-new objects in the root unit of work
    // that don't yet have an identifier.
    if (!(Session is NestedUnitOfWork) && Session.IsNewObject(this) && oid.Equals(Guid.Empty))
    {
        oid = Guid.CreateVersion7(); // time-ordered (sequential) GUID, new in .NET 9
    }
}
Notice the use of CreateVersion7() instead of the traditional NewGuid(). For comparison, I also created another domain object using the traditional GUID generation:
protected override void OnSaving()
{
    base.OnSaving();
    if (!(Session is NestedUnitOfWork) && Session.IsNewObject(this) && oid.Equals(Guid.Empty))
    {
        oid = Guid.NewGuid(); // random (version 4) GUID, no ordering guarantee
    }
}
When creating multiple instances of the traditional GUID domain object, you’ll notice that the greater the time interval between instance creation, the less likely the GUIDs will maintain sequential ordering.
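If you want to see this outside of XAF, a minimal console sketch (the timings are only illustrative) makes the difference obvious:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

// Generate a handful of GUIDs of each kind, a few milliseconds apart,
// then check whether sorting them preserves creation order.
var v7 = new List<Guid>();
var v4 = new List<Guid>();
for (int i = 0; i < 10; i++)
{
    v7.Add(Guid.CreateVersion7());
    v4.Add(Guid.NewGuid());
    Thread.Sleep(5); // keep the creation timestamps distinct
}

Console.WriteLine(v7.SequenceEqual(v7.OrderBy(g => g))); // True: creation order survives sorting
Console.WriteLine(v4.SequenceEqual(v4.OrderBy(g => g))); // almost certainly False

One caveat: the Version 7 timestamp has millisecond precision, so GUIDs created within the same millisecond fall back to random bits for ordering; the Sleep call above keeps the example deterministic.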
Figure: GUID Version 7 (instances sort in creation order)
Figure: GUID Old Version (instances do not)
This new feature in .NET 9 could significantly simplify scenarios where sequential ordering is crucial, eliminating the need for additional sequence services in many cases. Here is the repo on GitHub. Happy coding until next time!
Related article
On my GUID, common problems using GUID identifiers | Joche Ojeda
by Joche Ojeda | Jul 3, 2024 | Uncategorized
Hey there, fellow developers! Today, let’s talk about a practice that can revolutionize the way we create, test, and perfect our software: dogfooding. If you’re wondering what dogfooding means, don’t worry, it’s not about what you feed your pets. In the tech world, “eating your own dog food” means using the software you develop in your day-to-day operations. Let’s dive into how this can be a game-changer for us.
Why Should We Dogfood?
- Catch Bugs Early: By using our own software, we become our first line of defense against bugs and glitches. Real-world usage uncovers issues that might slip through traditional testing. We get to identify and fix these problems before they ever reach our users.
- Enhance Quality Assurance: There’s no better way to ensure our software meets high standards than by relying on it ourselves. When our own work depends on our product, we naturally aim for higher quality and reliability.
- Improve User Experience: When we step into the shoes of our users, we experience firsthand what works well and what doesn’t. This unique perspective allows us to design more intuitive and user-friendly software.
- Create a Rapid Feedback Loop: Using our software internally means continuous and immediate feedback. This quick loop helps us iterate faster, refining features and squashing bugs swiftly.
- Build Credibility and Trust: When we show confidence in our software by using it ourselves, it sends a strong message to our users. It demonstrates that we believe in what we’ve created, enhancing our credibility and trustworthiness.
Real-World Examples
- Microsoft: They’re known for using early versions of Windows and Office within their own teams. This practice helps them catch issues early and improve their products before public release.
- Google: Googlers use beta versions of products like Gmail and Chrome. This internal testing helps them refine their offerings based on real-world use.
- Slack: Slack’s team relies on Slack for communication, constantly testing and improving the platform from the inside.
How to Start Dogfooding
- Integrate it Into Daily Work: Start by using your software for internal tasks. Whether it’s a project management tool, a communication app, or a new feature, make it part of your team’s daily routine.
- Encourage Team Participation: Get everyone on board. The more diverse the users, the more varied the feedback. Encourage your team to report bugs, suggest improvements, and share their experiences.
- Set Up Feedback Channels: Create dedicated channels for feedback. This could be as simple as a Slack channel or a more structured feedback form. Ensure that the feedback loop is easy and accessible.
- Iterate Quickly: Use the feedback to make quick improvements. Prioritize issues that affect usability and functionality. Show your team that their feedback is valued and acted upon.
Overcoming Challenges
- Avoid Bias: While familiarity is great, it can also lead to bias. Pair internal testing with external beta testers to get a well-rounded perspective.
- Manage Resources: Smaller teams might find it challenging to allocate resources for internal use. Start small and gradually integrate more aspects of your software into daily use.
- Consider Diverse Use Cases: Remember, your internal environment might not replicate all the conditions your users face. Keep an eye on diverse scenarios and edge cases.
Conclusion
Dogfooding is more than just a quirky industry term. It’s a powerful practice that can elevate the quality of our software, speed up our development cycles, and build stronger trust with our users. By using our software as our customers do, we gain invaluable insights that can lead to better, more reliable products. So, let’s embrace the dogfood, turn our critical eye inward, and create software that we’re not just proud of but genuinely rely on. Happy coding, and happy dogfooding! 🐶💻
Feel free to share your dogfooding experiences in the comments below. Let’s learn from each other and continue to improve our craft together!
by Joche Ojeda | Apr 28, 2024 | A.I
Introduction to Semantic Kernel
Hey there, fellow curious minds! Let’s talk about something exciting today—Semantic Kernel. But don’t worry, we’ll keep it as approachable as your favorite coffee shop chat.
What Exactly Is Semantic Kernel?
Imagine you’re in a magical workshop, surrounded by tools. Well, Semantic Kernel is like that workshop, but for developers. It’s an open-source Software Development Kit (SDK) that lets you create AI agents. These agents aren’t secret spies; they’re little programs that can answer questions, perform tasks, and generally make your digital life easier.
Here’s the lowdown:
- Open-Source: Think of it as a community project. People from all walks of tech life contribute to it, making it better and more powerful.
- Software Development Kit (SDK): Fancy term, right? But all it means is that it’s a set of tools for building software. Imagine it as your AI Lego set.
- Agents: Nope, not James Bond. These are like your personal AI sidekicks. They’re here to assist you, not save the world (although that would be cool).
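To make that concrete, here’s roughly what a “hello world” agent looks like with the 1.0 C# SDK (a sketch, not gospel: the model name and environment variable are placeholders, and the types come from the Microsoft.SemanticKernel package):

using System;
using Microsoft.SemanticKernel;

// Build a kernel, plug in a chat model, and ask it something.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4",  // placeholder model name
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
var kernel = builder.Build();

// InvokePromptAsync runs the prompt through the configured model.
var result = await kernel.InvokePromptAsync("Explain dogfooding in one sentence.");
Console.WriteLine(result);

That’s the whole loop: the kernel is the workshop, the connector is one tool on the bench, and your prompts (or full plugins) are what you build with them.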
A Quick History Lesson
About a year ago, Semantic Kernel stepped onto the stage. Since then, it’s been striding confidently, like a seasoned performer. Here are some backstage highlights:
- GitHub Stardom: On March 17th, 2023, it made its grand entrance on GitHub. And guess what? It earned more than 17,000 stars (around 18.2k as I write this)! That’s like being the coolest kid in the coding playground.
- Downloads Galore: The C# kernel (don’t worry, we’ll explain what that is) had 1,000,000+ NuGet downloads. It’s like everyone wanted a piece of the action.
- VS Code Extension: Over 25,000 downloads! Imagine it as a magical wand for your code editor.
And hey, the .NET kernel even threw a party: it reached a 1.0 release! The Python and Java kernels are close behind with their 1.0 release candidates. It’s like they’re all graduating from AI university.
Why Should You Care?
Now, here’s the fun part. Why should you, someone with a lifetime of wisdom and curiosity, care about this?
- Microsoft Magic: Semantic Kernel loves hanging out with Microsoft products. It’s like they’re best buddies. So, when you use it, you get to tap into the power of Microsoft’s tech universe. Fancy, right? Learn more
- No Code Rewrite Drama: Imagine you have a favorite recipe (let’s say it’s your grandma’s chocolate chip cookies). Now, imagine you want to share it with everyone. Semantic Kernel lets you do that without rewriting the whole recipe. You just add a sprinkle of AI magic! Check it out
- LangChain vs. Semantic Kernel: These two are like rival chefs. Both want to cook up AI goodness. But while LangChain (built around Python and JavaScript) comes with a full spice rack of tools, Semantic Kernel is more like a secret ingredient: it’s lightweight and supports not just Python but also C# and Java. Plus, much like the Assistants API, it spares you from fussing over memory and context windows. Just cook and serve!
So, my fabulous friend, whether you’re a seasoned developer or just dipping your toes into the AI pool, Semantic Kernel has your back. It’s like having a friendly AI mentor who whispers, “You got this!” And with its growing community and constant updates, Semantic Kernel is leading the way in AI development.
Remember, you don’t need a PhD in computer science to explore this—it’s all about curiosity, creativity, and a dash of Semantic Kernel magic. 🌟✨
Ready to dive in? Check out the Semantic Kernel GitHub repository for the latest updates.