by Joche Ojeda | Oct 8, 2024 | A.I, Blazor, Semantic Kernel
Are you excited to bring powerful AI chat completions to your web application? I sure am! In this post, we’ll walk through how to integrate the DevExpress Chat component with the Semantic Kernel using OpenAI. This combination can make your app more interactive and intelligent, and it’s surprisingly simple to set up. Let’s dive in!
Step 1: Adding NuGet Packages
First, let’s ensure we have all the necessary packages. Open your DevExpress.AI.Samples.Blazor.csproj file and add the following NuGet references:
<ItemGroup>
    <PackageReference Include="Microsoft.KernelMemory.Abstractions" Version="0.78.241007.1" />
    <PackageReference Include="Microsoft.KernelMemory.Core" Version="0.78.241007.1" />
    <PackageReference Include="Microsoft.SemanticKernel" Version="1.21.1" />
</ItemGroup>
This will bring in the core components of Semantic Kernel to power your chat completions.
Step 2: Setting Up Your Kernel in Program.cs
Next, we’ll configure the Semantic Kernel and OpenAI integration. Add the following code to your Program.cs to create the kernel and set up the chat completion service:
// Usings needed for this snippet (place them at the top of Program.cs)
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using OpenAI;

// Create your OpenAI client
string OpenAiKey = Environment.GetEnvironmentVariable("OpenAiTestKey");
var client = new OpenAIClient(new System.ClientModel.ApiKeyCredential(OpenAiKey));

// Add Semantic Kernel and register the chat completion service
var KernelBuilder = Kernel.CreateBuilder();
KernelBuilder.AddOpenAIChatCompletion("gpt-4o", client);
var sk = KernelBuilder.Build();
var ChatService = sk.GetRequiredService<IChatCompletionService>();
builder.Services.AddSingleton<IChatCompletionService>(ChatService);
This step is crucial because it connects your app to OpenAI via the Semantic Kernel and sets up the chat completion service that will drive the AI responses in your chat.
Step 3: Creating the Chat Component
Now that we’ve got our services ready, it’s time to set up the chat component. We’ll define the chat interface in our Razor page. Here’s how you can do that:
Razor Section:
@page "/sk"
@using DevExpress.AIIntegration.Blazor.Chat
@using AIIntegration.Services.Chat;
@using Microsoft.SemanticKernel.ChatCompletion
@using System.Diagnostics
@using System.Text.Json
@using System.Text
@inject IChatCompletionService chatCompletionsService;
@inject IJSRuntime JSRuntime;
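The page body itself (omitted above) comes down to a single DxAIChat tag wired to the handler defined in the code section below; a minimal sketch, with styling left out:

<DxAIChat MessageSent="MessageSent" />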
This UI will render a clean chat interface using DevExpress’s DxAIChat component, which is connected to our Semantic Kernel chat completion service.
Code Section:
Now, let’s handle the interaction logic. Here’s the code that powers the chat backend:
@code {
    ChatHistory ChatHistory = new ChatHistory();

    async Task MessageSent(MessageSentEventArgs args)
    {
        // Add the user's message to the chat history
        ChatHistory.AddUserMessage(args.Content);

        // Get a response from the chat completion service
        var Result = await chatCompletionsService.GetChatMessageContentAsync(ChatHistory);

        // Extract the response content
        string MessageContent = Result.InnerContent.ToString();
        Debug.WriteLine("Message from chat completion service: " + MessageContent);

        // Add the assistant's message to the history
        ChatHistory.AddAssistantMessage(MessageContent);

        // Send the response to the UI
        var message = new Message(MessageRole.Assistant, MessageContent);
        args.SendMessage(message);
    }
}
With this in place, every time the user sends a message, the chat completion service will process the conversation history and generate a response from OpenAI. The result is then displayed in the chat window.
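If you want to steer the assistant’s tone or role, ChatHistory also supports seeding the conversation with a system message before the first user turn. A small sketch; the prompt text is just an example:

ChatHistory ChatHistory = new ChatHistory();

protected override void OnInitialized()
{
    // Every completion will see these instructions at the top of the history.
    ChatHistory.AddSystemMessage("You are a concise, friendly assistant for this app.");
}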
Step 4: Run Your Project
Before running the project, ensure that the environment variable for the OpenAI key (OpenAiTestKey) is set. This key is necessary for the integration to communicate with OpenAI’s API.
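To fail fast when the key is missing, you can add a guard right after reading the variable in Program.cs; a minimal sketch:

string OpenAiKey = Environment.GetEnvironmentVariable("OpenAiTestKey");
if (string.IsNullOrWhiteSpace(OpenAiKey))
{
    // Stop at startup with a clear message instead of failing on the first chat request.
    throw new InvalidOperationException(
        "Set the OpenAiTestKey environment variable before starting the app.");
}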
Now, you’re ready to test! Simply run your project and navigate to https://localhost:58108/sk. Voilà! You’ll see a beautiful, AI-powered chat interface waiting for your input.
Conclusion
And that’s it! You’ve successfully integrated the DevExpress Chat component with the Semantic Kernel for AI-powered chat completions. Now, you can take your user interaction to the next level with intelligent, context-aware responses. The possibilities are endless with this integration—whether you’re building a customer support chatbot, a productivity assistant, or something entirely new.
Let me know how your integration goes, and feel free to share what cool things you build with this!
You can find the full implementation on GitHub.
by Joche Ojeda | Sep 26, 2024 | A.I, DevExpress, Semantic Kernel
If you’re a Blazor developer looking to integrate AI-powered chat functionality into your applications, the new DevExpress DxAIChat component offers a turnkey solution. It’s designed to make building chat interfaces as easy as possible, with out-of-the-box support for simple chats, virtual assistants, and even Retrieval-Augmented Generation (RAG) scenarios.
The best part? You don’t have to start from scratch—DevExpress provides a range of pre-built examples, making it easy to get started and customize to your needs. Whether you’re aiming for a basic chatbot or a more complex AI assistant, this component has you covered.
You can use any OpenAI-compatible service with the examples, such as Ollama, OpenAI, or Azure OpenAI. The current DevExpress example uses Azure, like this:
using DevExpress.AIIntegration;
...
string azureOpenAIEndpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string azureOpenAIKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
string deploymentName = "YourModelDeploymentName";
...
builder.Services.AddDevExpressBlazor();
builder.Services.AddDevExpressAI((config) => {
    config.RegisterChatClientOpenAIService(
        new AzureOpenAIClient(
            new Uri(azureOpenAIEndpoint),
            new AzureKeyCredential(azureOpenAIKey)
        ), deploymentName);
    // or call the following method to use self-hosted Ollama models
    //config.RegisterChatClientOllamaAIService("http://localhost:11434/api/chat", "llama3.1");
});
I tested with the OpenAI API instead of Azure, so my code looks like this:
string azureOpenAIEndpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string azureOpenAIKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
string OpenAiKey = Environment.GetEnvironmentVariable("OpenAiTestKey");
builder.Services.AddDevExpressBlazor();
builder.Services.AddDevExpressAI((config) => {
    //var client = new AzureOpenAIClient(
    //    new Uri(azureOpenAIEndpoint),
    //    new AzureKeyCredential(azureOpenAIKey));

    // OpenAI model IDs differ from Azure deployment names: Azure=gpt4o, OpenAI=gpt-4o
    var client = new OpenAIClient(new System.ClientModel.ApiKeyCredential(OpenAiKey));
    config.RegisterChatClientOpenAIService(client, "gpt-4o");
    config.RegisterOpenAIAssistants(client, "gpt-4o");
});
Notice that the model IDs in Azure and OpenAI are different:
- Azure=gpt4o
- OpenAI=gpt-4o
These are the URLs for the different examples:
- Chat: https://localhost:53340/
- Assistant/RAG: https://localhost:53340/assistant
- Streaming: https://localhost:53340/streaming
I’m super happy that the DevExpress components are doing all the heavy lifting and boilerplate code for us. I have developed the same scenarios using Semantic Kernel, and even though there is not much code to write, you still face the challenge of developing a responsive UI.
For more information and to see the examples in action, check out the full article.
by Joche Ojeda | Sep 4, 2024 | A.I, Semantic Kernel, XPO
In today’s AI-driven world, the ability to quickly and efficiently store, retrieve, and manage data is crucial for developing sophisticated applications. One tool that helps facilitate this is the Semantic Kernel, a lightweight, open-source development kit designed for integrating AI models into C#, Python, or Java applications. It enables rapid enterprise-grade solutions by serving as an effective middleware.
One of the key concepts in Semantic Kernel is memory—a collection of records, each containing a timestamp, metadata, embeddings, and a key. These memory records can be stored in various ways, depending on how you implement the interfaces. This flexibility allows you to define the storage mechanism, which means you can choose any database solution that suits your needs.
In this blog post, we’ll walk through how to use the IMemoryStore interface in Semantic Kernel and implement a custom memory store using DevExpress XPO, an ORM (Object-Relational Mapping) tool that can interact with over 14 database engines with a single codebase.
Why Use DevExpress XPO ORM?
DevExpress XPO is a powerful, free-to-use ORM created by DevExpress that abstracts the complexities of database interactions. It supports a wide range of database engines such as SQL Server, MySQL, SQLite, Oracle, and many others, allowing you to write database-independent code. This is particularly helpful when dealing with a distributed or multi-environment system where different databases might be used.
By using XPO, we can seamlessly create, update, and manage memory records in various databases, making our application more flexible and scalable.
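For example, pointing XPO at a different engine is mostly a connection-string change. A sketch, assuming the SQLite and SQL Server providers that ship with XPO and placeholder database names:

using DevExpress.Xpo;
using DevExpress.Xpo.DB;

// Each provider builds its own connection string; the entity code stays identical.
string sqliteConn = SQLiteConnectionProvider.GetConnectionString("memories.db");
string mssqlConn = MSSqlConnectionProvider.GetConnectionString("localhost", "MemoriesDb");

// Swap sqliteConn for mssqlConn (or any other provider) without touching the entities.
IDataLayer dataLayer = XpoDefault.GetDataLayer(sqliteConn, AutoCreateOption.DatabaseAndSchema);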
Implementing a Custom Memory Store with DevExpress XPO
To integrate XPO with Semantic Kernel’s memory management, we’ll implement a custom memory store by defining a database entry class and a database interaction class. Then, we’ll complete the process by implementing the IMemoryStore interface.
Step 1: Define a Database Entry Class
Our first step is to create a class that represents the memory record. In this case, we’ll define an XpoDatabaseEntry class that maps to a database table where memory records are stored.
public class XpoDatabaseEntry : XPLiteObject {
    private string _oid;
    private string _collection;
    private string _timestamp;
    private string _embeddingString;
    private string _metadataString;
    private string _key;

    public XpoDatabaseEntry(Session session) : base(session) { }

    // Each setter routes through SetPropertyValue so XPO can track changes.
    [Key(false)]
    public string Oid { get => _oid; set => SetPropertyValue(nameof(Oid), ref _oid, value); }
    public string Key { get => _key; set => SetPropertyValue(nameof(Key), ref _key, value); }
    public string MetadataString { get => _metadataString; set => SetPropertyValue(nameof(MetadataString), ref _metadataString, value); }
    public string EmbeddingString { get => _embeddingString; set => SetPropertyValue(nameof(EmbeddingString), ref _embeddingString, value); }
    public string Timestamp { get => _timestamp; set => SetPropertyValue(nameof(Timestamp), ref _timestamp, value); }
    public string Collection { get => _collection; set => SetPropertyValue(nameof(Collection), ref _collection, value); }

    protected override void OnSaving() {
        if (this.Session.IsNewObject(this)) {
            this.Oid = Guid.NewGuid().ToString();
        }
        base.OnSaving();
    }
}
This class extends XPLiteObject from the XPO library, which provides methods to manage the record lifecycle within the database; each property setter routes through SetPropertyValue so XPO can track changes.
Step 2: Create a Database Interaction Class
Next, we’ll define an XpoDatabase class to abstract interaction with the data store. This class provides methods for creating tables, inserting, updating, and querying records.
internal sealed class XpoDatabase {
    public Task CreateTableAsync(IDataLayer conn) {
        using (Session session = new(conn)) {
            session.UpdateSchema(new[] { typeof(XpoDatabaseEntry).Assembly });
            session.CreateObjectTypeRecords(new[] { typeof(XpoDatabaseEntry).Assembly });
        }
        return Task.CompletedTask;
    }

    // Other database operations such as CreateCollectionAsync, InsertOrIgnoreAsync, etc.
}
This class acts as a bridge between Semantic Kernel and the database, allowing us to manage memory entries without having to write complex SQL queries.
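As an illustration, one of the elided helpers could look like the sketch below. This is hypothetical; the actual implementation in the fork linked at the end of the post may differ:

// Hypothetical sketch of InsertOrIgnoreAsync; field values are passed in
// rather than an entity, because XPO objects are bound to their session.
public Task InsertOrIgnoreAsync(IDataLayer conn, string collection, string key,
    string metadata, string embedding, string timestamp)
{
    using (Session session = new(conn))
    {
        // Only insert when no entry with this (collection, key) pair exists yet.
        bool exists = session.Query<XpoDatabaseEntry>()
            .Any(e => e.Collection == collection && e.Key == key);
        if (!exists)
        {
            new XpoDatabaseEntry(session)
            {
                Collection = collection,
                Key = key,
                MetadataString = metadata,
                EmbeddingString = embedding,
                Timestamp = timestamp
            }.Save();
        }
    }
    return Task.CompletedTask;
}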
Step 3: Implement the IMemoryStore Interface
Finally, we implement the IMemoryStore interface, which defines how the memory store behaves. This includes methods like UpsertAsync, GetAsync, and DeleteCollectionAsync.
public class XpoMemoryStore : IMemoryStore, IDisposable {
    public static async Task<XpoMemoryStore> ConnectAsync(string connectionString) {
        var memoryStore = new XpoMemoryStore(connectionString);
        await memoryStore._dbConnector.CreateTableAsync(memoryStore._dataLayer).ConfigureAwait(false);
        return memoryStore;
    }

    public async Task CreateCollectionAsync(string collectionName) {
        await this._dbConnector.CreateCollectionAsync(this._dataLayer, collectionName).ConfigureAwait(false);
    }

    // Other methods for interacting with memory records
}
The XpoMemoryStore class takes advantage of XPO’s ORM features, making it easy to create collections, store and retrieve memory records, and perform batch operations. Since Semantic Kernel doesn’t care where memory records are stored as long as the interfaces are correctly implemented, you can now store your memory records in any of the databases supported by XPO.
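Once everything is wired up, consuming the store is straightforward. A minimal usage sketch; the connection string and collection name are placeholders:

// ConnectAsync builds the data layer and ensures the schema exists.
var store = await XpoMemoryStore.ConnectAsync("memories.db");
await store.CreateCollectionAsync("articles");
// From here, Semantic Kernel reads and writes memory records through IMemoryStore.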
Advantages of Using XPO with Semantic Kernel
- Database Independence: You can switch between multiple databases without changing your codebase.
- Scalability: XPO’s ability to manage complex relationships and large datasets makes it ideal for enterprise-grade solutions.
- ORM Abstraction: With XPO, you avoid writing SQL queries and focus on high-level operations like creating and updating objects.
Conclusion
In this blog post, we’ve demonstrated how to integrate DevExpress XPO ORM with the Semantic Kernel using the IMemoryStore interface. This approach allows you to store AI-driven memory records in a wide variety of databases while maintaining a flexible, scalable architecture.
In future posts, we’ll explore specific use cases and how you can leverage this memory store in real-world applications. For the complete implementation, you can check out my GitHub fork.
Stay tuned for more insights and examples!
by Joche Ojeda | Jul 28, 2024 | PropertyEditors, XAF
Introduction
The eXpressApp Framework (XAF) from DevExpress is a versatile application framework that supports multiple UI platforms, including Windows Forms and Blazor. Maintaining separate property editors for each platform can be cumbersome. This article explores how to create unified property editors for both Windows Forms and Blazor by leveraging WebView for Windows Forms and the Monaco Editor, the editor used in Visual Studio Code.
Blazor Implementation

Windows Forms Implementation

Prerequisites
Before we begin, ensure you have the following installed:
- Visual Studio 2022 or later
- .NET 8.0 SDK or later
- DevExpress XAF 22.2 or later
Step 1: Create a XAF Application for Windows Forms and Blazor
- Create a New Solution:
- Open Visual Studio and create a new solution.
- Add two projects to this solution:
- A Windows Forms project.
- A Blazor project.
- Set Up XAF:
- Follow the DevExpress documentation to set up XAF in both projects. The official documentation is here.
Step 2: Create a Razor Class Library
- Create a Razor Class Library:
- Add a new Razor Class Library project to the solution.
- Name it XafVsCodeEditor.
- Design the Monaco Editor Component: a simplified sketch of the component follows below.
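The following is a simplified sketch of what the component can look like, assuming a hypothetical JS interop function (monacoInterop.create) that loads Monaco into a div; the full component lives in the XafVsCodeEditor library:

@inject IJSRuntime JS

<div id="editor" style="height: 100%; min-height: 300px;"></div>

@code {
    [Parameter]
    public IMonacoEditorData Value { get; set; }

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            // Pass the language and initial code carried by the data model
            // to Monaco ("monacoInterop.create" is an assumed interop name).
            await JS.InvokeVoidAsync("monacoInterop.create", "editor",
                Value.Language, Value.Code);
        }
    }
}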
With that, we are done with the shared library that we will reference in both the Blazor and Windows Forms projects.
Step 3: Integrate the Razor Class Library into Windows Forms
- Add NuGet References:
- In the Windows Forms project, add the following NuGet packages:
- Microsoft.AspNetCore.Components.WebView.WindowsForms
- XafVsCodeEditor (the Razor Class Library created earlier).
- You can see all the references in the csproj file.
- Change the Project Type: In order to add the ability to host Blazor components, we need to change the project SDK from Microsoft.NET.Sdk to Microsoft.NET.Sdk.Razor.
- Add Required Files:
- wwwroot: folder to host CSS, JavaScript, and the index.html.
- _Imports.razor: this file adds global imports. Source here.
- index.html: one of the most important files because it hosts a special blazor.webview.js to interact with the WebView. See here.
The official Microsoft tutorial is available here.
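For reference, the host page of a BlazorWebView is typically as small as the sketch below; the exact file is in the repository linked above, so treat the paths here as assumptions:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <base href="/" />
</head>
<body>
    <!-- The root component attaches to this selector ("#app" in CreateControlCore below) -->
    <div id="app"></div>
    <!-- blazor.webview.js bridges the WebView host and the Blazor runtime -->
    <script src="_framework/blazor.webview.js"></script>
</body>
</html>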
Step 4: Implementing the XAF Property Editors
I’m not going to show the full steps to create the property editors. Instead, I will focus on the most important parts of the editor. Let’s start with Windows.
In Windows Forms, the most important method is the one that creates the instance of the control, in this case the WebView. As you can see, this is where you instantiate the services that will be passed as parameters to the component (in our case, the data model). You can find the full implementation of the property editor for Windows here and the official DevExpress documentation here.
protected override object CreateControlCore()
{
    control = new BlazorWebView();
    control.Dock = DockStyle.Fill;
    var services = new ServiceCollection();
    services.AddWindowsFormsBlazorWebView();
    control.HostPage = "wwwroot\\index.html";
    var tags = MonacoEditorTagHelper.AddScriptTags;
    control.Services = services.BuildServiceProvider();
    parameters = new Dictionary<string, object>();
    if (PropertyValue == null)
    {
        PropertyValue = new MonacoEditorData() { Language = "markdown" };
    }
    parameters.Add("Value", PropertyValue);
    control.RootComponents.Add<MonacoEditorComponent>("#app", parameters);
    control.Size = new System.Drawing.Size(300, 300);
    return control;
}
Now, for the property editor for Blazor, you can find the full source code here and the official DevExpress documentation here.
protected override IComponentModel CreateComponentModel()
{
    var model = new MonacoEditorDataModel();
    model.ValueChanged = EventCallback.Factory.Create<IMonacoEditorData>(this, value => {
        model.Value = value;
        OnControlValueChanged();
        WriteValue();
    });
    return model;
}
One of the most important things to notice here is that in version 24 of XAF, Blazor property editors have been simplified so they require fewer layers of code. The "magical" databinding happens because the data model exposes a property with the same name and type as one of the parameters in the Blazor component.
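To illustrate, the component model can simply mirror the component's parameters. This is a sketch, assuming XAF's ComponentModelBase property helpers; MonacoEditorDataModel and ValueChanged come from the snippet above:

public class MonacoEditorDataModel : ComponentModelBase
{
    // Must match the Blazor component's "Value" parameter in name and type.
    public IMonacoEditorData Value
    {
        get => GetPropertyValue<IMonacoEditorData>();
        set => SetPropertyValue(value);
    }

    // Two-way binding counterpart assigned in CreateComponentModel above.
    public EventCallback<IMonacoEditorData> ValueChanged
    {
        get => GetPropertyValue<EventCallback<IMonacoEditorData>>();
        set => SetPropertyValue(value);
    }
}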
Step 5: Running the Application
Before we run our solution, we need to add a domain object that exposes a property of type IMonacoEditorData, the interface we associated with our property editor. Here is a sample domain object that has a property of type MonacoEditorData:
[DefaultClassOptions]
public class DomainObject1 : BaseObject, IXafEntityObject
{
    public DomainObject1(Session session) : base(session) { }

    public override void AfterConstruction()
    {
        base.AfterConstruction();
    }

    MonacoEditorData monacoEditorData;
    string text;

    public MonacoEditorData MonacoEditorData
    {
        get => monacoEditorData;
        set => SetPropertyValue(nameof(MonacoEditorData), ref monacoEditorData, value);
    }

    [Size(SizeAttribute.DefaultStringMappingFieldSize)]
    public string Text
    {
        get => text;
        set => SetPropertyValue(nameof(Text), ref text, value);
    }

    public void OnCreated()
    {
        this.MonacoEditorData = new MonacoEditorData("markdown", "");
        MonacoEditorData.PropertyChanged += SourceEditor_PropertyChanged;
    }

    void IXafEntityObject.OnSaving()
    {
        this.Text = this.MonacoEditorData.Code;
    }

    void IXafEntityObject.OnLoaded()
    {
        this.MonacoEditorData = new MonacoEditorData("markdown", this.Text);
        MonacoEditorData.PropertyChanged += SourceEditor_PropertyChanged;
    }

    private void SourceEditor_PropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        this.Text = this.MonacoEditorData.Code;
    }
}
As you can see, DomainObject1 implements the IXafEntityObject interface. We use the interface's lifecycle methods to load and save the editor's content into the Text property.
Now, build and run the solution. You should now have a Windows Forms application that hosts a Blazor property editor using WebView and the Monaco Editor, as well as a Blazor application using the same property editor.
You can find a working example here.
Conclusion
By leveraging WebView and the Monaco Editor, you can create unified property editors for both Windows Forms and Blazor in XAF applications. This approach simplifies maintenance and provides a consistent user experience across different platforms. With the flexibility of Blazor and the robustness of Windows Forms, you can build powerful and versatile property editors that cater to a wide range of user needs.
by Joche Ojeda | Jun 21, 2024 | Database, ORM
Why Compound Keys in Database Tables Are No Longer Valid
Introduction
In the realm of database design, compound keys were once a staple, largely driven by the need to adhere to normalization forms. However, the evolving landscape of technology and data management calls into question the continued relevance of these multi-attribute keys. This article explores the reasons why compound keys may no longer be the best choice and suggests a shift towards simpler, more maintainable alternatives like object identifiers (OIDs).
The Case Against Compound Keys
Complexity in Database Design
- Normalization Overhead: Historically, compound keys were used to satisfy normalization requirements, ensuring minimal redundancy and dependency. While normalization is still important, the rigidity it imposes can lead to overly complex database schemas.
- Business Logic Encapsulation: When compound keys include business logic, they can create dependencies that complicate data integrity and maintenance. Changes in business rules often necessitate schema alterations, which can be cumbersome.
Maintenance Challenges
- Data Integrity Issues: Compound keys can introduce challenges in maintaining data integrity, especially in large and complex databases. Ensuring the uniqueness and consistency of multi-attribute keys can be error-prone.
- Performance Concerns: Queries involving compound keys can become less efficient, as indexing and searching across multiple columns can be more resource-intensive compared to single-column keys.
The Shift Towards Object Identifiers (OIDs)
Simplified Design
- Single Attribute Keys: Using OIDs as primary keys simplifies the schema. Each row can be uniquely identified by a single attribute, making the design more straightforward and easier to understand.
- Decoupling Business Logic: OIDs help in decoupling the business logic from the database schema. Changes in business rules do not necessitate changes in the primary key structure, enhancing flexibility.
Easier Maintenance
- Improved Data Integrity: With a single attribute as the primary key, maintaining data integrity becomes more manageable. The likelihood of key conflicts is reduced, simplifying the validation process.
- Performance Optimization: OIDs allow for more efficient indexing and query performance. Searching and sorting operations are faster and less resource-intensive, improving overall database performance.
Revisiting Normalization
Historical Context
- Storage Constraints: Normalization rules were developed when data storage was expensive and limited. Reducing redundancy and optimizing storage was paramount.
- Modern Storage Solutions: Today, storage is relatively cheap and abundant. The strict adherence to normalization may not be as critical as it once was.
Balancing Act
- De-normalization for Performance: In modern databases, a balance between normalization and de-normalization can be beneficial. De-normalization can improve performance and simplify query design without significantly increasing storage costs.
- Practical Normalization: Applying normalization principles should be driven by practical needs rather than strict adherence to theoretical models. The goal is to achieve a design that is both efficient and maintainable.
ORM Design Preferences
Object-Relational Mappers (ORMs)
- Design with OIDs in Mind: Many ORMs, such as XPO from DevExpress, were originally designed to work with OIDs rather than compound keys. This preference simplifies database interaction and enhances compatibility with object-oriented programming paradigms.
- Support for Compound Keys: Although these ORMs support compound keys, their architecture and default behavior often favor the use of single-column OIDs, highlighting the practical advantages of simpler key structures in modern application development.
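To make the contrast concrete, here is a minimal XPO sketch; the class and property names are illustrative, not from any real schema. A single surrogate key (the Oid inherited from XPObject) replaces the old compound natural key, whose members become plain indexable columns:

using DevExpress.Xpo;

public class OrderLine : XPObject // XPObject supplies a single integer Oid primary key
{
    public OrderLine(Session session) : base(session) { }

    int orderNumber;
    int lineNumber;

    // Former compound-key members, now ordinary columns.
    [Indexed]
    public int OrderNumber
    {
        get => orderNumber;
        set => SetPropertyValue(nameof(OrderNumber), ref orderNumber, value);
    }

    public int LineNumber
    {
        get => lineNumber;
        set => SetPropertyValue(nameof(LineNumber), ref lineNumber, value);
    }
}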
Conclusion
The use of compound keys in database tables, driven by the need to fulfill normalization forms, may no longer be the best practice in modern database design. Simplifying schemas with object identifiers can enhance maintainability, improve performance, and decouple business logic from the database structure. As storage becomes less of a constraint, a pragmatic approach to normalization, balancing performance and data integrity, becomes increasingly important. Embracing these changes, along with leveraging ORM tools designed with OIDs in mind, can lead to more robust, flexible, and efficient database systems.