Using DevExpress Chat Component and Semantic Kernel ResponseFormat to show a product carousel

Today, when I woke up, it was sunny but really cold, and the weather forecast said that snow was expected.

So, I decided to order ramen and do a “Saturday at home” type of project. My tools of choice for this experiment are:

1) DevExpress Chat Component for Blazor

I’m thrilled they have this component. I once wrote my own chat component, and it’s a challenging task, especially given the variety of use cases.

2) Semantic Kernel

I’ve been experimenting with Semantic Kernel for a while now, and let me tell you—it’s a fantastic tool if you’re in the .NET ecosystem. It’s so cool to have native C# code to interact with AI services in a flexible way, making your code mostly agnostic to the AI provider—like a WCF for AIs.

Goal of the Experiment

The goal for today’s experiment is to render a list of products as a carousel within a chat conversation.

Configuration

To accomplish this, I’ll use prompt execution settings in Semantic Kernel to ensure that the response from the LLM is always in JSON format as a string.

var Settings = new OpenAIPromptExecutionSettings 
{ 
    MaxTokens = 500, 
    Temperature = 0.5, 
    ResponseFormat = "json_object" 
};

The key part here is the response format. The chat completion can respond in two ways:

  • Text: A simple text answer.
  • JSON Object: This format always returns a JSON object, with the structure provided as part of the prompt.

With this approach, we can deserialize the LLM’s response to an object that helps conditionally render the message content within the DevExpress Chat Component.

Structure

Here’s the structure I’m using:

public class MessageData
{
    public string Message { get; set; }
    public List<Option> Options { get; set; }
    public string MessageTemplateName { get; set; }
}

public class OptionSet
{
    public string Name { get; set; }
    public string Description { get; set; }
    public List<Option> Options { get; set; }
}

public class Option
{
    public string Image { get; set; }
    public string Url { get; set; }
    public string Description { get; set; }
}

  • MessageData: The structure the LLM must always return.
  • Option: A single option within a message (image, URL, and description), which also serves as data for a possible response.
  • OptionSet: A named list of options that we feed into the system prompt, as sketched below.
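
As a reference, here is a minimal sketch of how the {Structure} and {OptionSets} values used in the system prompt below can be produced by serializing these classes. The sample data and variable names are mine, not taken from the repository:

using System.Text.Json;

// Serializing an empty MessageData instance gives the LLM the exact shape to follow.
string Structure = JsonSerializer.Serialize(new MessageData
{
    Message = "",
    Options = new List<Option>(),
    MessageTemplateName = ""
});

// Hypothetical catalog of option sets the LLM is allowed to answer with.
string OptionSets = JsonSerializer.Serialize(new List<OptionSet>
{
    new OptionSet
    {
        Name = "CatCostumes",
        Description = "Halloween costumes for cats",
        Options = new List<Option>
        {
            new Option { Image = "./images/catblack.png", Url = "https://cat.com/black", Description = "Black cat costume" }
        }
    }
});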

Prompt Execution Settings

One more step on the Semantic Kernel side is adding the system prompt to the execution settings we created above:

Settings.ChatSystemPrompt = $"You need to answer using this JSON format with this structure {Structure} " +
                            $"Before giving an answer, check if it exists within this list of option sets {OptionSets}. " +
                            $"If your answer does not include options, the message template value should be 'Message'; otherwise, it should be 'Options'.";

In the prompt, we specify the response structure via {Structure}, provide the list of allowed option sets via {OptionSets}, and add a final instruction telling the LLM which message template name to use.
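
To tie these pieces together, here is a minimal sketch of how the request could be sent and the reply deserialized. The model name and the overall wiring are assumptions on my part; the repository linked at the end has the actual implementation:

using System.Text.Json;
using Microsoft.SemanticKernel;

// Assumed wiring: any Semantic Kernel chat connector works here; the model name is only an example.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

// Settings is the OpenAIPromptExecutionSettings instance configured above
// (json_object response format plus the ChatSystemPrompt).
var result = await kernel.InvokePromptAsync(
    "Show me a list of Halloween costumes for cats.",
    new KernelArguments(Settings));

// Because ResponseFormat is json_object, the reply should deserialize into MessageData.
var messageData = JsonSerializer.Deserialize<MessageData>(result.ToString());
Console.WriteLine(messageData?.MessageTemplateName);

This mirrors what the Razor template shown later does when it deserializes context.Content on the UI side.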

Example Requests and Responses

For example, when executing the following request:

  • Prompt: “Show me a list of Halloween costumes for cats.”

We’ll get a response like this from the LLM:

{
    "Message": "Please select one of the Halloween costumes for cats",
    "Options": [
        {"Image": "./images/catblack.png", "Url": "https://cat.com/black", "Description": "Black cat costume"},
        {"Image": "./images/catwhite.png", "Url": "https://cat.com/white", "Description": "White cat costume"},
        {"Image": "./images/catorange.png", "Url": "https://cat.com/orange", "Description": "Orange cat costume"}
    ],
    "MessageTemplateName": "Options"
}

With this JSON structure, we can conditionally render messages in the chat component as follows:

<DxAIChat CssClass="my-chat" MessageSent="MessageSent">
    <MessageTemplate>
        <div>
            @{
                if (context.Typing)
                {
                    <span>Loading...</span>
                }
                else
                {
                    MessageData md = null;
                    try
                    {
                        // Try to parse the message as our MessageData structure.
                        md = JsonSerializer.Deserialize<MessageData>(context.Content);
                    }
                    catch
                    {
                        // Not JSON (or not our structure): fall back to plain text below.
                        md = null;
                    }
                    if (md == null)
                    {
                        <div class="my-chat-content">
                            @context.Content
                        </div>
                    }
                    else
                    {
                        if (md.MessageTemplateName == "Options")
                        {
                            <div class="centered-carousel">
                                <Carousel class="carousel-container" Width="280" IsFade="true">
                                    @foreach (var option in md.Options)
                                    {
                                        <CarouselItem>
                                            <ChildContent>
                                                <div>
                                                    <img src="@option.Image" alt="demo-image" />
                                                    <Button Color="Color.Primary" class="carousel-button">@option.Description</Button>
                                                </div>
                                            </ChildContent>
                                        </CarouselItem>
                                    }
                                </Carousel>
                            </div>
                        }
                        else if (md.MessageTemplateName == "Message")
                        {
                            <div class="my-chat-content">
                                @md.Message
                            </div>
                        }
                    }
                }
            }
        </div>
    </MessageTemplate>
</DxAIChat>

Final Solution

You can find the full source code here: https://github.com/egarim/devexpress-ai-chat-samples and a short video of the solution in action here: https://youtu.be/dxMnOWbe3KA

 

An Introduction to Dynamic Proxies and Their Application in ORM Libraries with Castle.Core

Castle.Core: A Favourite Among C# Developers

Castle.Core, a component of the Castle Project, is an open-source project that provides common abstractions, including logging services. It has garnered popularity in the .NET community, boasting over 88 million downloads.

Dynamic Proxies: Acting as Stand-Ins

In the realm of programming, a dynamic proxy is a stand-in or surrogate for another object, controlling access to it. This proxy object can introduce additional behaviours such as logging, caching, or thread-safety before delegating the call to the original object.

The Impact of Dynamic Proxies

Dynamic proxies are instrumental in intercepting method calls and implementing aspect-oriented programming. This aids in managing cross-cutting concerns like logging and transaction management.

Castle DynamicProxy: Generating Proxies at Runtime

Castle DynamicProxy, a feature of Castle.Core, is a library that generates lightweight .NET proxies dynamically at runtime. It enables operations to be performed before and/or after the method execution on the actual object, without altering the class code.

Dynamic Proxies in the Realm of ORM Libraries

Dynamic proxies find significant application in Object-Relational Mapping (ORM) Libraries. ORM allows you to interact with your database, such as SQL Server, Oracle, or MySQL, in an object-oriented manner. Dynamic proxies are employed in ORM libraries to create lightweight objects that mirror database records, facilitating efficient data manipulation and retrieval.

Here’s a simple example of how to create a dynamic proxy using Castle.Core:


using Castle.DynamicProxy;

public class SimpleInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Before target call");
        try
        {
            invocation.Proceed(); //Calls the decorated instance.
        }
        catch (Exception)
        {
            Console.WriteLine("Target threw an exception!");
            throw;
        }
        finally
        {
            Console.WriteLine("After target call");
        }
    }
}

public class SomeClass
{
    public virtual void SomeMethod()
    {
        Console.WriteLine("SomeMethod in SomeClass called");
    }
}

public class Program
{
    public static void Main()
    {
        ProxyGenerator generator = new ProxyGenerator();
        SimpleInterceptor interceptor = new SimpleInterceptor();

        // Create a proxy of SomeClass that routes virtual calls through the interceptor.
        SomeClass proxy = generator.CreateClassProxy<SomeClass>(interceptor);
        proxy.SomeMethod();
    }
}
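
To connect this back to the ORM scenario mentioned earlier, the sketch below shows how an interceptor could simulate lazy loading. Nothing here comes from a real ORM; the entity, interceptor, and "database" are purely illustrative:

using System;
using Castle.DynamicProxy;

// Hypothetical entity: members must be virtual so DynamicProxy can intercept them.
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; } = string.Empty;
}

// Simulated lazy loading: the "database" is only hit on the first member access.
public class LazyLoadInterceptor : IInterceptor
{
    private bool loaded;

    public void Intercept(IInvocation invocation)
    {
        if (!loaded)
        {
            loaded = true;
            Console.WriteLine("Loading entity from the database (simulated)...");
            // A real ORM would run a query here and populate the proxied object's state.
        }
        invocation.Proceed();
    }
}

public static class LazyLoadingDemo
{
    public static void Run()
    {
        var generator = new ProxyGenerator();
        var customer = generator.CreateClassProxy<Customer>(new LazyLoadInterceptor());

        customer.Name = "Ada";            // first access triggers the simulated load
        Console.WriteLine(customer.Name); // later accesses go straight through
    }
}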

Conclusion

Castle.Core and its DynamicProxy feature are invaluable tools for C# programmers, enabling clean handling of cross-cutting concerns through dynamic proxies. With over 88 million downloads, Castle.Core’s widespread use in the .NET community underscores its utility. Whether you’re a novice or an experienced C# programmer, understanding and using dynamic proxies, particularly in ORM libraries, can significantly boost your skills. Dive into Castle.Core and dynamic proxies in your C# projects and take your programming to the next level. Happy coding!