My Journey Exploring the Oqtane Framework

Mental notes on architecture, learning by reading source, and what’s next.

OK — so it’s time for a new article. Lately, I’ve been diving deep into the Oqtane framework, and it’s been a beautiful journey. It reminds me of my early days with XAF from Developer Express—when I learned to think in software architecture and modern design patterns by simply reading the code. Back then, documentation was scarce. The advice was: “Look at the code.” I did—and that shaped a big part of my software education. It taught me that good source code is often self-explanatory.

Even though XAF is still our main tool at the office (Xari & BIT Frameworks), we’re expanding. We’re researching new divisions for Flutter and React, since some projects already use those front ends with an XAF backend. I also wanted to explore building client-server apps with a single .NET codebase that includes mobile—another reason Oqtane caught my eye.

Why Oqtane Caught My Attention

The Oqtane team is very responsive on GitHub. You can open a discussion and get thoughtful replies quickly. The source code is clean and educational—perfect for learning by reading. There are plenty of talks and videos on architecture and module development; some are a bit dated, but if you cross-check with the code, you’ll be fine.

I’ve learned there are two steps to mastering a framework: (1) immerse yourself in material (videos, code, docs), and (2) explain it to someone else. These notes do both—part research, part knowledge sharing.

Oqtane Video References

A Missing Clip Worth Finding

There’s one clip I couldn’t locate where Shaun Walker explains that .NET already provides the pieces for modern, multi-platform, server-and-client applications—but the ecosystem is fragmented. Oqtane unifies those pieces into a single .NET codebase. If I find it, I’ll make a highlight and share it.

On Learning and Time

I’m trying to publish as much as I can now because I’m about to start a new chapter: I’ll be joining the University of St. Petersburg to learn Russian as my second language. It’s a tough language—very different from Spanish or Italian—so I’ll likely have less time to write for a while. Better to document these experiments now than let them sit in my notes for months.

That’s it for today. I hope these clips and notes help you understand Oqtane the way they helped me. Stay tuned—and happy coding!


From Airport Chaos to Spec Clarity: How Writing Requirements Saved My Sanity

I thought vibe coding was chaotic at home. Try doing it while traveling halfway across the world.

Between layovers, hotel lobbies, and unpredictable Wi-Fi, I convinced myself I could keep momentum by letting AI carry the weight. Just toss it some prompts, let it generate code, and keep vibing in transit. Sounds good, right?

It wasn’t. Instead of progress, I found myself trapped in the same entropy loop as before—except now with added airport noise and bad coffee. It finally hit me: coding wasn’t the hard part anymore. The real challenge was distilling the chaos of my ideas into clear, executable requirements.

The Travel Chaos of Vibe Coding

While bouncing from Saint Petersburg to El Salvador, I leaned on vibe coding like a crutch. I threw half-formed prompts at the AI:

  • “Build me a service that works offline.”
  • “Hook this into a booking flow.”
  • “Make it sync when online again.”

And, of course, the AI delivered: endless snippets, scaffolds, and fragments. But none of it fit together. It was like watching a band jam without ever agreeing on the key. Six hours in, all I had was a disjointed mess—again.

Enter GitHub Spec Kit and New Perspectives

Somewhere between flights, I stumbled on GitHub Spec Kit, thanks to a Visual Studio Code podcast episode: Let it Cook – Introducing Spec Kit for Spec-Driven Development! (Episode 13).

Not long after, I tuned into the Merge Conflict podcast: All in on Spec-Driven Development (Episode 479), where James Montemagno and Frank Krueger broke down what spec-driven workflows really mean for developers.

Spec Kit showed me a different angle: instead of treating the AI like a mind reader, treat it like a contractor. Write clear specs, break them down into tasks, and then let the AI handle execution.

James and Frank went further. They contrasted waterfall (where everything is specified upfront) with agile (where progress is iterative and requirements evolve). Their point was simple but profound: no matter the methodology, you can’t skip requirements. Even agile depends on clarity at each iteration.

The Programmer’s True Role

That’s when it clicked: my job as a human programmer isn’t to crank out lines of code anymore. The AI can do that faster than I ever could. My job is to reduce entropy.

I take vague ideas, half-baked business rules, and chaotic travel thoughts—and refine them into something structured. That’s the blueprint AI thrives on. Without it, I’m asking the model to improvise a symphony from random notes. With it, I get clean, working solutions in minutes.
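For example, the vague “make it sync when online again” prompt from earlier could be tightened into something like this (an illustrative sketch, not an actual spec from my project):

  • When the device is offline, new bookings are persisted to a local queue.
  • When connectivity returns, queued bookings sync to the server in the order they were created.
  • If the server rejects a booking (for example, the slot was taken), the user is prompted to choose another slot.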

Why Requirements Are the Real Magic

Spec Kit and similar tools are amazing, but they don’t remove the hardest part—they expose it. Writing good requirements is the bottleneck. Once that’s done, the rest flows.

Think of it this way:

  • Vibe coding while traveling = chaos squared.
  • Spec-driven clarity = progress even in noisy, unpredictable environments.

It’s not about choosing waterfall or agile. It’s about embracing the timeless truth that clarity upfront—whether in a full spec or a tight user story—is what makes AI effective.


Conclusion

My journey from vibe coding on the road to spec-driven clarity taught me that code is no longer the hardest problem. The real magic lies in writing requirements that reduce chaos and give AI a fighting chance to deliver.

So next time you feel tempted to vibe code—whether at home or 30,000 feet in the air—pause. Write the requirement. Structure the idea. Then let the AI do what it does best: execute clarity at scale.

Because in the end, humans reduce entropy. AI executes the result.

Using DevExpress Chat Component and Semantic Kernel ResponseFormat to show a product carousel

Today, when I woke up, it was sunny but really cold, and the weather forecast said that snow was expected.

So, I decided to order ramen and do a “Saturday at home” type of project. My tools of choice for this experiment are:

1) DevExpress Chat Component for Blazor

I’m thrilled they have this component. I once wrote my own chat component, and it’s a challenging task, especially given the variety of use cases.

2) Semantic Kernel

I’ve been experimenting with Semantic Kernel for a while now, and let me tell you—it’s a fantastic tool if you’re in the .NET ecosystem. It’s so cool to have native C# code to interact with AI services in a flexible way, making your code mostly agnostic to the AI provider—like a WCF for AIs.

Goal of the Experiment

The goal for today’s experiment is to render a list of products as a carousel within a chat conversation.

Configuration

To accomplish this, I’ll use prompt execution settings in Semantic Kernel to ensure that the response from the LLM is always in JSON format as a string.

var Settings = new OpenAIPromptExecutionSettings 
{ 
    MaxTokens = 500, 
    Temperature = 0.5, 
    ResponseFormat = "json_object" 
};

The key part here is the response format. The chat completion can respond in two ways:

  • Text: A simple text answer.
  • JSON Object: This format always returns a JSON object, with the structure provided as part of the prompt.

With this approach, we can deserialize the LLM’s response to an object that helps conditionally render the message content within the DevExpress Chat Component.
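To make this concrete, here is a minimal sketch (my wiring, not code from the original post) of how these settings might be passed to a Semantic Kernel prompt call and the reply deserialized. It assumes an already-configured Kernel instance named kernel, and uses the MessageData class defined in the next section:

using Microsoft.SemanticKernel;
using System.Text.Json;

// Invoke a prompt with the JSON-only execution settings defined above.
var result = await kernel.InvokePromptAsync(
    "Show me a list of Halloween costumes for cats",
    new KernelArguments(Settings));

// The LLM is constrained to reply with JSON, so we can deserialize it
// straight into the structure the chat component will render.
var messageData = JsonSerializer.Deserialize<MessageData>(result.ToString());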

Structure

Here’s the structure I’m using:

public class MessageData
{
    public string Message { get; set; }
    public List<Option> Options { get; set; }
    public string MessageTemplateName { get; set; }
}

public class OptionSet
{
    public string Name { get; set; }
    public string Description { get; set; }
    public List<Option> Options { get; set; }
}

public class Option
{
    public string Image { get; set; }
    public string Url { get; set; }
    public string Description { get; set; }
}

  • MessageData: The structure the LLM will always return.
  • Option: A single selectable item (image, URL, and description) that also serves as the data for a possible response.
  • OptionSet: A named collection of options that we feed into the prompt execution settings as the list of possible responses.

Prompt Execution Settings

One more step on the Semantic Kernel side is adding a system prompt to the execution settings we created earlier:

Settings.ChatSystemPrompt = $"You need to answer using this JSON format with this structure {Structure} " +
                            $"Before giving an answer, check if it exists within this list of option sets {OptionSets}. " +
                            $"If your answer does not include options, the message template value should be 'Message'; otherwise, it should be 'Options'.";

In the prompt, we specify the structure {Structure} we want as a response, provide a list of possible options for the message in the {OptionSets} variable, and add a final line to guide the LLM on which template type to use.
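The snippet doesn’t show where {Structure} and {OptionSets} come from. One plausible way to build them (an assumption on my part, based on the classes above) is to serialize an empty MessageData and your catalog of option sets with System.Text.Json:

// Hypothetical setup: serialize the expected response shape and the
// available option sets so they can be embedded in the system prompt.
// `allOptionSets` is an assumed List<OptionSet> loaded from your catalog.
var Structure = JsonSerializer.Serialize(new MessageData
{
    Message = "",
    Options = new List<Option>(),
    MessageTemplateName = ""
});

var OptionSets = JsonSerializer.Serialize(allOptionSets);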

Example Requests and Responses

For example, when executing the following request:

  • Prompt: “Show me a list of Halloween costumes for cats.”

We’ll get this response from the LLM:

{
    "Message": "Please select one of the Halloween costumes for cats",
    "Options": [
        {"Image": "./images/catblack.png", "Url": "https://cat.com/black", "Description": "Black cat costume"},
        {"Image": "./images/catwhite.png", "Url": "https://cat.com/white", "Description": "White cat costume"},
        {"Image": "./images/catorange.png", "Url": "https://cat.com/orange", "Description": "Orange cat costume"}
    ],
    "MessageTemplateName": "Options"
}

With this JSON structure, we can conditionally render messages in the chat component as follows:

@using System.Text.Json

<DxAIChat CssClass="my-chat" MessageSent="MessageSent">
    <MessageTemplate>
        <div>
            @{
                if (context.Typing)
                {
                    <span>Loading...</span>
                }
                else
                {
                    MessageData md = null;
                    try
                    {
                        md = JsonSerializer.Deserialize<MessageData>(context.Content);
                    }
                    catch
                    {
                        md = null;
                    }
                    if (md == null)
                    {
                        <div class="my-chat-content">
                            @context.Content
                        </div>
                    }
                    else
                    {
                        if (md.MessageTemplateName == "Options")
                        {
                            <div class="centered-carousel">
                                <Carousel class="carousel-container" Width="280" IsFade="true">
                                    @foreach (var option in md.Options)
                                    {
                                        <CarouselItem>
                                            <ChildContent>
                                                <div>
                                                    <img src="@option.Image" alt="demo-image" />
                                                    <Button Color="Color.Primary" class="carousel-button">@option.Description</Button>
                                                </div>
                                            </ChildContent>
                                        </CarouselItem>
                                    }
                                </Carousel>
                            </div>
                        }
                        else if (md.MessageTemplateName == "Message")
                        {
                            <div class="my-chat-content">
                                @md.Message
                            </div>
                        }
                    }
                }
            }
        </div>
    </MessageTemplate>
</DxAIChat>

End Solution Example

You can find the full source code here: https://github.com/egarim/devexpress-ai-chat-samples, and a short video of the final solution here: https://youtu.be/dxMnOWbe3KA


AI-Powered XtraReports in XAF: Unlocking DevExpress Enhancements

Today is Friday, so I decided to take it easy with my integration research. When I woke up, I decided that I just wanted to read the source code of DevExpress AI integrations to get inspired. I began by reading the official blog post about AI and reporting (DevExpress Blog Post). Then, as usual, I proceeded to fork the repository to make my own modifications.

After completing the typical cloning procedure in Visual Studio, I realized that to use the AI functionalities of XtraReport, you don’t need any special version of the report viewer.

The only requirement is to have the NuGet reference as shown below:


    <ItemGroup>
        <PackageReference Include="DevExpress.AIIntegration.Blazor.Reporting.Viewer" Version="24.2.1-alpha-24260" />
    </ItemGroup>
    

Then, add the report integration as shown below:


    config.AddBlazorReportingAIIntegration(config =>
    {
        config.SummarizeBehavior = SummarizeBehavior.Abstractive;
        config.AvailableLanguages = new List<LanguageItem>
        {
            new LanguageItem { Key = "de", Text = "German" },
            new LanguageItem { Key = "es", Text = "Spanish" },
            new LanguageItem { Key = "en", Text = "English" },
            new LanguageItem { Key = "ru", Text = "Russian" },
            new LanguageItem { Key = "it", Text = "Italian" }
        };
    });
    

After completing these steps, your report viewer will display a little star in the options menu, where you can invoke the AI operations.

You can find the source code for this example in my GitHub repository: https://github.com/egarim/XafSmartEditors

Till next time, XAF out!!!