The Mirage of a Memory Leak (or: why “it must be the framework” is usually wrong)

There is a familiar moment in every developer’s life.

Memory usage keeps creeping up.
The process never really goes down.
After hours—or days—the application feels heavier, slower, tired.

And the conclusion arrives almost automatically:

“The framework has a memory leak.”
“That component library is broken.”
“The GC isn’t doing its job.”

It’s a comforting explanation.

It’s also usually wrong.

Memory Leaks vs. Memory Retention

In managed runtimes like .NET, true memory leaks are rare.
The garbage collector is extremely good at reclaiming memory.
If an object is unreachable, it will be collected.

What most developers call a “memory leak” is actually
memory retention.

  • Objects are still referenced
  • So they stay alive
  • Forever

From the GC’s point of view, nothing is wrong.

From your point of view, RAM usage keeps climbing.

Why Frameworks Are the First to Be Blamed

When you open a profiler and look at what’s alive, you often see:

  • UI controls
  • ORM sessions
  • Binding infrastructure
  • Framework services

So it’s natural to conclude:

“This thing is leaking.”

But profilers don’t answer why something is alive.
They only show that it is alive.

Framework objects are usually not the cause — they are just sitting at the
end of a reference chain that starts in your code.

The Classic Culprit: Bad Event Wiring

The most common “mirage leak” is caused by events.

The pattern

  • A long-lived publisher (static service, global event hub, application-wide manager)
  • A short-lived subscriber (view, view model, controller)
  • A subscription that is never removed

That’s it. That’s the leak.
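A minimal sketch of that pattern, using the same hypothetical GlobalEventHub and view model names that appear in the retention tree later in this article:

```csharp
// Long-lived publisher: alive for the whole process.
public sealed class GlobalEventHub
{
    public static GlobalEventHub Instance { get; } = new();
    public event EventHandler? DataUpdated;
}

// Short-lived subscriber: created per screen, "discarded" when the screen closes.
public class CustomerViewModel
{
    public CustomerViewModel()
    {
        // Subscribed here...
        GlobalEventHub.Instance.DataUpdated += OnDataUpdated;
    }

    private void OnDataUpdated(object? sender, EventArgs e) { /* refresh */ }

    // ...and never unsubscribed: Instance now keeps every view model alive.
}
```

Every new screen adds another delegate to the hub's invocation list, and every delegate pins another "discarded" view model in memory.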

Why it happens

Events are references.
If the publisher lives for the lifetime of the process, anything it
references also lives for the lifetime of the process.

Your object doesn’t get garbage collected.

It becomes immortal.

The Immortal Object: When Short-Lived Becomes Eternal

An immortal object is an object that should be short-lived
but can never be garbage collected because it is still reachable from a GC
root.

Not because of a GC bug.
Not because of a framework leak.
But because our code made it immortal.

Static fields, singletons, global event hubs, timers, and background services
act as anchors. Once a short-lived object is attached to one of these, it
stops aging.

GC Root
  └── static / singleton / service
        └── Event, timer, or callback
              └── Delegate or closure
                    └── Immortal object
                          └── Large object graph

From the GC’s perspective, everything is valid and reachable.
From your perspective, memory never comes back down.

A Retention Dependency Tree That Cannot Be Collected

GC Root
  └── static GlobalEventHub.Instance
        └── GlobalEventHub.DataUpdated (event)
              └── delegate → CustomerViewModel.OnDataUpdated
                    └── CustomerViewModel
                          └── ObjectSpace / DbContext
                                └── IdentityMap / ChangeTracker
                                      └── Customer, Order, Invoice, ...

What you see in the memory dump:

  • thousands of entities
  • ORM internals
  • framework objects

What actually caused it:

  • one forgotten event unsubscription

The Lambda Trap (Even Worse, Because It Looks Innocent)

The code

public class CustomerViewModel
{
    public CustomerViewModel(GlobalEventHub hub)
    {
        // The lambda below captures 'this' in a compiler-generated closure.
        hub.DataUpdated += (_, e) =>
        {
            RefreshCustomer(e.CustomerId);
        };
    }

    private void RefreshCustomer(int customerId) { /* refresh UI */ }
}

This lambda implicitly captures this.
The compiler generates a hidden closure that keeps the instance alive for as
long as the hub holds the delegate — and because the lambda has no name, there
is no delegate you can later unsubscribe.

“But I Disposed the Object!”

Disposal does not save you here.

  • Dispose does not remove event handlers
  • Dispose does not break static references
  • Dispose does not stop background work automatically

IDisposable is a promise — not a magic spell.
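If cleanup belongs in Dispose, the unsubscription has to be written there explicitly. A minimal sketch, reusing the hypothetical GlobalEventHub from above (DataUpdatedEventArgs is assumed for illustration), with the handler stored in a field so the same delegate instance can be removed:

```csharp
public sealed class CustomerViewModel : IDisposable
{
    private readonly GlobalEventHub _hub;
    private readonly EventHandler<DataUpdatedEventArgs> _handler;

    public CustomerViewModel(GlobalEventHub hub)
    {
        _hub = hub;
        // Keep the delegate in a field; '-=' only removes the same instance.
        _handler = (_, e) => RefreshCustomer(e.CustomerId);
        _hub.DataUpdated += _handler;
    }

    public void Dispose()
    {
        // Break the reference chain from the long-lived hub to this object.
        _hub.DataUpdated -= _handler;
    }

    private void RefreshCustomer(int customerId) { /* refresh */ }
}
```

The lambda is still a closure, but now there is exactly one delegate instance, held in a field, so it can be detached when the object's lifetime ends.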

Leak-Hunting Checklist

Reference Roots

  • Are there static fields holding objects?
  • Are singletons referencing short-lived instances?
  • Is a background service keeping references alive?

Events

  • Are subscriptions always paired with unsubscriptions?
  • Are lambdas hiding captured references?

Timers & Async

  • Are timers stopped and disposed?
  • Are async loops cancellable?

Profiling

  • Follow GC roots, not object counts
  • Inspect retention paths
  • Ask: who is holding the reference?
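One cheap way to answer that last question is a WeakReference probe: if the object is truly unreferenced, a forced collection should reclaim it. A rough sketch using the hypothetical types from this article (note that in Debug builds the JIT may keep locals alive longer, so run such checks in Release):

```csharp
var vm = new CustomerViewModel(GlobalEventHub.Instance);
var weak = new WeakReference(vm);
vm = null;

// Force a full collection cycle.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

// Still alive after a full collection? Open a profiler and
// follow the retention path from the GC root.
Console.WriteLine(weak.IsAlive
    ? "Still reachable: someone holds a reference."
    : "Collected: no retention.");
```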

Final Thought

Frameworks rarely leak memory.

We do.

Follow the references.
Trust the GC.
Question your wiring.

That’s when the mirage finally disappears.

ONNX: Revolutionizing Interoperability in Machine Learning

The field of machine learning (ML) and artificial intelligence (AI) has witnessed a groundbreaking innovation in the form of ONNX (Open Neural Network Exchange). This open-source model format is redefining the norms of model sharing and interoperability across ML frameworks. In this article, we explore ONNX models, the history of the ONNX format, and the role of ONNX Runtime in the ONNX ecosystem.

What is an ONNX Model?

ONNX stands as a universal format for representing machine learning models, bridging the gap between different ML frameworks and enabling models to be exported and utilized across diverse platforms.

The Genesis and Evolution of ONNX Format

ONNX emerged from a collaboration between Microsoft and Facebook in 2017, with the aim of overcoming the fragmentation in the ML world. Its adoption by major frameworks like TensorFlow and PyTorch was a key milestone in its evolution.

ONNX Runtime: The Engine Behind ONNX Models

ONNX Runtime is a performance-focused engine for running ONNX models, optimized for a variety of platforms and hardware configurations, from cloud-based servers to edge devices.
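For a concrete feel, here is a hedged sketch of loading and running an ONNX model from .NET with the Microsoft.ML.OnnxRuntime NuGet package. The model path, input name, and tensor shape are placeholders: they depend entirely on your model's signature.

```csharp
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Load a serialized ONNX model (placeholder path).
using var session = new InferenceSession("model.onnx");

// Build an input tensor; the name and shape must match the model's signature.
var input = new DenseTensor<float>(new[] { 1f, 2f, 3f }, new[] { 1, 3 });
var inputs = new[] { NamedOnnxValue.CreateFromTensor("input", input) };

// Run inference and read the first output as a float array.
using var results = session.Run(inputs);
var output = results.First().AsEnumerable<float>().ToArray();
```

The same model file can be produced by PyTorch, scikit-learn, or any other exporter, which is precisely the interoperability ONNX promises.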

Where Does ONNX Runtime Run?

ONNX Runtime is cross-platform, running on operating systems such as Windows, Linux, and macOS, and is adaptable to mobile platforms and IoT devices.

ONNX Today

ONNX stands as a vital tool for developers and researchers, supported by an active open-source community and embodying the collaborative spirit of the AI and ML community.


ONNX and its runtime have reshaped the ML landscape, promoting an environment of enhanced collaboration and accessibility. As we continue to explore new frontiers in AI, ONNX’s role in simplifying model deployment and ensuring compatibility across platforms will be instrumental in advancing the field.

ML vs BERT vs GPT: Understanding Different AI Model Paradigms

In the dynamic world of artificial intelligence (AI) and machine learning (ML), diverse models such as ML.NET, BERT, and GPT each play a pivotal role in shaping the landscape of technological advancements. This article embarks on an exploratory journey to compare and contrast these three distinct AI paradigms. Our goal is to provide clarity and insight into their unique functionalities, technological underpinnings, and practical applications, catering to AI practitioners, technology enthusiasts, and the curious alike.

1. Models Created Using ML.NET:

  • Purpose and Use Case: Tailored for a wide array of ML tasks, ML.NET lets .NET developers build and train custom models without leaving the .NET ecosystem.
  • Technology: Supports a range of algorithms, from conventional ML techniques to deep learning models.
  • Customization and Flexibility: Offers extensive customization in data processing and algorithm selection.
  • Scope: Suited for varied ML tasks within .NET-centric environments.
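As an illustrative sketch, a minimal ML.NET regression pipeline looks roughly like this (the HouseData schema and the training values are made up for the example):

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

var ml = new MLContext();
var data = ml.Data.LoadFromEnumerable(new[]
{
    new HouseData { Size = 1.1f, Price = 1.2f },
    new HouseData { Size = 1.9f, Price = 2.3f },
    new HouseData { Size = 2.8f, Price = 3.0f },
});

// Concatenate input columns into "Features", then train an SDCA regressor.
var pipeline = ml.Transforms.Concatenate("Features", nameof(HouseData.Size))
    .Append(ml.Regression.Trainers.Sdca(labelColumnName: nameof(HouseData.Price)));

var model = pipeline.Fit(data);
var engine = ml.Model.CreatePredictionEngine<HouseData, Prediction>(model);
var predicted = engine.Predict(new HouseData { Size = 2.5f });

public class HouseData { public float Size { get; set; } public float Price { get; set; } }
public class Prediction { [ColumnName("Score")] public float Price { get; set; } }
```

The same MLContext entry point exposes classification, clustering, and recommendation trainers, which is where the "wide array of ML tasks" claim comes from.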

2. BERT (Bidirectional Encoder Representations from Transformers):

  • Purpose and Use Case: Revolutionizes language understanding, impacting search and contextual language processing.
  • Technology: Employs the Transformer architecture for holistic word context understanding.
  • Pre-trained Model: Extensively pre-trained, fine-tuned for specialized NLP tasks.
  • Scope: Used for tasks requiring deep language comprehension and context analysis.

3. GPT (Generative Pre-trained Transformer), such as ChatGPT:

  • Purpose and Use Case: Known for advanced text generation, adept at producing coherent and context-aware text.
  • Technology: Relies on the Transformer architecture to predict the next token in a sequence.
  • Pre-trained Model: Trained on vast text datasets, adaptable for broad and specialized tasks.
  • Scope: Ideal for text generation and conversational AI, simulating human-like interactions.

Conclusion:

Each of these AI models – ML.NET, BERT, and GPT – brings unique strengths to the table. ML.NET offers machine learning solutions in .NET frameworks, BERT transforms natural language processing with deep language context understanding, and GPT models lead in text generation, creating human-like text. The choice among these models depends on specific project requirements, be it advanced language processing, custom ML solutions, or seamless text generation. Understanding these models’ distinctions and applications is crucial for innovative solutions and advancements in AI and ML.