
Ijlal Loutfi
on 23 March 2026

Hot code burns


 The supply chain case for letting your containers cool before you ship them

The breach we got, and the one that’s coming

In September 2025, dozens of popular JavaScript packages, including chalk and debug, were compromised on the npm registry. These packages are so ubiquitous that they end up in everything: front-end apps, back-end microservices, and CI tooling. Developers did nothing wrong; they ran the same command they always do, npm install chalk, and the malware arrived silently.

This wasn’t a bug in an operating system. It wasn’t a virus on someone’s laptop. It was a supply chain attack: someone had poisoned the ingredients developers use to build their software. Nothing exotic: just one developer getting phished, one malicious publish, and millions of downstream consumers letting it in because it looked like a legitimate update.
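One common mitigation, sketched below with hypothetical package versions, is to pin exact dependency versions in package.json, rather than use ^ or ~ ranges, and install in CI with npm ci so that only what the committed package-lock.json records is ever fetched:

```json
{
  "name": "example-app",
  "dependencies": {
    "chalk": "5.3.0",
    "debug": "4.3.6"
  }
}
```

With exact pins and lockfile-only installs, a freshly published release, legitimate or poisoned, stays out of the build until someone deliberately updates the pin.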

In a sense, it was a legitimate update: the publisher neither intended to include malware nor knew it was there.

That was just npm. Now imagine the same technique targeting the system libraries your containers depend on before your application even runs: libcurl, zlib, or openssl. It would compromise the foundation underneath everything else you run or build.

Welcome to the temperature problem of supply chain security. The industry is shipping code that’s still too hot to handle.

Two philosophies for building containers

A growing share of modern software runs inside containers. But whether the code inside has had time to cool, or whether it’s served straight off the upstream burner, varies dramatically across the industry.

The nightly-rebuild approach

One increasingly popular philosophy works like this: take the latest version of every package from upstream, rebuild the container image from scratch every night, use tooling to sign it, verify it, and minimize its footprint. On paper, it looks bulletproof. If the source is clean, you can ship good code quickly. If a bug is fixed upstream, you get the patch in your next nightly rebuild.
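As a rough sketch (the base image and package names below are illustrative, not from any particular project), the nightly-rebuild pattern often looks something like this in a Dockerfile rebuilt on a schedule:

```dockerfile
# Rebuilt from scratch every night by CI: always the newest upstream bits.
FROM alpine:latest

# Unpinned: resolves to whatever versions upstream published most recently.
RUN apk add --no-cache curl openssl zlib

COPY ./app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Every nightly build resolves the `latest` tag and the unpinned packages again, so whatever upstream published that day, patched or poisoned, flows straight into the freshly signed image.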

But if the source is poisoned?

You’ve just built and signed a perfectly minimal, fully traceable, enterprise-grade malware delivery mechanism. With reproducibility, no less. The backdoor doesn’t care about your beautiful infrastructure.

You’re serving code straight from the upstream oven. No cooling rack. No resting time. No one checked the temperature.

The intentional update approach

Ubuntu takes a different path. Long-term support releases ship every two years. Package versions are frozen, and security fixes are applied through surgical backports: patching vulnerabilities without pulling in new features or unreviewed upstream changes. Updates ship intentionally, with context.
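The contrast shows up in the version strings themselves. In the sketch below, the exact revision suffix is hypothetical but follows Ubuntu's packaging convention: the upstream version stays frozen for the life of the release, while the distribution suffix counts reviewed security backports:

```dockerfile
# Base image pinned to a specific LTS release, not "latest".
FROM ubuntu:22.04

# Upstream version (7.81.0) is frozen for the release; the
# "-1ubuntu1.x" suffix tracks backported security fixes, not new features.
RUN apt-get update && \
    apt-get install -y curl=7.81.0-1ubuntu1.16 && \
    rm -rf /var/lib/apt/lists/*
```

Pinning the exact revision is optional; even without it, apt on a stable release can only hand you the frozen upstream version plus backports from the release's security updates.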

It’s not flashy, but it’s calm, deliberate, and predictable.

You don’t get nightly rebuilds: you get stability and confidence, because the code has already earned its place. It has cooled. It has been tested by time, by scrutiny, and by production workloads that depend on it behaving exactly as expected.

No approach to supply chain security is foolproof. But if someone tries to slip a backdoor into libcurl? An Ubuntu-based container likely never pulled that update, because nothing in the release plan required it. While teams chasing upstream HEAD are plating up code that’s still burning hot, an intentional-update model is quietly unaffected.

That container might be running a version of curl from 2022, not because Ubuntu is behind, but because the maintainers know exactly what that version does. And more importantly, what it doesn’t. It cooled a long time ago. And cool code is predictable code.

Who ships the backdoor first?

Consider a scenario: a malicious patch gets merged upstream. It’s subtle, it’s signed, and it passes CI. As a result, it looks clean and is published to the world, all while being piping hot.

A nightly-rebuild pipeline pulls the latest upstream automatically. The image gets built and scanned: still zero CVEs, because it’s brand-new code. It’s signed, minimal, and perfectly malicious. Served at full temperature, no questions asked.

An intentional-update distribution like Ubuntu? The pinned version is older, but it is predictable and stable. Ubuntu maintainers let the code cool, and the poison revealed itself upstream before it ever reached the plate.

The problem of zero CVEs

Security scanners love to flag CVEs. Found one? You’re in danger. Found zero? All clear.

But the world is more subtle: old code has more CVEs because people have studied it longer, while new code has zero CVEs because no one has examined it yet. For new code, zero CVEs doesn’t mean secure, it means unexamined. It means the code is too fresh for anyone to know what’s inside.

If you’re rebuilding nightly from upstream, you’re pulling in code before it has even had a chance to be scrutinized. You’re signing first and asking questions later. You’re serving the dish before it has had time to cool.

Real security is earned slowly

Security isn’t just a scan result. It’s a discipline, and discipline requires deliberate restraint: caution before adopting upstream code, discipline in changing what already works, and skepticism in extending trust.

Ubuntu’s approach assumes upstream can be wrong, might be hasty, and may even be compromised. So the Ubuntu maintainers curate. They taste first, serve later. They let code cool before it leaves the kitchen, and they never serve anything they haven’t inspected themselves.

The nightly-rebuild model bets on minimalism, transparency, and freshness, until freshness becomes a liability. Until “hot off the press” means “too hot to trust.”

When freshness becomes a liability

The practice that makes containers look “clean”, nightly rebuilds from upstream, is the same mechanism that pulls a backdoor in. The practice that makes containers look frozen, backported packages, is what keeps that door closed.

One kitchen grabs every ingredient the moment it arrives and cooks immediately. The other inspects, waits, and only uses what it already knows is safe.

When the next supply chain compromise hits a core system library, it’s worth asking: which of these two approaches will ship you the malware, and which one will help you avoid it altogether?

Rebuilding is not verification

You can rebuild every package, scan every layer, and sign every artifact. But if you don’t control the intent of the code, if you don’t know where it came from, why it changed, or who slipped something into the diff, you’re just rebuilding someone else’s malware, but faster and with better infrastructure.

Rebuilding is replication. It’s only useful when you already know what you’re replicating. If you rebuild compromised code faithfully, you’re not verifying anything. You’re doing the attacker’s CI for them. You’re reheating someone else’s poison and calling it a fresh meal.

A different kind of CVE

When everyone chases “zero known CVEs,” we ignore wider risks. We keep asking, “Is this image vulnerable?” when we should also be asking, “Is this image too trusting?”

CVE counts are a lagging indicator. The breach arrives before the scanner lights up. And the real vulnerability isn’t the package, it’s the philosophy. It’s the assumption that upstream is always safe to consume the moment it’s published.

Conclusion

This isn’t about any single vendor or project. It’s about how the industry treats trust and temperature.

The intentional-update model assumes upstream can’t always be trusted, so it moves deliberately. It lets code cool. The nightly-rebuild model assumes upstream is trustworthy and must be kept current, so it moves constantly. It serves everything hot.

Sometimes the most secure component in your pipeline is the one you haven’t touched in eighteen months. Not because it’s forgotten, but because it’s had time to cool, and it earned the right to stay.

In software supply chain security, the best code isn’t always the freshest. It’s the code that cooled long enough for the truth to surface. So let your code cool. It tastes better anyway.
