Nix in the Wild: SysEleven

Ross Turk | 06 August 2025
How SysEleven Used Nix for Reproducible Kubernetes Builds

Arik Grahl didn’t set out to become his company’s resident expert in reproducible software builds.

But that’s exactly what happened when SysEleven replaced Helm charts with Nix to improve the reliability, traceability, and developer experience of its Kubernetes deployments.

As one of Germany’s most respected cloud service providers, SysEleven needed reliable package management for Kubernetes: both for deploying its own Kubernetes workloads and for supporting its managed Kubernetes services.

Grahl, a senior software engineer with a knack for solving hard problems, searched for an off-the-shelf solution without success. He kept circling back to an option he knew well: Nix.

Two properties of Nix stood out to him. First, Nix requires declaring package dependencies up front, which eliminates implicit dependencies and makes builds traceable and easier to audit. Second, Nix supports pinning packages to exact versions, which makes builds reproducible at scale.
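
For example, pinning every dependency to a known revision takes only a few lines of Nix. The sketch below is illustrative (the branch and package are placeholders), but it shows the idea: everything the build can see comes from the pinned input.

    # flake.nix (fragment): pin nixpkgs to a known branch/revision.
    # The branch and package below are illustrative.
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

      outputs = { self, nixpkgs }: {
        # Every dependency is resolved from the pinned nixpkgs input,
        # recorded exactly (with hashes) in flake.lock.
        packages.x86_64-linux.default =
          nixpkgs.legacyPackages.x86_64-linux.hello;
      };
    }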

Helm brings powerful templating capabilities to the table, but some of its features can introduce behavior that isn't fully deterministic at render time. In the final analysis, Grahl notes, Helm is designed primarily for template rendering rather than package management. What SysEleven needed, above all, was a deterministic way to manage Kubernetes packages. “Helm made it harder to guarantee completely predictable builds,” Grahl explains.

That's when he looked to a trusted solution: Nix. “I knew already from my experience that Nix could do this. I thought it was the right solution for us.”

Why Nix Just Made Sense

Grahl got started on his Nix journey thanks to a catastrophic laptop failure, which encouraged him to move his setup from Canonical’s Ubuntu Linux distribution to NixOS. “In one day, I was able to replicate my entire existing Ubuntu setup on NixOS,” he says, explaining that Nix’s declarative model made it possible to define everything he needed—e.g., system and user packages, init services, his user environment, even his dotfiles—in a configuration.nix file. NixOS used this artifact to build the system exactly as defined.

“It was like magic. The advantages of [NixOS for] reproducibility and purity were immediately clear to me.”
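
A configuration in the spirit of what Grahl describes might look roughly like the sketch below; the package names, services, and user details are illustrative, not his actual setup. Running nixos-rebuild switch against a file like this rebuilds the system to match it.

    # /etc/nixos/configuration.nix (illustrative sketch)
    { pkgs, ... }:
    {
      # System-wide packages, declared rather than installed by hand
      environment.systemPackages = with pkgs; [ git vim firefox ];

      # Services are declared as options, too
      services.openssh.enable = true;

      # A user and a piece of "dotfile" configuration
      users.users.arik = {
        isNormalUser = true;
        extraGroups = [ "wheel" ];
      };
      programs.bash.shellAliases = { k = "kubectl"; };

      system.stateVersion = "24.05";
    }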

After encountering challenges with the unpredictability of some Helm builds, Grahl describes having what senior software engineers call a moment of clarity: Nix! Nix would be perfect for this problem!

It’s like this: When used on its own, Helm appears declarative at first glance, but running commands like helm install or helm upgrade actually applies rendered templates as live changes to your Kubernetes cluster. Additionally, some of Helm’s built-in functions—such as now, randAlphaNum, and lookup—can introduce elements like time, randomness, or current cluster state into the rendered output before it is sent to the Kubernetes API.

By contrast, Nix can be used alongside Helm to ensure builds are driven from pinned inputs in a deterministic and reproducible manner.
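
One common pattern (a sketch, not necessarily SysEleven's exact setup) is to run helm template inside a Nix derivation, with the chart and a local values.yaml pinned as inputs; the chart URL, name, and hash below are placeholders. The rendered manifests then depend only on declared inputs.

    # Render a Helm chart deterministically inside the Nix sandbox.
    # The chart URL, name, and hash are placeholders.
    { pkgs ? import <nixpkgs> { } }:

    let
      chart = pkgs.fetchurl {
        url = "https://example.com/charts/my-operator-1.2.3.tgz";
        sha256 = pkgs.lib.fakeSha256;  # replace with the real hash
      };
    in
    pkgs.runCommand "my-operator-manifests"
      { nativeBuildInputs = [ pkgs.kubernetes-helm ]; }
      ''
        mkdir -p $out
        # 'helm template' only renders; nothing is applied to a cluster,
        # and the sandbox has no network or cluster access anyway.
        helm template my-operator ${chart} --values ${./values.yaml} > $out/manifests.yaml
      ''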

Nix builds don’t “see” your shell, your environment, or your CI runner … because they happen in a hermetic sandbox. The upshot is that if you don’t declare something in a Nix expression or flake—whether it’s a compiler like gcc, a tool like git, an environment variable, or even an rc file—it won’t be available in the build sandbox. This applies not just to packages and their dependencies, but also to ambient system state.

This kind of isolation is exactly what any sane package system needs, especially one built for Kubernetes. Even though the benefits of using Nix were clear to Grahl, he worried that its reputation as a complex and unfamiliar build system might be intimidating to coworkers.

Repackaging Kubernetes CRDs with Nix and Kubenix

Grahl’s initial experiments with Nix were simple but strategic, focusing first on repackaging Kubernetes Custom Resource Definitions (CRDs) using kubenix. He saw CRDs as a quick win: though nominally purely declarative, they’re notoriously difficult to package and version reproducibly.

Defining CRDs with kubenix would make it possible to pin the exact source and content of each one, treating them as part of a declarative deployment environment. At a minimum, this would eliminate one of the most common classes of runtime failure: a Helm chart pulling in a different version of a CRD than the one the operator was built against. It would also lay the groundwork for making SysEleven’s Kubernetes operator deployments reproducible: the company could use Nix to define, build, and deploy its entire operator stack—not just CRDs, but also the operator controller and all transitive dependencies—as a single, reproducible unit.
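
The details of the kubenix module system are beyond the scope of this article, but the core idea of pinning a CRD's exact source and content can be sketched in plain Nix; the URL, version, and hash below are placeholders:

    # Pin the exact source and content of a CRD manifest.
    { pkgs ? import <nixpkgs> { } }:

    pkgs.fetchurl {
      url = "https://raw.githubusercontent.com/example/operator/v1.2.3/config/crd/bases/widgets.example.com.yaml";
      # The build fails if the fetched file ever differs from this hash, so the
      # CRD version the operator was built against is the one that gets deployed.
      sha256 = pkgs.lib.fakeSha256;  # replace with the real content hash
    }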

This was powerful stuff. But from Grahl’s perspective, this first project had another, more important upside: it allowed his team to start small with Nix … while still developing an appreciation for its power and potential.

“I worried about convincing [colleagues],” he admits, acknowledging that Nix has a reputation for being hard to pick up. “But by beginning with [CRDs], they only had to understand the basics: the syntax, and what Nix offers as a functional language. At that stage, we were mainly using Nix to generate static YAML files—Kubernetes manifests. For most of the early tasks, it was enough to write a simple, vanilla derivation.”
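
A "vanilla" derivation of that sort can be as small as the sketch below, which turns a Nix attribute set into a static Kubernetes manifest (valid JSON is also valid YAML, so the output can be applied as-is); the resource itself is illustrative:

    # Generate a static Kubernetes manifest from declarative Nix data.
    { pkgs ? import <nixpkgs> { } }:

    let
      namespace = {
        apiVersion = "v1";
        kind = "Namespace";
        metadata.name = "platform-system";  # illustrative name
      };
    in
    # writeText produces a store path whose content is a pure function of its input
    pkgs.writeText "namespace.yaml" (builtins.toJSON namespace)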

Nixifying Kubernetes Operators

Building on the success they’d had with kubenix and CRDs, Grahl and his team set their sights on a more ambitious use case: packaging their own Kubernetes operators, written in Go, as part of a pilot “Nixification” effort. Using Nix’s buildGoModule library, they were able to define reproducible builds for their custom-built Kubernetes controllers: components designed both to observe cluster state and to make changes that bring it in line with declarative definitions.
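
A reproducible controller build along those lines might look like the sketch below; the package name, version, and hashes are placeholders, not SysEleven's actual code:

    # Build a Go-based Kubernetes controller as a reproducible Nix package.
    { pkgs ? import <nixpkgs> { } }:

    pkgs.buildGoModule {
      pname = "example-controller";  # illustrative name
      version = "0.1.0";

      src = ./.;  # controller source, declared as a build input

      # Pins every transitive Go module via the hash of the vendored dependencies.
      vendorHash = pkgs.lib.fakeHash;  # replace with the real hash

      # Produce a static binary suitable for a minimal container image.
      CGO_ENABLED = 0;
      ldflags = [ "-s" "-w" ];
    }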

“Nix was a good fit for this use case because the operator and the controller could be defined as inputs in the context of a declarative build process,” Grahl explains. “So each part of the system was specified clearly, from the source code to the deployment files, and the result could be built in a reproducible way.”

That insight shaped the next phase of the Nixification project. Grahl’s team used Nix’s dockerTools.buildLayeredImage library to produce OCI container images that included statically compiled Go binaries for their custom-built Kubernetes controllers. This would permit SysEleven to replace traditional approaches (like using Dockerfiles or CI-based image builds) with reproducible, Nix-defined image builds.

It works like this: dockerTools.buildLayeredImage gives you a way to define the contents of a container as a Nix expression… then uses Nix itself to resolve, build, and assemble those contents directly into an OCI-compliant image. On top of deterministic container builds, this pattern also enables content-addressable layers, such that each layer maps to a specific dependency. This speeds up rebuilds, since only the layers tied to changed dependencies are rebuilt—everything else can be reused as-is.
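
In broad strokes, the pattern looks like the sketch below; the image name, tag, and the controller derivation (assumed to come from a buildGoModule expression like the one above) are illustrative:

    # Assemble an OCI image for the controller entirely within Nix.
    { pkgs ? import <nixpkgs> { } }:

    let
      controller = pkgs.callPackage ./controller.nix { };  # the buildGoModule derivation
    in
    pkgs.dockerTools.buildLayeredImage {
      name = "registry.example.com/platform/example-controller";  # illustrative
      tag = "0.1.0";

      # Each store path (the binary, CA certificates, ...) lands in its own layer,
      # so layers for unchanged dependencies are reused across rebuilds.
      contents = [ controller pkgs.cacert ];

      config = {
        Entrypoint = [ "${controller}/bin/example-controller" ];
        User = "65534:65534";  # run as an unprivileged user
      };
    }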

Grahl’s team extended this pattern to capture the full deployment artifact set—i.e., source, compiled binary, image, and manifests—in a single Nix flake. Each flake exposes three outputs (sketched after the list below):

  • A statically compiled Go binary that implements the controller logic;
  • An OCI container image built from the Go binary; and
  • The complete set of Kubernetes manifests used to deploy the controller.
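
A flake shaped along those lines might look roughly like this; the attribute names and the helper files it references are illustrative:

    # flake.nix: one flake, three related outputs (illustrative sketch).
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

      outputs = { self, nixpkgs }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in
        {
          packages.x86_64-linux = rec {
            # 1. Statically compiled Go binary implementing the controller logic
            controller = pkgs.callPackage ./controller.nix { };

            # 2. OCI container image built from that binary
            image = pkgs.callPackage ./image.nix { inherit controller; };

            # 3. Kubernetes manifests used to deploy the controller
            manifests = pkgs.callPackage ./manifests.nix { inherit image; };
          };
        };
    }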

From there, they assembled these flakes into a platform-wide catalog—a monorepo-style structure that includes every internal and third-party component needed to deploy the stack. Because each artifact is built from source and tracked in the monorepo via input-addressed derivations, the team could update its Kubernetes platform stack atomically while enjoying full traceability.

“At first, we only wrapped Helm Charts,” Grahl says. “But we saw that we could build the controller, the container image, and the manifests all from one flake. And because each of those outputs was defined declaratively, their builds were pure and reproducible. Then we realized we could define the entire stack in the same way.”

Reproducible Builds, SBOMs, and Full-Stack Traceability with Nix

That kind of full-stack traceability unlocked a powerful new opportunity: the ability to generate software bills of materials (SBOMs). Because each Nix flake already declared its inputs—from upstream dependencies, to controller source code, to runtime configuration—the team could derive an exact listing of everything built into its Kubernetes deployment artifacts. They could associate each artifact with a specific set of hashes, identify known vulnerabilities in transitive dependencies, and share this information with customers.
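
The exact SBOM tooling SysEleven uses isn't described here, but even stock Nix can enumerate what went into a build. As a minimal illustration, assuming the flake layout sketched above, listing the full runtime closure of an output is a two-command affair:

    # Build the manifests output, then list every store path in its closure;
    # this listing of hashed store paths is the raw material for an SBOM.
    nix build .#manifests
    nix path-info --recursive ./result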

“This kind of visibility simply wasn’t possible before,” Grahl observes. “It’s a direct result of adopting Nix.”

It also prefigured a radical architectural shift: Rather than producing image builds as snowflake CI jobs, one could define them as first-class, declarative artifacts—built, versioned, and served entirely within the Nix ecosystem. To deliver on this vision, Grahl’s team developed an internal OCI registry backed by the Nix store, allowing them to serve container images as input-addressable artifacts with absolute provenance.

The best part is that with this pattern, the boundary between development and deployment … ceases to exist: both use the same declarative inputs. The artifacts teams build and test locally are identical to the containers deployed in production … because they’re built from the exact same derivations.

“There’s a point where you stop thinking of Nix as just a build tool,” Grahl says. “You see that it gives you a way to define what your software is—not just how you build it, but how you explain it, share it, deploy it.”

Scaling Nix Adoption: Smoother-Than-Expected Onboarding

At SysEleven, Grahl took on the role of the company’s Nix evangelist. Sharing his experience with the team, Grahl hosted a series of hands-on demos and internal workshops, and also paired with teams as requested.

The strategy of “Nixifying” CRD files was a helpful starting point. As static YAML files that describe the schema—not the behavior—of custom Kubernetes resources, CRDs provided a low-stakes onramp into learning Nix’s declarative model. Engineers didn’t need to translate application logic or author complex build expressions; they could express YAML data in the Nix language and observe how changes to inputs were reflected in the manifests generated by kubenix. With this bottom-up approach, they weren’t overwhelmed at the outset and could pick up features of the Nix language organically, as the need arose.

Grahl says developers with backgrounds in functional programming took to Nix quickly. Surprisingly, SREs likewise tended to adapt easily: to them, Nix expressions felt analogous to YAML—i.e., a structured format for expressing configuration and logic. And in the SRE role, they didn’t need to grasp the functional concepts underpinning the Nix language. Both agreed that Nix’s unfamiliar syntax was less of a hurdle than the “hard shift-left” imposed by its mental model—the requirement to declare the entire build graph explicitly, up front.

Grahl’s main advice is … don’t fight Nix: be open to letting go of prior assumptions. Early frustrations tend to give way once teams internalize the Nix Way—especially the emphasis on declaring all inputs up front and treating builds as pure functions of those inputs. “Nix forces you to do the work up front, but it’s work you’d have to do eventually anyway,” he says. “Once you accept that and stop fighting it, everything gets better.”

Conclusion: From Pilot to Platform

Nix didn’t just solve some of the most persistent build and deployment problems teams currently face with Kubernetes. It gave Grahl and his team a way to manage their Kubernetes infrastructure as a declarative system, with every input and dependency explicitly declared and versioned. This had ripple effects across the platform’s development lifecycle … and fundamentally changed how teams across the company build, test, deploy, and think about infrastructure.

Today, Nix gives SysEleven:

  • Predictable builds. With all dependencies pinned and declared up front, teams can now trust that what they build locally will behave the same in both CI and production.

  • SBOMs for free. Because every artifact is built from a Nix flake, complete dependency trees are always available. This means the ability to generate SBOMs is a natural byproduct of Nix’s declarative model.

  • Reliable GitOps and CI/CD pipelines. Declarative inputs mean pipelines fail less often, deployments are more predictable, and teams spend less time debugging because they can always reproduce issues.

  • Reproducible infrastructure as code. Nix goes beyond versioning Kubernetes manifests: it defines the full build graph as code. Every input, dependency, and output is declared, pinned, and reproducible.

The cardinal DevOps virtues—viz., build determinism, traceability, and portability—are consequences that follow from Nix’s core design. As properties of Nix and its model, each of these virtues presupposes and reinforces the others. The result is a system where build artifacts aren’t just reproducible, but provably so.

“At some point, you start seeing Nix as a way to define exactly what your software is,” Grahl explains. “Because if you can define exactly what is in your software, you can always rebuild it, anytime, anywhere. And if you can describe your runtime the same way, you can always run it, anytime, anywhere.”