Package Managers and Package Management: A Guide for the Perplexed

Steve Swoyer | 08 August 2025

Traditional package managers are a lot like managed care for code.

A managed care provider routes you to in-network options, handles approvals and paperwork, and authorizes treatment. Their job is to get you the care you need while also balancing a set of hard constraints.

Just as a managed care provider doesn’t always get you the treatment, doctors, or brand-name drugs you request, a package manager doesn’t always get you the specific versions of dependencies you want.

Its overriding credo isn’t “give the user exactly what they want,” it’s “first do no harm.”

A New Hope

It doesn’t have to be this way. apt and other system package managers debuted almost 30 years ago, conceived for a world where most development on Linux and the BSDs was in C and C++, with Perl and shell scripts used to stitch everything together. Back then, you either brought your computer to the network—e.g., a LAN party, an office intranet—or you used dial-up to fetch tarballs. The world wasn’t yet a software-defined supply chain of languages, runtimes, and platforms—where builds, distribution, policy, and trust are expressed as code, and packages cross OS, VM, container, and language boundaries.

In today’s world, there isn’t just one packaging problem. There are, instead, multiple overlapping problems—each of which is solved by its own package or environment manager.

System package managers like apt, dnf, Zypp, and Homebrew coexist with language or ecosystem-specific tools such as npm, pip, Cargo, and Maven. Elsewhere, solutions like Conda, uv, Poetry, Bun, and Deno collapse package and environment management into a single tool.

Meanwhile, OCI containers and desktop app bundles (like Flatpak, Snap, and AppImage) standardize how teams ship apps or code with their runtimes. These formats achieve reproducible, host-agnostic behavior at runtime by bundling dependencies—even if that means some redundancy vs. system-wide packages.

Each solves different problems with software packaging and package management; none solves the most important problem of all: that of guaranteeing reproducible behavior at build and runtime, so software always builds and runs the same in local dev, CI, and prod, on-prem or in the cloud—today, tomorrow, or next year.

This is exactly the problem that Nix and Flox—powered by Nix—were designed to solve. They both model build or runtime environments as purely functional derivations, the outputs of which live in an immutable, hash-addressed store. With both Nix and Flox, the same declarative specs always yield the same dependencies. This design makes it possible to reproduce a build or runtime environment on any machine, anywhere, at any time.

A Crash Course in the History and Role of Package Managers

A quarter century ago, on Unix/BSD and early Linux distributions, most software came in the form of source tarballs: you’d unpack them, run ./configure && make && make install, and then waste minutes, hours, even days wrestling with missing headers, incompatible versions of shared libraries, and compiler quirks across dev setups and platforms. When they debuted, package managers like apt, pacman, and rpm seemed like a godsend: they standardized how systems install software, resolve dependencies, and distribute precompiled binaries—or, in source-based systems, by automating those build steps.

So apt, rpm, and other package managers solved a problem at the operating-system level, but couldn’t keep up with the growing variety and complexity of the software built using those systems.

As languages evolved into full-blown ecosystems, they introduced their own layers of dependency management. This set the stage for the emergence of language-specific tools like pip for Python and npm for Node.js. Each language started carving out its own ecosystem, complete with registries, dependency graphs, and build and packaging tools that operated independently of the host system.

Today, different types of package managers exist to solve different types of problems. Some manage system-wide software, others handle programming language packages, and some focus on creating isolated development environments. In general, package managers do three main things:

  • They resolve dependencies;
  • they install packages; and
  • they handle updates.

Some language ecosystems, like Python and Node.js, isolate dependencies via virtual environments or version managers; others, like Rust and Go, achieve comparable isolation via project-local manifests and reproducible builds, rather than dedicated environments. Beyond language and system package managers, tools like Snap, Flatpak, and AppImage package entire application runtimes, aiming to deliver reproducible behavior across machines. Containers take this logic even further, extending isolation down to the process level: they encapsulate filesystem state, environment variables, networking, and even user permissions within their own namespaces. This creates a hermetic boundary between the application and the host.

The Difference Between Package and Environment Managers

In this section and the ones that follow, I use Python and JavaScript/TypeScript as exemplars because their ecosystems expose (more than most) the practical tensions between package management and runtime isolation. Both rely heavily on native extensions—modules written in C, C++, or Rust that compile against system headers and libraries—and both run atop cross-platform runtimes (the CPython interpreter or the Node.js runtime) that must bridge different operating systems, architectures, and ABIs.

Most language- or toolchain-specific ecosystems divide responsibility between tools that manage packages and tools that manage environments.

Take Python, for example. There’s no single tool that manages the whole of a Python project’s dependency graph—including indirect, system-level, and non-language dependencies. Instead, Python developers tend to combine several utilities, each of which is responsible for a different part of the lifecycle. For instance, venv creates isolated environments; pip installs and resolves packages inside them; and requirements.txt files record version pins but stop short of full dependency locking.
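
A minimal sketch of that classic workflow might look like this (the pinned package is illustrative):

    python3 -m venv .venv                # create an isolated environment in ./.venv
    source .venv/bin/activate            # put the venv’s python and pip on PATH
    pip install requests==2.32.3         # install a pinned dependency into the venv
    pip freeze > requirements.txt        # record installed versions (pins, not a full lock)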

To achieve reproducible build- and run-time behaviors, Python developers rely on tools such as uv, Poetry, PDM, or pip-tools. These tools collapse the familiar boundary between where packages live and how they get installed, although they don’t extend to system libraries or external runtimes. They work by generating lockfiles that record the complete Python dependency graph computed by the resolver. Lockfiles pin specific versions of packages, record transitive Python dependencies, and compute cryptographic hashes over the wheels and sdist tarballs to be installed.
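
Here’s a hedged sketch of the same idea using uv (Poetry and PDM follow a similar pattern); the project and package names are illustrative:

    uv init myproject && cd myproject    # scaffold a project with a pyproject.toml
    uv add "pandas>=2.0"                 # resolve the graph, update pyproject.toml, write uv.lock
    uv sync                              # create .venv and install exactly what uv.lock records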

In JavaScript, software like npm performs the package manager’s role, while tools like nvm manage Node.js runtime versions. nvm lets a developer switch between interpreter versions but does not isolate project dependencies; that task belongs to tools like npm, which install packages into a project-local node_modules tree.
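
A minimal sketch of that pairing (the Node.js version is illustrative):

    nvm install 20        # install a Node.js 20.x runtime under $HOME/.nvm
    nvm use 20            # point the shell’s PATH at that runtime
    npm install           # resolve package.json and populate ./node_modules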

The upshot is that both the Python and JavaScript ecosystems rely on pairings of tools rather than an integrated package and environment manager. Each ecosystem bifurcates environment management and package management; each expects the developer herself to coordinate these activities. This pattern isn’t specific to the Python and JavaScript/TypeScript ecosystems.

True, some languages, such as Rust and Go, blur these responsibilities, at least to a degree. For example, both Cargo and Go Modules manage dependency resolution and project-level reproducibility within a single framework. But because they still rely on the host for compilers, libraries, and runtime behavior, reproducibility with Rust and Go projects ends at the project boundary: these ecosystems manage toolchain-specific dependencies, but not the runtime environment in which those dependencies live and run.

Two Distinct Approaches to Environment Management

Both venv and nvm give you a way to manage your project environments, but come at this problem from different directions: venv isolates at the Python project level, while nvm does so at the Node runtime level. The distinctions below clarify the boundaries of their responsibilities—i.e., scope, what each manages, and how Python and JavaScript/TypeScript developers use them in practice.

Scope

  • venv creates a separate Python environment per project. It changes PATH and site-packages for that project only. Think of this as project-level isolation.

  • nvm installs and selects Node.js versions at the user level: it installs multiple Node versions under the user’s $HOME directory and switches between them by updating the shell’s PATH. The selected runtime spans projects—unless a developer pins a Node.js version in a project’s .nvmrc file.

Responsibilities

  • venv is analogous to an environment container for Python. It does not install packages; pip does that inside the venv. Note: newer tools like uv combine these steps, chiefly by creating and populating environments automatically as part of the work of managing dependencies.

  • nvm manages the Node.js runtime version. It does not manage per-project packages; rather, npm installs those in the project’s node_modules tree. Note: nvm does not manage alternative JavaScript runtimes such as Bun or Deno. Like uv, these tools collapse interpreter, package manager, and dependency resolver into one solution.

How teams use them

  • With Python, developers create a venv, then run pip to install the project’s dependencies. Newer tools such as uv and Poetry simplify this process: running a single command (uv sync or poetry install) creates an isolated environment and installs required packages automatically.

  • With JavaScript, developers use nvm (or alternatives such as n, fnm, or volta), then run npm to install the project’s dependencies. Tools like Bun and Deno simplify this: running bun install or deno cache resolves and caches dependencies without requiring an external tool like nvm or npm.
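
As a rough sketch of the integrated approach, assuming a project with a package.json (for Bun) or a main.ts entrypoint (for Deno), the workflow collapses to one tool:

    # Bun: runtime, package manager, and resolver in one binary
    bun install           # reads package.json, writes a lockfile, populates node_modules
    bun run index.ts      # runs TypeScript directly, no separate transpile step

    # Deno: dependencies are resolved and cached from the module graph
    deno cache main.ts    # fetch and cache the imports declared in main.ts
    deno run main.ts      # run against the cached dependency set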

Mind the Gap

Tools like uv, Bun, and Deno stop at the language boundary. They give you Python and JavaScript dependencies, but don’t define compilers, source headers, C libraries, etc. that builds and processes rely on. This means that build- and run-time reproducibility cease at the project boundary. This is a feature, not a bug: it’s consistent with how all traditional OS- or ecosystem-specific package managers behave.

Systems like Nix (and software built on top of it, such as Flox) push past that limit. They describe the runtime, toolchains, libraries, and packages in a single declarative artifact. They give you a specification you can use to recreate a project’s complete runtime environment on any machine, at any time. This shifts the locus of environment management from a set of commands to a declared end state: basically, a declarative recipe for reproducing an environment. Nix and Flox materialize that state the same way each time.

Package Managers vs. Containers

Most package managers exist to solve problems with operating systems, languages, or runtimes.

This means their scope almost always stops at either the project boundary (pip, npm) or the OS boundary (apt, dnf, brew). The question of scope is critical because run- or build-time scopes tend to expand as projects move from local development to CI and production. This is why organizations often require developers to build and debug locally inside the same container images they use in deployment pipelines.

This pattern has the advantage of enabling parity between local dev and CI/prod; it has the disadvantage of limiting ergonomics. A containerized environment is basically hermetic: if developers need a compiler, debugger, system library, or other resources, these must be built into the image—or installed manually at runtime. In practice, then, most developers end up punching holes in their dev containers (installing extra tools, mounting local directories, or forwarding ports) as part of their workflow.

Containers are the default way we ship and run code, along with that code’s runtime dependencies. Given this, it’s easy to mistake them for a solution for packaging software, too. But the primary reason containers exist is to serve as a reliable, portable artifact for deploying and running software at scale.

The next section briefly explores the difference between containers and package managers.

Containers Are for Distributing Software, Not Packaging It

Containers are infrastructure artifacts optimized for running workloads in isolated environments to meet common operational needs, like fault tolerance, scaling services, or shipping software to CI and production. They strike a balance between isolation and portability, making them easier to build, run, and maintain than other forms of encapsulation, like VMs. Assuming compatible OS/architecture and OCI runtimes, container images provide a portable, reproducible deployment artifact for shipping and running software at scale.

However, a container doesn’t package software any more than a shipping container “packages” goods.

Packaging involves resolving dependencies and conflicts, honoring ABI/API constraints, and handling maintainer scripts, policy, and lifecycle hooks—not just deciding where files land. That work happens during the package build, i.e., when producing .deb, .rpm, or .apk archive files. Even when it’s time to build a container image, dependency solving and policy enforcement are the responsibility of the package manager (apt, dnf, apk, etc.), not the image builder. The container build tool simply installs or copies package artifacts onto a base image, archiving them as read-only OCI layers. Like so many stacked pallets, these layers comprise the container image itself.
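
To make that division of labor concrete, here’s a hedged sketch using buildah and apt-get; the base image and package are illustrative. The package manager does the solving and installing; the image builder only snapshots the result:

    ctr=$(buildah from debian:bookworm-slim)       # start a working container from a base image
    buildah run "$ctr" -- apt-get update           # apt fetches package metadata
    buildah run "$ctr" -- apt-get install -y --no-install-recommends python3   # apt resolves and installs
    buildah commit "$ctr" localhost/python-base:latest   # archive the result as read-only OCI layers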

Containers are part of the infrastructure organizations use to run and deploy software. Isolation is integral to this role, and packaging underpins it. By itself, however, a container image ≠ a package. Like loading packaged cargo into a shipping container, it’s just a means to the end of producing an isolated, portable artifact for deployment. The packaged cargo is what ultimately matters, but the modern software supply chain is built around the container as an intermodal unit for storing, shipping, and delivering software.

The upshot is that many if not most organizations rely on containers to build, run, and debug Python and JavaScript/TypeScript projects—and not just when it’s time to deploy them to production.

What About Conda?

Conda is a solid choice for projects or workloads that primarily involve Python—especially on 64-bit Windows or macOS. Conda is widely used in scientific or data-centric computing, where it’s trusted as a reliable resource for Python stacks with major C/C++/Fortran dependencies. Both Anaconda’s defaults channel and conda-forge distribute precompiled artifacts that link against standard Conda toolchains, so you don’t need a system compiler or source headers to install them. This means Python-native and C/C++/Fortran-linked stacks (with dependencies like NumPy, SciPy, etc.) generally work out of the box.
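
A minimal sketch of what that looks like in practice (the environment and package names are illustrative):

    conda create -n sci -c conda-forge python=3.11 numpy scipy   # solver picks prebuilt, ABI-consistent binaries
    conda activate sci
    python -c "import numpy; print(numpy.__version__)"           # works with no system compiler or headers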

Anaconda, the commercial company behind Conda, is one of just a handful of vendors—along with Flox, Canonical (Ubuntu), SUSE (OpenSUSE), and CIQ (Rocky Linux)—that are authorized to distribute prebuilt, prepatched CUDA packages. Today, Anaconda’s defaults channel, NVIDIA’s nvidia channel, and the community-maintained conda-forge channel all provide CUDA packages, enabling teams to fetch CUDA-accelerated Python dependencies without building them from source. This is a major time-saver: PyTorch, TensorFlow, CuPy, and other CUDA-accelerated dependencies can take hours to compile.

When and Why to Use Conda

Conda’s scope, however, is confined almost exclusively to the Python ecosystem: it does not and cannot manage non-Python services or runtimes. To permit reproducibility across both space (e.g., teammates’ laptops, CI runners, prod clusters, on-prem vs. cloud) and time (i.e., one month, six months, or six years from creation), teams typically (1) pin channels and package builds, (2) use frozen channel snapshots or internal mirrors, and (3) generate conda-lock lock files with specific builds and hashes, resolving and installing against the same snapshot/mirror. For deployment-grade reproducibility, teams also usually (4) capture the result per target platform/ABI in a container image pinned by digest, or build it into a VM/AMI.
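
Here’s a hedged sketch of the lockfile step, (3), using conda-lock; the file, platform, and environment names are illustrative:

    # environment.yml pins channels and top-level dependencies; conda-lock resolves the
    # full graph per platform and records exact builds and hashes in conda-lock.yml.
    conda-lock lock -f environment.yml -p linux-64 -p osx-arm64
    conda-lock install -n myenv conda-lock.yml    # later: recreate the exact same solve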

Conda works best for Python-centric projects and workloads. If you’re managing scientific, ML, or data engineering workloads, and your dependency graph consists mostly of Python plus compiled C/C++/Fortran dependencies, Conda’s solver metadata reliably picks precompiled, ABI-correct binaries for your platform.

Conda is also useful for local Python work on macOS and Windows. On macOS, Conda builds packages against the appropriate Apple SDK, libc++, and CPU architecture. On Windows, Conda’s solver selects packages matching the MSVC runtime, UCRT, Windows SDK, and the Windows 64-bit build of each package—including CUDA-accelerated packages if applicable.

Conda and Reproducibility

Conda as a package and environment manager doesn’t span the whole of the SDLC. Its responsibilities stop with building and managing environments; it doesn’t deliver, deploy, or run them in CI or production.

For example, a Conda project built by a team locally is reproducible to the extent that (1) the artifacts and repodata it’s locked against remain unchanged and available, and (2) its Conda environment is solved and installed from the same frozen channel snapshot on a system whose OS, CPU architecture, and virtual packages match that lock. Needless to say, these conditions are difficult to maintain across local dev, CI, and prod.

In practice, then, most teams opt to freeze Conda environments by shipping them inside containers. To keep parity, some teams build and debug inside the same image locally. The upshot is that Conda by itself doesn’t guarantee reproducibility: to achieve reproducible behavior at runtime, most adopters ship and run a container image pinned by digest.

The Nix and Flox Alternative

Solutions like Nix and Flox support virtual environments, cross-language/cross-toolchain package management, and lockfile-based versioning. They permit portable, reproducible runtime behavior across Linux, macOS, and WSL2. They define virtual environments that run as subshells, which is sufficient to isolate projects or workloads without (as with containers) walling developers off from their local systems.

Let’s unpack this. First, Nix and Flox environments are “virtual” in that they’re defined as declarative artifacts: a shell.nix or flake.nix, in the case of Nix; a manifest.toml, in the case of Flox. These files specify the desired end state of an environment—its dependencies, configuration settings, and run- or build-time parameters—rather than enumerating the procedural steps required to construct it.

Second, a Nix or Flox environment consists of (a) this declarative artifact, (b) a lockfile, and (c) a forest of symlinks. These symlinks refer to packages that live in an immutable, input-addressed store: /nix/store. When you activate an environment, Nix or Flox evaluates its declarative artifact and generates symlinks to packages in /nix/store. Both can resolve dependencies dynamically or from pinned versions, but in all cases they realize the environment as a network of symlinks rather than copies of real files.

Third, Nix and Flox environments run as subshells on the host system. Activating a Nix or Flox environment doesn’t spawn a container or VM; rather, it starts a child shell session whose PATH and related variables are bound to the environment’s symlinked dependencies. This gives developers most of the isolation they need—without the hermetic isolation of working in a dev container. Because the environment runs directly on the host, developers get access to all their local resources.
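
Put together, a minimal sketch might look like the following, assuming Flox is installed and that the package names exist in the Flox Catalog (with plain Nix, nix-shell or nix develop enters an equivalent subshell):

    flox init                       # create .flox/ with a manifest.toml
    flox install python311 nodejs   # add packages; exact versions are pinned in the environment’s lockfile
    flox activate                   # start a subshell whose PATH resolves into /nix/store
    which python3                   # points at a symlink into /nix/store/...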

Nix and Flox environments run reproducibly across the entire SDLC. You can export them to containers, co-locate them with your Git repos, or invoke them dynamically at runtime. This enables developers to build on their local machines, using exactly the same environments they push to CI and deploy to production. No containers required.

When to use which tools

Python

  • For quick, standard setup. Create a venv and install with pip. Pin versions in requirements.txt. Suitable for small projects and experiments.

  • For reproducible installs and fewer moving parts. Use tools like uv or poetry. A single command creates an isolated environment and installs from a lockfile. Prefer this for team projects and CI.

  • For non-Python dependencies. Pair Python tooling with a system package manager (apt, Homebrew) or a declarative environment (Nix, Flox). Python tools won’t fetch external libraries or tools for you.

JavaScript / TypeScript

  • For Node.js projects. Select a Node version with nvm (or n, fnm, volta), then install with npm, pnpm, or yarn.

  • If you want an integrated runtime. Use Bun or Deno. These ship JavaScript / TypeScript runtimes with built-in package managers and resolvers. bun install or deno cache sets up the project without requiring nvm and npm.

  • For per-project Node pinning. Commit .nvmrc (or a Volta toolchain file) so collaborators and CI runners pick the correct Node version automatically.

  • For non-JavaScript / TypeScript dependencies. Pair your JS/TS stack with a system package manager (apt, Homebrew) or a declarative environment (Nix, Flox). JS/TS tools won’t fetch external libraries or tools for you. Note: getting JS/TS dependencies from Nixpkgs or the Flox Catalog gives you deterministic behavior across space and time.

Polyglot projects (Python + JS)

  • For the typical web stack. Manage Python with venv + pip (or uv/poetry) and JavaScript with nvm + npm (or Bun/Deno if the toolchain allows). Treat each side as its own dependency graph, since Python and JavaScript resolve and version their dependencies independently.

  • For unified runtime pinning. Use asdf to pin Python and Node versions in one file; continue using the ecosystem-specific package managers (e.g., pip, npm) within each project.
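
A hedged sketch of the asdf approach (versions are illustrative):

    asdf plugin add python
    asdf plugin add nodejs
    asdf install python 3.11.9
    asdf install nodejs 20.11.1
    asdf local python 3.11.9        # writes the pin to .tool-versions
    asdf local nodejs 20.11.1       # collaborators and CI runners pick these up automatically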

When to favor integrated tools

  • For runtime reproducibility across local dev and CI. Prefer integrated tools like uv or poetry for Python; prefer volta (which gives you Node + npm version pinning) or Bun/Deno where appropriate. These tools provide lockfiles and deterministic installs, and they require less in the way of manual coordination. Note: none of them manages external dependencies, so reproducibility stops at the language boundary.

  • For smoother project handoffs. Using integrated tools reduces setup drift across laptops and CI runners.

When you need more than project-level control

Cross-language toolchains, compilers, C libraries, source headers, external services (e.g., PostgreSQL), and other artifacts lie outside of what ecosystem-specific tools can manage. In such cases use:

  • Containers pinned by image digest to define the base environment used in CI and production. Build and run locally inside that same digest to keep all dependencies consistent. Note: container sizes vary by stack. Python containers with NumPy, SciPy, PyTorch, or TensorFlow—especially with MKL or CUDA support—can reach several gigabytes. Similarly, JavaScript/TypeScript projects that include compilers, bundlers, or node_modules trees (like Webpack, Vite, or Next.js) can also grow to be quite large.

  • Conda for Python-centric stacks with compiled C, C++, or Fortran dependencies, including GPU user-space libraries. Installs come as prebuilt binaries, so no system compiler is needed, and the solver picks ABI-correct builds for your platform. For environments that will be shared across teams, shipped to CI, deployed in production, or debugged by other teams, create conda-lock files and resolve and install from frozen channel snapshots or mirrors; then ship the environment inside a container pinned by image digest. Note: even when you start from minimal base images like mambaorg/micromamba, Conda containers can grow to become quite large once scientific or CUDA-accelerated dependencies are included.

  • Nix or Flox to define that same environment as code. Both give you a declarative specification that you can either co-locate with your Git repos, export as OCI images, or invoke dynamically (with git clone, nix build, or flox activate -r) from runners. Nix and Flox extend or (optionally) replace the container model: instead of replaying a prebuilt image, they generate containers directly from a declarative definition, so every dependency is declared upfront and can be rebuilt identically on any machine.
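
As a closing sketch, here’s roughly what that dynamic invocation looks like from a CI runner or a fresh machine; the repository and environment names are hypothetical:

    git clone https://github.com/example-org/project && cd project
    nix build                              # realize the flake’s outputs from its pinned inputs
    flox activate -r example-org/dev-env   # or: activate a centrally shared Flox environment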