Python Dependencies

// #Devprod

What is this Hellscape

The bitcoin core project uses python for its “functional” tests. In this context, these tests run against the high-level user interface of the project, requiring a whole process to be running. The python scripts fire off requests and ensure things work from a high level. These are valuable end-to-end tests, and the pattern is used in the floresta project as well.

I have been attempting to run the python test suite for floresta locally and it has been a struggle. The benefits of using python to write tests are clear when you compare it to, like, a huge bash script. A developer can quickly read a test and know what it’s doing. Python makes sense for functional tests because it is a good “glue” language for wiring together whatever other processes you might be testing. It would be a pain to write all the rust boilerplate just to wire up some small dependency.

And the downsides to python? Well, as soon as you introduce one python dependency to your script, chaos ensues. I am trying to gain some empathy and intuition for what is going on here.

Go and rust have their pain-points, no one’s perfect, but their dependency management is relatively easy to use. Even when things go sideways with crazy dependency constraints, the failures are clear and usually give the developer at least a path to fix things. So how’d go and rust do that? Right off the bat there are obvious differences between them and python. Go and rust are statically typed and both have their build tooling essentially built in (e.g. cargo for rust). Lock files are always present. Modules have clear definitions and requirements. But I don’t have a feel for what all of that is buying and why python doesn’t just, like, copy it.

Time to learn some python history.

OG python 1.0, back in the 1990s, was unsurprisingly designed for scripts. A high-level language for rapid development and glue code. Not a systems/general purpose language. Simplicity over robustness. And scripts are generally self-contained and not distributed, so importing other code (a dependency) was a bit of an afterthought. Importing other things is simply importing another script, aka a file. An import call looks for a file in the current directory and then in the directories of the interpreter’s sys.path. If there is a match, it is executed and its items are placed in a namespace for the current script (e.g. math.py would be imported under the math namespace).
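A tiny sketch of that lookup behavior, which still holds in spirit today (using standard library modules as the example):

import sys

print(sys.path)       # the directories the interpreter searches, in order

# "import json" means: walk sys.path, find a match, execute it, and bind the
# resulting module object to the name json in this script's namespace
import json
print(json.__file__)  # the file that actually won the lookup

# the classic footgun follows directly: a local file named json.py would be
# found first and silently shadow the standard library module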

Notice a difference here. Rust and go dependency tools handle grabbing and importing code in one go. Whereas python 1.0 isn’t concerned with distribution, so it doesn’t specify the “grabbing” part. To make up for that, it just looks around in some “common” spots for a match. It’s on the system to grab the code, the right version of it, and then put it somewhere on the filesystem where the interpreter will look…this sounds hard, and maybe impossible to manage once the global system state starts to get cluttered.

Python 1.5 introduced packages. Before packages, python modules existed as individual files with no formal way to group them hierarchically. A module in python is a single file containing definitions and statements; it’s the basic unit of code organization.

package/              # This is a package
  __init__.py         # Makes the directory a package
  module_a.py         # A module within the package
  module_b.py         # Another module
  subpackage/         # A nested subpackage
    __init__.py       # Makes the subdirectory a subpackage
    module_c.py       # A module within the subpackage

Package and module organization.

Packages are nice, you can import using dot notation, import package.module! Before packages, everything was essentially in the same flat namespace. Subdirectories were not a thing! The interpreter would only look on its path and everything had to have a unique name. It’s really hard to distribute a module with a general name like math. This feels, like, totally essential these days, but it’s helpful to see how far python has come since the 90s.
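Using the layout above, a quick sketch of what that dot notation buys (the function at the end is hypothetical, just to show the namespacing):

import package.module_a                   # runs package/__init__.py, then module_a.py
from package import module_b              # same lookup, bound to a shorter name
from package.subpackage import module_c   # the dotted path mirrors the directory tree

package.module_a.do_thing()               # hypothetical function, namespaced by its package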

Packages give some import organization, but how are things published? How is a version given to some group of modules? Initially, this simply wasn’t a thing. No versions and a highly dynamic type system. A recipe for fun!

A python distribution contains a group of packages, modules, and other resources. It is comparable to a rust crate or go module, the smallest unit to get published. The term emerged in the early 2000s when python started to add some publishing tooling. Things started with distutils (2000), then setuptools (2004), then a lot of little PEPs for things like version strings (wowee), until we get to PEP 518 (2017) which introduced the modern-looking pyproject.toml.

distutils was introduced with python 2.0. It was added to the standard library (from distutils.core import setup). A maintainer can use it to write a script (!) which builds and installs a distribution. Its focus is local install; distribution is limited to packaging things into a tarball. There is very limited metadata, so no dependency declarations, not to mention versions.
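A minimal sketch of one of those setup scripts (hypothetical project name; distutils has since been removed from the standard library entirely, in python 3.12):

# setup.py -- a script the maintainer writes and users run by hand
from distutils.core import setup

setup(
    name="mypackage",        # hypothetical distribution name
    version="1.0",
    packages=["mypackage"],  # package directories to include
)

# python setup.py sdist     -> bundle everything into a tarball
# python setup.py install   -> copy the package onto this machine's sys.path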

PyPI (Python Package Index) was introduced in 2003 and distutils was extended with auto-publish features. Makes sense today, a central spot to dump distributions. But the UX was limited. A user had to manually grab tarballs from PyPI in order to install them.

setuptools was introduced in 2004. It is an extension to distutils, but it was not added to the standard library (for a bunch of reasons). So to use it, you first have to go grab it. But what does it add? A lot of stuff. We are talkin dependency installation, version constraints, optional dependencies, entry points, package discovery. Since it wasn’t part of the standard library, this essential-sounding functionality still ended up splintering the ecosystem. I can’t imagine taking on a problem like resolving dependency version constraints without at least being in the standard library. Ambitious.
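Roughly what that looks like in a setuptools-flavored setup.py (a sketch with hypothetical names):

# setup.py, now with setuptools doing the heavy lifting
from setuptools import setup, find_packages

setup(
    name="mypackage",                    # hypothetical distribution
    version="1.0",
    packages=find_packages(),            # package discovery
    install_requires=[
        "requests>=2.0,<3.0",            # dependencies, with version constraints
    ],
    extras_require={
        "dev": ["pytest"],               # optional dependencies
    },
    entry_points={
        "console_scripts": [
            "mycli=mypackage.cli:main",  # entry point: setuptools generates a CLI wrapper
        ],
    },
)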

So along comes Pip in 2008 (a recursive acronym for “pip installs packages”). Pip is a separate third-party app, not built into the standard library like distutils, and not a script. So closer to rust’s cargo. With that said, it wasn’t included with python until 2014. Pip wasn’t looking to fully take over distutils or setuptools; its focus is on the users of distributions and not the maintainers. But like most things python…not a super clear line in the sand (until later with things like PEP 517/518). Pip introduced requirements.txt files, which look a bit like modern lock files, but are missing some key information. They don’t have complete dependency trees or hashes (a commitment to exactly what is used to build), so deterministic builds are not possible.

flask==1.1.2
requests==2.25.1

requirements.txt has version information, but not stuff like hash commitments.
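For contrast, tools like pip-compile (from pip-tools) can emit a pinned-and-hashed flavor of requirements.txt, which gets closer to a lock file, though it is still a convention rather than a standard. The hashes below are placeholders:

flask==1.1.2 \
    --hash=sha256:<hash of the published artifact>
requests==2.25.1 \
    --hash=sha256:<hash of the published artifact>

pip can enforce these with its hash-checking mode (--require-hashes), but nothing about the format captures the full dependency tree unless every transitive dependency is pinned by hand or by a tool.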

Python spent the 2010s standardizing metadata on projects and distributions. This is codified in PEP 621 and the pyproject.toml file (sketched below). But somewhat surprisingly, as of 2025, there is no standard lock file format. And I have a suspicion that this is why things are still painful. But taking a step back after that whirlwind through python, it kinda looks like the go and rust designers learned from the pains of python and jumped straight to better tooling. They aren’t hauling behind them the bazillion lines of code with no standard metadata attached to it…code which is probably of massive importance to the world.
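For reference, that standardized metadata ends up looking something like this, a minimal pyproject.toml sketch with hypothetical values:

# pyproject.toml
[build-system]                          # PEP 518: what is needed to build this project
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]                               # PEP 621: standardized project metadata
name = "mypackage"                      # hypothetical distribution
version = "1.0"
dependencies = [
    "requests>=2.0,<3.0",
]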

I personally only use python for small, no-dependency scripts; I still have concerns with taking on any dependencies. But I admit, I am kinda weird. I have tried recently to layer nix on top of python to see if it will give me more confidence. The answer? Not really. There is a huge impedance mismatch between nix and python which doesn’t exist with rust or go. Rust and go have learned the lessons of python and have a standard system which locks in all dependencies ahead of time. Deterministic inputs! Great for nix. But what is nix to do with python and its dynamic runtime dependency resolution?

Nix

Python is still working on standardizing a lock file spec, but in the meantime, people have made it possible to use python with nix (not that it’s very fun). I am curious how these worlds are reconciled without a lock file meeting point.

Nix packages have a “fetch” phase and then a “build” phase. Ideally, the build phase is “pure”: given the same inputs it produces the same outputs, and it requires no outside info (e.g. a network request). One can see the tension with python’s dynamic runtime build, which might very well depend on a network request to resolve dependencies. The fetch phase can be impure, but it produces a verified (hash-committed) result. A lock file is perfect for the fetch phase.

So yea, how does python under nix even work these days? I think the best way to put it is “by hand”. There are lock file converters for the non-standard specs (e.g. poetry and uv), but I don’t think these cover all the possible ways dependencies get resolved. Nixpkgs offers some common python dependencies, but only one version of each, which is just a tad inflexible. If you know your app well you can use the low-level fetchPypi helper to deterministically grab your dependency tree. Or you can vendor dependencies (as in, just download the source and check it into your project, bypassing everything). But in any case, talk about upkeep!

Rust and go combo their lock files with very static build steps. They each have a few escape hatches, like rust allows dynamic linking, but these are much easier to spot and harness in nix.