I gave uv a good hard look over the past couple of days and it is really nice, but I have run into a couple of scenarios where I expected it to do its magical thing and make things Just Work, but instead it tried to do exactly what was in the uv.lock file (which I hadn't been paying much attention to) and errored out because it couldn't make things work the way that file specifies (specifically, the original local package repo I used was no longer available).
I think it has a general problem of not being obvious about when it's going to update uv.lock based on what you tell it to do, and when it's going to error out because of uv.lock (or because of network connectivity or whatever else might come up).
The fact that it both takes direction from pyproject.toml/uv.lock and also edits those files is now giving me hives. I don't know what it's going to do.
The only issue I have is that it's project-based. I usually operate in Jupyter notebooks and a very fat conda environment. I tried uv but I didn't really know how to fit that workflow: do I create a new project for each notebook and add my 20 dependencies?
There is nothing stopping you from using uv the same way you use anaconda, as uv is not limited to project-based dev.
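For instance, here is a minimal sketch of impromptu, non-project usage; the env path and package choices are just placeholders:

```sh
# create a standalone venv, not tied to any pyproject.toml
uv venv --python 3.12 ~/fat-env

# activate it, then fill it up with the pip-compatible interface
source ~/fat-env/bin/activate
uv pip install jupyter pandas matplotlib
```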
I will write a uv tutorial at some point, but it will take time.
And another thing: suppose I have a library that only requires a few dependencies. But when developing, I usually fire up my fat environment to test my algorithms in various scenarios provided by different libraries I have, e.g. datasets, models, plotting tools, etc. Is that possible with uv?
Yes. uv supports old and new workflows alike. I would not use it with anaconda though; mixing PyPI and conda-forge is a good way to bring pain into your life.
Great post! Very much mirrors my own experience of pleasant surprise switching to uv this past month.
Another underrated benefit is inline dependencies in scripts. It is a game changer to be able to send a script that encapsulates all its dependencies. It's worth looking at these! Here is an example: https://gist.github.com/jlevy/ee975e59c8864902b288e2a44dd29f98
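For anyone who hasn't tried it: this relies on PEP 723 inline metadata, which uv understands. A minimal sketch (the dependency is just a placeholder):

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
# ]
# ///
# `uv run demo.py` reads the block above and runs the script
# in a throwaway environment with requests installed.
import requests

print(requests.get("https://example.com/").status_code)
```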
The biggest shortcoming I found was that it is so new that it's hard to find established best practices, like a sensible end-to-end template with uv, ruff, GitHub Actions to set them up, etc. So in case it helps anyone: I did publish the template I use. See git.new/uvinit or my post: https://x.com/ojoshe/status/1901380005084700793
Great article, thanks for the write-up! I'll look forward to your HowTo. In the meantime I'll be reading up on uv, installing it and trying it out for myself.
Btw, have you heard of / used mise (https://mise.jdx.dev/)? I've really enjoyed using it and I'm curious to hear your thoughts on this tool as well.
Yes, but I stay away from tools that work across stacks.
Thanks for this long write-up; looking forward to an equally long write-up on how to properly set up and configure uv and build a workflow based on it.
The write-up is going to be longer and split among several articles, but I will wait until the task feature is added before I start writing it.
Excellent summary. I am one of those who have heard the hype for uv, but then I heard the same hype for poetry, and for another one I can't remember, so I always wait before trying new stuff.
Waiting for your follow-up article on how to use uv.
Yes, and about pipenv, and anaconda, and so on.
Enjoyed the article; it helps confirm my choice of uv over poetry for our greenfield project. Speaking of uv as a project management tool, you might be interested in the issues that I recently filed: https://github.com/astral-sh/uv/issues?q=is%3Aissue%20state%3Aopen%20author%3Amatthewadams
We're in a polylingual dev environment (kotlin, java, javascript, typescript, python, and likely more coming) employing a git monorepo, and, similar to your assertion about Python coders not knowing the command line (with which I agree), we've noticed that some data sciencey folks aren't familiar with git, git branching strategies, version control principles & semver, the software development lifecycle, build tools (maven/gradle, make, grunt/gulp, etc), dependency injection and inversion of control, automated testing, issue tracking systems and how they affect how you incrementally add features or fix bugs, monorepos/polyrepos, etc. Basically, they're mad scientists, off working in their secret, isolated laboratory on ad-hoc tasks, and haven't participated in releases & everything that goes along with them.
uv could step in here to really help these types of folks (and me) out.
You will need to describe more precisely what you need in the ticket, because there are many ways to organize in and around a monorepo. That way the team may be able to generalize features that can help in different setups.
Great overview, thanks.
I just reviewed uv for my team and there is one more reason against it, which isn't negligible for production-grade projects: GitHub Dependabot doesn't handle the uv lock file (yet). Supply chain management and vulnerability detection are so important that this prevents the use of uv until it sees more adoption.
Seems that this is no longer an issue:
https://github.blog/changelog/2025-03-13-dependabot-version-updates-now-support-uv-in-general-availability/
Thanks for letting me know, this is great!
That's right, but you lose some of the benefits of uv. In any case, GitHub is planning to address this in Q1 2025: https://github.com/dependabot/dependabot-core/issues/10478#issuecomment-2578570442
There's a pretty good list of reasons not to switch pdm to using uv as the backend: https://pdm-project.org/latest/usage/uv/#limitations
I have made the switch, but I could understand why others haven't.
I switched to pdm to avoid most of those same complaints you raise. What were the pitfalls? You mentioned pdm once in the article---could you say more specifically?
The article https://www.bitecode.dev/p/why-not-tell-people-to-simply-use is not specific to pdm, but the arguments hold for it as well.
Goodness. Those are the kinds of reasons I was evangelizing switching /to/ pdm from the tools mentioned in TFA. I recently did move to uv as my pdm backend, but I didn't see a big QoL improvement. I'm curious, did you spend much time with pdm, or did you mostly jump straight from the aforementioned tools over to uv?
Fabulous piece, thanks. I've been wanting to try it.
Got a wee typo (double negative):
"Number 2 is not something you can't do much about, so the point is moot."
Thanks
> Finally, uvx (and so uv tool install) suffers from a similar problem than pipx, in that it encourages you to install some tools outside of your project. This makes sense for things like yt-dlp or httpie which are self-contained independent tools. But it's a trap for dev tools that care about syntax or libs, like mypy that will be installed in a certain Python version, but then used on a project with another potentially incompatible Python version. They will break spectacularly and many users won't understand why.
I think it's specifically meant for tools like `yt-dlp` and not for tools like ruff or type checkers. Add those directly to your project, and you can run them directly from inside the venv, or using `uv run`.
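A rough sketch of that project-local approach (the tool choices here are just examples):

```sh
# pin dev tools inside the project's own environment
uv add --dev ruff mypy

# run them against the project's Python, not a globally installed one
uv run ruff check .
uv run mypy src/
```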
Indeed, but most people probably don't realize it.
Is there a conda to uv migration tutorial written by anyone?
I have installed miniconda system-wide. Any Python package that I use a lot, like ipython, I install in the base environment and in other environments as well.
For every new project, I create a conda environment, and install everything in it. Upon finishing/writing my patch, I remove that environment and clean the caches. For my own projects, I create an environment.yaml and move on.
Everything works just fine, and solving with mamba is fast. I can just hand someone the code and environment.yaml, and it runs on other platforms.
Can someone say why using uv is a good idea? Has anyone written a migration guide for such use cases?
I am mightily impressed by the one-line dependency declaration in a file. But I don't know (yet) where the caches are stored, how to get rid of them later, etc.
I don't know of any such guide, and I suspect one reason is that the typical anaconda setup doesn't exist. Anaconda has 4 different package managers: conda, miniconda, mamba and anaconda-project. It has several possible configuration files as well, and you can't know what combination of non-Python tools, conda channels, or Anaconda Cloud features the user depends on.
This makes writing such a guide very challenging.
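That said, for the plain per-project, PyPI-only part of the workflow described above, a rough uv equivalent might look like this (package names are placeholders, and conda channels have no direct counterpart):

```sh
# create a project with a pyproject.toml, then declare dependencies
uv init myproject && cd myproject
uv add numpy pandas

# on another machine, reproduce the exact environment from uv.lock
uv sync

# caches live in a single directory that you can locate and clear
uv cache dir
uv cache clean
```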
I'm guessing calling this a "project management tool" is likely to confuse readers. https://en.wikipedia.org/wiki/Project_management I know they do this on the uv site, but it's "package management" really.
uv does more than packaging: it also handles provisioning, bookkeeping, isolation, command running and abstraction.
You can, of course, consider that all of that is part of package management in a way, but if you say "a Python package manager", uv is not what people will picture either.
Yet I agree "project management" is not ideal either.
Great article. You may want to run it through a spell check, though:
- developping
- maintening
- signaling
- independant
- contraints
Thanks a bunch.
Another blocker in my current gig is native JetBrains support; it hasn't come out yet on PyCharm. I don't use JetBrains stuff anymore, but I have team members that do.
PyCharm now supports uv: https://www.jetbrains.com/help/pycharm/uv.html
Crazy how fast things are moving.
If you use Pixi by prefix.dev you can have conda and PyPI too. It uses uv under the hood! (same lockfile)
Mixing conda channels and pypi is a path to pain.
Is there a better way besides pixi to mix, say, Node.js and Python in a single project?
Using npm itself works fine; it's neither hard to install nor to use. Not uv-level (although Evan You is working on that), but OK.
pixi is a layer of indirection that will leak. If you do everything, you don't do anything well.
I would just bite the bullet and use npm. With vite if you use a web stack.
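A sketch of what that split can look like in a single repo, each ecosystem keeping its own manifest (the directory names are just an example):

```sh
# Python side: pyproject.toml + uv.lock, managed by uv
cd backend && uv sync

# JS side: package.json + package-lock.json, managed by npm
cd ../frontend && npm install && npm run dev
```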
My attraction to pixi is that the .toml file tells someone everything they need to know to reproduce the environment, or at least that's its aspiration. Before pixi, when I depended on npm (or git or ...) I needed to manually maintain a separate doc for that, which was hit and miss for reliability.
None of which counters what you said about indirection and leaks. It just means that there is still no robust solution for this, and I must keep attempting to practice self-discipline in what I document. ;-)