Summary
The new REPL trickled down changes to PDB that I wanted for years.
Many fixes to shutil, so we may be able to use it without praying, finally.
A few small concurrency victories.
Lambdas and comprehensions now work inside the new annotation scopes.
Python 3.13 is still not significantly faster. Sorry.
As per tradition
Python 3.13 is a great release packed with features and improvements, but there is already an avalanche of articles that go through the release notes. If you want a good rundown, RealPython has a quality one, and there is little value in this blog adding to that.
So we are not going to speak about the new REPL, the no-GIL build, the experimental JIT, the deprecation thingy, the new typing goodies, or the better error messages (my fav, as usual).
Instead, I read the short book they call a changelog, and we are going to look at the things people didn't talk much about but that piqued my interest.
Make debugging great again
Despite the spartan ergonomics, I love pdb and even wrote a nice intro to it.
But if you ever did this:
try:
    1 / 0
except ZeroDivisionError as e:
    breakpoint()
You know what happens when you read e:
-> breakpoint()
(Pdb) e
*** NameError: name 'e' is not defined
This is infuriating, especially since you usually have to deal with this when you are at a low point in your life.
And it's now fixed.
Praise to Apophis, Quetzalcoatl, and Jörmungandr!
But that's not all. Pdb itself got polished:
Multiline editing, finally!
Code completion, like in the new REPL.
break accepts dotted paths, so you can easily add breakpoints in any lib dynamically.
Support for pdb in zipapps. Because why not.
They fixed pdb CLI swallowing arguments. VSCode’s team will love this.
.pdbrc is finally not broken anymore. Didn’t use it? Me neither. Because it was broken.
Post-mortem mode works even for SyntaxError, which is what you want when you exec() code.
This is, by itself, a very good reason to use 3.13.
File system manipulations are getting some love
You would expect that a 30-year-old language has paths and files nailed down, but it turns out you can always find room for improvement.
The shutil module, which provides high-level FS operations such as recursive delete or copy, has seen such tweaks: many, many bugs were fixed (particularly regarding error handling during recursion), and options were added (e.g., to choose how to handle symlinks).
shutil was notoriously so finicky that I always hesitated to use it: recursive descent often broke one way or another. This means I will attempt to put it back in my toolbox, as it's very convenient when it works.
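As a refresher, this is the kind of thing shutil does for you (a minimal sketch, not tied to any specific 3.13 fix; the paths and the logging handler are made up):

import shutil

def log_and_skip(func, path, exc):
    # onexc (added in 3.12) receives the failing function, the path,
    # and the exception instance for each error during the recursive delete.
    print(f"Could not remove {path}: {exc}")

# Recursive copy, keeping symlinks as symlinks and tolerating an existing destination.
shutil.copytree("project", "backup/project", symlinks=True, dirs_exist_ok=True)

# Recursive delete that logs failures instead of blowing up mid-descent.
shutil.rmtree("backup/project", onexc=log_and_skip)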
Ditto for zipfile.Path, a pathlib-compatible wrapper for traversing zip files that you didn't know has existed since 3.8. And you didn't know about it because it sucked, frankly. But 3.13 brings many QoL patches to it and improves a lot on how it handles directories, which used to require a lot of manual labor. So I expect it will see more use from now on.
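If you have never met it, it looks like this (a quick sketch; the archive name and its contents are made up):

import zipfile

# zipfile.Path gives a pathlib-like, read-only view into an archive.
root = zipfile.Path("dist/example.whl")

for entry in root.iterdir():
    kind = "dir " if entry.is_dir() else "file"
    print(kind, entry.name)

# Joining and reading work like pathlib:
metadata = root / "example-1.0.dist-info" / "METADATA"
if metadata.exists():
    print(metadata.read_text())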
Finally, pathlib itself is getting a lot of tiny perf optimizations, with many operations now using strings behind the scenes instead of Path objects. Serialization should also be faster. I like pathlib, but it was a known issue that it could be a bottleneck, so this is superb news. I haven't measured the IRL perfs myself, so I'll reserve my judgment.
It's not just about being faster though, there are also nice adjustments to the API. E.g.: Path.glob() and rglob() now accept path-like objects as patterns, Path.glob() will return both files and directories if you end the pattern with **, and a Path.from_uri() method has been added.
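In code, that looks roughly like this (a sketch based on the changelog entries listed above; the directory names are made up):

from pathlib import Path, PurePath

src = Path("src")

# The pattern can now be a path-like object, not just a str (per the 3.13 changelog).
python_files = list(src.glob(PurePath("**") / "*.py"))

# A pattern ending in "**" now yields files as well as directories (3.13);
# it used to return only directories.
everything = list(src.glob("assets/**"))

# Build a Path from a file:// URI (new in 3.13).
hosts = Path.from_uri("file:///etc/hosts")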
Concurrency small victories
asyncio.as_completed() now returns an object that is both an asynchronous iterator and a plain iterator. OK, it's not huge, but it's nice.
asyncio.TaskGroups are great; if you don't use them, do. And now when you call create_task() on an inactive one, it will close the coroutine, preventing a RuntimeWarning.
queue.Queue can now be explicitly closed with shutdown(), which helps communicate to the rest of the system that it's time to stop pumping submissions into it (see the sketch after this list).
The multiprocessing pool max number of workers has been increased beyond 62, just in case you happen to be the guy in this Reddit thread.
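Here is roughly what the queue shutdown looks like (a minimal sketch; the worker logic is made up):

import queue
import threading

q = queue.Queue()

def worker():
    while True:
        try:
            item = q.get()
        except queue.ShutDown:
            # Raised once the queue has been shut down and drained (new in 3.13).
            return
        print("processed", item)

threading.Thread(target=worker).start()

for i in range(3):
    q.put(i)

# Refuse new submissions; the consumer drains the remaining items, then gets ShutDown.
q.shutdown()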
The annotation change nobody asked for
Remember that annotations were initially not specific to types? It was an experiment to see what people would use them for, and therefore they accept arbitrary Python expressions.
It turns out that wasn't true everywhere: the new annotation scopes introduced by the type parameter syntax choked on lambdas and comprehensions when the class using them was nested inside another class. 3.13 fixes that.
The bug ticket is quite funny:
The following code causes a SystemError during compilation.
class name_2[*name_5, name_3: int]:
    (name_3 := name_4)

    class name_4[name_5: name_5]((name_4 for name_5 in name_0 if name_3), name_2 if name_3 else name_0):
        pass
You don’t say, Frank, you don’t say…
Well, I guess how we are going to abuse this is now an exercise left to the quant developers.
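For a less fuzzer-shaped illustration, here is the kind of perfectly reasonable code this fix unblocks (a sketch adapted from the discussion around that ticket; get_base_class is a hypothetical factory):

class Base:
    pass

def get_base_class(values):
    # Hypothetical: compute the base class at class-creation time.
    print("creating a class parametrized by", values)
    return Base

class Outer:
    # A generic nested class whose base-class expression contains a comprehension.
    # Before 3.13, combining the type parameter syntax, the nested class, and the
    # comprehension failed (with a SyntaxError, or worse, a SystemError).
    class C[T](get_base_class([i for i in range(5)])):
        pass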
The disappointments
The perf improvements are not up to the hype in this release, just like for 3.12, with only a slight bump in the traditional benchmark for async operations (I hope that doesn't invalidate my optimism about pathlib). Making Python faster is way, way harder than Guido thought. Previous failed attempts warned us about it, so I guess it's not that surprising.
As I mentioned in a previous post, the incremental GC changes have indeed been reverted, since they didn't bring the expected results.
On top of that, despite the changelog being littered with import improvements, I measured no significant difference in Python startup time on my machine.
Plus, there is the inevitable stuff breaking:
If you read the Path.glob() changes earlier in the article slowly, you'll notice that the return values can change quite a lot, and I assume it will break someone's code.
There are other details that make me think this release will rub some people the wrong way:
Support for using pathlib.Path objects as context managers has been removed. Not that it was useful. And it was confusing. It's probably good. But you know.
Starting new threads and processes through os.fork() during interpreter shutdown (such as from atexit handlers) is no longer supported.
The positional arguments maxsplit, count and flags for re.split(), re.sub() and re.subn() are marked as deprecated in favor of their keyword versions. Not removed yet, it's just a warning. It does seem a bit extreme to me (see the sketch after this list).
Same for all but the first argument of sqlite3.connect().
The undocumented glob.glob0() and glob.glob1() functions are deprecated. I hated them, as they confused all my students, so I'm OK with it, but still.
.pth files with names starting with a dot or with the hidden file attribute are now ignored for security reasons.
The C API header files have been cleaned up, and I'm guessing there will be dragons.
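For the re change, this is the kind of call that now triggers a DeprecationWarning (a minimal sketch):

import re

text = "a, b, c"

# Deprecated in 3.13: passing maxsplit (or count/flags) positionally.
re.split(r",\s*", text, 1)

# Preferred: pass it as a keyword argument.
re.split(r",\s*", text, maxsplit=1)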
As usual, it's a balance between keeping the language modern, clean, and free of tech debt, and keeping the ecosystem stable enough not to kill the community's productivity.
There will always be people stating the language is stuck in the past, and those asking why Python is breaking stuff again. There is no way to win this. Core dev is a thankless job.
Random nice things
Let's finish on a positive note. I don't really have a category to group all of these under:
python -m venv adds a .gitignore file that auto-ignores the venv.
json.dumps() with indent will now use the C JSON encoder, making it much faster. Also, parsing errors due to the infamous trailing comma are now much clearer.
atexit works better with multiprocessing. Probably linked to the previous deprecation :)
Dataclasses now call exec() once per dataclass, instead of once per method being added. This can speed up dataclass creation by up to 20%.
A fused multiply-add operation, through math.fma(x, y, z), has been added. In case you have the urge to do some polynomial evaluation, right here, right now (see the sketch after this list).
time.sleep() now raises an auditing event. Somebody is going to receive an email about this, and I can't wait. No, but seriously, you don't want a sleep(1e9) in your code without raising an eyebrow in defense projects.
re functions such as re.findall(), re.split(), re.search() and re.sub(), which perform short repeated matches, can now be interrupted by the user. Fewer occasions to prevent Ctrl + C from working.
str.replace()'s count param can be a keyword arg.
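The fma one in action, evaluating 2x² + 3x + 5 Horner-style (a small sketch; the polynomial is made up):

import math

# fma(x, y, z) computes x * y + z with a single rounding step,
# which is exactly what Horner's method wants.
def poly(x):
    acc = 2.0                     # coefficient of x²
    acc = math.fma(acc, x, 3.0)   # 2x + 3
    acc = math.fma(acc, x, 5.0)   # (2x + 3)x + 5
    return acc

print(poly(0.1))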
What struck me when reading the changelog for 3.12, and now for 3.13, is how much focus there is on improving what already exists. Lots of bug fixes, small API tweaks, little attempts at better perfs, clean-up, and deprecation. Even the argparse docs, which were notoriously bad, got improved.
After years of chasing new features with async, typing, multiple interpreters, or the walrus operator, I was expecting the core dev new blood to double down on the shinies. Instead, we got better error messages, an improved PDB, a more flexible parser, a nicer REPL, and so on.
To me, this means they care.