Alexis Gallagher & Rens Dimmendaal post a pretty compelling analysis of the impact of AI on Python packages (TL;DR not much, except in the area of AI).
However, I think it’s a long bow to draw to say that less software is being made overall.
Open-source Python packages are, broadly, more fundamental than applications in an https://xkcd.com/2347/ sense. They’re mostly designed to be used by others, to hold up other software, and as such tend to be built more slowly and managed by seasoned developers.
My observation of how seasoned Python developers who use LLMs are “instinctively” doing so follows a rough gradient:
- vibe-code if it’s a prototype, or something that only I will use
- use it more like an assistant, for something that only my team will use
- for open-source Python packages? holy fuck no!
I can see a couple of reasons behind the “holy fuck no” instinct which would explain why we’re not seeing an uptick in open-source contribution rate:
Open source is a community of developers who come together to collaborate and morally support each other. LLMs aren’t welcome yet. Part of contributing is demonstrating pride in your code and care for the world, and that doesn’t carry the same weight in closed-source. Using an LLM currently feels like it “bypasses” that moral intent. [I predict this will change in the next year as LLMs make more of a chaperoned contribution to open-source maintenance and bug-finding - ie reinforcing that which exists.]
There’s also a structural tension. Open source reduces the cost of building software by pooling labour. LLMs, we’re told, reduce costs by replacing labour. If you can use an LLM to solve a problem without labour, why bother pooling labour around the solution? The advantages of contributing code back are effectively nil when your marginal cost of rebuilding is close to zero.
Open source aims to solve the world’s problems; whereas if you’re going to use an LLM to solve a problem, you might as well only solve your own.
Then there’s the ouroboros problem. LLMs train on open-source. Contributing LLM output back to the training data risks model collapse - some developers may sensibly feel an aversion to feeding the snake its own tail.
So I agree with Gallagher and Dimmendaal’s analysis showing that AI isn’t having much impact on open source. But I suspect that’s not a sign that less software is being built. I think it’s rather a sign that, for better or worse, LLMs encourage a more self-centred approach to building software.