Unlocking clarity and courage for leaders in tech

Links

Link

AI coding tools can create specific risks for decreasing users' engagement in learning by introducing inefficient learning habits. [...] This skill interrupts that pattern by reminding you to consider investing in reflection and learning. It introduces a different "mode" of interacting with Claude, which will intentionally feel different than highly fluent and fast agentic coding in the service of helping you reflect and explore your generated work.

— Dr Cat Hicks
Permalink
Link

Alexis Gallagher & Rens Dimmendaal post a pretty compelling analysis of the impact of AI on Python packages (TL;DR not much, except in the area of AI).

However, I think it’s a long bow to draw to say that less software is being made overall.

Open-source Python packages are, broadly, more fundamental than applications in an https://xkcd.com/2347/ sense. They’re mostly designed to be used by others, for holding up other software, and as such tend to be built slower and managed by seasoned developers.

My observation of how seasoned Python developers who use LLMs are “instinctively” doing so follows a rough gradient:

  • vibe-code if it’s a prototype, or something that only I will use
  • use it more like an assistant, for something that only my team will use
  • for open-source Python packages? holy fuck no!

I can see a couple of reasons behind the “holy fuck no” instinct which would explain why we’re not seeing an uptick in open-source contribution rate:

Open source is a community of developers who come together to collaborate and morally support each other. LLMs aren’t welcome yet. There’s an element of it that is about demonstrating pride in code and care for the world, and that doesn’t carry the same weight in closed-source work. Using an LLM currently feels like it “bypasses” that moral intent. [I predict this will change in the next year as LLMs make more of a chaperoned contribution to open-source maintenance and bug-finding - ie reinforcing that which exists.]

There’s also a structural tension. Open source reduces the cost of building software by pooling labour. LLMs, we’re told, reduce costs by replacing labour. If you can use an LLM to solve a problem without labour, why bother pooling labour around the solution? The advantages of contributing code back are effectively nil when your marginal cost of rebuilding is close to zero.

Open source aims to solve the world’s problems; whereas if you’re going to use an LLM to solve a problem, you might as well only solve your own.

Then there’s the ouroboros problem. LLMs train on open-source. Contributing LLM output back to the training data risks model collapse - some developers may sensibly feel an aversion to feeding the snake its own tail.

So I agree with Gallagher and Dimmendaal’s analysis showing that AI isn’t impacting open source. But I suspect that’s not a sign that less software is being built. I think it’s rather a sign that, for better or worse, LLMs encourage a more self-centred approach to building software.

Permalink
Link

I missed it the first time round, but this is absolutely incredible to watch (speaking as someone who generally doesn’t enjoy sportsball).

Apparently Alyssa Liu quit figure skating due to mistreatment and the kind of toxic culture you might imagine in such a field. She returned to it after years, this time with a focus on enjoyment of the sport and art.

Now it is Alyssa who asserts control over her training, choreography, diet, music (Giorgio Moroder my heart). She let go of her desire to compete, and focussed on her desire to express her art.

Witness the result:

Permalink
Link

Barry’s Economics turns ideas about “Learned Helplessness” on their head. A useful perspective in these times.

Your brain doesn't learn helplessness. Helplessness is wired in. It's the factory setting. [...] What has to be learned actively through direct experience is control.

Permalink
Link

I’ve recently found myself becoming sensitive to sentence constructions of the form “this delivers X while also Y”. You might also imagine X and Y being fairly buzzword-laden.

This phrasing is usually an expression of pride in a solution that addresses a paradox, and that by itself is commendable, but something about it signals to me an invisible juggling in the background: a sense of diluted purpose, an unhappy compromise, or a missed opportunity. I’m probably thinking that because it’s the underlying feeling I had when I used the phrasing myself in the past.

Anyway, here is a long post from Richard Claydon, that relates experiences of well-managed and poorly-managed ambiguity, conflict and paradox, and how they relate to the visibility of one’s work. The post contains some incisive descriptions of these experiences and their effects, and is worth spending some time with.

A project lead depends on functions they do not control. A country leader must reconcile local realities with global expectations. A department head is accountable for outcomes that depend on cooperation from peers with different incentives. In such systems, a great deal of what gets called leadership is really hidden integration labour: translating priorities, smoothing contradictions, protecting teams from churn, and keeping several partially incompatible agendas moving at once.

I have a feeling these leaders find themselves generating “X while also Y” sentences more than they’d ideally like.

The post goes on to relate how this work becomes illegible by its very nature, and so the leaders that do the most to untangle the confusion become the ones who are hardest to see.

Some leaders generate movement through force of personality, pressure, and ambient threat. Others generate very little structure at all while sounding agreeable, modern, and empowering. One style produces brittle compliance. The other produces unmanaged sprawl. Both, however, can remain more publicly legible than the quieter integrator whose contribution lies in making difficult work more doable. The system sees confidence, movement, and rhetorical fluency more easily than it sees repair, translation, and burden reduction.

The call in Claydon’s post is to bring visibility to entangled work, and thereby reduce its burden.

What to do about the “delivers X while also Y” construction? I don’t know yet—it’s still emerging for me. My superficial response would be to look for the “both/and” of X/Y. For example, rather than:

“This approach delivers alignment with global strategy while also protecting local team autonomy”

try something like:

“This grounds the strategy so that local teams can act on it without translation.”

The less superficial part is meaning it.

Permalink