Parity Bits

Stack Ossification

technology is path-dependent.

some paths now have ushers.

With the rapidly growing category of LLM-based dev tooling, experienced developers and interested outsiders alike can crank out a great deal more code. A great deal more software.

The tooling all has embedded preferences. React, Tailwind, and Python are probably the strongest standouts from my own perspective, working mostly with Claude. 'Preferences' here is anthropomorphizing a little, but that seems appropriate given the form factor. With prior tools, the thing I'm talking about might be described as the affordances offered by the tool.

Personally, I have recently created a few trivial, and one not-that-trivial, personal apps in Python. The thing is: I had never, ever, written any Python in the past. Python's syntax is straightforward enough that I can 'get the gist' at a glance, and wherever something is new or unclear to me, there's an immediate explanation available at whatever level of depth I'd like. LLMs do very well at providing appropriately levelled explanations from implicit contextual clues: a model that knows both that you are driving the technical roadmap of some project and that you are asking a total beginner question about a specific piece of language syntax tends to tailor the response well.
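For example, here's a contrived snippet (my own illustration, not from any of those apps): even with zero Python behind you, the gist is probably graspable at a glance.

    # Tally word frequencies in a file and print the three most common.
    from collections import Counter

    def top_words(path, n=3):
        with open(path) as f:
            words = f.read().lower().split()
        return Counter(words).most_common(n)

    for word, count in top_words("notes.txt"):
        print(word, count)

The point isn't the code; it's that nothing in it demands prior Python, and anything that does trip you up ('what is Counter?') gets an instant, well-pitched answer.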

A year ago, it would not be like me at all to spin up projects of any size in unfamiliar languages all willy-nilly. I'd do it only if I had a very specific motive, like moving into a new job or project that uses the language.

What has changed? I think there are two distinct drivers, which reinforce one another: the nudge and the identity-shift.

nudge

The nudge is essentially those preferences mentioned at the top of the piece. After deflecting the default Python response back into a familiar language some number of times, I eventually relented, rolled with it, and let the LLM build in Python.

This is much more explicit for tiny-web-dev, where the public-facing chat interfaces of the major providers include inline rendering for React-based pages, but not Vue, Svelte, or whatever else.

identity-shift

Writing code by hand feels increasingly like multiplying large numbers by hand. I can do it. I understand and believe in the experiential value of having learned to do it. But it is just. not. practical.

Writing code now, I increasingly resemble a product manager, with Claude as the developer. The thing is: as a PM, how much value do I get from niggling with the developer over the language or tools being used?

From where I stand right now, it feels likely that I'll use Python for more personal projects. As some of them grow, I'll likely be comfortable using it for projects intended for production or distribution as well.

Ossification

I'm an experienced, curmudgeonly, middle-aged developer. In the demographics of code-generating LLM users, I should probably land on the far right-hand side of a passive-active curve: among the least likely to passively accept a tool's defaults.

But it wasn't enough! My decades-long general ick reaction to significant-whitespace has melted away. I bent the knee to the convenience of the top search response.
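(If you've been spared the ick: Python has no braces or 'end' keywords; indentation alone delimits blocks. A made-up few lines of the offending feature:)

    # Indentation *is* the block structure: dedenting ends the block.
    def classify(n):
        if n % 2 == 0:
            return "even"
        return "odd"

    print(classify(7))  # prints: odd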

If it's true that I'm on the right side of this curve, then the embedded preferences of the meta-tooling (LLMs) are going to dominate tool choice (languages, deployment stacks, databases, etc.) for all greenfield projects. Besides "picking winners" among current players (e.g., React vs the others), we might expect an ever-steeper climb for new tools.

Once upon a time you had to be 10x better than an incumbent to flip a mature market. Maybe now it'll be 100x. To be honest, I don't know if that multiplier can be achieved.

A potential caveat

Part of the argument here is that the nudge toward tool X will produce greater volumes of tool X output, which becomes part of the training corpus for future LLMs, and reinforces both their preference for the tool and their skill with it.

BUT: hiccups have been observed when training models on model-produced data ('model collapse', in the literature).

The failure mode is that LLMs generally produce slightly airy, sanded-smooth content. Subsequent training on this airy, sanded content drives the models to eventually produce only perfectly hollow and spherical content, suitable only for fast food ball pits.

I expect that naive recursive training schemes will be replaced with slightly less naive recursive training schemes, and this collapse can be avoided. Even if that isn't the case, the models' current preferences could be sticky enough to persist regardless.

A completely alternate take

While writing this, I came across a quote from Steve Yegge, via Gene Kim, in turn via Simon Willison:

"In the past, these decisions were so consequential, they were basically one-way doors, in Amazon language. That’s why we call them ‘architectural decisions!’ You basically have to live with your choice of database, authentication, JavaScript UI framework, almost forever.

But that’s changing with LLMs, because you can explore, investigate, and even prototype each one so quickly. Even technology migrations are becoming so much easier/cheaper/faster.

These are all examples of increasing optionality."

It's true!

But we also believed in earnest at one point that the increased communication optionality offered by the internet would lead to mutual understanding, respect, and enlightenment. Since then, we've maybe learned better: optionalities are marketplaces, and lots of marketplaces are winner-take-most.

Maybe there's a counterfactual where we're all Wikipedia editors, but as it turns out we're mostly feed refreshers. The revealed preference of the market is low friction.

More options do not necessarily lead to more paths being well-worn.