Parity Bits

Invisible Engineers & Disposable Software

The ratio of compute(disposable software) to compute(maintained software) will double every 18 months.

Or maybe

The average deployable lifespan of new software solutions will halve every 18 months.
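Either way, an 18-month doubling (or halving) time compounds quickly. A quick back-of-envelope check of the claim's scale:

```python
# 18-month doubling: after m months the ratio has grown by a factor of 2 ** (m / 18).
for years in (3, 5, 10):
    factor = 2 ** (years * 12 / 18)
    print(f"{years} years -> {factor:.0f}x")
# 3 years -> 4x, 5 years -> 10x, 10 years -> 102x
```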

Opinions seem divided about whether the LLM1 family of tooling will drive software developers to the unemployment lines or make them more valuable still. My own guess is that both will happen at once. Like so much software before it, the LLM tool set might represent yampa: yet-another-more-powerful-abstraction. There is still no silver bullet ↗, but I expect some present-day werewolves could soon be regarded differently.

This much feels clear: the LLM family of tools will open up some portions of software development to a wider audience via natural language interfaces. There's a great writeup here: Post-GPT Computing ↗. The gist is that many pieces of software really are neatly specifiable via a natural language interface, and that the LLMs are approaching good-enough territory both in understanding the spec and in executing it.

A minor follow-up is that the population at large (technical & non-technical) will grow its intuition about how to probe these interfaces, and at the same time the commercial LLM tools will realize the demand and get better at second-guessing pieces of their own understanding of a given spec, and at pushing back for clarification.
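Something like this hypothetical exchange, sketched as a chat-style message list (the wording is entirely invented, purely to illustrate the push-back pattern):

```python
# A hypothetical clarification loop, written as a chat-style message list.
# None of this dialogue is real; it only illustrates the push-back pattern.
conversation = [
    {"role": "user",
     "content": "Make me a tool that tracks my spending."},
    {"role": "assistant",
     "content": "Before I write anything: spending from what source (bank CSV exports, "
                "manual entry)? Per purchase or per month? Browser app or plain script?"},
    {"role": "user",
     "content": "Bank CSV exports, per purchase, a plain script is fine."},
    {"role": "assistant",
     "content": "Got it. Last thing: which currency, and do refunds count as negative purchases?"},
]
```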

And so on. Oi. The population and the software will meet in the middle somewhere.

Engineering as a Service

Moreover, much software authoring will disappear beneath the surface of the LLM interface. Today's LLM interfaces do really well with descriptive or creative tasks, but tend to fall flat when it comes to producing very specific factual information. If we ask an LLM today which NHL player scored the most third period goals on Wednesdays during the 1980s, it won't have a good answer (actually it'd probably just say Gretzky and happen to be right, but ignoring this...). In these situations, LLMs seem to aimlessly meander while pretending to be right.

But if we rewrite this as a request for a piece of software, the current LLMs are already capable of spitting out a correct database query. Adding the data scraping isn't too hard. The prerequisite piece is for the LLMs to actively scout incoming queries for their suitability to be restated as a request-for-software.
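Restated that way, the goals question maps onto a query an LLM can already write. A sketch with an invented schema, assuming the play-by-play data has already been scraped into a local goals table:

```python
import sqlite3

# Hypothetical schema: goals(scorer TEXT, period INTEGER, scored_at TEXT as an ISO date).
# The point is only that the natural-language question maps cleanly onto a query.
query = """
    SELECT scorer, COUNT(*) AS wednesday_third_period_goals
    FROM goals
    WHERE period = 3
      AND strftime('%w', scored_at) = '3'                 -- Wednesdays
      AND scored_at BETWEEN '1980-01-01' AND '1989-12-31' -- the 1980s
    GROUP BY scorer
    ORDER BY wednesday_third_period_goals DESC
    LIMIT 1;
"""

conn = sqlite3.connect("nhl.db")  # assumes the scraped data lives here
print(conn.execute(query).fetchone())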

From overconfident aimless meanderings to bespoke data-science execution in three feasible steps.
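Those steps are easy enough to sketch in outline. The llm_* names below are invented stand-ins for real model calls and belong to no actual API; this is a shape, not an implementation:

```python
import subprocess
import sys
import tempfile

def llm_is_software_request(question: str) -> bool:
    """Step 1: scout the query -- would a one-off program answer it better than prose?"""
    raise NotImplementedError("placeholder for a classification call to the model")

def llm_generate_program(question: str) -> str:
    """Step 2: have the model emit a single-use program (query, scraper, script)."""
    raise NotImplementedError("placeholder for a code-generation call to the model")

def run_once_and_discard(program_source: str) -> str:
    """Step 3: execute the generated program once, keep the output, forget the code."""
    with tempfile.NamedTemporaryFile("w", suffix=".py") as f:
        f.write(program_source)
        f.flush()
        result = subprocess.run([sys.executable, f.name], capture_output=True, text=True)
    return result.stdout

def answer(question: str) -> str:
    if not llm_is_software_request(question):
        raise NotImplementedError("ordinary conversational answer, not sketched here")
    return run_once_and_discard(llm_generate_program(question))
```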

The same transformation can play out in various domains. Physical simulations, economic modelling, sentiment analysis, etc etc etc. Wherever a natural language query can be translated into, or aided by, a request-for-software with a specific output, it'll happen under the hood, be run once, and poof.

One-off Takeoff

As the level of effort required to produce increasingly sophisticated programs drops, the incentive to store, retrieve, and reuse these programs will also drop (not to mention that cataloging this onslaught of software would be infeasibly hard). Large numbers of 500, 1000, 10_000 line software projects will be spun up, run once, and then forgotten except for the subjective impact their outputs had on their authors. If you thought that being the only user ↗ of your software was disinhibiting, just wait until you're composing software that'll only run once. Maintenance costs vanish. "You should use a library"? Obsolete.

I doubt that there will be a large immediate drop in the number of professional computer touchers, but I do expect that, over time, the dominant (by volume) mode of "software engineering" and program execution will be general-population authors creating single-use software, just about as casually as we currently enter search queries. This mirrors the replacement of desktop traffic with mobile traffic, "serious" gaming with flangry bird gaming, and every other eternal September so far observed.

Rot Acceleration

At the same time, the costs associated with maintaining software might be rising. This is mostly anecdotal, but I vaguely recall a blog post2 from the past few years about the author's recent experience trying to revive two different software projects. One was ten or fifteen years old, the other two or three. The older program ran correctly out of the box, but the more recent one required a full day's work to shim various broken pipes. The author's comment was that software used to be made of files, which are durable, but it is now made of dependencies, which are flaky.

Maybe there are a dozen such blog posts - it certainly seems to mirror the general trajectory of software construction. 30 years ago software was written with installers that spanned a dozen floppy disks, which were put in a box that sat on a shelf, later to be loaded onto computers which had no internet connections. Today a software dependency that hasn't been updated in the past 30 days is assumed to be abandonware unfit for purpose.

Software written by the LLM family of tools leans even harder on just-in-time dependencies, and we will expect less and less durability from it.

Taken together

Combining wild speculation and armchair anecdata:

When the cost of maintenance is higher than the cost of production, durable goods are replaced with disposable ones. There will still be a need for durable software goods - in fact that need may even rise - but as a fraction of total compute effort they will become more and more niche.

When most software is created on the fly by LLMs, the LLM optimization war becomes all the more lucrative.

Credit

Obviously, to Gordon Moore ↗, 1929-2023, both for the persistent hint to think of all technological change exponentially, and for his own self-fulfilling prophecy that makes it all possible.

  1. large language model - the machine learning technique powering ChatGPT and competitors like Bard

  2. Would love to link it if anyone knows what I'm talking about.