We are about to enter the PC era of AI
What the history of the computer tells us about what is coming next
While Welcome Magazine is, in general, an arts and culture publication, we are also born of the internet, and have watched technological developments shape artistic and cultural realities. For this reason, we keep a close eye on developments in tech. This weekend’s read stems from that expertise. We hope you enjoy.
In an economic and social climate fueled by aspirations for the transformative power of a single emerging technology, trying to predict where AI is going is no longer just a hobby or a hustle or a grift: it’s a market-moving phenomenon. That a convincing doomsday scenario released over the weekend contributed to a significant downturn on Monday proves the point.
They say history rhymes. At least it does to those with an ear for it. And the most relevant line for our current times was written about a half century ago, when the mainframe computer gave way to the PC. The parallels between this transition and the one we are undergoing today are striking, and have a lot to tell us about what the next stage of AI might look like.
To start, a story. Here’s what it was like to use a computer before the emergence of the PC:
Say you’re an economics student at MIT in the ‘60s who wants to run a regression on some macroeconomic data. You’ve identified the relevant variables, gathered your survey data, and recorded it in massive binders. Until a few years ago, the next step would have been to calculate your regression by hand with a desk calculator, recording results in pencil, a painstaking process that would take you weeks. Now, it’s only going to take a couple of hours. There’s new tech on campus: a mainframe computer. It has its own building. You head there, lugging all of your data binders.
The first step is to transfer your data onto keypunch cards, which a designated employee will batch load onto disk storage so the computer can read it. Then you go to the ‘terminal’ where you can access the computer. This terminal looks a bit like a typewriter bolted to a desk. You have to log in, so the amount of compute power you’re using can be tracked. Compute costs a certain amount per command, but as long as you stay below a certain threshold, your department will foot the bill for you.
You’ve memorized a series of cryptic code commands to access and regress your data. The process is slow. After each command you have to wait several moments for the answer to spit out. You can’t even see the computer itself, which sits in a climate-controlled environment a few rooms over. There are, however, four other terminals in the room with you, and the people sitting at them are part of the reason this is taking so long; the computer is splitting its compute power between the whole group.
Despite all of this, in the end, the computer runs your regression in a thousandth of the time it would have taken you to do it yourself. Your R-squared, your coefficients B1 and B2, and everything else you need, returned in a list you can take with you. As you exit the building, still weighed down by the binders of paper data, you feel like a weight has been taken off your shoulders, and you wonder whether this means you can throw the binders away.
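To put the contrast in concrete terms, here is roughly what that same job looks like today: a minimal sketch in Python, assuming the statsmodels library, with toy data standing in for the binders (the variable names are ours, purely illustrative).

```python
# An OLS regression that once took weeks by hand, or an afternoon of
# shared mainframe time, now runs locally in milliseconds.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy stand-ins for the binders of macroeconomic data.
n = 200
inflation = rng.normal(3.0, 1.0, n)
unemployment = rng.normal(5.0, 1.5, n)
gdp_growth = 2.0 - 0.3 * inflation - 0.5 * unemployment + rng.normal(0.0, 0.5, n)

# Fit the regression and print the same outputs the student waited for.
X = sm.add_constant(np.column_stack([inflation, unemployment]))
model = sm.OLS(gdp_growth, X).fit()
print(model.params)    # intercept and coefficients (the B1 and B2)
print(model.rsquared)  # the R-squared
```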
This hypothetical sounds archaic to us, but it is an apt analogy for our current era of AI. Five years after that MIT econ student ran that regression, the personal computer arrived and rendered the mainframe mostly obsolete for individual users. A similar change is coming to the world of AI, and it will take far less than five years to arrive. Really, it’s here already.
In case the similarities aren’t obvious, let’s return to the mainframe computer. It is a massive, centralized piece of technology owned by a company or an institution. Individuals essentially share this machine. They pay for their slice of its compute. They come to the machine with questions, and hopefully leave with answers. The process by which the machine achieved those answers is opaque. Its function is solely determined by the company building it, and there is almost no room for customization.
Modern software, polished interfaces, and seamless internet access obscure it, but this is basically also how large chatbot LLMs function. ChatGPT seems like it is running on your computer, but it is not. The vital technology is elsewhere, in a massive windowless facility, centralized and privately owned. Its compute power is split between active users, as anyone who has tried to chat during a surge knows. We bring our questions to the terminal (in this case, the chat box), and it gives us answers, which we are mostly left to apply ourselves. The particular methods and logic used to reach those answers are guarded trade secrets.
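To make the terminal analogy concrete: when you use a hosted chatbot programmatically, your machine does little more than what that MIT terminal did, sending text away and waiting for text back. A minimal sketch, assuming the official openai Python client and an API key in your environment; the model name is illustrative:

```python
# Nothing here runs a model locally: the prompt travels to a remote,
# centralized datacenter, which meters your usage and returns an answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Explain R-squared in one sentence."}],
)
print(response.choices[0].message.content)
```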
To understand the ways that this is about to change, let’s return to the emergence of the PC in the ‘70s. PCs replaced the centralized nature of the mainframe with individual ownership. Suddenly, people owned their own compute. A general utility became a personal workflow tool that could be directly integrated into everyday tasks. A highly standardized version of the computer, built on the vision of a few big companies, was replaced by a more democratic model in which the computer’s function could be endlessly reinvented through software.
There’s a long and interesting history of how this transition became possible, but the two most important factors were interface simplification and a collapsing cost curve. In other words, they figured out how to make computers simple enough for the average person to understand, and cheap enough for the average person to afford.
The same thing is happening right now for AI. While using an LLM chatbot is easy and intuitive, figuring out how to extract deployable utility from one is still, for many people, prohibitively difficult. We all know that Claude can be used to build websites, but only a fraction of us know the right prompts, revisions, and software interactions to actually do so. But if the functionality of Claude were built directly into an OS, and what appeared was not a blank chat box but simply a button reading something like ‘want me to do this?’, suddenly everyone could use AI for anything. And on the price side: training frontier models was a trillion-dollar trailblazing endeavor. Running smaller models locally is going to cost very little in comparison. The per-inference cost is also collapsing.
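For a taste of the owned-compute side of that shift, here is a minimal sketch of running a small open-weights model entirely on your own machine, assuming the Hugging Face transformers library is installed; the model named below is one example among many that fit on a laptop:

```python
# A small open-weights model running on local hardware: no account,
# no per-command metering, no datacenter round-trip.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small model
)

result = generator(
    "The personal computer changed everything because",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```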
So what does this mean? What is the ‘personal computer’ of AI?
It will take more than one form, but some general principles: you won’t rent AI compute, you’ll own it. It won’t be centered elsewhere, but will live on your computer, or your desk, or around your neck. The basic use cases will be streamlined enough for anyone to use. Easy access and simple use mean that the level of integration into our daily lives and environments will explode. And just as the personal computer brought with it an explosion of software building on top of the now-ubiquitous hardware, in-home AI and local AI networks will lead to a proliferation of new use cases, functionalities, and customizations.
If it’s true that history rhymes, this seems like the way things are headed. But this is just the start. Right now, as doomsday competes with utopian abundance in our predictions, it’s more important than ever to try to imagine a future. Understanding this transition is a good foundation for doing so.