Virtualization - Infrastructure Disruption
The internet lets the cloud dump a rain of information across the landscape
Now, I’ve established that technology enables new business models which previously were not economically viable. I’ve shared a little bit of historical context for an IT market trend that is not well understood among many technical people.
This market trend is often articulated by economists along the lines of:
New technology enables novel organizations of land, labor, and capital
Very Econ 101, right? But many engineers lose sight of the forest for the trees - or rather, don’t really care about trees (or paper, for that matter). People can, for whatever reason, get so excited about the technology that allows us to run Kubernetes clusters that they fail to see the reason why one might want cluster computing in the first place.
“Now APIGuy, clusters have lots of workload benefits that other architectures lack! There are meaningful differences in use case, even the most product-pilled among us has to admit that! I hate you - you suck!”
Absolutely - I am glad someone will work on the Linux Kernel so I don’t have to, but to me the more important insight is that there are consumers who demand highly elastic consumption of parallelizable resources.
When you learn *why* they want that, it ties back to their budget. They could do more interesting projects and work on new technologies if they had more budget and tools that made them more effective.
Now back to the story at hand - in the late 90s a technology called virtual machines emerged on commodity hardware. If you learned CS any time in the past few (3-8) years it may seem pretty mundane that you can create a virtual machine on most modern architectures, then run small, isolated, encapsulated runtimes inside each one.
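Just to show how mundane it has become, here is a minimal sketch - assuming a Linux host with QEMU/KVM and the libvirt Python bindings installed, which is my assumption and not anything from this piece - that connects to the local hypervisor and lists its guests:

```python
# Minimal sketch: connect to the local QEMU/KVM hypervisor via libvirt and
# list the virtual machines it knows about. Assumes libvirtd is running and
# the libvirt-python bindings are installed.
import libvirt

conn = libvirt.open("qemu:///system")  # local system hypervisor
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name():20s} {state}")
conn.close()
```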
It used to be that you could only use one server for one purpose. You would need a server for every website or service that you ran, more or less. You could get away with putting a lot onto one beefy server, but you couldn't get around some technical limitations.
If you commingled services on the same box, they would compete over the server's resources, so your web server might choke your file server during periods of high utilization.
↪ (CS students, this should bring to mind STCF, SJF, and other CPU time-sharing algorithms 😉 - the astute reader will see the connective tissue between mainframe computers and modern-day networks)
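For anyone who wants that aside made concrete, here is a toy, non-preemptive shortest-job-first sketch; the job names and burst times are invented for illustration, not drawn from anything real:

```python
# Toy, non-preemptive Shortest Job First: always run the shortest pending job.
# Job names and burst times are made up for illustration.
from collections import namedtuple

Job = namedtuple("Job", ["name", "burst_ms"])

def sjf_schedule(jobs):
    """Return (name, wait_ms) pairs in the order the jobs get the CPU."""
    clock, timeline = 0, []
    for job in sorted(jobs, key=lambda j: j.burst_ms):
        timeline.append((job.name, clock))  # how long this job waited to start
        clock += job.burst_ms
    return timeline

jobs = [Job("web request", 5), Job("file-server flush", 120), Job("batch report", 400)]
for name, wait in sjf_schedule(jobs):
    print(f"{name}: waited {wait} ms before starting")
```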
For organizations with high throughput, it often made sense to split server functions onto specialized hardware and software stacks, which were often proprietary.
More importantly, those beefy servers were really expensive. Nowadays it's pretty easy to use a server for multiple purposes or customers at the same time, but…
Back in 2012 it was the bomb - really cutting edge. Suddenly those servers that cost $250k to buy and who-knows-what to install and maintain ($200k+ yearly in employee and license costs?) could be shared across tenants on the same box. Remember how things used to be?
It basically made the server cost the numerator and however many tenants you could cram into the box the denominator.
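To put rough numbers on that fraction, here is a back-of-the-envelope sketch. The dollar figures are the ones quoted above; the tenant counts and the three-year horizon are my own assumptions:

```python
# Back-of-the-envelope math using the rough figures from the text.
# Tenant counts and the three-year horizon are assumptions for illustration.
SERVER_CAPEX = 250_000   # purchase price ($)
YEARLY_OPEX  = 200_000   # employee + license costs ($/year)

def cost_per_tenant(tenants, years=3):
    total = SERVER_CAPEX + YEARLY_OPEX * years
    return total / tenants

for tenants in (1, 5, 20):
    print(f"{tenants:>2} tenants -> ${cost_per_tenant(tenants):,.0f} each over 3 years")
```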
That meant that whichever company could deliver virtualized servers could provide hosting and infrastructure for their customers at a *fraction* of the cost, nearly overnight. (Remember that super cheap information and software delivery infrastructure?)
Now - the virtualized infrastructure (cheap means of production) and the internet (cheap product delivery vector) enabled swathes of new entrants into the IT space. Simply servicing all the new companies required *even more companies* to manage the services these newcomers relied on.
A technological innovation allowed product and business model innovation
Ever wonder how Amazon Cloud Architect became a role? The sheer size of the base-level service portfolio AWS offers requires full-time employees just to explore it and stitch it together.
The services started out as very low-level wrappers around the core AWS offerings of elastic compute, storage, and networking. Over the years those wrappers were wrapped and replaced and wrapped and combined - until we arrived at the complicated service landscape we see today.
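To give a feel for that "low-level wrapper" layer, here is roughly what raw calls against the core compute and storage offerings look like with boto3. The AMI ID, instance type, and bucket name are placeholders, and this assumes credentials are already configured:

```python
# Hedged sketch of the "low-level wrapper" layer: raw calls to the core
# compute (EC2) and storage (S3) services via boto3. Placeholder values only.
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Compute: request a single small instance.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

# Storage: drop an object into a bucket you own.
s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello, cloud")
```

Every one of those raw calls became something somebody could wrap, manage, or consult on.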
That’s the level of new firm entry we are talking about - tens of thousands of global managed services firms, software development firms, internet service providers, salespeople, insurance and finance companies to service all of the above.
This process accelerated as pay-as-you-go models and containers allowed even cheaper delivery of workloads and data: now you paid only for the compute you actually used rather than a “whole” slice of server tenancy.
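A quick sketch of why that matters, with entirely made-up rates: a workload that only needs two hours of compute a day is dramatically cheaper billed per use than as a round-the-clock slice.

```python
# Illustrative arithmetic only; the hourly rate is hypothetical.
# Compare a job needing 2 hours/day billed per use vs. a whole slice rented 24/7.
HOURLY_RATE = 0.10            # hypothetical $/hour for one small slice
hours_used  = 2 * 30          # 2 hours/day over a 30-day month
hours_total = 24 * 30         # the whole month

pay_as_you_go = hours_used * HOURLY_RATE
whole_slice   = hours_total * HOURLY_RATE

print(f"pay-as-you-go: ${pay_as_you_go:.2f}/month")
print(f"whole slice:   ${whole_slice:.2f}/month")
```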
The technology that enabled that is not to be looked down upon, but the question I encourage engineers and founders to ask themselves is:
What is the direction of efficient progress, and how can you accelerate that progress?
Where in a system does efficiency provide the *greatest leverage*?
If you think of anything good, I have a big sack of cash waiting for you as a Series A.