Make it a virtual problem
It started two years ago, when nearly everybody in my immediate circle working as an IT expert was burning for Docker, containers, virtualization, clustering and all that fancy stuff built on Netflix-, Amazon- or Google-cloud foundations. Back in 2015 I had my first contact with the IBM Bluemix Garage and the new craze, Ionic and/or AngularJS. Until then I had been a heavyweight champion of JEE development, with an application server running 50-100 servlets at once. Now I sat with two guys in my office and tried to port the important parts of a few servlets to the new platform – to check whether it worked out and to estimate the transition cost.
So let me step way back in time, say to the golden age of package managers for *nix systems. Why didn't any of my employers deliver software as an RPM? I suggested this at Deutsche Bahn/DB Systems in 2014: put everything into a readily available, mature package manager compatible with the target environment – something from Red Hat. They looked at me as if I had suggested eating cereals (yes, corn flakes, muesli… overnight oats) with a CD drive. Yet all they needed was a container with contents that didn't change, carrying not only a label but also its expectations about the ecosystem it would spill its beauty onto – if it fits.
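To make the suggestion concrete: an RPM spec file already expresses exactly that – fixed contents plus declared expectations about the target system. A minimal, purely hypothetical sketch (the package and file names are invented for illustration):

```spec
# Hypothetical minimal spec for a service packaged as an RPM.
# "myservice" and its paths are illustrative, not from the original project.
Name:           myservice
Version:        1.0.0
Release:        1%{?dist}
Summary:        Self-contained web service, delivered the package-manager way
License:        Proprietary

# The "expectations about the ecosystem": declared, checked at install time
Requires:       java-1.8.0-openjdk

%description
Ships the service and declares what it needs from the host.

%install
mkdir -p %{buildroot}/opt/myservice
install -m 0644 myservice.jar %{buildroot}/opt/myservice/myservice.jar

%files
/opt/myservice/myservice.jar
```

The `Requires:` line is the point: the "label on the container" becomes machine-checkable metadata, and the package manager refuses to install where the ecosystem doesn't fit.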
What happened instead was a transition from either Administrator or Developer to the so-called DevOps. An easy piece of code needs to be tied to gazillions of other similarly easy pieces with the help of: a management cluster, a cluster of virtual machines, each virtual node/pod/virtualization with an iptables setup (really!) that locks down all network traffic, only poking holes into the safety bag with the help of a whitespace-sensitive YAML file… holding 100,000 lines. This is what Netflix, Spotify and the like are all about: a horde of intelligent and lazy comrades of the IT business covering the dirty regions with abstraction. And the rest of us are copycats – the best strategy for survival in a world full of competition. Escapism at its best.
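The "poking holes into the safety bag" part looks roughly like this in practice – a sketch of a standard Kubernetes NetworkPolicy (all names and the namespace are invented for illustration), where everything is denied by default and one hole is opened per allowed caller:

```yaml
# Hypothetical example: backend pods accept traffic only from frontend pods,
# only on TCP 8080; everything else stays locked down.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Multiply one such hole by every service-to-service call in a large deployment and the indentation-sensitive YAML pile grows accordingly.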
Notes from the textbook/history class: I was part of a high-availability, high-throughput coding team at the beginning of the millennium. A single HP-UX Superdome machine with 64 processors ran (and still runs) an Oracle database that rendered its own HTML UI, straight from PL/SQL. You can't go any faster than that. A standard communication tunnel from Frankfurt Main Airport got a response from Wiesbaden (roughly 1 hr by train) within guaranteed limits of under 1 s. And it had to search through 10-20 million rows, joined across 2-6 tables. It was always below the 500 ms boundary. In those days I learned a lot about mission-critical architecture and software design from freelancers who could call any day rate and got paid. But they didn't study blogs or IT magazines to find the perfect language or platform. They matched the problem space to the tools at hand – experience and methodology.
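"Rendered its own HTML UI from PL/SQL" may sound exotic today, but it was a standard Oracle facility: the `htp` package of the PL/SQL Web Toolkit prints HTML straight from a stored procedure. A small illustrative sketch (not the original system's code; table and column names are invented):

```sql
-- Illustrative only: a stored procedure that emits an HTML table
-- directly from the database via Oracle's htp package.
CREATE OR REPLACE PROCEDURE render_orders (p_customer_id IN NUMBER) AS
BEGIN
  htp.p('<html><body><table>');
  FOR r IN (SELECT order_id, order_date
              FROM orders
             WHERE customer_id = p_customer_id) LOOP
    htp.p('<tr><td>' || r.order_id || '</td><td>'
          || TO_CHAR(r.order_date, 'YYYY-MM-DD') || '</td></tr>');
  END LOOP;
  htp.p('</table></body></html>');
END;
```

No application server, no serialization layer: the data never leaves the machine before it is already a page, which is a large part of why such a setup could stay under the 500 ms boundary.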
One simple task is left for you: reflect upon the past 2-3 years and how fast a library, tool or paradigm became outdated, replaced, irrelevant or – let's call it – legacy. Will the technology you manage today still be of use tomorrow? Isn't AWS Lambda vendor lock-in? All the Google components you bound into your client call home through URLs – doesn't this slow down your loading time? A content delivery network requires sophisticated traffic logging to get the hot spots/click rates that could once be extracted from your local access.log. And an outage of a DNS cluster leaves Azure nodes alone in the woods. Do you really run thousands of services that need transaction tracing through gigabytes of Logstash?