Containers are still a buzzword today, and one I think will die out in the not-too-distant future. Here are my reasons for thinking that.
Linux containers are cool!
Ever since I’ve been working in IT, isolating components that work together has been useful for reliability. We used to do this, and still do, through packages that ensure consistent delivery of artefacts to the server. We then used these packages in conjunction with configuration management to build consistent virtual machine images in a known good state. This is turned into a ‘base’ VM image to build on top of. It gives us a great deal of consistency: we know we can rebuild the image, bugs and all, to the same known state, whether from scratch or from a base image. Linux containers apply the same method but in fewer steps. They let someone monkey around with an image and declare the result the new base image. Sometimes this will be done programmatically through configuration management and packages, a shell script, or some other automated means such as a build server.
Up to this point the process is the same and the output is the same. The nice feature containers add is how a delta is applied to the base image: the new image is simply the old image plus the delta. With a traditional VM approach you have to replace the entire image, which means moving a lot of data around, whereas with Docker you just move the delta. That is cool. It saves data and time, and if we could do the same thing with a VM image it would be perfect.
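To make the idea concrete, here is a toy Python sketch of that layering model. It is not how Docker is actually implemented; the hashing scheme and the example deltas are purely illustrative of why only the changed layer needs to move.

```python
import hashlib

def layer_id(parent_id: str, delta: bytes) -> str:
    """A layer's identity derives from its parent plus its delta,
    so an unchanged base layer never needs to be re-shipped."""
    return hashlib.sha256(parent_id.encode() + delta).hexdigest()[:12]

# Build a 'base' image layer on top of an empty starting point.
base = layer_id("scratch", b"os packages + runtime")

# Two application images sharing the same base differ only by their delta.
app_v1 = layer_id(base, b"app v1 artefact")
app_v2 = layer_id(base, b"app v2 artefact")

# A host that already holds `base` only needs the new delta layer to
# reconstruct app_v2 -- the base id is identical in both images.
print(base, app_v1, app_v2)
```

A host upgrading from app_v1 to app_v2 pulls one small layer rather than the whole image, which is exactly the data- and time-saving described above.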
Another thing containers have going for them right now is that they are cool. They’re new and shiny, and people in general love playing with shiny new technology as a bragging right. They will often come up with loose justifications for using it even when there is no actual benefit and the operational cost of maintaining it is much higher. I’m not saying we should never use new things, but that decisions should weigh all of the factors, not just that it’s new and Google uses it.
Containers are simple, container infrastructure is hard
Containers in isolation are fine, nothing too hard. As we compared them to VM images earlier, there is not much difference conceptually, only in execution (a process versus an entire OS), which admittedly brings some nice benefits: spinning a container up or down is vastly quicker than a VM. However, running multiple extra tools just to make containers play nicely with each other is not always justified by those time savings. The complexity comes in when you realise you need a service discovery mechanism (etcd), a mechanism to keep containers together that need to be together (Kubernetes), a logging mechanism (ELK), monitoring (Hawkular), persistent shared storage (Gluster), and the list goes on.
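To illustrate why one of those pieces, service discovery, becomes necessary at all: containers are rescheduled constantly and get new addresses each time, so peers resolve a stable name instead of hard-coding hosts. This is a minimal sketch of the role etcd plays; the dict, service name and addresses are hypothetical stand-ins, not a real etcd client.

```python
# In-memory stand-in for a discovery store such as etcd.
registry = {}

def register(service, address):
    # A container announces itself when it starts.
    registry.setdefault(service, set()).add(address)

def deregister(service, address):
    # ...and is removed when it stops or is rescheduled.
    registry.get(service, set()).discard(address)

def discover(service):
    # Callers resolve a stable name to whatever instances exist right now.
    return sorted(registry.get(service, set()))

register("orders-api", "10.0.0.5:8080")
register("orders-api", "10.0.0.9:8080")
deregister("orders-api", "10.0.0.5:8080")  # container moved to another host
print(discover("orders-api"))
```

Every tool in the list above solves a similarly real problem; the point is that each one is another system the whole team must learn and operate.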
Now again, in isolation any of these tools is fine. But for people coming to containers now, particularly compared with two years ago, there are many more tools to understand before you can arrive at a practical way of managing containers in the wild. This learning curve can be dramatic, and almost impossible for teams who are only just starting to use configuration management. There will always be that unicorn team, the one that has done it all and can manage it all within reason; they just neglected to mention that they have a team of five people, each a veteran of cloud, logging solutions, service discovery and configuration management. For them the concepts and approaches were fine, learning curve and all. But that is not a typical team in most organisations. A typical organisation has a team of, say, five people, where only one or two understand those concepts well enough to implement them. That leaves the other three struggling to upskill against a mountain of new knowledge and concepts. They will do it, and they will succeed at some point, but most likely after a failed implementation or two of a container platform.
I’m not saying don’t do it because it’s hard, I’m saying that you need to ensure that as a team everyone understands the basic concepts of isolation, virtual networks, discovery mechanisms and why those are necessary before simply throwing in some containerisation – there is a steep learning curve and it can be too high for some people.
Containers are really great when used properly. When you have a well-defined, stateless architecture, containers work really well, especially for scaling horizontally. However, most applications that exist in enterprises today are not designed to work this way, and shoehorning them into a container defeats its value. For new development efforts, where stateless apps are being designed or the application can be tailored to the constraints of a container platform, it makes sense to adopt this mechanism. If the product can run in AWS’s or Azure’s hosted container platforms, that takes away almost all of the bad aspects of running your own. If, however, you are looking at a monolithic Java application that is 600MB in size before you even consider the ‘base’ database, which is another 400MB, and each container consumes in excess of 12GB of memory, then maybe containers are best avoided. I say this because, particularly if you are running your own platform, you just can’t get any real cost savings in that situation. Consolidating three or four such applications onto a container node gives nowhere near the benefit of running proper microservices, where in excess of 100 containers on a single host is a much better density. Factor in the complexity of the platform (if running it yourself) and the true business benefit just isn’t there. As a side point, check out the Why Build a PaaS in a Cloud blog for the potential cost savings that can be achieved by using a PaaS properly. Used correctly it truly is a force for good, but as it stands there are a lot of barriers for incumbent software.
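The density argument is simple arithmetic. Here is a back-of-the-envelope comparison using the 12GB monolith figure from above; the 48GB host and the 400MB microservice footprint are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope container density comparison.
# The host size and microservice footprint are illustrative assumptions.
host_memory_mb = 48 * 1024

monolith_mb = 12 * 1024   # the monolithic Java app's runtime footprint
microservice_mb = 400     # assumed footprint of one small microservice

monoliths_per_host = host_memory_mb // monolith_mb
microservices_per_host = host_memory_mb // microservice_mb

print(monoliths_per_host, "monoliths vs",
      microservices_per_host, "microservices per host")
```

Four containers per host is barely better than plain VMs, while well over a hundred small services per host is where the consolidation savings actually appear.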
Given that getting the most out of a container platform for traditional enterprise software requires re-architecting those products into smaller microservices, it is important to think about the emerging trends that may make containers obsolete. Serverless architectures (sometimes called “Function as a Service”) are coming, and Mike Roberts does a great job of explaining them. In essence, as more workload moves to the front end with frameworks such as Angular, Ember and React, all making calls to service endpoints such as microservices, it becomes possible to abstract away the specific functions they call: each specific route (the path in your browser) that an API call hits could invoke just that one function. There is then no need to run an entire microservice for each area. Instead, when the route is called, the specific function starts (in less than 20ms), does its single operation (say, half a second) and goes away again. We no longer need to run a container for a set of functions; instead, each function gets its own definition, with rules about how it should be called, inside a PaaS/FaaS system. This greatly simplifies the operational overhead of containers and even VMs, and provides better cost management, because now you truly pay for just the half-second the function runs rather than the spin-up and spin-down time of a container.
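The route-to-function idea can be sketched in a few lines of Python. This is a toy dispatcher, not a real FaaS runtime: the route names and the handler are hypothetical, and a platform such as AWS Lambda would handle the registration and per-request invocation for you.

```python
# Toy FaaS-style dispatcher: one route maps to one single-purpose function.
routes = {}

def function(route):
    """Register a handler for a route, FaaS-style."""
    def register(fn):
        routes[route] = fn
        return fn
    return register

@function("/orders/total")
def order_total(order):
    # The function exists for this one operation only.
    return sum(item["price"] for item in order["items"])

def invoke(route, payload):
    # Looked up and executed per request, then it 'goes away' -- there is
    # no long-running service process (or container) left to operate.
    return routes[route](payload)

print(invoke("/orders/total", {"items": [{"price": 5}, {"price": 7}]}))
```

Nothing runs between requests, which is the whole point: billing and operations shrink to the moments a function is actually executing.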
My point here is that if an enterprise accepts that its application does indeed need re-architecting, then moving to a container-based approach will instantly set it behind the curve; it would be far better off focusing on moving to serverless. It may well have to treat this as a journey, splitting its monolithic application into macro-services, then microservices, then serverless, but the end goal should not simply be containerisation.
Hopefully this has been enlightening, showing how containers compare with existing solutions that are simpler to maintain, and introducing serverless as the next step on from containers. It is for these reasons – complexity, lack of fit with the application, and future technology – that when looking at containers or implementing a container solution, it is important to consider the capabilities of those responsible for the platform as well as the business’s desire to re-architect its application to fit. If you are forcing a product onto a sub-optimal solution, or onto a platform the team cannot support, containerisation should not be treated as a quick fix. As with all things, it’s about balance, and I hope the points raised above are weighed before simply jumping into containers blindly.