Container adoption at Camptocamp

24 October 2017

Preamble

Containers provide solutions to problems of abstraction and portability which are not new to the IT world. Containerization has nevertheless become a central concept for coping with the ever-growing complexity of IT systems deployed and accessible on the Internet. At Camptocamp, we have a long history of using container technologies. Starting with chroots (part of Unix since 1979 and of Linux since its inception), we moved to Linux-VServer, which already offered some resource quotas, and finally adopted OpenVZ, which already seemed like a revolution in 2005.

Since 2006, many efforts (initiated by Google) have been made in the Linux kernel to limit, account for and isolate resource usage (processor, memory and disk space). This considerable effort, known as cgroups (control groups), allowed the emergence of Linux Containers (LXC) and then Docker, which has been distributed as open source since 2013. Today there are very serious challengers to Docker, in particular rkt (developed by CoreOS) and, more recently, CRI-O (from the Kubernetes project).

Today, IT companies such as Camptocamp can no longer just develop and ship one or more software solutions at a given point in time. The goal is now to set up full platforms to orchestrate and monitor, throughout their entire lifecycle, the various components which make up the deployed solution. These platforms can thus evolve, both functionally (change management) and in terms of size (scalability), in order to cope with a growing number of users. Service redundancy enables a very high level of availability, which must also be taken into consideration.

Early 2016: move to Rancher’s Cattle

During 2015, we used Docker a lot to jail build processes and to perform single-host deployments where each service was isolated in a Docker container. In mid-2015, we realized how important and necessary it was for us to learn more about container orchestration. Orchestrators address many of the questions raised by distributing IT services across multiple servers, whether physical or virtual. The main features provided by orchestrators are the following (a short code sketch illustrating two of them follows the list):

  • Multi-host network orchestration (overlay networks, software-defined networking)
  • Scheduling and affinities
  • Interface with storage services
  • Scalability
  • Load balancing
  • Rolling or blue-green upgrades
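
To make this more concrete, here is a minimal sketch, in Python with the official kubernetes client, of what two of these features (scalability and rolling upgrades) look like when driven through an orchestrator's API. The cluster, the demo namespace and the demo-app Deployment are hypothetical; this is an illustration of the concept, not the exact setup we used at the time.

```python
# Minimal sketch using the official `kubernetes` Python client.
# Assumes a reachable cluster and a hypothetical Deployment "demo-app"
# in a hypothetical namespace "demo".
from kubernetes import client, config

def main():
    # Reads ~/.kube/config; use config.load_incluster_config() inside a pod.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Scalability: set the replica count of the Deployment to 4.
    apps.patch_namespaced_deployment_scale(
        name="demo-app",
        namespace="demo",
        body={"spec": {"replicas": 4}},
    )

    # Rolling upgrade: patching the container image lets the Deployment's
    # rolling-update strategy replace pods gradually, without downtime.
    apps.patch_namespaced_deployment(
        name="demo-app",
        namespace="demo",
        body={"spec": {"template": {"spec": {"containers": [
            {"name": "demo-app", "image": "example/demo-app:1.1.0"}
        ]}}}},
    )

if __name__ == "__main__":
    main()
```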

At the end of 2015, we performed various evaluations according to criteria we had defined beforehand to meet our needs. Our objective was to determine which open-source orchestrator was the best fit to be put to use quickly in our projects, according to the identified needs. We inspected several orchestrators at the time.

The various tests we performed allowed us to quickly exclude some of them, and we ended up focusing on Kubernetes, OpenShift, and Rancher. The reasons are briefly explained below.

Kubernetes

At the beginning of 2016, Kubernetes (also known as k8s) was already considered an extremely complete solution (derived from Google’s Borg), but quite difficult to implement. The idea of not being able to (re)use standard Docker APIs (in particular Docker Compose) bothered us. Additionally, the features of Kubernetes and its architecture seemed disproportionate to the needs of our teams at the time. We also thought that the learning curve for the technical teams who would have to use this orchestrator would be very (probably too) steep.

OpenShift

As a Red Hat Partner, Camptocamp was naturally very interested in this solution, all the more so since, starting with version 3, OpenShift was entirely rethought around Kubernetes. In theory, this solution added to Kubernetes all the elements necessary to orchestrate containers in an enterprise context (authentication, RBAC, private registry, etc.), along with many features linked to continuous deployment. However, after testing version 3.1, we came to the conclusion that these promises were unfortunately not “yet” fulfilled and that it would not be possible, at that point, to use this solution for production projects. We were ultimately attracted by the idea of a solution heading towards a “PaaS” application platform (very much DevOps-oriented), but we unfortunately had to conclude that it was not yet mature enough for our needs.

Rancher

It was somewhat by chance that we discovered Rancher at the end of 2015. Rancher’s initial approach was to rely as much as possible on standard Docker APIs, which made picking up the solution much easier for people who already had experience with Docker. This simplicity, coupled with an extremely intuitive management interface, quickly attracted us. Furthermore, and contrary to other solutions, Rancher’s deployment was fully integrated into the solution.

At the beginning of 2016, we were ready to draw some conclusions on the maturity of these various orchestrators, while obviously keeping in mind that this domain was very vast and expanding rapidly. This is how technological evolution works: it brings concrete answers to complex problems, but at the same time it calls into question many methods that had been adopted and stabilized along the way. It is a continual trade-off between innovation and stability.

We could summarize our feelings at that point as follows:

Kubernetes was very impressive from a functional point of view but looked more like an orchestration toolkit than a fully integrated solution in the context of a real-life project.

OpenShift came out as a truly integrated solution in an enterprise context, taking full advantage of the well-tested features of Kubernetes, but version 3.1 was not mature at all, too “PaaS”-oriented and not yet “CaaS” enough. This aspect has changed a lot since then and was very likely linked, at the time, to the transition between versions 2 and 3.

Rancher, along with its orchestrator Cattle, was simple to implement and fully met our needs at the time. It seemed to us to be the most reasonable solution to set up stable platforms and drive adoption within our teams.

June 2016: Rancher/Cattle in production

At the beginning of 2016, we started to massively adopt Rancher/Cattle for internal needs, and then to architect and deploy new projects with it. We learned a lot throughout 2016 and discovered just how much Docker and Rancher enabled us to evolve, both technically and culturally. Beyond managing distributed resources and services, the Continuous Integration and Deployment facets have really been the most beneficial part of this change for us.

Internal collaboration between the people in charge of Operations and Engineering took on a whole new dimension during that period: many walls were removed, and working with common technologies truly brought us closer. Evidently, instilling a DevOps culture in a company cannot be done by will alone; it requires sharing tools and rethinking responsibilities.

In March 2016, Rancher started offering Kubernetes and Docker Swarm as alternatives to its historical orchestrator Cattle. Several of our projects were being moved to production using Cattle and, apart from a few teething problems, feedback was really positive. Adopting Docker and Rancher also enabled us to move forward very rapidly on other fronts. In terms of provisioning, the development of a Terraform integration allowed us to automate the deployment of Rancher environments; the Prometheus/Grafana couple revolutionized our approach to monitoring; finally, the integration of Continuous Deployment with Jenkins gave a new dynamic to development projects. All this could be achieved without concessions in terms of maintainability or security for the deployed services, quite the contrary.

Start of 2017: Kubernetes is everywhere

Container usage is spreading everywhere in early 2017. Most of the new projects we are working on are based in one way or another on Docker. We are generally satisfied with Rancher/Cattle as an orchestrator. We identify, however, a few points that are becoming quite limiting, in particular the absence of a private registry, an access control system that is a bit too simplistic, and a few issues with network stability. These limitations in terms of access control do not allow us to really set up multi-tenant architectures, which is a prerequisite to sharing and partitioning a platform and its resources between distinct client organisations.

The beginning of 2017 is also a milestone in the ferocious fight between the various container orchestration platforms. It is obvious that Kubernetes is outpacing its competitors, both from a technical and from a communication point of view. Charismatic developers such as the omnipresent Kelsey Hightower help energize the Kubernetes community, whose number of contributors and users is constantly growing.

This massive adoption of Kubernetes as a de facto container orchestration standard is reflected in vendor solutions. CoreOS is abandoning Fleet in favour of Kubernetes, IBM is integrating Kubernetes into its Bluemix solution, Red Hat is accelerating the development of OpenShift by participating very actively in Kubernetes development, and VMware is hard at work with Pivotal on the Kubo project. As for Rancher Labs, they are increasing their communication and efforts around Kubernetes, without however giving up on their historical orchestrator Cattle.

April 2017: OpenShift 3.5 release

After more than a year of intense work on Docker/Rancher, we decided to intensify, in parallel, our technology watch on Kubernetes and OpenShift. As a Red Hat Partner, we have the possibility to participate in various training sessions and workshops in this domain, and we seize that opportunity.

After several weeks of immersion in Kubernetes and OpenShift, we must admit that these solutions have evolved considerably, both in terms of deployment and of features. Our impression is much better than during our tests at the end of 2015 and early 2016. We are, however, not fully convinced when it comes to the resources (the number of servers) required simply to set up a cluster. Evidently, Kubernetes/OpenShift is not really suited to small projects. The use of Rancher/Cattle has always made sense to us, even if we feel more and more that Rancher Labs will eventually give up on Cattle development.

In order to validate our OpenShift skills, we developed various proofs of concept, in particular a demonstration of a CI/CD platform for GeoMapFish, a solution developed by our Geospatial department. This demo does not only use the primitives of Kubernetes’ Docker orchestration, but also makes use of the vast majority of OpenShift’s features. In particular, it uses the Docker image build process known as Source-to-Image (S2I), the Jenkins integration possibilities, as well as Helm coupled with OpenShift to encapsulate the deployed components. This demo was presented during the last Red Hat Forum, which recently took place in Zürich.
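
For the curious, here is a minimal sketch of the kind of steps such a pipeline chains together, written in Python around the oc and helm command-line tools. The BuildConfig name (geomapfish), the chart path, the release name and the namespace are hypothetical placeholders, not the actual layout of our demo.

```python
# Hypothetical sketch: trigger an S2I build on OpenShift, then install or
# upgrade the Helm release that encapsulates the deployed components.
import subprocess

def run(cmd):
    """Run a CLI command and fail loudly if it returns a non-zero status."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_and_deploy(tag: str):
    # 1. Source-to-Image: ask OpenShift to rebuild the image from source.
    run(["oc", "start-build", "geomapfish", "--follow"])

    # 2. Helm: install or upgrade the release for this image tag.
    run([
        "helm", "upgrade", "--install", "geomapfish-demo", "./chart",
        "--namespace", "geomapfish",
        "--set", f"image.tag={tag}",
    ])

if __name__ == "__main__":
    build_and_deploy("1.0.0")
```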

September 2017: Rancher 2.0 release

Our feeling about the future of Rancher/Cattle is confirmed on 26 September 2017 with Rancher Labs’ announcement of the 2.0 release of Rancher. After having abstracted various orchestrators in its solution, Rancher Labs decides to review its strategy and focus solely on Kubernetes in this new major version. The communication that follows this announcement brings some reassuring elements for Rancher/Cattle users: in particular, Cattle version 1.6 will be maintained until at least mid-2018, and the transition from Cattle to Kubernetes will be facilitated by a substantial backward-compatibility layer.

The future

This convergence towards Kubernetes could not have been predicted, but it at least brings some clarity to a complex situation by refocusing the efforts of many open-source contributors (such as Camptocamp) on a coherent ecosystem around the Kubernetes project, under the governance of the Cloud Native Computing Foundation (CNCF).

As for Camptocamp, we clearly wish to maintain and intensify our efforts around OpenShift, which is now reaching maturity, without however giving up on other platforms such as Rancher, now also built on Kubernetes. Creating applications that can be deployed on various Kubernetes clusters is entirely possible; the Helm project is clearly heading in this direction, and we hope that our experience in developing Terraform providers will enable us to contribute to improving the automation of Helm-based deployments.

Red Hat has added extremely useful services to Kubernetes for “on-premise” deployments and for clients who wish to keep a high level of control over their infrastructure. Using containers is clearly not limited to the choice of an orchestrator: many other points (not only technical ones) must be rethought and addressed, and it is without a doubt on these that Camptocamp has invested and learnt the most over the last two years. We put this experience to use daily for our clients, through many projects or via our application platforms.
