

February 7, 2017, Adam Karwala

Change is an aspect of life that we have to deal with on a daily basis – and one that has special significance for software development, where it often makes or breaks a project. It is no wonder, then, that change has lately become an important topic for many software houses. The results? New ideas and support tools for software development and distribution.

Only through proper change management can we avoid failure and ensure the success of our endeavour. One of the key aspects of change management is feedback. Used well, it can minimize the number of misunderstandings and reduce the cost of possible amendments. The more feedback we gather from developers, testers and users, the greater the reward we can reap – and that, in turn, depends largely on how often they receive new versions of the software.


The current model of software deployment – installing it directly on the device, in a way specific to the installed operating system (OS) – has many drawbacks. Differences in OS configuration, installed libraries and other installed software may cause something that runs smoothly in a test environment to break in production. This results in longer deployment times and makes the installation process hard to automate.

The new approach, devised to solve the above-mentioned issues, lies in preparing and deploying a container along with the application. A container is a virtualisation mechanism at the OS level. It allows the creation of an isolated environment for the application, largely separating it from the OS, from other containers and from the software they run; it also has its own file system. Additionally, each container has only partial access to OS resources – granted or revoked depending on needs and availability.

Virtualisation isn’t everything a container offers. It is also a self-sufficient environment with everything needed to run the application, including libraries, configuration files, etc. All of this makes transferring an application from one environment to another rather simple – just copy and run the container’s image.
One of the most popular implementations of the container approach is Docker.
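As an illustration, a Docker image is typically described in a Dockerfile. The sketch below is a hypothetical example (the base image, file names and port are assumptions, not taken from this article) showing how an application and all of its dependencies are bundled into a single, portable image:

```dockerfile
# Minimal, hypothetical Dockerfile for a Python web application.
FROM python:3-slim

WORKDIR /app

# Install the libraries into the image itself,
# so the container is self-sufficient.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code and its configuration files.
COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

Once built (`docker build -t my-app .`), the resulting image can be copied to any host with Docker installed and started with `docker run my-app` – no further installation steps are needed.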


As long as an application consists of a single container, its deployment and configuration are free from any major issues. These begin to appear when the requirements placed on the application grow – and with them the application itself, the number of containers, instances, versions and supported environments. To tackle them, the containers must be properly managed. That is where Kubernetes comes in – an open-source platform from Google, designed for managing applications created and run in containers.

Kubernetes allows for the creation of a cluster infrastructure. A typical Kubernetes cluster consists of a few devices with a pre-installed Linux system. One of them – described as the “master” – is responsible for managing the whole cluster. The rest, called nodes (formerly minions), have Docker installed and are used to run containers with applications. One node server can handle many applications – but an application does not always live in a single container. In some cases it can use a group of interconnected containers, called a “Pod”. A Pod groups the containers and allows working with them as if they were a single application. All the containers inside a Pod run on the same device and share the same environment, file system and network address space.

Access to particular applications is possible through a proxy or a load balancer service. The proxy grants access to applications via an internal network, while the load balancer service exposes the application externally, mediating communication between an external application and the software installed on the node servers. The load balancer service acts as an access point, hiding both the location of the software and the fact that it runs inside Kubernetes.
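To make the Pod concept concrete, a minimal manifest might look like the sketch below (the names and images are hypothetical, not taken from this article). Both containers are scheduled on the same node and share the Pod’s network address:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical Pod name
spec:
  containers:
    - name: web           # the application itself
      image: registry.example.com/my-app:1.0
      ports:
        - containerPort: 8080
    - name: log-collector # a helper container in the same Pod
      image: registry.example.com/log-collector:1.0
```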

Such an architecture has many benefits and offers plenty of possibilities, the most interesting of them being:

  • Scaling
    Kubernetes allows an application running on it to be scaled without any additional work. To achieve that, it creates a number of application instances, as specified in a configuration file, typically placing each on a separate node server. Next, using the load balancer, it distributes traffic evenly among the instances and, by doing so, hides from external software the fact that multiple copies of the same application exist.
  • Health checking
    As soon as Kubernetes is started, it sees to it that the specified number of instances is available at all times. The moment one of them stops responding, Kubernetes creates a new one, which remains active until the failed instance starts responding again. To check the state of an application, Kubernetes sends an HTTP request to the address indicated in the configuration file and verifies the status of the response.
  • Rolling updates
    Kubernetes also provides support for updating an application – deploying a new version without interrupting access to the application.

Kubernetes replaces the instances of an application, one by one, with their updated versions. While one of them is being updated, the remaining ones stay active and accessible to users.
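All three features above are usually expressed declaratively. The sketch below is a hypothetical Deployment manifest (names, paths and numbers are assumptions) combining a replica count for scaling, an HTTP health check and a rolling-update strategy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                    # scaling: keep three instances running
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # replace instances one by one
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: registry.example.com/my-app:1.1
          ports:
            - containerPort: 8080
          livenessProbe:         # health checking via an HTTP request
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
```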


Although the above-mentioned tools undeniably simplify the deployment of an application, they are incapable of eliminating human error. That is why it is worth considering automating software delivery – continuous delivery.

In theory, any of the currently available tools can be used for this purpose. In practice, though, even though most of them support continuous delivery, they were created with continuous integration in mind. The exception is Spinnaker, an open-source application created especially for continuous delivery. It does not support continuous integration – it simply cannot be used as that type of tool. A typical use case involves publishing successive versions of an application prepared with a different – continuous integration – tool. Jenkins is one of the tools often mentioned in the context of Spinnaker.

Spinnaker handles continuous delivery by creating and managing workflows. As in other similar software, these workflows are called pipelines and consist of stages, carried out one by one. Each pipeline can have a defined event that initiates it, a notification sent after it ends, as well as parameters passed between the stages. A stage is the smallest unit of work performed within the pipeline. Spinnaker comes with a few predefined stages, each of them having a dedicated wizard to simplify the configuration process. The most important ones are:

  • Deploy – responsible for deploying and running the image with an application;
  • Find image – searches for the image with the application, e.g. in the container registry;
  • Jenkins – runs the defined and named job on the Jenkins side;
  • Manual Judgment – suspends the pipeline run, waiting for the user’s decision;
  • Enable/Disable/Destroy Server Group – enables, disables or deletes a Server Group (Server Groups are described below).

The workflow isn’t everything – to deploy the application, a properly configured environment is needed. Spinnaker lets you manage the resources of platforms such as Microsoft Azure, Google Cloud Platform (GCP), Amazon Web Services (AWS) and the earlier-mentioned Kubernetes. In each case the configuration looks almost the same – thanks to the automatic mapping of the objects created in Spinnaker onto their counterparts on the platform used. The main Spinnaker objects and their Kubernetes counterparts are:

  • Server Group – represents the profile of the device that will run the application, along with the target number of its instances. Its equivalent in Kubernetes is the Pod.
  • Load balancer – describes the protocol handling the incoming traffic and the range of available ports, and balances the traffic between the instances inside the Server Group. Its Kubernetes equivalent is the load balancer service.
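For example, a Spinnaker load balancer targeting Kubernetes maps onto a Service of type LoadBalancer, roughly like the hypothetical sketch below (names and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer   # exposes the application outside the cluster
  selector:
    app: my-app        # routes traffic to the instances in the Server Group
  ports:
    - protocol: TCP
      port: 80         # externally visible port
      targetPort: 8080 # port the containers listen on
```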

Thanks to native support for containers and Kubernetes, Spinnaker significantly simplifies and automates the software delivery process. It allows building various workflows for the automated deployment of a containerised application: from simple sequences of commands to complicated structures involving nesting and parallel processing. An example pipeline of this type might look like the following:

  1. Changes in the version control system should initiate a Spinnaker pipeline. Since Spinnaker supports continuous delivery only, a separate continuous integration tool (e.g. Jenkins) must be used to define a job that listens for changes in the repository (e.g. a VCS trigger), which Spinnaker then uses as the trigger for its pipeline.
  2. Immediately after the changes appear and the pipeline starts, Spinnaker should trigger continuous integration. Again, a separate tool must be used for this, to build an image of the container with the application.
  3. Next, this image should be deployed to the test environment (Kubernetes).
  4. The earlier version of the application should be removed from the test environment.
  5. Until the correct deployment of the application to the test environment is confirmed, the pipeline should be suspended (Manual Judgment).
  6. Finally, the image of the container with the application should be copied from the test to the production environment.
  7. The previous version should be deactivated on production, not removed, in case it needs to be restored.
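Internally, a Spinnaker pipeline is stored as JSON. The heavily abbreviated, hypothetical sketch below (all names are assumptions, and most required fields are omitted) mirrors the flow above: a Jenkins trigger, deployment to test, a Manual Judgment gate, deployment to production and disabling the previous Server Group:

```json
{
  "name": "deploy-my-app",
  "triggers": [
    { "type": "jenkins", "master": "my-jenkins", "job": "build-my-app", "enabled": true }
  ],
  "stages": [
    { "refId": "1", "type": "deploy", "name": "Deploy to test" },
    { "refId": "2", "type": "manualJudgment", "name": "Approve production deploy",
      "requisiteStageRefIds": ["1"] },
    { "refId": "3", "type": "deploy", "name": "Deploy to production",
      "requisiteStageRefIds": ["2"] },
    { "refId": "4", "type": "disableServerGroup", "name": "Disable previous version",
      "requisiteStageRefIds": ["3"] }
  ]
}
```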

Unceasing technological progress makes software increasingly important – and the same applies to the time of its delivery, since software available today is worth much more than the same software available tomorrow. This is especially relevant for companies where software has a major impact on competitiveness.

The way of creating and distributing software, and the tools suggested in this article, definitely speed up the installation process. Moreover, they increase the quality of communication between the supplier and the consumer of software, minimizing the number of misunderstandings concerning the requirements. All of this has made the idea of containers and the Kubernetes/Spinnaker tools quickly rise in popularity, displacing the methods of software delivery used until now.
