containers

Linux Shell on Windows

For many years I have been yearning for a single operating system that covers all my work and leisure needs, but I have always had to run Linux for my software development tasks and Microsoft Windows for general use. Linux works well for me when programming because of the powerful bash shell, the Linux command line utilities (like git, grep, cat, etc.) and how well they integrate with the shell, and the host of development tools, like python and node, that just work so naturally on Linux.

Dockerizing a NodeJS App

In this post I document the steps I took to convert a traditional NodeJS App, launched from the command line with node app.js, into a fully dockerized container solution. The App uses a MySQL database for which it has static configuration. I am not going into too much detail about the App’s code or architecture, but it is worth noting that it has this piece of configuration for connecting to the database;
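As a rough sketch of the general approach, the usual way to dockerize such an App is to build an image for it and run it alongside a MySQL container, passing the database settings in at run time instead of keeping them static. The image, container and variable names below are placeholders for illustration, not the post’s actual values:

# Build an image for the App from a Dockerfile in the project root (hypothetical tag)
docker build -t my-node-app .

# Start a MySQL container for the App to talk to (credentials are placeholders)
docker run -d --name app-mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=appdb mysql:5.7

# Run the App container, overriding the static DB config with environment variables
docker run -d --name my-node-app --link app-mysql:db \
  -e DB_HOST=db -e DB_USER=root -e DB_PASSWORD=secret -e DB_NAME=appdb \
  -p 3000:3000 my-node-app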

Continuous Deployments

Continuous Deployment is the next stage of automation, following on from its predecessors continuous integration (CI) and continuous delivery (CD). The integration phase of a project used to be the most painful step: depending on the size of the project, developers work in isolated teams dedicated to separate components of the application for a very long time, and when the time comes to integrate those components a lot of issues, like unmet dependencies and interfaces that don’t communicate, are dealt with for the first time. The idea of CI was conceived to combat this problem.

Installing Docker on Ubuntu

This post is essentially my notes on getting started quickly with Docker. I set this up on my lab machines running Ubuntu 16.04.1 LTS; the steps are based on the excellent instructions written in the Docker getting started guide.

Add the Docker project repository to the APT sources:

sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv \
  --keyserver hkp://ha.pool.sks-keyservers.net:80 \
  --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.
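Assuming the rest of the post follows the same getting started guide of that era, the remaining steps would look roughly like this (the hello-world run is just the customary smoke test, not necessarily what the post uses):

# Refresh APT and install the Docker engine from the newly added repository
sudo apt-get update
sudo apt-get install docker-engine

# Confirm the daemon is running, then run a throwaway container as a smoke test
sudo systemctl status docker
sudo docker run hello-world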

Service Discovery and Proxying

Delivery of software as microservices running on immutable and self-sufficient containers is a very robust method and has gained a lot of popularity in recent years. Containers usually expose the microservice as a web service accessible through a certain port number on the host. Because host machines are able to run many containers, and because these containers need to be started and shut down quickly and easily without any side effects, it is not really feasible for consumers of these web services to point to manually assigned hosts and ports.
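A service registry is the usual answer: containers register the host and port they were assigned when they start, and consumers look that mapping up instead of hard-coding it. As a minimal sketch, assuming Consul is the registry (the post may well use a different tool, and the service name "books-ms" is a made-up example), a consumer could resolve a service like this:

# Ask the Consul HTTP API where instances of a service are currently running
curl http://localhost:8500/v1/catalog/service/books-ms

# Consul also exposes the same data over DNS, which proxies can use directly
dig @localhost -p 8600 books-ms.service.consul SRV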

Blue/Green Deployments

Traditionally, deploying a new release and making it live in production involves replacing the existing release with the new one, which leads to a period of downtime that may be considerable for large, monolithic applications. The solutions adopted to tackle this problem are usually some variation of the so-called blue-green deployment process. The diagram below illustrates this set-up, in which all public traffic is routed through a reverse proxy (like Nginx or HAProxy) that forwards requests to the correct release of the application (which then interacts with its correct instance of a database).
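As a rough illustration of the proxy switch, assuming Nginx with one virtual host file per colour (the file names and paths here are hypothetical), cutting traffic over to the new release can be as simple as repointing a symlink and reloading:

# Route public traffic to the green release (blue stays running as a fallback)
sudo ln -sf /etc/nginx/sites-available/app-green.conf /etc/nginx/sites-enabled/app.conf

# Validate the configuration and reload Nginx without dropping connections
sudo nginx -t && sudo nginx -s reload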

Containers

Virtual Machines

In the quest to maximise the efficiency of the computing power available on servers, Virtual Machines (VMs) came into existence, with products from firms like VMware and VirtualBox pushing the concept to general users. “In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, operating systems, storage devices, and computer resources.” - Wikipedia. Virtual Machines are created on top of hypervisors which run on top of the host machine’s operating system (OS). The hypervisors allow emulation of hardware like CPU, disk, memory, network etc., and server machines can be configured to create a pool of emulated hardware resources available to applications, in the process making the actual hardware resources on those servers much more efficiently utilized.
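Containers take a different route: instead of emulating hardware and booting a guest OS, they share the host’s kernel and isolate processes. A quick way to see this, assuming Docker is already installed, is that a container reports the host’s kernel version:

# The kernel version printed inside the container matches the host's,
# because there is no guest OS being emulated
uname -r
docker run --rm alpine uname -r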