DevOps Tool – What is Docker?

Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side by side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely, and with confidence for both Linux and Windows Server apps.

What is a Container?

With containers, everything required to make a piece of software run is packaged into an isolated unit. Unlike VMs, containers do not bundle a full operating system – only the libraries and settings required to make the software work are included. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed.
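To make this concrete, here is a minimal example (the image tag is just an illustration): running a pinned image gives you the same runtime on any machine with Docker installed, with no host-level setup.

    # Run a pinned image; the container carries its own libraries and settings,
    # so the output is identical on a laptop, a test server, or production.
    docker run --rm python:3.11-slim python -c "import sys; print(sys.version)"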


Both Developers and System Admins Can Use Docker

Docker for Developers

Developers using Docker don’t have to install and configure complex databases nor worry about switching between incompatible language toolchain versions. When an app is dockerized, that complexity is pushed into containers that are easily built, shared and run. Onboarding a co-worker to a new codebase no longer means hours spent installing software and explaining setup procedures. Code that ships with Dockerfiles is simpler to work on: Dependencies are pulled as neatly packaged Docker images and anyone with Docker and an editor installed can build and debug the app in minutes.
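As an illustration, here is a minimal, hypothetical Dockerfile for a small Python app; the file names and port are assumptions made for the example, not a prescription:

    # Dockerfile – everything the app needs, declared in one place
    FROM python:3.11-slim                               # pinned language runtime
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt  # dependencies baked into the image
    COPY . .
    EXPOSE 8000
    CMD ["python", "app.py"]                            # one documented way to start the app

A co-worker then needs only two commands to get a running copy, regardless of what is installed on their machine:

    docker build -t myapp .
    docker run -p 8000:8000 myapp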

Docker for System Admins

Docker streamlines software delivery. Develop and deploy bug fixes and new features without roadblocks. Scale applications in real time.

Docker is the secret weapon of developers and IT ops teams everywhere, allowing them to build, ship, test, and deploy apps automatically, securely, and portably with no surprises. No more wikis, READMEs, long runbook documents and post-it notes with stale information. Teams using Docker know that their images work the same in development, staging, and production. New features and fixes get to customers quickly without hassle, surprises, or downtime.

How Dockerizing an Old Legacy Application Helped Cornell University (in the words of the team’s cloud architect)

Our installation of Confluence is an interesting intersection of legacy and vendor solution. We have customized the code to work with our single sign-on solution, as well as a custom synchronization with LDAP for group management. When we started the project to move Confluence to the cloud, the infrastructure and software were old, compiled from source, and hand-maintained.

The stack looked like this:

  • Apache 2.2.10
  • OpenSSL 0.9.8H
  • Java 1.6 (EOL 2/13)
  • Confluence 5.6.5

This presented us with a number of challenges including:

  • The version of Java stopped receiving public updates in February of 2013.
  • Multiple vulnerabilities reported in the OpenSSL and Apache versions we were using.  
  • The last upgrade project for Confluence took six months.  
  • We had multiple environments for development, testing and production and over time these servers had fallen out of sync.  
  • The engineers who had originally set up these servers and made customizations had left the University. When we looked at how we might add high availability, or how we would recover from a disaster, we found it too difficult to replicate the environment.

As we looked at the state of the application, we knew we wanted to make sure the environment was supportable going forward.  We decided to move towards an approach that applied the principles of infrastructure as code.  We wanted to have a repeatable process for building out Confluence environments and we wanted to be able to track changes that were made.  Using Docker helped to supercharge this effort.

We were able to leverage Dockerfiles to create a reproducible infrastructure, and coupled with Puppet we can create instance-specific images. At Cornell we have implemented a series of base images that we build on. In the case of Confluence, we build on a Tomcat image, which is built on a Java image, which is based on a Cornell-specific Ubuntu image. These base images are rebuilt with the latest patches on a daily basis. Every time we build and deploy Confluence we have a fully patched system; no longer are we running on an end-of-life version of Java!
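A sketch of what that layering might look like; the registry host and image names below are hypothetical stand-ins for Cornell's internal images:

    # Application Dockerfile – builds on a chain of daily-rebuilt base images:
    #   registry.example.edu/base/ubuntu  <- site-specific Ubuntu, patched daily
    #   registry.example.edu/base/java    <- FROM base/ubuntu, adds a patched JDK
    #   registry.example.edu/base/tomcat  <- FROM base/java, adds Tomcat
    FROM registry.example.edu/base/tomcat:latest
    COPY confluence/ /opt/confluence/     # application layer on top of the patched stack

Because the application image is rebuilt from this chain on every deploy, patches applied to the Ubuntu layer propagate all the way up automatically.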

These builds are no longer done by hand: we use Jenkins to automatically build and deploy our containers to our dev/test environments while pushing a tagged copy of each image to our private Docker Trusted Registry. If we need to roll back, we can easily grab the last known working image tag. Once we have sufficiently tested in these environments, we can automatically deploy to production.
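The tag-and-rollback flow might look like the following sketch; the registry host, repository name, and build tags are assumptions, not Cornell's actual values:

    # Jenkins builds the image, tags it with the build number, and pushes it
    docker build -t dtr.example.edu/wiki/confluence:build-142 .
    docker push dtr.example.edu/wiki/confluence:build-142

    # Rolling back means redeploying the last known-good tag
    docker pull dtr.example.edu/wiki/confluence:build-141
    docker run -d --name confluence dtr.example.edu/wiki/confluence:build-141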

The project for us to Dockerize and move Confluence to the cloud took two months and was highly successful. The only reason it hasn’t been up longer is that since we Dockerized we have been doing quarterly upgrades. This is amazing! I remember that in the past upgrade projects were months long, but now we do them in a couple of weeks, four times a year.

For the first time at Cornell we have been able to remain on a current, patched Confluence release. In the past we used to automatically restart Confluence every Sunday to address performance issues, and on top of that we would restart it multiple times a week to address intermittent errors. Docker has helped decrease the time spent firefighting issues with the environment and has enabled us to eliminate these restarts entirely.

After Dockerizing and moving Confluence to the cloud we have been able to drastically improve both HA and DR. The on-premises deployment of Confluence used a single VM in production, with a single database backend also running on a single VM. This was partly because Confluence does not allow more than one instance to run unless you pay extra for their Confluence Data Center product. In the cloud we are able to use an auto scaling group which will maintain one healthy server running at all times.

We are using a multi-AZ database, which allows us to stay up even when a single availability zone fails. For all our backups we snapshot our volumes and then migrate them to a separate region within 30 minutes, so we can have Confluence up and running in another region within 30 minutes.
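A minimal sketch of that cross-region backup step with the AWS CLI; the volume ID, snapshot ID, and region names are placeholders:

    # Snapshot the Confluence data volume in the primary region
    aws ec2 create-snapshot --region us-east-1 \
        --volume-id vol-0123456789abcdef0 --description "confluence backup"

    # Copy the snapshot to a second region for disaster recovery
    aws ec2 copy-snapshot --region us-west-2 --source-region us-east-1 \
        --source-snapshot-id snap-0123456789abcdef0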

On-premises DR relies on tape backup and would take hours or days to complete. After all is said and done, we have been able to dramatically increase the resilience and durability of Confluence, and it costs $2,100 less annually to run.

Ramdev

I started unixadminschool.com (aka gurkulindia.com) in 2009 as my own personal reference blog, and later realized that my learnings might be helpful for other unix admins if I managed my knowledge base in a more user-friendly format. The result is today’s unixadminschool.com. You can connect with me at https://www.linkedin.com/in/unixadminschool/
