What are Software Containers?

Enterprise businesses and developers have found a new darling technology to latch onto. It has actually been around for some time, but it has gained significant popularity over the last few years. In fact, adoption has grown so quickly that even small to medium-sized businesses have turned to outsourced IT to make it a part of their workflow.

This new technology is software containers. With containers, businesses have found unheard-of levels of agility, flexibility, and reliability.

But what exactly are containers?

Let’s dig in and find out.

How Standard Software Works

In order to understand how a container works, let’s first consider a standard piece of software. Take, for instance, the NGINX web server. NGINX is traditionally installed on an operating system (such as Linux) and requires certain dependencies. A dependency exists when one piece of software depends upon another piece (or multiple pieces) of software in order to be installed or to run on a system.

So in order to get NGINX installed, you’d first have to have a supporting operating system, make sure all the dependencies are met, and then install NGINX. Of course, most Linux server distributions will already have those dependencies met or the package manager will pick them up during the installation.
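
For example, on a Debian- or Ubuntu-based server, that whole traditional install might look something like this (a rough sketch; commands and package names vary by distribution):

    # Refresh the package index, then install NGINX; the package manager
    # resolves and installs NGINX's dependencies automatically
    sudo apt-get update
    sudo apt-get install nginx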

Now imagine if you could have the application and all of those dependencies wrapped up in a universal package format that could then be run on nearly any platform. Wouldn’t that be great?

How Containers Work

Containers do just that: they bundle everything necessary to run and can be deployed (using a container runtime such as Docker) as a single instance or onto a cluster. But to understand containers properly, you must understand how they are put together.

Every container deployed is based on an image. An image is a piece of software, housed in a repository (such as Docker Hub), that contains the foundation necessary for the application or service to be deployed. There are base images for NGINX, Apache (another web server), MySQL (a database server), Ubuntu (a Linux distribution), and many, many more. There are also images that take those base images and build upon them.

Say, for instance, you need MySQL Server Enterprise Edition, or maybe your container must be built upon an image dedicated to the Go programming language (golang). What if your container required OpenJDK, MongoDB, Elasticsearch, or any number of other possible technologies? Good news. Chances are, there’s an image out there that perfectly meets the needs of the container you wish to deploy.
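
If you’re not sure whether such an image exists, you can search Docker Hub directly from the command line (a minimal sketch, assuming the Docker runtime is already installed):

    # Search Docker Hub for images matching a keyword
    docker search openjdk
    docker search mongo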

In order to get the necessary image, you pull the image to your local machine. This is typically done from the command line, but can also be accomplished through the use of web-based or desktop client tools.
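
For example, pulling the official NGINX image from the command line looks like this (the :latest tag is the default and is shown only for clarity):

    # Pull the official NGINX image, then list local images to confirm it arrived
    docker pull nginx:latest
    docker images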

Once the image has been pulled, a developer can then make changes to it in whatever way they see fit, such as building an entire website into the image. When the image is exactly as required, a container can be deployed based on that image.
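
Here is a minimal sketch of that workflow, assuming your site’s static files live in a local site/ directory (the directory and image names are illustrative). First, a two-line Dockerfile that starts from the official NGINX image and copies the site into its default web root:

    # Dockerfile
    FROM nginx:latest
    COPY site/ /usr/share/nginx/html/

Then build a new image from that Dockerfile:

    docker build -t my-site .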

At this point, anyone on the network could point their browser to the address of the hosting server and see the website running in that NGINX container. If the website in question will be used for a massive deployment (such as an e-commerce site for a company), that container can be deployed to a cluster of servers. This is called “scaling”, and containers handle scaling with an ease few other technologies can match.
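
Continuing the sketch above (the names are illustrative), you could run the customized image as a single container, or deploy it as a replicated service on a Docker Swarm cluster; for a multi-node Swarm, the image would also need to be pushed to a registry the other nodes can reach:

    # Run the site as a single container, publishing port 80 on the host
    docker run -d --name my-site -p 80:80 my-site

    # Or initialize a Swarm and run the site as a service with three replicas
    docker swarm init
    docker service create --name my-site --replicas 3 -p 80:80 my-site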

How Does This Make Things Easier?

Imagine you are an IT administrator and you’ve been tasked to roll out a number of websites, each of which depends on a different type of technology (say, NGINX vs. Apache, or different versions of PHP). You could do it the old-fashioned way by:

  1. Installing the operating system on server hardware.
  2. Installing the necessary dependencies.
  3. Installing the web server.
  4. Building the website.
  5. Deploying the website.

Now, if the website needs to be deployed at scale, you’d then have to repeat that process on multiple servers and configure the web servers for redundancy and failover. That’s going to take considerable time and effort.

To do the same thing with containers, you would:

  • Download the necessary image onto a server containing the required runtime.
  • Build the site using the image.
  • Deploy the service (the website) as a container.

If the website needs to be deployed at scale, you would deploy it to a cluster (such as Docker Swarm or Kubernetes). Now, here is where containers can make the admin’s job considerably more efficient. Say you need to deploy a number of similar sites, each with only slightly different configurations or options. You can reuse the original image, modify it to fit your needs, and deploy your site. In other words, containers can be seen as a “create once, use often” type of technology.
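
On Kubernetes, the equivalent deployment and scaling can be sketched with a few kubectl commands (the image name and registry are illustrative, and the image must live in a registry the cluster can pull from):

    # Create a Deployment from the site image, scale it to three replicas,
    # and expose it on port 80
    kubectl create deployment my-site --image=registry.example.com/my-site:1.0
    kubectl scale deployment my-site --replicas=3
    kubectl expose deployment my-site --port=80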

And because containers include everything necessary (platform, applications, libraries, frameworks, etc.), they are universal. If you have the Docker runtime installed on Linux, macOS, or Windows, that container can be deployed on any of them. That means if you are a web developer, you can build your website and deploy it to any operating system that has the runtime installed. And with the likes of Kubernetes, you can orchestrate those containers and (with a bit of work and a good deal of knowledge) automate their deployment and scaling.

That is serious agility, flexibility, and reliability … which is one of the many reasons why containers have become so popular. Containers can also be monitored and alerted upon with relative ease.
