It has come to my attention during talks, conference calls and WebEx sessions that many people can't imagine what a Docker container actually is.
That reminds me of the "Cloud" hype we had some years ago: everyone talked about it, yet very few actually understood what it really was.
The "Docker" main page itself gives us a little piece of information that actually says it all already:
"Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in."
So this tells us that everything needed to run the application is actually packaged inside the container!
But what is it then, virtualization? No. It is definitely not.
Virtualization would require parts of the hardware to be emulated and presented to a guest on top of the host operating system, and that virtual machine would then need to run its own complete OS, kernel included.
Docker, in its functioning, is far simpler than virtualization, but it adds a different kind of complexity in its setup. That is what the "docker" command was developed for: to ease its use.
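To illustrate that, here is a minimal Dockerfile sketch. The base image and the "myapp" binary are hypothetical examples, not taken from any real project; the point is only that a few lines describe the whole packaged environment:

```dockerfile
# Base image supplies the filesystem: system libraries and tools
FROM debian:stable-slim
# Our application binary ("myapp" is a hypothetical example)
COPY myapp /usr/local/bin/myapp
# What to run when the container starts
CMD ["/usr/local/bin/myapp"]
```

Building and running it are then one-liners: `docker build -t myapp .` followed by `docker run myapp`.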
To understand how Docker is able to work, one needs to know that the Linux kernel itself is the main component interfacing between the hardware and the programs. And this same Linux kernel is able to run very old executables. Imagine an executable from 15 years ago running on a current kernel of today: all that executable needs is the code, runtime and libraries it was compiled and linked against, and it will run without a problem.
With the right libraries in place, this can even be a 32-bit executable running on a 64-bit kernel (if the latter has 32-bit support compiled in).
So, to make things short: a Docker image is just an archive providing everything a specific application requires to run.
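A minimal sketch of that idea, without Docker at all: pack a root filesystem into a tar archive. Here only a shell binary goes in; a real image would carry its libraries too:

```shell
mkdir -p rootfs/bin rootfs/lib
cp /bin/sh rootfs/bin/        # the application to package
# (its shared libraries would go into rootfs/lib the same way)
tar -cf rootfs.tar rootfs     # the "image": one archive with everything
tar -tf rootfs.tar            # list what the archive contains
```

This is in essence what `docker export` hands back to you: the container's filesystem as a plain tar archive.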
And now comes the next question: how does the Docker container interface with the hardware?
The answer to that question is rather simple. The UNIX philosophy is that everything happens through files. Every process and program communicates through files, and the Linux kernel provides various "virtual" file systems, such as /proc and /sys, holding many virtual device and system files (yes, "virtual" is in the name because they are held in memory) that allow processes to talk to each other and to access the hardware devices.
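You can watch this file-based interface at work on any Linux machine:

```shell
cat /proc/version            # the running kernel, read like an ordinary file
grep MemTotal /proc/meminfo  # total system memory, again just a file
ls /sys/class/net            # network interfaces, represented as directories
```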
What Docker now does is mount these virtual file systems inside the container, so that the application in the container thinks it is running on a real, complete OS.
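On the host, /proc is itself nothing more than such a mount, and Docker performs the same kind of mount inside every container's root filesystem. You can see it listed among all other mounts:

```shell
# /proc/mounts lists every mounted filesystem; the proc filesystem
# itself appears in that list as just another mount
grep ' /proc ' /proc/mounts
```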
That is the strength of Docker. And because there is no virtualization (as we know it from VMware), the overhead is practically nonexistent: the application accesses all resources directly, without passing through "translation" and "virtualization" layers.
The drawback is that Docker can only run applications built for the same OS kernel as the host. In this case, notably Linux!