Hands-on workshops have become a mainstay of Norfolk Developers over the last 18 months. As is well documented, they started with a Neo4j workshop following an evening presentation, when it became apparent that forty-five minutes just wasn’t enough for some topics. Covering many aspects of software engineering, from databases to JavaScript, the workshops are an opportunity to learn by doing rather than just listening, and are created and given by both visiting speakers and our exceptional local talent.
Today it was the turn of Dom Davis of Rainbird, a local Techstars company, to tell us about Docker, an open platform for developers and sysadmins of distributed applications. Docker is a hot topic at the moment and a very popular method of deploying applications.
As with a lot of new technologies, I’m keen to learn but struggle to find the time, so a workshop like this one, where I have to put the time aside, is extremely valuable to me.
A lot of work clearly went into the preparation of the workshop. Each of the twenty-one people who took part had three Amazon Web Services (AWS) instances running CoreOS, a minimal Linux distribution that ships with Docker pre-installed. One instance hosted the Docker registry; another was for development, which got around cross-platform issues, as the participants had the usual mix of Mac, Linux and Windows machines; and the third was for production, to simulate real deployments. Dom has a lot of AWS experience and was able to replicate the instances easily, and he’d also prepared a small slip of paper for each participant with the details of their three instances.
Once a few teething problems for the Windows users relying on PuTTY to connect to their instances had been overcome, we were off! Dom had prepared nineteen exercises, each with detailed steps and highlighted gotchas, which could be completed over SSH to, in many cases, all three of the allocated instances. The first exercise was actually part of the Docker website, where an embedded shell let us create Docker images, deploy them and interact with them, using roughly the sort of commands sketched below.
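For anyone who wasn’t there, the flavour of those first steps was the core Docker command-line workflow: pulling an image, starting containers from it and poking at them. A minimal sketch, with image and container names that are purely illustrative rather than the ones used in the exercise:

    # Pull an image from the Docker Hub and run a throwaway interactive container
    docker pull ubuntu
    docker run -it ubuntu /bin/bash

    # Run a container in the background, then inspect and stop it
    docker run -d --name web nginx
    docker ps           # list running containers
    docker logs web     # see what the container has written to stdout
    docker stop web     # stop the container
    docker images       # list the images now held locally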
The rest of the exercises took us from creating slightly more complex images with Dockerfiles to pushing Docker images to the registry from the development instance, then pulling and starting them on the production instance. The final few exercises showed us how to create a Node.js Docker image which served a simple message, and how to simulate a blue-green deployment with a human acting as the router; a rough sketch of that flow follows.
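The exercise sheets themselves aren’t reproduced here, but assuming a private registry on the registry instance and a trivial Node.js server, that final flow might look something like this (the image name, registry address, port and version tag are illustrative, not the ones used on the day):

    # Dockerfile — build a small Node.js image that serves a fixed message
    # (assumes a server.js alongside it that answers every HTTP request
    # with a short greeting on port 8080)
    FROM node:latest
    COPY server.js /app/server.js
    EXPOSE 8080
    CMD ["node", "/app/server.js"]

    # On the development instance: build the image, tag it for the
    # workshop registry and push it
    docker build -t hello-node .
    docker tag hello-node registry-host:5000/hello-node
    docker push registry-host:5000/hello-node

    # On the production instance: pull the image and run the "blue" version
    docker pull registry-host:5000/hello-node
    docker run -d -p 8080:8080 --name blue registry-host:5000/hello-node

    # Blue-green: start the new "green" version alongside it on another port;
    # the human router then decides which port traffic goes to, and the old
    # container is stopped once everyone is happy
    docker run -d -p 8081:8080 --name green registry-host:5000/hello-node:v2
    docker stop blue

The appeal of the human-router version of blue-green is that it keeps the focus on the Docker mechanics rather than on load-balancer configuration.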
Throughout the workshop Dom was informative, funny and patient, and easily held the attention of the group. Everyone learnt a huge amount about Docker, and we’re hoping that, as our experience grows and Docker matures, Dom will come back to give us intermediate and advanced workshops in the future.