
Diving into the Digital Ocean

I’ve been looking for a good online continuous integration solution since late 2007. Continuous integration is useful even for a lone developer, but running it on your development machine can be problematic, especially as back then I developed on a number of different machines and the ones I wasn’t using were turned off.

IaaS (Infrastructure as a Service) was in its infancy then, and the providers I spoke to were shocked at the idea of building and running an executable, especially as the boxes were predominantly shared. My hosting provider installed CruiseControl for me, but it never really worked.

In the end I used a spare machine at work to run Jenkins, and at home I went without until I decided to buy a server and virtualise it. That worked quite well, but it took a lot of setting up and the performance wasn’t great, as I couldn’t afford a huge amount of memory or super-fast disks. It was sufficient for about eighteen months, until I gave the machine a good dusting internally and it refused to start again. So I was back to looking at online providers.

In the meantime PaaS (Platform as a Service) had come along, and CloudBees have quite a good hosted Jenkins-based offering for Java development. The free tier gives you a limited number of minutes per month, but I found I used these up far too quickly, partly because it ran very slowly. I was told that the paid-for tier was much, much faster, but at $99/month it didn’t represent value for money, for me at least. A lot of my projects also run integration tests against a variety of databases, and using the CloudBees solution would mean finding a database hosted elsewhere too. CloudBees have a fantastic offering, but it’s not for me.

By this time I was writing a course on Continuous Integration that also used Sonar for measuring code quality and ideally I wanted a hosted solution for that. Again, there is one, but it costs $20/month.

Having heard a great deal about Amazon Web Services (AWS), I turned to that. From the start I found it unintuitive and complicated in the extreme. Getting going and the billing were both quite opaque to me, and I had to rely on some help from a colleague. I started off with an EC2 Ubuntu Linux instance running Sonar and MySQL, both of which died frequently. So I moved to Amazon Linux and a separate database server, both of which I thought were in the free tier, until I received a bill for nearly £200. Thankfully AWS saw my point of view and refunded the money.

To be on the safe side I removed all my instances and started again, this time with just a free-tier EC2 Amazon Linux instance that I used to run Jenkins for a local NRUG meeting. I haven’t used it since.

My perception of my own behaviour is that I don’t usually respond to online advertising. However, when the Digital Ocean advert for server instances with 512MB of RAM and 20GB of SSD space for $5 a month popped up on Facebook, it was too much to resist. I clicked through and set up an account in just a few seconds, but then I left it for a couple of weeks or more, as it takes time to set these things up from scratch and I was busy with other things.

Then I started a new project that needed Sonar, Jenkins and Nexus (an artefact repository). I logged back into Digital Ocean and went about creating a droplet (what Digital Ocean calls a server instance) for Sonar. It was fantastically easy, taking only a few steps:
  1. Select a host name.
  2. Select a size. There are six sizes to choose from, ranging from 512MB RAM, 1 CPU, 20GB SSD storage and 1TB of traffic for $5 per month, up to 16GB RAM, 16 CPUs, 160GB SSD and 6TB of traffic for $160 per month.
  3. Select a region to host the Droplet. The choice is New York 1, New York 2, Amsterdam 1, Amsterdam 2 or Singapore 1.
  4. Select an image. There are five Linux distributions to choose from: Ubuntu, CentOS, Debian, Arch Linux and Fedora. You can also choose the version.
  5. Click ‘Create Droplet’.
The root password is emailed to you almost immediately, the droplet is created and ready to go in less than 60 seconds, and you can SSH straight into it. That really is all there is to it.
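
If you’d rather script droplet creation than click through the control panel, the same five steps can be driven through Digital Ocean’s HTTP API. The sketch below is purely illustrative: it assumes the current v2 API, the Python requests library, a personal access token in a DO_TOKEN environment variable, and hypothetical name, size, region and image slugs.

    # Illustrative sketch only: creating a droplet through the Digital Ocean v2 API.
    # Assumes a personal access token in the DO_TOKEN environment variable; the
    # name, size, region and image slugs below are hypothetical examples.
    import os
    import requests

    token = os.environ["DO_TOKEN"]
    payload = {
        "name": "sonar-01",            # 1. host name
        "size": "512mb",               # 2. size (the smallest plan mentioned above)
        "region": "ams2",              # 3. region, e.g. Amsterdam 2
        "image": "ubuntu-12-04-x64",   # 4. image: distribution and version
    }

    # 5. 'Create Droplet': POST the payload and read back the new droplet's details.
    resp = requests.post(
        "https://api.digitalocean.com/v2/droplets",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    droplet = resp.json()["droplet"]
    print(f"Created droplet {droplet['id']} ({droplet['name']})")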

I got Sonar set up really quite quickly, having done it a few times in the past. Then came another Ubuntu droplet to run MySQL, another to run Jenkins and another to run Nexus. It was all so easy that I managed to get most of it set up in a single evening, and it all worked just as if it were my own instance on my own box running in my own data centre.

I did have one problem, which I had also seen with the AWS instance: Sonar would frequently fail to respond for a few minutes. I suspected there wasn’t enough memory for the droplet running Sonar (512MB) or for the droplet running MySQL (which already had 1GB). Increasing the memory of the Sonar droplet to 1GB was incredibly easy: I shut it down, went into the droplet configuration, upped the memory and restarted. The problem went away.
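
The resize can also be scripted rather than done through the control panel. Again, this is only a sketch under assumptions: the v2 API, the requests library, a token in DO_TOKEN, and a hypothetical droplet id and size slug. A droplet has to be powered off before it can be resized, so the helper below waits for each action to finish.

    # Illustrative sketch only: resizing a droplet via the Digital Ocean v2 API.
    # DO_TOKEN, the droplet id and the size slug are all hypothetical assumptions.
    import os
    import time
    import requests

    token = os.environ["DO_TOKEN"]
    droplet_id = 123456  # hypothetical id of the Sonar droplet
    headers = {"Authorization": f"Bearer {token}"}
    actions_url = f"https://api.digitalocean.com/v2/droplets/{droplet_id}/actions"

    def do_action(payload):
        """Submit a droplet action and poll until it is no longer in progress."""
        resp = requests.post(actions_url, json=payload, headers=headers)
        resp.raise_for_status()
        action_id = resp.json()["action"]["id"]
        while True:
            action = requests.get(f"{actions_url}/{action_id}", headers=headers).json()["action"]
            if action["status"] != "in-progress":
                return action["status"]
            time.sleep(5)

    do_action({"type": "shutdown"})               # power the droplet off first
    do_action({"type": "resize", "size": "1gb"})  # move to the 1GB plan
    do_action({"type": "power_on"})               # bring it back up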

It became apparent that a JIRA instance would be really helpful for running Norfolk Developers, The Norfolk Tech Journal and a few other projects. JIRA hosting starts at $10 a month; if you have your own hardware it’s $10 a year, if not free. So I went back to Digital Ocean and set up a new droplet with 2GB of RAM, installed JIRA and pointed it at the existing MySQL database. All without a hitch.
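
For anyone doing the same, the only preparation the shared MySQL droplet needs is a schema for JIRA to use. The following is a hedged sketch, not a definitive recipe: it assumes the PyMySQL library, a hypothetical host name and credentials, the MySQL 5.x-era GRANT syntax, and the utf8/utf8_bin settings Atlassian’s MySQL guidance recommends.

    # Illustrative sketch only: creating a database and user for JIRA on the
    # existing MySQL droplet. Host, user names and passwords are hypothetical;
    # the GRANT ... IDENTIFIED BY form is MySQL 5.x syntax.
    import pymysql

    conn = pymysql.connect(
        host="mysql-droplet.example.com",  # hypothetical address of the MySQL droplet
        user="root",
        password="changeme",
    )
    try:
        with conn.cursor() as cur:
            # JIRA's MySQL guidance recommends a utf8 database with utf8_bin collation.
            cur.execute("CREATE DATABASE jiradb CHARACTER SET utf8 COLLATE utf8_bin")
            cur.execute(
                "GRANT ALL PRIVILEGES ON jiradb.* TO 'jirauser'@'%' "
                "IDENTIFIED BY 'jira-password'"
            )
            cur.execute("FLUSH PRIVILEGES")
        conn.commit()
    finally:
        conn.close()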

I now have six droplets running on Digital Ocean and haven’t had a problem with any of them. The bill for the first month came to the measly sum of $16 (a little over £10). I have all the flexibility I need, with very little complexity, at an affordable price. Digital Ocean is a simple service; you don’t get all the fancy tools that you have with AWS, but if you have straightforward needs and/or are just starting out, I would recommend starting with Digital Ocean.

Originally published on the Norfolk Tech Journal.
