
Piping Software for Less: Why, What & How (Part 1)

Developing software is hard and all good developers are lazy. This is one of the reasons we have tools which automate practices like continuous integration, static analysis and measuring test coverage. The practices help us to measure quality and find problems with code early. When you measure something you can make it better. Automation makes it easy to perform the practices and means that lazy developers are likely to perform them more often, especially if they’re automatically performed every time the developer checks code in.

This is old news. These practices have been around for more than twenty years. They have become industry standards and not using them is, quite rightly, frowned upon. What is relatively new is the introduction of cloud-based services such as BitBucket Pipelines, CircleCI and SonarCloud, which allow you to set up these practices in minutes. However, with this flexibility and efficiency comes a cost.

Why

While BitBucket Pipelines, CircleCI and SonarCloud all have free tiers, there are limits.

With BitBucket Pipelines you only get 50 build minutes a month on the free tier. The next step up is $15/month and then you get 2500 build minutes.

On the free CircleCI tier you get 2500 free credits per week, but you can only use public repositories, which means anyone and everyone can see your code. The use of private repositories starts at $15 per month.

With SonarCloud you can analyse as many lines of code as you like, but again you have to have your code in a public repository or pay $10 per month for the first 100,000 lines of code.

If you want continuous integration and a static analysis repository which includes test coverage, and you need to keep your source code private, you're looking at a minimum of $15 per month for these cloud-based solutions, and that's only if you can manage with 50 build minutes per month. If you can't, it's more likely to be $30 per month, which is $360 per year.

That's not a lot of money for a large software company, or even a well-funded startup or SME, though as the number of users goes up, so does the price. For a personal project, however, it's a lot of money.

Cost isn't the only drawback: with these approaches you can lose some flexibility as well.

The alternative is to build your own development pipelines. 

I bet you're thinking that setting up these tools from scratch is a royal pain in the arse and will take hours, when the cloud solutions can be set up in minutes. Not to mention running and managing your own pipeline on your personal machine: don't they suck up resources when they're running in the background all the time? And shouldn't they be set up on isolated machines? What if I told you that you could set all of this up in about an hour, and turn it all on and off as necessary with a single command? And if you wanted to, you could run it all on a DigitalOcean Droplet for around $20 per month.

Interested? Read on.

What

When you know how, setting up a continuous integration server such as Jenkins and a static analysis repository such as SonarQube in Docker containers is relatively straightforward. So is starting and stopping them together using Docker Compose. As I said, the key is knowing how, and what I explain in the rest of this article is the product of around twenty development hours, a lot of which was spent banging my head against a number of individual issues which turned out to have really simple solutions.

Docker

Docker is a way of encapsulating software in a container: anything from an entire operating system such as Ubuntu to a simple tool such as the scanner for SonarQube. The configuration of a container is described in a Dockerfile, which Docker uses to build an image; containers are then started and stopped from that image. Jenkins and SonarQube both have publicly available Docker images, which we'll use, with a few relatively minor modifications, to build a development pipeline.
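As a quick illustration of how little is needed to get one of these tools running, you can pull the public Jenkins image and start a throwaway container from the command line. This is just a taste with an arbitrary port mapping, not the customised setup built later in this series:

docker pull jenkins/jenkins:lts
docker run --rm -p 8080:8080 jenkins/jenkins:lts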

Docker Compose

Docker Compose is a tool which orchestrates Docker containers. Via a simple YAML file it is possible to start and stop multiple Docker containers with a single command. This means that, once configured, we can start and stop the entire development pipeline so that it is only running when we need it. Alternatively, via a tool such as Terraform, we can construct and provision a DigitalOcean Droplet (or AWS instance, etc.) with a few simple commands and tear it down again just as easily, so that it only incurs cost while we're actually developing. Terraform and DigitalOcean are beyond the scope of this article, but I plan to cover them in the near future.
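To give a flavour of what that looks like, here is a minimal sketch of a docker-compose.yml describing the two containers. It is illustrative only; the image tags, ports and other settings are assumptions for the sake of the example, not the configuration this series builds up:

version: "3"
services:
  jenkins:
    image: jenkins/jenkins:lts   # public Jenkins LTS image
    ports:
      - "8080:8080"
  sonarqube:
    image: sonarqube             # public SonarQube image
    ports:
      - "9000:9000"

With a file like this in place, docker-compose up -d starts both containers in the background and docker-compose down stops and removes them again.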

See the Docker and Docker Compose websites for instructions on how to install them for your operating system.
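Once they're installed, you can check that both tools are available from the command line (the exact version numbers reported will vary from machine to machine):

docker --version
docker-compose --version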

How

To keep the focus on the development pipeline configuration, over this and a few other posts I'll describe how to create an extremely simple Dotnet Core class library with a very basic test, how to configure and run the Jenkins and SonarQube Docker containers, and how to set up simple projects in both to demonstrate the pipeline. I'll also describe how to orchestrate the containers with Docker Compose.

I'm using Dotnet Core because that's what I'm working with on a daily basis. The development pipeline can also be used with Java, Node, TypeScript or any of the other supported languages. Dotnet Core is also free to install and use on Windows, Linux and Mac, which means that anyone can follow along.

A Simple Dotnet Core Class Library Project

I’ve chosen to use a class library project as an example for two reasons. It means that I can easily use a separate project for the tests, which allows me to describe the development pipeline more iteratively. It also means that I can use it as the groundwork for a future article which introduces the NuGet server Baget to the development pipeline.

Open a command prompt and start off by creating an empty directory and moving into it.

mkdir messagelib
cd messagelib

Then open the directory in your favorite IDE; I like VSCode for this sort of project. Add a .gitignore file appropriate for Dotnet Core (one way to generate it is shown after the commands below), then create a solution and a class library project and add the project to the solution:

dotnet new sln
dotnet new classlib --name Messagelib
dotnet sln add Messagelib/Messagelib.csproj
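
If you're not sure what to put in the .gitignore, newer Dotnet Core SDKs (3.0 and later) ship a template for it. This step is optional and assumes your SDK includes the template:

dotnet new gitignore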

Delete Messagelib/Class1.cs and create a new class file and class called Message:

using System;

namespace Messagelib
{
    public class Message
    {
        public string Deliver()
        {
            return "Hello, World!";
        }
    }
}

Make sure it builds with:

dotnet build

Commit the solution to a public git repository, or you can use the existing one in my BitBucket account here: https://bitbucket.org/findmytea/messagelib
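
If you're creating a new repository, the commands will look something like this (a sketch: the remote URL is a placeholder for your own repository, and your default branch may be main rather than master):

git init
git add .
git commit -m "Add Messagelib class library"
git remote add origin https://bitbucket.org/<your-account>/messagelib.git
git push -u origin master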

A public repository keeps this example simple. Although I won't cover it here, it's quite straightforward to add a key to a private BitBucket or GitHub repository and to Jenkins so that Jenkins can access it.

Remember that one of the main driving forces for setting up the development pipeline is to allow the use of private repositories without having to incur unnecessary cost.


Read the next parts here:




Sidebar 1


Continuous Integration 

Continuous Integration (CI) is a development practice where developers integrate code into a shared repository frequently, preferably several times a day. Each integration can then be verified by an automated build and automated tests. While automated testing is not strictly part of CI, it is typically implied.


Static Analysis

Static (code) analysis is a method of debugging by examining source code before a program is run. It’s done by analyzing a set of code against a set (or multiple sets) of coding rules.


Measuring Code Coverage

Code coverage is a metric that can help you understand how much of your source is tested. It's a very useful metric that can help you assess the quality of your test suite.
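
For a Dotnet Core test project, one common way to produce a coverage report (assuming the test project references the coverlet.collector package, which recent xunit templates include by default) is:

dotnet test --collect:"XPlat Code Coverage"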



Sidebar 2: CircleCI Credits

Credits are used to pay for your team’s usage based on machine type and size, and premium features like Docker layer caching.



Sidebar 3: What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

