tag:blogger.com,1999:blog-45487899269951926492024-03-19T07:28:56.855+00:00Paul GrenyerPaul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.comBlogger685125tag:blogger.com,1999:blog-4548789926995192649.post-52429663457006622882024-03-04T12:53:00.005+00:002024-03-04T12:53:48.795+00:00A Review: Machine Vendetta<p><span style="background-color: transparent; color: black; font-family: Arial,sans-serif; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; white-space: pre;"></span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5BU2WyyKUHnWWd6z6G7u0axJu3996Zrtlsh7IMjG8wDGuAsgmJZ2rtRjzcpF4B_VYkCoNbXopzxHsetbC25lF6BtRBkldTP9wB_sZZHM1CMovyrHroqfvMkPxIuxGMymt6m-soawd4u0jka9tbKJr4Z_QrI6X5X1fjBwuT0812O0JI75pu2QRFJc0Yy4/s1500/machinev.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1500" data-original-width="987" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5BU2WyyKUHnWWd6z6G7u0axJu3996Zrtlsh7IMjG8wDGuAsgmJZ2rtRjzcpF4B_VYkCoNbXopzxHsetbC25lF6BtRBkldTP9wB_sZZHM1CMovyrHroqfvMkPxIuxGMymt6m-soawd4u0jka9tbKJr4Z_QrI6X5X1fjBwuT0812O0JI75pu2QRFJc0Yy4/s320/machinev.jpg" width="211" /></a></div><b>Machine Vendetta<br /></b>Alastair Reynolds<br />ISBN: 978-0316462846<br /><br />Machine Vendetta is the final Revelation Space novel we’re getting and the final part of the Dreyfus trilogy, and it could have been a lot better. Unlike the previous books in the series, there is only one main thread, so it lacks a lot of the Space Opera we’re used to from Reynolds.<br /><br />A lot of the plot was predictable, including the Ultras coming to the rescue and Hafdis being key towards the end. There’s no real ending either. There are far too many loose ends left untied. 
It really felt like just one more novel to finish the author’s publishing contract and it wasn’t really given the love and attention it deserved.Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-4000238235033963172023-12-31T10:18:00.003+00:002023-12-31T10:18:49.972+00:00A Review: A Storm of Swords, Part 2: Blood and Gold<p><span style="background-color: transparent; color: black; font-family: Arial,sans-serif; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap; white-space: pre;"></span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyD2yWnCCdsAASbFHyojp1M4ldCcAojWKsOUSi-UggPlVSaX6aHo7D2QTJk4zmxVX43RsN9dSYdV8YukRAkxQ0MUc52YJYpnMA3QUVWY39Jp3GK0HUTysPT59uzT6SvFCI86fmFI50NPqov2lq7I-h1nwpl1_qA9GwexOpGuTKqECxDnMQ1k98GkunrmI/s1000/a-storm-of-swords.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="652" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyD2yWnCCdsAASbFHyojp1M4ldCcAojWKsOUSi-UggPlVSaX6aHo7D2QTJk4zmxVX43RsN9dSYdV8YukRAkxQ0MUc52YJYpnMA3QUVWY39Jp3GK0HUTysPT59uzT6SvFCI86fmFI50NPqov2lq7I-h1nwpl1_qA9GwexOpGuTKqECxDnMQ1k98GkunrmI/s320/a-storm-of-swords.jpg" width="209" /></a></div><b>A Storm of Swords, Part 2: Blood and Gold (A Song of Ice and Fire, Book 3)</b><br />by George R.R. Martin<br />ISBN-13: 978-0007447855<br /><br />At least George R. R. Martin is consistent. I didn’t really enjoy the second part of this book any more than the first. The Red Wedding was very disappointing and there was far too much about choosing a new commander on The Wall. I was pleased Lysa got pushed through the moon door at the end though! 
<br /><br />Of course I’m going to continue with the final, so far, two books. I can’t not finish it.<br /><br /><p></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-18768146959064255062023-12-24T09:01:00.001+00:002023-12-24T09:01:19.841+00:00A review of Zen and the Art of Motorcycle Maintenance<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiixDiU_GaPeK8PR6VmdvSDl1diyiFZOS-2IfpukU5IvtgRPxHcIzHOZaSlm81IFmOPBStSj2CPQm717wKongnWaCQc_x_IxKQ9wLuX1_81ztAH3yjv4n4LK7RvcihrHZ2c0abBboT9af_GAOheoprBjLVUjDWhjdXj69VotXPrYkAraGAoX_UyroqjiwE/s500/zen-art.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="500" data-original-width="309" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiixDiU_GaPeK8PR6VmdvSDl1diyiFZOS-2IfpukU5IvtgRPxHcIzHOZaSlm81IFmOPBStSj2CPQm717wKongnWaCQc_x_IxKQ9wLuX1_81ztAH3yjv4n4LK7RvcihrHZ2c0abBboT9af_GAOheoprBjLVUjDWhjdXj69VotXPrYkAraGAoX_UyroqjiwE/s320/zen-art.jpg" width="198" /></a></div><b>Zen and the Art of Motorcycle Maintenance</b> <br />by Robert Pirsig<br /><span class="a-list-item"><span class="a-text-bold">ISBN-13
:
</span> <span>978-0099786405</span> </span><p></p><p>Zen and the Art of Motorcycle Maintenance is an interesting book, which I read after a recommendation. The reader’s guide at the end (Kindle edition), which I’d recommend reading first, explains that the book is really three stories:<br /></p><div style="margin-left: 80px; text-align: left;"><ul style="text-align: left;"><li>A motorcycle trip from Minnesota to California</li><li>A philosophical meditation on the concept of quality</li><li>A story of a man persuaded by the ghost of his former self</li></ul></div><p>I only really enjoyed the motorcycle trip part. The discussion on quality was long, a bit rambly and convoluted. There was just too much of it.<br /><br />Other than the enjoyable description of the motorcycle journey, this part of the book has some interesting insights into relationships, interaction between people and what motivates people’s behaviour.<br /><br /></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-15535833069883243902023-10-24T10:35:00.001+01:002023-10-24T10:35:50.316+01:00A Review: Prelude to Foundation by Isaac Asimov<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgI2-3TTGeYc1HfL4AogOF_uqbCov1ueXmQMCYKwHeqDCR9jJqHjldBTuIwi6F3zbaEKO8gpNo1LjfNqWH6PoWxaU9hJ_ifJF6duKoRp6S5Jq9uwDHosHNZ3h6awgd7aj85AHd-DAqKAPOlkrP8GUa3Crkk0GO2tb9V9giZq6Eccs-c8hSN6GSXCTg8gok/s500/prelude-to-foundation.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="500" data-original-width="306" height="320" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgI2-3TTGeYc1HfL4AogOF_uqbCov1ueXmQMCYKwHeqDCR9jJqHjldBTuIwi6F3zbaEKO8gpNo1LjfNqWH6PoWxaU9hJ_ifJF6duKoRp6S5Jq9uwDHosHNZ3h6awgd7aj85AHd-DAqKAPOlkrP8GUa3Crkk0GO2tb9V9giZq6Eccs-c8hSN6GSXCTg8gok/s320/prelude-to-foundation.jpg" width="196" /></a></div>Prelude To Foundation<br />Isaac Asimov<br />ISBN-13: 978-0008117481<br /><br />Although I heard an abridged audiobook many times as a child and teenager, and first read Prelude to Foundation more than two decades ago, I loved it more than I can describe and more than any other book I have read for a long time.<br /><br />It’s a good story, well told. While probably not true Space Opera, it has a wide scope. It has all the things I like: spaceships, other worlds, science and even some action.<br /><br />I also realised for the first time that Hari Seldon is both unpleasant and sexist. Maybe this is because I am viewing a book from the mid-eighties through eyes from the 2020s. Maybe this was Asimov’s intention. Maybe it’s how Asimov was. Perhaps reading the other Foundation and Robot novels will help my understanding.<br /><br />Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-92154171346419505012023-08-30T19:02:00.003+01:002023-08-30T19:02:12.594+01:00Not as good as TV! 
A review of Caliban's War, The Expanse Book 2<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyBfD0zl7lMCOpqS1dbCpXoCBMg8lYJrTDJqH5eF3eSkd7YjVMCyBVS7SfyDqjn-Q7PUljH7LSK6JJ7rDReO98FoYfQ_9qSHziu4ND8cyKZFVLhnqtY47tQBN2FVHTc8uOF3k8lmI6zsHlzoRxEB3N5R8G11psEU92u2zen4skZbUsALA96Ij2vDiEweI/s500/calibans%20war.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="500" data-original-width="317" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyBfD0zl7lMCOpqS1dbCpXoCBMg8lYJrTDJqH5eF3eSkd7YjVMCyBVS7SfyDqjn-Q7PUljH7LSK6JJ7rDReO98FoYfQ_9qSHziu4ND8cyKZFVLhnqtY47tQBN2FVHTc8uOF3k8lmI6zsHlzoRxEB3N5R8G11psEU92u2zen4skZbUsALA96Ij2vDiEweI/s320/calibans%20war.jpg" width="203" /></a></div>Caliban's War<br />by James S. A. Corey<br />ISBN: 978-1841499918<br /><br />I was really keen to read this after Leviathan Wakes was so good and after enjoying the TV series so much. Of course there was the pull of the introduction of Chrisjen Avasarala as well, and she really did not disappoint. She was amazing.<br /><br />The majority of the book was a bit ploddy, especially compared to the first, but the exciting bits were super exciting. The events which resolved the climax and sustained one of the main characters were somewhat contrived and convenient, but I could live with that.<br /><br />In this book, the TV series diverged even more. This disappoints me, because the story and events in the book are so much better than what they changed or invented for TV. I guess I have more of this to come moving on to book 3. 
<br /><br /><p></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-14242672160765789792023-07-30T09:48:00.000+01:002023-07-30T09:48:11.312+01:00A Review: Detonation Boulevard <p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRh-l-VQtIzo5UxPnex8xyeKokgUyeLeZFFapup6vZjjxeR5f-XdnjYypbmeJsl0bd_zggkEKhwBbKwg37YQpGn-L81MM8jUgIwa5U7eVkBLVoQ8bBeb0S_mm1VNIBw7v0hCRlM6b5lI9XphEIvio0eW72DAFO0E3ldvdc3ucZ1E-fY1tGcOWYkSE6Dy0/s500/detonation.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="500" data-original-width="333" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRh-l-VQtIzo5UxPnex8xyeKokgUyeLeZFFapup6vZjjxeR5f-XdnjYypbmeJsl0bd_zggkEKhwBbKwg37YQpGn-L81MM8jUgIwa5U7eVkBLVoQ8bBeb0S_mm1VNIBw7v0hCRlM6b5lI9XphEIvio0eW72DAFO0E3ldvdc3ucZ1E-fY1tGcOWYkSE6Dy0/s320/detonation.jpg" width="213" /></a></div>By Alastair Reynolds<br />ASIN: B0C99899GL <br /><br />Take two of my favourite things and my favourite author and what do you get? Formula 1 in space with cyborgs, and who doesn’t love a Sisters of Mercy reference?<br /><br />From a Formula 1 perspective, there’s so much there. Reynolds explains how, in this universe, there are different races on different bodies in the solar system. He alludes to some of the sport’s biggest questions, from how much technology is used, to the role sponsors and money play, to some of the politics around which teams are favoured and what benefits they may get to stay in the sport.<br /><br />He explores a bigger question through the drivers as well.<br /><br />This is a short story, so I read it in two sittings. There definitely could be a larger novel here, but I suspect there won’t be. 
If you’ve an hour or two to spare, give Detonation Boulevard (which I can only hear in my head in Andrew Eldritch’s voice) a read!<br /><br /><br /><br /><p></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-72914513453343011772023-07-28T14:23:00.001+01:002023-07-28T14:23:28.934+01:00A Review: God Emperor of Dune<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggEXujBXXSCmlD-kN5ElFdfwoiA8OW5ZvIhneif1lIMt2RM5hClWeycpp2162PJuqbCYw2dEcmeMIfcKDtAHWJnoPDVICehkyYT_xrXUrGDMzieq-0KBsfrmbmG-xsLonSDbzi_ZHHbgqNG767lbT9rB8-IDo3iQNptuLdhymQ38ngiRiHRrfiCYyC2gA/s389/God_Emperor_of_Dune-Frank_Herbert_(1981)_First_edition.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="389" data-original-width="256" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggEXujBXXSCmlD-kN5ElFdfwoiA8OW5ZvIhneif1lIMt2RM5hClWeycpp2162PJuqbCYw2dEcmeMIfcKDtAHWJnoPDVICehkyYT_xrXUrGDMzieq-0KBsfrmbmG-xsLonSDbzi_ZHHbgqNG767lbT9rB8-IDo3iQNptuLdhymQ38ngiRiHRrfiCYyC2gA/s320/God_Emperor_of_Dune-Frank_Herbert_(1981)_First_edition.jpg" width="211" /></a></div>By Frank Herbert<p></p><p>ISBN: 978-1473233805</p><p>I’ve seen lots of people rave about God Emperor of Dune, the first of the second Dune trilogy, which is set several thousand years after the events of Children of Dune. As far as I’m concerned, it’s ok.<br /><br />It consists mostly of the God Emperor, Leto II, whose body is transitioning into a wormlike state with a protruding, cowled face and arms, giving various other characters his thoughts and feelings on existence and how wonderful and godlike he is.<br /><br />Not much actually happens in the book, few conclusions are drawn and the ending kinda peters out. 
<br /> <br /></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-37860691788132500562023-07-08T20:59:00.002+01:002023-07-08T20:59:16.561+01:00Deploying AWS Lambda with Terraform and GitHub actions<p style="text-align: left;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgirZNcpoybZg2syyBxakG9n2-o4QpdXwWe01nX2ewz-eKgivLz6JZGspoNj5GdiV1dvqT4zrd7zDIaVxxmNl7pAAkEvC70NjOCGynhbghlGujFLK-39386vzQPGgWHZE9qKRX8_ixKbleb0KXyjcj7Hv36NhX6b8FJMIRz9CX5MXZhRr8xNQxucTMiEEo/s2048/HiResedit.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="2048" data-original-width="1966" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgirZNcpoybZg2syyBxakG9n2-o4QpdXwWe01nX2ewz-eKgivLz6JZGspoNj5GdiV1dvqT4zrd7zDIaVxxmNl7pAAkEvC70NjOCGynhbghlGujFLK-39386vzQPGgWHZE9qKRX8_ixKbleb0KXyjcj7Hv36NhX6b8FJMIRz9CX5MXZhRr8xNQxucTMiEEo/s320/HiResedit.jpg" width="307" /></a></div><p style="text-align: left;">Separation of Concerns is a key principle in software engineering. When we used to deploy applications to physical hardware, the two separate concerns of infrastructure and software would often become blurred, as an application would frequently need to be tailored to particular hardware, or more likely to the operating system running on that hardware. <br /><br />In the modern world of Infrastructure as Code (IaC), where everything is software (almost), it’s potentially even more difficult to separate the concerns of infrastructure and software. Organisations often have two teams: a platform team to look after the infrastructure and a development team to develop the software. Once the infrastructure is created and the software is written, there is one area of overlap between the teams: deployment. 
This is where the teams must work together.<br /><br />Consider AWS Lambdas. There are two separate components which are often considered as a whole: the Lambda itself within the AWS infrastructure, and the code which runs within the Lambda. The Lambda itself changes infrequently, while the code changes frequently. Therefore it makes sense to separate the way each is deployed.<br /><br /><a href="https://www.terraform.io/" target="_blank">Terraform</a> is a tool for creating infrastructure from code and is usually the responsibility of the platform team. As we’ll see shortly, Terraform can be used to create Lambdas in AWS and to deploy and redeploy the code. However, if the development team must rely on the platform team every time they need to deploy the Lambda, delays and friction between the two teams may occur. Given the relatively stable nature of the Lambda function itself and the frequent changes to the code, it is sensible to adopt a separate deployment approach for each.<br /><br />Let’s have a look at how we might do this, by first creating a basic Lambda with Terraform and then automating the deployment of iterations of the code running in the Lambda with GitHub Actions.<br /></p><h2 style="text-align: left;">Terraform & AWS</h2>I am assuming that you have a working knowledge of Terraform and AWS, including configuring the AWS provider, somewhere to store Terraform state, AWS VPCs, AWS security groups, etc. The Terraform we develop here together is also available as part of a GitHub repo:<br /><br /><a href="https://github.com/pjgrenyer/email-sender-lambda-platform">https://github.com/pjgrenyer/email-sender-lambda-platform</a><br /><br />and was developed as part of a Lambda-based email sender I’m creating. 
<br /><br />Assuming you have a Terraform project configured and ready to go, let’s start with where to store the code.<br /><h2 style="text-align: left;">Amazon Simple Storage Service (S3)</h2>As we want to store the Lambda code separately from the creation of the Lambda itself, we need somewhere to put it. Amazon Simple Storage Service, more commonly known as S3, is the perfect place. It’s cheap, scalable and simple to use. There are a number of ways of deploying Lambda code, but putting it in a zip file is quick and easy. This makes S3 even more ideal as we can simply upload a new version of the zip file and tell AWS to redeploy the code to the Lambda any time we want to.<br /><br />Let’s start by creating an S3 bucket. Remember that an S3 bucket name must be unique across all of AWS:<br /><br /><span style="font-family: courier;">// s3.tf<br /><br />resource "aws_s3_bucket" "email_sender_lambda" {<br /> bucket = "email-sender-lambda"<br />}</span><br /><br />A quick<br /><br /><span style="font-family: courier;">terraform apply</span><br /><br />should create the new bucket. Next we need to have some simple Lambda function code which we can zip up and use, just once, to initialise the Lambda when it’s first created. For example:<br /><br /><span style="font-family: courier;">// index.js<br /><br />exports.handler = async (event) => {<br /> console.log(event);<br /><br /> const response = {<br /> statusCode: 200,<br /> body: JSON.stringify(event),<br /> };<br /> return response;<br />};</span><br /><br />This short function logs the message received by the Lambda to the console, so that we can see it working in CloudWatch, then converts the message to a string and passes it back, with a status code, in a new object. This means that when we invoke the Lambda, we’ll get back a message which demonstrates it’s working.<br /><br />Use your favourite zip tool to zip up the index.js file into a zip file called email-sender-lambda.zip. 
The zip file will need to be available to Terraform. I like to put it in a subdirectory called lambdas. Then we can use the zip file to create an S3 object which, when the terraform is applied, will upload the zip file into the S3 bucket:<br /><br /><span style="font-family: courier;">// lambda.tf<br /><br />resource "aws_s3_object" "email_sender_lambda" {<br /> bucket = aws_s3_bucket.email_sender_lambda.id<br /> key = "email-sender-lambda.zip"<br /> source = "lambdas/email-sender-lambda.zip"<br />}</span><br /><br />Go ahead and apply the Terraform code to upload the zip file.<br /><h2 style="text-align: left;">Lambda Permissions</h2>Before we can create the Lambda itself, we need to create a role for it and give the role permissions to write logs to CloudWatch; so that we can see that it is working and debug if it doesn’t:<br /><span style="font-family: courier;"><br />// iam.tf<br /><br />resource "aws_iam_role" "lambda" {<br /> name = "lambda"<br /><br /> assume_role_policy = <<EOF<br /> {<br /> "Version": "2012-10-17",<br /> "Statement": [<br /> {<br /> "Action": "sts:AssumeRole",<br /> "Principal": {<br />"Service": "lambda.amazonaws.com"<br /> },<br /> "Effect": "Allow",<br /> "Sid": ""<br /> }<br /> ]<br /> }<br /> EOF<br />}<br /><br />resource "aws_iam_role_policy" "lambda_role_logs_policy" {<br /> name = "LambdaLogsPolicy"<br /> role = aws_iam_role.lambda.id<br /> policy = <<EOF<br />{<br /> "Version": "2012-10-17",<br /> "Statement": [<br /> {<br /> "Action": [<br />"logs:CreateLogGroup",<br />"logs:CreateLogStream",<br />"logs:PutLogEvents"<br /> ],<br />"Effect": "Allow",<br />"Resource": "*"<br /> }<br />]<br />}<br />EOF<br />}</span><br /><br />Now it’s time to create the Lambda.<br /><h2 style="text-align: left;">Lambda</h2>With the zip file containing the code in S3 and the Lambda permissions in place, we have everything we need to go ahead and create the Lambda:<br /><br /><span style="font-family: courier;">// lambda.tf<br /><br />resource 
"aws_lambda_function" "email_sender_lambda" {<br /> s3_bucket = aws_s3_bucket.email_sender_lambda.id<br /> s3_key = aws_s3_object.email_sender_lambda.key<br /> function_name = "email-sender-lambda"<br /> role = aws_iam_role.lambda.arn<br /> handler = "index.handler"<br /> publish = true<br /><br /> runtime = "nodejs16.x"<br /> layers = []<br />}</span><br /><br />The s3_bucket and s3_key properties refer to the S3 bucket and the name of the object to use to get the code for the Lambda. We also give the Lambda a name and its role and tell it where to find the function in the code. We want creating the Lambda to publish a new version of the code, so we set publish to true. Finally, we set the nodejs version to run the code in and specify that there aren’t any AWS Lambda layers we want to use.<br /><br />To prove it works, we can invoke the Lambda with a payload. Remember, our code should return an object with a status code and a string version of the message it receives. In this context, payload, message and event refer to much the same thing. So we need a payload. Create a file called payload.json and put it somewhere that Terraform can access it. I favour the lambdas directory I created before:<br /><br /><span style="font-family: courier;">// payload.json<br /><br />{<br /> "key": "value"<br />}</span><br /><br />Hopefully, as a developer using AWS, you have the <a href="https://aws.amazon.com/cli/" target="_blank">AWS command line tool</a> installed. If not, install and configure it before moving to the next step.<br /><br />To execute the Lambda, we can use the AWS command line tool:<br /><br /><span style="font-family: courier;">aws lambda invoke --function-name email-sender-lambda --payload file://lambdas/payload.json --cli-binary-format raw-in-base64-out response.json && more response.json</span><br /><br />lambda invoke says that we want to execute a Lambda. 
Then we specify the name of the Lambda with<span style="font-family: courier;"> --function-name</span>. Then we give the payload file, the format, and the file we want the response written to. Finally, we print the response from the file to the console. <br /><br />When you execute the Lambda you should get the following response:<br /><br /><span style="font-family: courier;">{<br /> "StatusCode": 200,<br /> "ExecutedVersion": "$LATEST"<br />}<br />{"statusCode":200,"body":"{\"key\":\"value\"}"}</span><br /><br /><br />Change the payload and execute the Lambda again a few times to satisfy yourself that the Lambda is working as expected. Finally, go to CloudWatch, in the AWS console, and find the log group /aws/lambda/email-sender-lambda, find the most recent log stream and see that the Lambda is executing.<br /><br />You’re done with your Terraform code for now, so if you’ve created a git repository for it, commit and push the latest code and then put it to one side. You’ll need it again later.<br /><h2 style="text-align: left;">New Code</h2>Now that we’ve got a Lambda running in AWS we need to demonstrate that we can update the code it’s running without the need to run any Terraform. First we need a suitable nodejs project. I’ve created one containing the code here:<br /><br /><a href="https://github.com/pjgrenyer/email-sender-lambda-code">https://github.com/pjgrenyer/email-sender-lambda-code</a><br /><br />Create the node project in the usual way using:<br /><br /><span style="font-family: courier;">npm init</span><br /><br />and copy the <span style="font-family: courier;">index.js</span> from above in. 
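At this point the handler can also be sanity-checked locally with plain Node, no AWS required. A quick sketch (the handler is pasted inline so the snippet runs on its own):

```javascript
// Mirrors the handler in index.js so it can be exercised without AWS.
const handler = async (event) => {
  console.log(event);

  const response = {
    statusCode: 200,
    body: JSON.stringify(event),
  };
  return response;
};

// Invoke it with a test event, much as Lambda would.
handler({ key: 'value' }).then((response) => {
  console.log(response.statusCode); // 200
  console.log(response.body); // {"key":"value"}
});
```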
Make a slight change to the code so that it brings in the project version from <span style="font-family: courier;">package.json</span> and adds it to the response:<br /><br /><span style="font-family: courier;">const package = require('./package.json');<br /><br />exports.handler = async (event) => {<br /> console.log(event);<br /><br /> const response = {<br /> statusCode: 200,<br /> body: JSON.stringify(event),<br /> version: package.version<br /> };<br /> return response;<br />};</span><br /><br />Now we have a simple way of versioning the code and seeing that the new version is deployed. <br /><h2 style="text-align: left;">Package the Code</h2>Now that we have code, we need to package it. We need a zip file which contains both the<span style="font-family: courier;"> index.js</span> file and now the <span style="font-family: courier;">package.json</span> file. We also want the version number to be part of the zip file’s name, so that we know which version of the code it contains. And we’re developers and that means we’re lazy, so we want to make it as simple to do again and again as possible!<br /><br />There are a number of zip tools. I find the standard zip which comes with the Linux platform I’m using works well, but for this I think we need something we can bundle with our project. <a href="https://www.npmjs.com/package/repack-zip" target="_blank">repack-zip</a> is such a tool and is simple to use. You can install it with:<br /><br /><span style="font-family: courier;">npm i --save-dev repack-zip </span><br /><br />but it does need a little configuration. When you run repack-zip, you pass it a directory to zip the contents of and a name for the zip file. 
There are a few files we want to exclude from the zip, so add the following to your<span style="font-family: courier;"> package.json</span>:<br /><br /><span style="font-family: courier;">"repackZipConfig": {<br /> "excludes": [<br /> "LICENSE",<br /> "README.md",<br /> "package-lock.json"<br /> ]<br /> }</span><br /><br />Then add a script to <span style="font-family: courier;">package.json</span>, so that we can execute repack-zip repeatedly without having to remember the full command, and use the name and version from <span style="font-family: courier;">package.json</span> to name the zip file:<br /><br /><span style="font-family: courier;">"scripts": {<br /> …<br /> "package": "repack-zip . ${npm_package_name}-${npm_package_version}.zip"<br /> },</span><br /><br />Run the script with:<br /><br /><span style="font-family: courier;">npm run package</span><br /><br />And you should find you get a zip file, with a name something like:<br /><span style="font-family: courier;"><br />email-sender-lambda-0.0.1.zip</span><br /><br />containing just the <span style="font-family: courier;">index.js</span> and <span style="font-family: courier;">package.json</span> files, which is what we want.<br /><br />You may also notice that the zip file does not contain a <span style="font-family: courier;">node_modules</span> directory, even though we didn’t exclude it. This is because repack-zip is clever and knows there are only dev dependencies. If there were non-dev dependencies, the <span style="font-family: courier;">node_modules</span> directory would be included in the zip, but the dev dependencies would be excluded.<br /><h2 style="text-align: left;">Push and Publish</h2>Now that we have a zip file containing an update of the code for our Lambda, it’s time to publish it. First we must push the zip file into S3. 
We’re still feeling lazy and we need the project name and version from <span style="font-family: courier;">package.json</span>, so we’ll create a script for that too using the AWS command line tool. Add the following script to the <span style="font-family: courier;">package.json</span>:<br /><br /><span style="font-family: courier;">"upload": "aws s3 cp ${npm_package_name}-${npm_package_version}.zip s3://email-sender-lambda/${npm_package_name}-${npm_package_version}.zip",<br /></span><br />It’s quite straightforward. We’re telling the tool to copy our zip file into the bucket and what to call it in the bucket. Run it:<br /><br /><span style="font-family: courier;">npm run upload</span><br /><br />and see what happens:<br /><br /><span style="font-family: courier;">> email-sender-lambda-code@0.0.1 upload<br />> aws s3 cp ${npm_package_name}-${npm_package_version}.zip s3://email-sender-lambda/${npm_package_name}-${npm_package_version}.zip<br /><br />upload: ./email-sender-lambda-code-0.0.1.zip to s3://email-sender-lambda/email-sender-lambda-code-0.0.1.zip<br /></span><br />Hopefully your output is similar and the zip file is now in the S3 bucket alongside the original zip file. Log into the AWS console and check.<br /><br />Next we need to tell the Lambda to use the new code. Again this can be done with the AWS command line tool and a <span style="font-family: courier;">package.json</span> script:<br /><span style="font-family: courier;"><br />"publish": "aws lambda update-function-code --function-name email-sender-lambda --s3-bucket email-sender-lambda --s3-key ${npm_package_name}-${npm_package_version}.zip"</span><br /><br />This is quite straightforward too. We’re telling the tool that we want to update the Lambda’s code with <span style="font-family: courier;">update-function-code</span>. 
We’re telling it which Lambda function with<span style="font-family: courier;"> --function-name</span>, which S3 bucket the code is in with<span style="font-family: courier;"> --s3-bucket</span> and what the zip file containing the code is called with <span style="font-family: courier;">--s3-key</span>. Run it:<br /><span style="font-family: courier;"><br />npm run publish</span><br /><br />You should get quite a lot of output, but it should indicate that it was successful. <br /><br />Now all you should need to do is execute the Lambda and see the new response, which includes the version number. Don’t forget to copy or recreate the <span style="font-family: courier;">payload.json</span> file from the platform project, add it to the repack-zip excludes, and create another <span style="font-family: courier;">package.json</span> script:<br /><br /><span style="font-family: courier;">"invoke": "aws lambda invoke --function-name email-sender-lambda --payload file://payload.json --cli-binary-format raw-in-base64-out response.json && more response.json"</span><br /><br />When you run it:<br /><br /><span style="font-family: courier;">npm run invoke</span><br /><br />You should see some output along the lines of:<br /><br /><span style="font-family: courier;">{<br /> "StatusCode": 200,<br /> "ExecutedVersion": "$LATEST"<br />}<br />{"statusCode":200,"body":"{\"key\":\"value\"}","version":"0.0.1"}</span><br /><br />Fantastic! Success! There’s the new version number, but are we sure? 
Let’s check by increasing the version number:<br /><br /><span style="font-family: courier;">"version": "0.0.2"</span><br /><br />Then republish, which we can do via a further script which packages, uploads and publishes in one go:<br /><br /><span style="font-family: courier;">"package-upload-publish": "npm run package && npm run upload && npm run publish",<br /><br />npm run package-upload-publish</span><br /><br />And then execute the Lambda once more.<br /><br /><span style="font-family: courier;">npm run invoke</span><br /><br />And you should see:<br /><br /><span style="font-family: courier;">{<br /> "StatusCode": 200,<br /> "ExecutedVersion": "$LATEST"<br />}<br />{"statusCode":200,"body":"{\"key\":\"value\"}","version":"0.0.2"}</span><br /><br />I’m convinced and I hope you are too! You should now be able to easily publish new code to an existing Lambda.<br /><h2 style="text-align: left;">Continuous Deployment</h2><a href="https://en.wikipedia.org/wiki/Continuous_deployment" target="_blank">Continuous Deployment</a> is the process of automatically deploying software without a manual step. For example, creating and pushing a zip file of code to an S3 bucket and publishing the code to a Lambda whenever code is pushed to a repository.<br /><br />GitHub Actions are a great way to create pipelines which implement Continuous Deployment and, as we’re lazy developers, Continuous Deployment is exactly what we want. <br /><h2 style="text-align: left;">GitHub Actions</h2>When we check in some code which is ready to be deployed, <a href="https://github.com/features/actions" target="_blank">GitHub Actions</a> can take that code and deploy it to AWS without manual intervention. There are a number of ways to identify code which is ready to be deployed. I favour using a particular branch. 
I usually use the <a href="https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow" target="_blank">git flow</a> process and when code is pushed to the "develop" branch, it is deployed to my Dev environment and when a release is created, merged and pushed to the "main" branch, it is deployed to my production environment. For our Lambda though, we’ll get it to deploy every time we push new code.<br /><br />Let’s create a continuous delivery pipeline!<br /><br />Make sure you have your Lambda code pushed to a GitHub repository. GitHub Actions are configured via a YAML file placed in the <span style="font-family: courier;">workflows</span> directory in the <span style="font-family: courier;">.github</span> directory, which is in the root of the project. Start off by creating a YAML file called <span style="font-family: courier;">publish.yml</span> in the workflows directory. You’ll most likely need to create both the <span style="font-family: courier;">.github</span> directory and the <span style="font-family: courier;">workflows</span> directory.<br /><br /><span style="font-family: courier;"># .github/workflows/publish.yml<br /><br />on:<br />  push<br /><br />jobs:<br />  build:<br />    runs-on: ubuntu-latest<br />    steps:<br />      - uses: actions/checkout@v3<br />      - uses: actions/setup-node@v3<br />        with:<br />          node-version: '18.x'<br />      - run: npm install -g npm<br />      - run: npm ci<br />      - run: npm run package-upload-publish<br />        env:<br />          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}<br />          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}<br />          AWS_DEFAULT_REGION: 'eu-west-2'</span><br /><br />The <span style="font-family: courier;">on</span> element tells GitHub Actions when to run the job. In this case, when code is pushed to any branch. The job runs inside an Ubuntu container so that we’ve got all the tools we need. The code is checked out and node 18 is installed. Then npm is updated to its latest version, followed by the installation of the Lambda project’s dependencies. 
Then the <span style="font-family: courier;">package-upload-publish</span> script we created is executed to package and publish the Lambda. We know the script uses the AWS command line which, fortunately, is available to us in this environment. However, we need to give it the AWS keys and region, which is done via environment variables.<br /><br />Go on, add, commit and push the file to GitHub. Then go to the Actions tab of the repository and you’ll see the action fail at the point it tries to push the zip file to S3. This is because we haven’t added the values for the AWS environment variables. You could go ahead and add the key and secret you’ve been using for development, but the chances are this either has full admin permissions or more permissions than the action needs. In either case this is an unnecessary security risk.<br /><h2 style="text-align: left;">AWS Lambda Deployment User</h2>When accessing AWS from a third party system, such as GitHub, it’s good practice to use a user with only the minimum necessary permissions so that if the user’s credentials are compromised the amount of damage which can be done with them is minimised.<br /><br />With Terraform we can create a new user, the necessary keys and some restricted policies. Go back to your platform project and let’s add a new user called email-sender-lambda-deploy:<br /><br /><span style="font-family: courier;">resource "aws_iam_user" "email_sender_lambda_deploy" {<br /> name = "email-sender-lambda-deploy"<br /> force_destroy = true<br />}<br /></span><br />Setting <span style="font-family: courier;">force_destroy</span> to true allows Terraform to destroy and recreate the user even if it has access keys or other resources which weren’t created through Terraform. 
Next we need a user policy which will allow the user to access S3:<br /><br /><span style="font-family: courier;">resource "aws_iam_user_policy" "lambda_s3_deploy_policy" {<br /> name = "EmailSenderLambdaS3DeployPolicy"<br /> user = aws_iam_user.email_sender_lambda_deploy.id<br /><br /> policy = <<EOF<br />{<br /> "Version": "2012-10-17",<br /> "Statement": [<br /> {<br /> "Sid": "VisualEditor0",<br /> "Effect": "Allow",<br /> "Action": [<br /> "s3:PutObject",<br /> "s3:GetObject"<br /> ],<br /> "Resource": "arn:aws:s3:::email-sender-lambda/email-sender-lambda*"<br /> }<br /> ]<br />}<br />EOF<br />}</span><br /><br />This policy permits only the put object and get object actions on the email-sender-lambda bucket, and only on files beginning with email-sender-lambda. No other actions, buckets or file names are permitted. Put object is required to push the zip file into the S3 bucket and get object is required to allow the code to be published to the Lambda.<br /><br />We also need a user policy to allow us to publish the code to the Lambda:<br /><span style="font-family: courier;"><br />resource "aws_iam_user_policy" "lambda_function_deploy_policy" {<br /> name = "EmailSenderLambdaFunctionDeployPolicy"<br /> user = aws_iam_user.email_sender_lambda_deploy.id<br /><br /> policy = <<EOF<br />{<br /> "Version": "2012-10-17",<br /> "Statement": [<br /> {<br /> "Sid": "VisualEditor0",<br /> "Effect": "Allow",<br /> "Action": "lambda:UpdateFunctionCode",<br /> "Resource": "arn:aws:lambda:eu-west-2:100241228786:function:email-sender-lambda"<br /> }<br />]<br />}<br />EOF<br />}</span><br /><br />Here the only action is <span style="font-family: courier;">UpdateFunctionCode</span> and it’s restricted to our Lambda.<br /><br />We also need some keys to give to GitHub so that it can use the user:<br /><br /><span style="font-family: courier;">resource "aws_iam_access_key" "email_sender_lambda_deploy" {<br /> user = aws_iam_user.email_sender_lambda_deploy.name<br /> pgp_key = 
var.pgp_key<br />}<br /><br />output "secret" {<br /> value = aws_iam_access_key.email_sender_lambda_deploy.encrypted_secret<br />}<br /><br />output "id" {<br /> value = aws_iam_access_key.email_sender_lambda_deploy.id<br />}</span><br /><br />This is quite straightforward: generate some keys for the user and output the key and the secret when the Terraform is applied. The interesting bit is the <span style="font-family: courier;">pgp_key</span>, which I’ve put into a variable:<br /><br /><span style="font-family: courier;">variable "pgp_key" {}</span><br /><br />The PGP key is used to encrypt the secret before it’s printed to the console. There’s more about it in the Terraform documentation:<br /><br /><a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key">https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key</a><br /><br />I opted for the <a href="https://keybase.io/">Keybase</a> solution, which requires setting up an account and installing Keybase locally. Once the Terraform is applied:<br /><br /><span style="font-family: courier;">terraform apply<br /><br />...<br /><br />Apply complete! Resources: 2 added, 0 changed, 0 destroyed.<br /><br />Outputs:<br /><br />id = "AKI..."<br />secret = "wcFMAwY4M..."</span><br /><br />the secret can be decrypted with:<br /><br /><span style="font-family: courier;">terraform output -raw secret | base64 --decode | keybase pgp decrypt</span><br /><br />Now that you have a key and a secret, go to the settings tab in the GitHub repository which contains the Lambda code, then “Secrets and variables” and then “Actions” and add the key as a secret called <span style="font-family: courier;">AWS_ACCESS_KEY_ID</span> and the secret as a secret called <span style="font-family: courier;">AWS_SECRET_ACCESS_KEY</span>. Then go to the Actions tab, find the failed job and rerun it. 
You should find it works this time.<br /><br />Go back to your Lambda code, up the version number in <span style="font-family: courier;">package.json</span>, then add, commit and push the change. Watch the GitHub Action deploy the Lambda and then execute it from the command line, as before, to prove that the new version of the Lambda has been deployed automatically.<br /><h2>Finally</h2><p>Here we’ve looked at how to create a basic AWS Lambda with Terraform, build, upload and publish new Lambda code independently and automate deployments of new versions of the code.<br /><br />This is sustainable as long as you’re always publishing new code versions and don’t need to roll back. If you have to rebuild your infrastructure from scratch or roll a version back, then a manual intervention is required to get the right version of code deployed. There are ways to do this, including a hybrid approach using Terraform, but that’s for another time.<br /><br />The AWS roles we created need to be tightened up and bound to specific resources. 
Giving AWS keys to GitHub Actions isn’t necessarily the most secure configuration either and you could consider: <a href="https://aws.amazon.com/blogs/security/use-iam-roles-to-connect-github-actions-to-actions-in-aws/">https://aws.amazon.com/blogs/security/use-iam-roles-to-connect-github-actions-to-actions-in-aws/</a><br /><br />I’m sure you get the idea though and can see this is one way of separating the concern of infrastructure from the concern of code.</p><p></p><p>Thank you to <a href="https://www.linkedin.com/in/samwpennington/" target="_blank">Sam Pennington</a> and <a href="https://www.linkedin.com/in/stephencresswell/" target="_blank">Steve Cresswell</a> for inspiration and review.<br /><br /></p><p style="text-align: left;"></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-13570975250170241382023-07-02T12:42:00.000+01:002023-07-02T12:42:06.800+01:00A Review: Storm of Swords: Part 1 Steel and Snow<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWMP5OHnA8JSBSOd_r3Bi4HnRQnu_Wat4Iva-5DLOXR2bjtQYYv7Ek0V_vVN9cUtbDmwwkv11_W8Sr-hWMChyLczLD32ADxA4N1SPwQaqnZJ_IBxX8lvButTWYCk1pVweFNkJyFLKFy3eIQxOCv7V1B5YE_ixvVa3TuLIo5qMM--YkDsyjV0kG9FoAvaY/s614/clasof.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="614" data-original-width="400" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWMP5OHnA8JSBSOd_r3Bi4HnRQnu_Wat4Iva-5DLOXR2bjtQYYv7Ek0V_vVN9cUtbDmwwkv11_W8Sr-hWMChyLczLD32ADxA4N1SPwQaqnZJ_IBxX8lvButTWYCk1pVweFNkJyFLKFy3eIQxOCv7V1B5YE_ixvVa3TuLIo5qMM--YkDsyjV0kG9FoAvaY/s320/clasof.jpg" width="208" /></a></div>Storm of Swords: Part 1 Steel and Snow<br />George R. R. Martin<br />ISBN: 9780007447848<br /><br /><a href="https://paulgrenyer.blogspot.com/2022/02/a-clash-of-kings-review.html" 
target="_blank">I didn’t like the previous book, A Clash of Kings</a> and Storm of Swords isn’t a great deal better, but it is better. <br /><br />I really enjoyed the multiple threads and, of course, learning more about the characters we all know from the TV series. There are hints about the Red Wedding and I was expecting it to be in the final part of the book, but the last 5% turned out to be appendices this time. Perhaps it’s at the start of the second part. I’ll get to it.<br /><br /><p></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-7382829503315178172023-06-04T07:25:00.004+01:002023-06-04T07:28:34.023+01:00Bloomreach Transactional Email API Client<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD-12470e9s8n_yMkexOXU-hh3bRwC7eYr9iRtalCMgp24qi-ezt1IKTADSlEAGIIQGRmwxagpW4uonVVLBuw8slpztyIpVA2ge4pxShvGz5jzmt3ru3IGGCSRQMpEP8YbANqkbTNEErxiGn3aLMz2xxTKfre6y2zO7JARS_v44GzqtQNkh1FyNpb3/s2048/HiResedit.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="2048" data-original-width="1966" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD-12470e9s8n_yMkexOXU-hh3bRwC7eYr9iRtalCMgp24qi-ezt1IKTADSlEAGIIQGRmwxagpW4uonVVLBuw8slpztyIpVA2ge4pxShvGz5jzmt3ru3IGGCSRQMpEP8YbANqkbTNEErxiGn3aLMz2xxTKfre6y2zO7JARS_v44GzqtQNkh1FyNpb3/s320/HiResedit.jpg" width="307" /></a></div><p></p><p>An unofficial, feature-complete JavaScript client library for sending transactional emails via <a href="https://www.bloomreach.com" target="_blank">Bloomreach</a>.</p><p>The aim of the <a href="https://www.npmjs.com/package/bloomreach-transactional-email" target="_blank">bloomreach-transactional-email</a> package is to get you going with the Bloomreach Transactional Email API as quickly as possible. 
The sendEmail function takes the minimum number of required parameters to send an email. Other parameters are optional. Full details of all the options can be found in the <a href="https://documentation.bloomreach.com/engagement/reference/transactional-email-2" target="_blank">Bloomreach Transactional Email API documentation</a>.<br /><br />bloomreach-transactional-email uses <a href="https://www.npmjs.com/package/axios" target="_blank">axios</a>, as a peer dependency, to make HTTP calls.<br /><br /><b>Install</b><br /><br /><span style="font-family: courier;">npm i --save bloomreach-transactional-email</span><br /><br /><b>Basic Examples</b><br /><br />If you have Customer IDs and a default email integration with a sender name and address set up in Bloomreach then you can use the minimum configuration to send an email by specifying an HTML body and a subject:<br /><br /><span style="font-family: courier;">import { sendEmail } from 'bloomreach-transactional-email';<br /><br />const auth = {<br /> username: '...', // Your APIKeyID<br /> password: '...', // Your APISecret<br /> baseUrl: 'https://api.exponea.com', // Your base url<br /> projectToken: '...', // Your project token<br />};<br /><br />const campaignName = 'MyCampaign';<br /><br />const customerIds = {<br /> registered: 'marian@exponea.com'<br />};<br /><br />const htmlContent = {<br /> html: '<!DOCTYPE html><html><body>Hello world</body></html>',<br /> subject: 'SubjectExample',<br />}<br /><br />await sendEmail(auth, campaignName, customerIds, htmlContent);</span><br /><br />If you have a template set up you can also send an email using it:<br /><br /><span style="font-family: courier;">const templateContent = {<br /> templateId: '60758e2d18883e1048b817a8',<br /> params: { first_name: 'Marian' }<br />}<br /><br />await sendEmail(auth, campaignName, customerIds, templateContent);</span><br /><br />If you don’t have Customer IDs set up in Bloomreach you can specify the email address to send the email to (you still 
need to specify Customer IDs). If you have language variants of your template, you can specify the language. You can also specify the sender name and sender address:<br /><br /><span style="font-family: courier;">await sendEmail(<br /> auth,<br /> campaignName,<br /> customerIds,<br /> htmlContent,<br /> { <br /> email: 'jon.doe@example.com',<br /> language: 'en',<br /> senderAddress: 'marian@exponea.com',<br /> senderName: 'Marian'<br /> }<br />);</span><br /><br /><b>Integrations</b><br /><br />You can specify either a single integration:<br /><br /><span style="font-family: courier;">await sendEmail(<br /> auth,<br /> campaignName,<br /> customerIds,<br /> htmlContent,<br /> { <br /> integrationId: "5b337eceeb7cdb000d4e20f3"<br /> }<br />);</span><br /><br />or up to two integrations, a primary and a backup in case the primary fails, with individual sender addresses:<br /><br /><span style="font-family: courier;">await sendEmail(<br /> auth,<br /> campaignName,<br /> customerIds,<br /> htmlContent,<br /> { <br /> integrations: [<br /> {<br /> id: "5b337eceeb7cdb000d4e20f3",<br /> senderAddress: "marian@exponea.com",<br /> },<br /> {<br /> id: "3f02e4d000bdc7beece733b5",<br /> senderAddress: "marian@exponea.com",<br /> }<br /> ]<br /> }<br />);</span><br /><br /><b>Transfer Identity</b><br /><br />You can specify a transfer identity of:<br /></p><ul style="text-align: left;"><li>enabled</li><li>disabled</li><li>first_click</li></ul><p><span style="font-family: courier;">await sendEmail(<br /> auth,<br /> campaignName,<br /> customerIds,<br /> htmlContent,<br /> { <br /> transferIdentity: 'disabled'<br /> });</span><br /><br /><b>Attachments</b><br /><br />You can add an array of attachments with base64 encoded content:<br /><br /><span style="font-family: courier;">await sendEmail(<br /> auth,<br /> campaignName,<br /> customerIds,<br /> htmlContent,<br /> {}, // Options object can also be undefined<br /> [<br /> {<br /> filename: 'example1.txt',<br /> content: 
'RXhhbXBsZSBhdHRhY2htZW50',<br /> contentType: 'text/plain',<br /> },<br /> {<br /> filename: 'example2.txt',<br /> content: 'RXhhbXBsZSBhdHRhY2htZW50',<br /> contentType: 'text/plain',<br /> },<br /> ]);</span><br /><br /><b>Settings</b></p><p>You can also add:</p><ul style="text-align: left;"><li>Custom Event Properties</li><li>Custom Headers</li><li>URL Params</li><li>Transfer User Identity</li><li>Consent Category</li><li>Consent Category Tracking</li></ul><p><br />Check the <a href="https://documentation.bloomreach.com/engagement/reference/transactional-email-2" target="_blank">Bloomreach Transactional Email API documentation</a> for details:<br /><br /><span style="font-family: courier;">await sendEmail(<br /> auth,<br /> campaignName,<br /> customerIds,<br /> htmlContent,<br /> {}, // Options object can also be undefined<br /> [], // Attachments array can also be undefined<br /> {<br /> customEventProperties: {<br /> banana: 'yellow',<br /> 1: 2,<br /> },<br /> customHeaders: {<br /> source: 'your-company',<br /> 1: 2,<br /> },<br /> urlParams: {<br /> source: 'email',<br /> 1: 2,<br /> },<br /> transferUserIdentity: 'first_click',<br /> consentCategory: 'sms',<br /> consentCategoryTracking: 'sms',<br /> });</span><br /><br /></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-30263710029732917232023-05-06T08:55:00.006+01:002023-05-06T08:56:13.323+01:00A Review: Leviathan Wakes: The Expanse, Book 1<p dir="ltr" id="docs-internal-guid-a3ab7007-7fff-4361-0387-5dac582ece83" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZAIujl8B7kd9HfGI6xDxMG0hpqGKdJ4a-vn4q4SPlx_GfQ434RtEloisdykvqei-ToK28ZVkjfz4NbNiEOEfRX3lnSex2_2ueCotuj5_5Z-7BGIKjyRWJrimfHis-j5qz-1YlnADT7goyyxQbMflPbUb6ocWXZu0oN71xHx87_a9k2-8I4LLiiCq1/s500/leviathan-wakes.jpg" style="clear: left; float: left; margin-bottom: 1em; 
margin-right: 1em;"><img border="0" data-original-height="500" data-original-width="319" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZAIujl8B7kd9HfGI6xDxMG0hpqGKdJ4a-vn4q4SPlx_GfQ434RtEloisdykvqei-ToK28ZVkjfz4NbNiEOEfRX3lnSex2_2ueCotuj5_5Z-7BGIKjyRWJrimfHis-j5qz-1YlnADT7goyyxQbMflPbUb6ocWXZu0oN71xHx87_a9k2-8I4LLiiCq1/s320/leviathan-wakes.jpg" width="204" /></a></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">by James S. A. Corey</span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">ISBN-13: 978-0316333429</span></p><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span></p><span style="font-family: Arial;">I was apprehensive about reading Leviathan Wakes as a friend had suggested it was boring compared to the TV series, which I loved. It wasn’t! The Protomolecule is brown goo, rather than bright glittery stuff, but that really didn’t matter. Chrisjen Avasarala, one of my favourite characters from the TV series, and the earth government don’t feature at all. It’ll be interesting to see if she appears later in the series. Some of the other bits invented for TV I didn’t feel were necessary. 
The main characters were mostly the same and I felt like I already knew them.<br /><br />I struggle to think of Leviathan Wakes as a space opera. There are only really two threads and the scope isn’t particularly broad. However, there is loads of potential for the future books and I’m really looking forward to them.</span><br />Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-77436700308140179842023-05-04T19:26:00.003+01:002024-03-04T14:33:51.229+00:00Write Your Own Load Balancer: A Worked Example<p></p><div class="separator" style="clear: both; text-align: left;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-CtBtApyWJ1wOuBQVQb4T4ErVgfdwTmSrpsP8omEgb-HcZjd9t9J5oS_n-w40Mx-EsZr0UhcpZLgjV48HVLgYjxEQ2DbXClbLh76HqpQJ5coAHisUjVawkihiN0ke9mjzSY0fyUcXG0Ln8Yc2AzIsSZl2n7CCwWr_oz9Zw0vt4lGX3XGM-4OkutEG/s2048/HiResedit.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="2048" data-original-width="1966" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-CtBtApyWJ1wOuBQVQb4T4ErVgfdwTmSrpsP8omEgb-HcZjd9t9J5oS_n-w40Mx-EsZr0UhcpZLgjV48HVLgYjxEQ2DbXClbLh76HqpQJ5coAHisUjVawkihiN0ke9mjzSY0fyUcXG0Ln8Yc2AzIsSZl2n7CCwWr_oz9Zw0vt4lGX3XGM-4OkutEG/s320/HiResedit.jpg" width="307" /></a>I was out walking with a techie friend of mine I’d not seen for a while and he asked me if I’d written anything recently. I hadn’t, other than an article on data sharing a few months before, and I realised I was missing it. Well, not the writing itself, but the end result.</div><br />In the last few weeks, another friend of mine, <a href="https://www.linkedin.com/in/johncrickett/" target="_blank">John Crickett</a>, has been setting weekly code challenges via LinkedIn and his new website, <a href="https://codingchallenges.fyi/">https://codingchallenges.fyi/</a>. 
They were all quite interesting, but one in particular on writing load balancers appealed, so I thought I’d kill two birds with one stone and write up a worked example.<br /><br />You’ll find my worked example below. The challenge itself is in italics and the voice is that of John Crickett.<p></p><h2 style="text-align: left;">The Coding Challenge</h2><p><a href="https://codingchallenges.fyi/challenges/challenge-load-balancer/">https://codingchallenges.fyi/challenges/challenge-load-balancer/</a><br /></p><h3 style="text-align: left;">Write Your Own Load Balancer</h3><p><i>This challenge is to build your own application layer load balancer.<br /><br />A load balancer sits in front of a group of servers and routes client requests across all of the servers that are capable of fulfilling those requests. The intention is to minimise response time and maximise utilisation whilst ensuring that no server is overloaded. If a server goes offline the load balancer redirects the traffic to the remaining servers and when a new server is added it automatically starts sending requests to it.<br /><br />Load balancers can work at different levels of the <a href="https://en.wikipedia.org/wiki/OSI_model#Layer_architecture" target="_blank">OSI seven-layer network model</a> for example most cloud providers offer application load balancers (layer seven) and network load balancers (layer four). 
We’re going to focus on a layer seven - application load balancer, which will route HTTP requests from clients to a pool of HTTP servers.<br /><br />A load balancer performs the following functions:<br /></i></p><ul style="text-align: left;"><li><i>Distributes client requests/network load efficiently across multiple servers</i></li><li><i>Ensures high availability and reliability by sending requests only to servers that are online</i></li><li><i>Provides the flexibility to add or subtract servers as demand dictates</i></li></ul><p><i>Therefore our goals for this project are to:<br /><br /></i></p><ol style="text-align: left;"><li><i>Build a load balancer that can send traffic to two or more servers.</i></li><li><i>Health check the servers.</i></li><li><i>Handle a server going offline (failing a health check).</i></li><li><i>Handle a server coming back online (passing a health check).</i></li></ol><h3 style="text-align: left;"><i>Step Zero</i></h3><p style="text-align: left;"><i>As usual this is where you select the programming language you’re going to use for this challenge, set up your IDE and grab a coffee (or other beverage of your choice).<br /><br />If you’re not used to building systems that handle multiple concurrent events you might like to grab your beverage of choice and do some background reading on multi-threading, concurrency and asynchronous programming as it relates to your programming language of choice.</i><br /><br />This is an easy choice for me as most of the work I’m doing at the moment is in node and it supports the sort of concurrency I need. 
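As an aside, the concurrency Node provides comes from its event loop, which interleaves async work in a single thread rather than spawning a thread per request. A tiny illustrative sketch, where fakeRequest is a made-up stand-in for a real HTTP call:

```typescript
// Illustration only: Node's event loop interleaves async work in a single
// thread, so three simulated 100ms requests complete in ~100ms, not 300ms.
const fakeRequest = (id: number, delayMs: number): Promise<number> =>
    new Promise((resolve) => setTimeout(() => resolve(id), delayMs));

const main = async (): Promise<void> => {
    const started = Date.now();
    // All three "requests" are in flight at the same time.
    const results = await Promise.all([
        fakeRequest(1, 100),
        fakeRequest(2, 100),
        fakeRequest(3, 100),
    ]);
    console.log(results); // [ 1, 2, 3 ]
    console.log(`took ~${Date.now() - started}ms`);
};

main();
```

This is why a single express process can happily serve many simultaneous load-balancer requests without any explicit threading.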
To make getting started easy, I created a node project template which supports TypeScript, because all good developers love type safety, right?<br /><br /><a href="https://github.com/pjgrenyer/node-typescript-template" target="_blank">https://github.com/pjgrenyer/node-typescript-template</a><br /><br /></p><h3 style="text-align: left;">Step 1</h3><p style="text-align: left;"><i>In this step your goal is to create a basic server that can start up, listen for incoming connections and then forward them to a single server.<br /><br />The first sub-step then is to create a program (I’ll call it ‘lb’) that will start up and listen for connections on a specified port (i.e. 80 for HTTP). I’d suggest you then log a message to standard out confirming an incoming connection, something like this:<br /><br /></i><span style="font-family: courier;">./lb<br />Received request from 127.0.0.1<br />GET / HTTP/1.1<br />Host: localhost<br />User-Agent: curl/7.85.0<br />Accept: */*</span><i><br /><br />For when I sent a test request to our load balancer like this:<br /></i><br /><span style="font-family: courier;">curl http://localhost/</span><br /><br />Cool! Time to write some code. 
Fortunately it’s really easy to create a program which can listen for connections in node with <a href="https://expressjs.com/" target="_blank">Express</a>, and really easy to get going by making a copy of the node typescript template project I created:<br /><br /><a href="https://github.com/pjgrenyer/coding-challenges-lb">https://github.com/pjgrenyer/coding-challenges-lb</a><br /><br />And creating a new feature branch:<br /><br /><span style="font-family: courier;">git flow feature start listener</span><br /><br />Then all I needed was to install express:<br /><br /><span style="font-family: courier;">npm install express --save<br />npm install @types/express --save-dev</span><br /><br />and fire up an express server:<br /><br /><span style="font-family: courier;">// index.ts<br /><br />import express, { Request, Response } from "express";<br /><br />const port = 8080;<br />const app = express();<br /><br />app.get('/', (req: Request, res: Response) => {<br /> res.send('Code Challenge!')<br />})<br /><br />app.listen(port, () => {<br /> console.log(`Listening on port ${port}`)<br />});</span><br /><br />run the app:<br /><br /><span style="font-family: courier;">npm run dev<br /><br />> coding-challenges-lb@1.0.1 dev<br />> npm run build && node dist/index.js<br /><br /><br />> coding-challenges-lb@1.0.1 build<br />> rimraf dist && tsc<br /><br />Listening on port 8080</span><br /><br />and check it works with curl:<br /><br /><span style="font-family: courier;">curl localhost:8080<br /><br />Code Challenge!</span><br /><br />The code challenge suggests using port 80 for the load balancer, but I’m developing on Linux where this port requires elevated privileges, so I’ve gone for port 8080 instead.<br /><br />The code challenge suggests logging the request, which is easily done with node and express, by modifying the request handler:<br /><br /><span style="font-family: courier;">app.get('/', (req: Request, res: Response) => {<br /> console.log(req);<br /> res.send('Code Challenge!');<br />});</span><br /><br />I’ll 
not show the output here as the express request object is very large.<br /><br />That’s it for this part of the step, so I committed the code:<br /><br /><span style="font-family: courier;">git add .<br />git commit -m "feat: added listener"</span><br /><br />And completed the feature branch:<br /><br /><span style="font-family: courier;">git flow feature finish</span><br /><br />before going back to the code challenge instructions.<br /><br /><i>Next up we want to forward the request made to the load balancer to a back end server. This involves opening a connection to the back end server, making the same request to it that we received, then passing the result back to the client.<br />In order to handle multiple clients making requests you’ll need to add some concurrency, either with your programming language’s async framework or threads.</i></p><p style="text-align: left;"><i>I decided to use a modified version of my code from the beginning of this step as my backend. It was changed to respond with a HTTP status of 200 and the text: ‘Hello From Backend Server’.</i></p><p style="text-align: left;">I decided to do much the same thing. Node can handle multiple clients making calls out of the box. I created another new project:</p><p style="text-align: left;"><a href="https://github.com/pjgrenyer/coding-challenges-be">https://github.com/pjgrenyer/coding-challenges-be</a></p><p style="text-align: left;">which is much the same as the load balancer, but with a few changes. I know from reading ahead in the code challenge I’m going to need to run the backend server on multiple ports, so I made that an environment variable and returned it as part of the response message:</p><p style="text-align: left;"><br /><span style="font-family: courier;">// index.ts</span></p><p style="text-align: left;"><span style="font-family: courier;">import express, { Request, Response } from 'express';<br /><br />const port = process.env.PORT ? 
+process.env.PORT : 8081;<br />const app = express();<br /><br />app.get('/', (req: Request, res: Response) => {<br /> res.send(`Hello From Backend Server (${port})`);<br />});<br /><br />app.listen(port, () => {<br /> console.log(`Listening on port ${port}`);<br />});</span><br /><br />Now, if I run the app as normal it will run on port 8081 by default, but if I set the port environment variable:<br /><br /><span style="font-family: courier;">PORT=8082 npm run dev<br /></span><br />then the app will run on the specified port:<br /><br /><span style="font-family: courier;">> coding-challenges-be@1.0.1 dev<br />> npm run build && node dist/index.js<br /><br />> coding-challenges-be@1.0.1 build<br />> rimraf dist && tsc<br /><br />Listening on port 8082</span><br /><br />and the port number is returned as part of the response:<br /><br /><span style="font-family: courier;">curl localhost:8082<br /><br />Hello From Backend Server (8082)</span><br /><br />Next, I need to modify the load balancer to call the backend server. 
<br /><br />Until recently Node didn’t support fetch natively like a browser’s implementation of JavaScript does (native fetch arrived, experimentally, in Node 18) and even though there is a <a href="https://www.npmjs.com/package/node-fetch" target="_blank">node-fetch</a> package, I prefer <a href="https://www.npmjs.com/package/axios" target="_blank">Axios</a> for no other reason than its implementation makes a little more sense to me:<br /><br /><span style="font-family: courier;">npm i axios --save</span><br /><br />Axios ships with its own type definitions, so no separate types package is needed. With Axios installed I can call the backend directly and return the response:<br /><br /><span style="font-family: courier;">import express, { Request, Response } from 'express';<br />import axios from 'axios'<br /><br />const port = 8080;<br />const app = express();<br /><br />app.get('/', async (req: Request, res: Response) => {<br /> const response = await axios.get('http://localhost:8081');<br /> res.send(response.data);<br />});<br /><br />app.listen(port, () => {<br /> // eslint-disable-next-line no-console<br /> console.log(`Listening on port ${port}`);<br />});</span><br /><br />Now with the backend running on port 8081 and the load balancer on 8080, making a request to the load balancer from curl gives the backend response:<br /><span style="font-family: courier;"><br />curl localhost:8080<br /><br />Hello From Backend Server (8081)</span><br /><br />And now it’s on to step 2.<br /></p><h3 style="text-align: left;">Step 2</h3><p style="text-align: left;"><i>In this step your goal is to distribute the incoming requests between two servers using a basic scheduling algorithm - round robin.<br /><br />Round robin is the simplest form of static load balancing. In a nutshell it works by sending each new request to the next server in the list of servers. 
When we’ve sent a request to every server, we start back at the beginning of the list.</i></p><p style="text-align: left;"><i>You can read more about <a href="https://codingchallenges.fyi/blog/load-balancing-algorithms/" target="_blank">load balancing algorithms</a> on the Coding Challenges website.<br /><br />So to do this we’ll need to do several things:</i></p><ol style="text-align: left;"><li><i>Extend our load balancer to allow for multiple backend servers.</i></li><li><i>Then route the request to the servers based on the round robin scheduling algorithm.</i></li></ol><p style="text-align: left;"><i><br /></i>The code challenge goes on to suggest that a Python server could be used as the backend, but I’ve built my own which can be easily started on different ports, so I’m going to stick with that.<br /><br />Although I came up with quite a few different ways of storing and iterating through backend server urls, including using a database, the simplest idea I had is to maintain an array of the servers and, each time, take the next server off the top, use it and push it back on at the bottom. 
Node modules allow arrays and functions to be easily shared between requests, so I created a new module with the array, initialised it and exported a function to get the next server:<br /><br /><span style="font-family: courier;">// backend.ts<br /><br />let backends: Array&lt;string&gt; = [];<br />[<br /> 'http://localhost:8081',<br /> 'http://localhost:8082',<br /> 'http://localhost:8083'<br />].forEach((url) => backends.push(url));<br /><br />export const nextBackend = (): string | undefined => {<br /> const nextBackend = backends.shift();<br /> if (nextBackend) {<br /> backends.push(nextBackend);<br /> }<br /> return nextBackend;<br />};</span><br /><br />Then I modified the request handler to get the next backend server and use its url each time a request is made:<br /><br /><span style="font-family: courier;">import { nextBackend } from './backend';<br />…<br />app.get('/', async (req: Request, res: Response) => {<br /> const backend = nextBackend();<br /> if (backend) {<br /> const response = await axios.get(backend);<br /> res.send(response.data);<br /> } else {<br /> res.status(503).send('Error!');<br /> }<br />});</span><br /><br />When you ‘shift’ (remove from the top) an element out of an array, it will return undefined if the array is empty, so undefined is a possible value returned by nextBackend and must therefore be handled, in this case, by returning status code 503 and a simple error message.<br /><br />This should be all that is needed, so I fired up three backend servers on three different ports:<br /><br /><span style="font-family: courier;">PORT=8081 npm run dev<br />PORT=8082 npm run dev<br />PORT=8083 npm run dev</span><br /><br />started the modified load balancer and called it a few times:<br /><br /><span style="font-family: courier;">> curl localhost:8080<br /><br />Hello From Backend Server (8081)<br /><br />> curl localhost:8080<br /><br />Hello From Backend Server (8082)<br /><br />> curl localhost:8080<br /><br />Hello From Backend Server 
(8083)<br /><br />> curl localhost:8080<br /><br />Hello From Backend Server (8081)<br />…</span><br /><br />And it worked! I got a response from each of the backend servers in turn! It works from a web browser too.<br /></p><h3 style="text-align: left;">Step 3</h3><p style="text-align: left;"><i>In this step your goal is to periodically health check the application servers that we’re forwarding traffic to. If any server fails the health check then we will stop sending requests to it.<br /><br />For this exercise we’re going to use an HTTP GET request as the health check. If the status code returned is 200, the server is healthy. Any other response and the server is unhealthy and requests should no longer be sent to it.<br /><br />Typically the health checks are sent periodically. I’d suggest you make this configurable via the command line so we can set a short period for testing - say 10 seconds. You will also need to be able to specify a health check URL.<br /><br />So in summary the key tasks for this step:<br /><br />Allow a health check period to be specified on the command line.<br />Every period make a GET request to the health check URL; if the result is 200 carry on. Otherwise take the server out of the list of available servers to handle requests.<br />If the health check of a server starts passing again, add it back to the list of servers available to handle requests.<br /><br />It would be a good idea to run the health check as a background task, concurrently to handling client requests.</i><br /><br />In my experience a health endpoint is usually a slightly oddly named endpoint with ‘health’ in the name. One I see frequently is ‘_health’. A health endpoint can be any endpoint which returns 200 and should have zero or minimal side effects. For example, it shouldn’t be calling a database, but it may log. 
The backend service already has such an endpoint, but that might change in the future, so I’m going to create a dedicated one.<br /><br /><span style="font-family: courier;">…<br />app.get('/_health', (req: Request, res: Response) => {<br /> res.send();<br />});<br />…</span><br /><br />Then, back in the load balancer, I need to call the health check endpoint on each backend every X seconds, where X is configurable. Creating a background task in Node is really easy using setInterval:<br /><br /><span style="font-family: courier;">const healthCheckInterval = <br />process.env.HEALTH_CHECK_INTERVAL ? +process.env.HEALTH_CHECK_INTERVAL : 10;<br /><br />…<br />const timer = setInterval(() => {<br /> console.log('Health check!');<br />}, 1000 * healthCheckInterval);<br />timer.unref();<br />…</span><br /><br />Starting the load balancer will print the console message every 10 seconds. Next I need to get a list of the backends and call the health check endpoint on each. First I’m going to create a health check function and put it in a module of its own to keep things clean:<br /><br /><span style="font-family: courier;">// healthChecks.ts<br />…<br />export const healthChecks = async () => {<br /> console.log('Health check!');<br />};</span><br /><br />And then call it on startup, so the first thing which is done is the health check on each backend, and then from the interval function:<br /><br /><span style="font-family: courier;">// index.ts<br />…<br />healthChecks();<br />const timer = setInterval(async () => {<br /> await healthChecks();<br />}, 1000 * healthCheckInterval);<br />timer.unref();</span><br /><br />Next I need a list of backends. The current backends array changes: backends are pulled off the top of the array and pushed back on the bottom. In future they’ll also be removed from the array when the backend isn’t healthy, so I need a constant array of backends. 
I made a few changes for this:<br /><br /><span style="font-family: courier;">// backend.ts<br /><br />export const backends = [<br />'http://localhost:8081',<br />'http://localhost:8082',<br />'http://localhost:8083'];<br /><br />let activeBackends: Array&lt;string&gt; = [];<br />backends.forEach((url) => activeBackends.push(url));<br /><br />export const nextBackend = (): string | undefined => {<br /> const nextBackend = activeBackends.shift();<br /> if (nextBackend) {<br /> activeBackends.push(nextBackend);<br /> }<br /> return nextBackend;<br />};</span><br /><br />Now the <span style="font-family: courier;">backends</span> array is constant and a new <span style="font-family: courier;">activeBackends</span> array is used for round robin load balancing. I also exported the array so that I can use it elsewhere.<br /><br />Now I want to iterate through the backends:<br /><br /><span style="font-family: courier;">// healthChecks.ts<br /><br />export const healthChecks = async () => {<br /> for (const backend of backends) {<br /> }<br />};</span><br /><br />Next I want a function I can call to determine if the backend is healthy:<br /><br /><span style="font-family: courier;">// healthChecks.ts<br /><br />const healthCheckPath = <br />process.env.HEALTH_CHECK_PATH ?? '_health';<br />…<br />const isHealthy = async (url: string): Promise&lt;boolean&gt; => {<br /> try {<br /> const response = <br />await axios.get(`${url}/${healthCheckPath}`);<br /> return response.status === 200;<br /> } catch (error: any) {<br /> return false;<br /> }<br />};</span><br /><br />The path to the health check endpoint should be configurable, so I’ve made it an environment variable. The response status from the endpoint is checked and if it isn’t 200 then the health check fails. 
If an exception is thrown, which can happen with Axios if the backend isn’t there, it’s caught and considered a health check failure.<br /><br />Now I can iterate through the backends, check each one and output a message about its health:<br /><br /><span style="font-family: courier;">export const healthChecks = async () => {<br /> for (const backend of backends) {<br /> if (await isHealthy(backend)) {<br /> console.log(`${backend} is healthy`);<br /> } else {<br /> console.log(`${backend} is not healthy`);<br /> }<br /> }<br />};</span><br /><br />This is the point where, if you’re like me, you fire up three instances of the backend (if you haven’t still got them running), fire up the load balancer and spend hours (well, several minutes) starting and stopping the backends and watching the health check messages:<br /><br /><span style="font-family: courier;">Listening on port 8080<br />http://localhost:8081 is healthy<br />http://localhost:8082 is not healthy<br />http://localhost:8083 is healthy<br />http://localhost:8081 is healthy<br />http://localhost:8082 is not healthy<br />http://localhost:8083 is healthy<br />http://localhost:8081 is healthy<br />http://localhost:8082 is healthy<br />http://localhost:8083 is healthy<br />…</span><br /><br />And while that’s a lot of fun, it’s not getting us anywhere as we’re not actually removing dead backends from the list or re-adding them when they’re revived. 
I added two new functions to do that:<br /><br /><span style="font-family: courier;">// backend.ts<br /><br />export const removeBackend = (url: string) => {<br /> activeBackends = activeBackends<br />.filter((backend) => backend != url);<br />};<br /><br />export const addBackend = (url: string) => {<br /> if (!activeBackends.find((backend) => backend == url)) {<br /> activeBackends.push(url);<br /> }<br />};</span><br /><br />The <span style="font-family: courier;">removeBackend</span> function simply filters the unhealthy backend out of the <span style="font-family: courier;">activeBackends</span> array. <span style="font-family: courier;">addBackend</span> looks to see if the backend already exists and only adds it to <span style="font-family: courier;">activeBackends</span> if it doesn’t, so I don’t end up with duplicates.<br /><br />Then, all that remains to do in this step is to use these functions in the <span style="font-family: courier;">healthChecks</span> function:<br /><br /><span style="font-family: courier;">// healthChecks.ts<br /><br />export const healthChecks = async () => {<br /> for (const backend of backends) {<br /> if (await isHealthy(backend)) {<br /> console.log(`${backend} is healthy`);<br /> addBackend(backend);<br /> } else {<br /> console.log(`${backend} is not healthy`);<br /> removeBackend(backend);<br /> }<br /> }<br />};</span><br /><br />Then restart the load balancer and test…<br /><br /><i>When it comes to testing this I suggest you start up a third backend server.<br /><br />Then connect to your load balancer three to six times to verify that it is rotating through backend servers as expected. 
Once you’ve verified that, kill one of the servers and verify the requests are only routed to the remaining two, without you, the end user, receiving any errors.<br /><br />Once you’ve verified that, start the server back up, wait just a little longer than the health check duration and then check it is now back to serving content when requests are made through the load balancer.<br /><br />As a final test, check your load balancer can handle multiple concurrent requests; I suggest using curl for this. First create a file containing the urls to check - for this they’ll all be the same:<br /></i><br /><span style="font-family: courier;">url = "http://localhost:8080"<br />url = "http://localhost:8080"<br />url = "http://localhost:8080"<br />url = "http://localhost:8080"<br />url = "http://localhost:8080"<br />url = "http://localhost:8080"<br />url = "http://localhost:8080"<br />url = "http://localhost:8080"</span><br /><br /><i>Then invoke curl to make concurrent requests:</i><br /><br /><span style="font-family: courier;">curl --parallel --parallel-immediate --parallel-max 3 --config urls.txt</span><br /><br /><i>Tweak the maximum parallelisation to see how well your server copes!<br /><br />If that all works, congratulations, you’ve built a basic HTTP load balancer!<br /></i><br />I tried all the tests against the load balancer and they all passed. I put more than 90 calls in the urls file and had 30 parallel calls before I got bored. It all worked really well.<br /><br /></p><h2 style="text-align: left;">Finally</h2><p style="text-align: left;">Having built a rocking load balancer and got the recommended tests to pass, I’m going to leave this worked example here. However, the code challenge does suggest some further steps:<br /></p><h3 style="text-align: left;">Beyond Step 3 - Further Extensions You Could Build</h3><p style="text-align: left;"><i>Having gotten this far you’ve built a basic working load balancer. 
That’s pretty awesome!<br /><br />Here are some other areas you could explore if you wish to take the project further and dig deeper into what makes a load balancer useful and how it works:<br /><br /></i></p><ol style="text-align: left;"><li><i>Read up about HTTP <a href="https://en.wikipedia.org/wiki/HTTP_persistent_connection" target="_blank">keep-alive</a> and how it is used to reuse back end connections until the timeout expires.</i></li><li><i>Add some Logging - think about the kinds of things that would be useful for a developer, i.e. which server did a client’s request go to, how long did the backend server take to process the request and so on.</i></li><li><i>Build some automated tests that stand up the backend servers, a load balancer and a few clients. Check the load balancer can handle multiple clients at the same time.</i></li><li><i>If you opted for threads, try converting it to use an async framework - or vice versa.</i></li></ol><p style="text-align: left;">These seem like a lot of fun, especially writing the automated tests. 
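As a flavour of the logging extension, here’s a small sketch of my own (not part of the challenge code) showing how the load balancer’s backend call could be wrapped to record how long each backend took - the <span style="font-family: courier;">Fetcher</span> type and <span style="font-family: courier;">withTiming</span> name are just illustrative:

```typescript
// Illustrative only: a wrapper that times a backend call and logs the
// backend url and duration. The Fetcher type and withTiming name are
// my own inventions, not part of the challenge code.
type Fetcher = (url: string) => Promise<string>;

export const withTiming = (fetcher: Fetcher): Fetcher =>
  async (url: string): Promise<string> => {
    const start = Date.now();
    try {
      return await fetcher(url);
    } finally {
      // Logs even when the backend call throws.
      console.log(`${url} took ${Date.now() - start}ms`);
    }
  };
```

In the load balancer this could wrap the Axios call, e.g. <span style="font-family: courier;">const timedGet = withTiming(async (url) => (await axios.get(url)).data);</span>, leaving the route handler unchanged.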
I may well return to this code challenge and progress it in the future.</p><p style="text-align: left;"></p><p style="text-align: left;">Thank you to <a href="https://www.linkedin.com/in/md84419/" target="_blank">Michael Davey</a>, <a href="https://www.linkedin.com/in/johncrickett/" target="_blank">John Crickett</a> and <a href="https://www.linkedin.com/in/stephencresswell/" target="_blank">Stephen Cresswell</a>.<br /><br /><br /></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-55606644003310101822023-04-05T07:20:00.004+01:002023-04-05T07:27:04.312+01:00Sleepover A Review<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhouVeeEaG4qIYTe3hEkQ1-nG6nQht0HRey6YKYhj4eC1fP_dSYSknbxCREeKbehIkzAIQLNmG_KQJg-ecmovnjcNcRGy-LMQrowmqH89DATreimkhdZw7AlVLX3rHi4SzRofPEHMjg4aOgW0kq35E9-z0i6HDBh6DBXLvcKCGRD5kxbngqFMHB4uQg/s400/sleepover.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="264" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhouVeeEaG4qIYTe3hEkQ1-nG6nQht0HRey6YKYhj4eC1fP_dSYSknbxCREeKbehIkzAIQLNmG_KQJg-ecmovnjcNcRGy-LMQrowmqH89DATreimkhdZw7AlVLX3rHi4SzRofPEHMjg4aOgW0kq35E9-z0i6HDBh6DBXLvcKCGRD5kxbngqFMHB4uQg/s320/sleepover.jpg" width="211" /></a></div><b>Sleepover</b><br />Alastair Reynolds<br />ASIN: B0097AXWUY<br /><br />I often think that Alastair Reynolds must have encountered some truly unpleasant people in his life, as he creates such nasty characters so well. Sleepover starts with Gaunt awakened from a long sleep, but soon the backstory of why those who awoke him are so unpleasant unfolds. 
Gaunt awakes to a dystopian future quite unlike any other I’ve read about or imagined and this is where the story really starts to get interesting.<br /><br />It was clear from the outset that this story was just the beginning of a larger work, which was pieced together from notes, and may or may not be developed into a longer novel. I hope it is.<br /><br />Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-81778418671205587792023-03-30T07:32:00.004+01:002023-03-30T07:32:38.538+01:00A Review: The Silmarillion<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAcbXBcQa9kYR6zIsK--MB5xgzIFhYYt0TRKTEwsUkl_uxoQAdZJjFl83kS3KvqwX2mxzG6LqO8QrJOd-9HP3F0D9ISXuuVnZN1AcXB47N0Qcv0AaeYUDSKvuib36HgosuwRPKu-LN5I3D6i4yzwFSA4pkxyrKva3hmnySbtWT9qblFbaXZpUX8QtB/s399/md30933563667.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="399" data-original-width="300" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAcbXBcQa9kYR6zIsK--MB5xgzIFhYYt0TRKTEwsUkl_uxoQAdZJjFl83kS3KvqwX2mxzG6LqO8QrJOd-9HP3F0D9ISXuuVnZN1AcXB47N0Qcv0AaeYUDSKvuib36HgosuwRPKu-LN5I3D6i4yzwFSA4pkxyrKva3hmnySbtWT9qblFbaXZpUX8QtB/s320/md30933563667.jpg" width="241" /></a></div><b>The Silmarillion</b><br />by J. R. R. Tolkien, Christopher Tolkien<br />ISBN: 978-0007523221<br /><br />I read the Lord of the Rings when I was about 8, having loved the BBC radio play starring Michael Hordern as Gandalf. Of course I wanted to read the Silmarillion too, but was always told it was hard going, which put me off. Then I tried it in my early 20s and didn’t get past the first few pages. Today, at (nearly) 46, I finished it.<br /><br />The Silmarillion is hard going and, for the most part, unpleasant to read. 
It’s mostly the language used and that it reads more like a technical history than a story. It’s quite repetitive with battle after battle and no real progress for good or evil. There are so many different names and places and this makes it difficult to follow.<br /><br />It gets better around Beren and Lúthien. Where it really gets interesting, and more enjoyable, is when it reaches the Third Age and there’s more about the rings of power and the characters I’m more familiar with from the Lord of the Rings.<br /><br />I was disappointed that the scenes from the Hobbit film series with Radagast weren’t described in the Silmarillion, as I’d been led to believe they were. Someone must have made them up.<br /><br />The recent Amazon TV series, the Rings of Power, doesn’t seem to be consistent with the Silmarillion either, for example, Gandalf didn’t drop from a ball in the sky and there’s no mention of Sauron rescuing Galadriel and sailing to Numenor. <br /><p></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-82286095384055315612023-03-25T09:12:00.004+00:002023-03-25T09:12:49.688+00:00A Review: The New One Minute Manager<p><b></b></p><div class="separator" style="clear: both; text-align: center;"><b><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_5q-jVAQU-EyESdHTg1_31v2tS9Ejd7ChWAyaZ0mNyHGbh2w_g5CY2fajbBf7kvOdZplpODycyWM5EB0pJJq4vGtjPnw6-VlSQD5V8Aobst5r0WTC7tQ-hTgz30vDsYbJq3PB3IzMdrzYtJRrznbapw5ez072IJYRZmhPIOAkZovfVSS_RuHSKu0q/s2560/one-minute-manager.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="2560" data-original-width="1694" height="320" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_5q-jVAQU-EyESdHTg1_31v2tS9Ejd7ChWAyaZ0mNyHGbh2w_g5CY2fajbBf7kvOdZplpODycyWM5EB0pJJq4vGtjPnw6-VlSQD5V8Aobst5r0WTC7tQ-hTgz30vDsYbJq3PB3IzMdrzYtJRrznbapw5ez072IJYRZmhPIOAkZovfVSS_RuHSKu0q/s320/one-minute-manager.jpg" width="212" /></a></b></div><b>The New One Minute Manager</b><br />by Kenneth Blanchard and Spencer Johnson <p></p><p>I rarely read a book more than once (unless it’s set in Alastair Reynolds’ Revelation Space Universe), but this is the third time I’ve read the New One Minute Manager, and it’s not just because it’s a quick and easy book to read, with clear, concise, digestible advice. <br /><br />Many years ago, when I ran my own business, I was working with the conflicting practice of asking my employees to assume that everything was ok unless I said otherwise, and the desire for them to be happy in their work. This meant that most of the time feedback was sparse and, when it did come, it was predominantly negative - although I didn’t operate a blame culture. <br /><br />The New One Minute Manager has quite a different approach. Now that I’m leading a team again, I read it through to remind myself of the approach. I’m currently experimenting with the One Minute Goals in a software engineering context. 
I use something similar to the One Minute Praisings already and the book has reminded me to structure how I give feedback, generally and for the individual, by applying the One Minute Redirect.<br /><br />And of course, I’ve also written out the details of the secrets and stuck them to my wall.<br /><br /></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-42097240818244756752023-01-21T11:15:00.000+00:002023-01-21T11:15:32.279+00:00Review: Diamond Dogs, Turquoise Days<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-98Bt1u0bqhxUNE57rFpC509JN_sY8fSGjQa9qRVrIAygTJT_XaK9CU0IaoVS2n5McEqGDZxIL22282Yf4y6S_M6Sk1xL0OAbFEcSSayQwuGlpSyyRK_s1fKinKZZN_Mqk1wNFsSouLnyHV343TdJwvxJ6IVPjiYBnp9CkWUZEA5FRugPGvqFcle5/s387/Diamond_Dogs,_Turquoise_Days_cover_(Amazon).jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="387" data-original-width="257" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-98Bt1u0bqhxUNE57rFpC509JN_sY8fSGjQa9qRVrIAygTJT_XaK9CU0IaoVS2n5McEqGDZxIL22282Yf4y6S_M6Sk1xL0OAbFEcSSayQwuGlpSyyRK_s1fKinKZZN_Mqk1wNFsSouLnyHV343TdJwvxJ6IVPjiYBnp9CkWUZEA5FRugPGvqFcle5/s320/Diamond_Dogs,_Turquoise_Days_cover_(Amazon).jpg" width="213" /></a></div><p>Diamond Dogs, Turquoise Days<br />by Alastair Reynolds<br />ISBN: <span class="a-list-item"><span class="a-text-bold">
</span> <span>978-0575083134</span></span></p><p><span class="a-list-item"><span><b>Diamond Dogs</b><br /><br />I don’t get full satisfaction from stories that leave unanswered questions, unless those questions get answered in future stories. I don’t like that I don’t know why the Spire can levitate. I don’t like that I don’t know where the Spire came from, who built it or what it was for. I don’t like that I don’t know if Richard and/or Childe completed all the puzzles and reached the top. I don’t like that I don’t know what was at the top and I don’t like the implication that it might be the weapon used to kill Pattern Jugglers, because that raises even more unanswered questions.<br /><br />I loved the story so much more on second reading and I think that’s because I was so much more familiar with the Revelation Space universe, specifically the Eighty, and the other stories within it this time. No longer do I feel it was for people who really enjoy maths and I enjoyed the characters and their motivations immensely. <br /><br /><b>Turquoise Days</b><br /><br />Turquoise Days is a strange story to pair up with Diamond Dogs. The only real connections between them are the Revelation Space universe, the suggestion that the weapon used against the Pattern Jugglers might have come from the Spire and, of course, that Celestine was modified by the Pattern Jugglers. <br /><br />The first part of the story takes some getting through with the detailed description of swimming with the Pattern Jugglers. The rest of the story moves quite quickly and could probably have been a much longer story with more about what the fanatical leader of the scientists had done. Ultimately a very enjoyable read. 
</span></span></p><p><span class="a-list-item"><span><br /></span></span></p><p></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-41620354372860312582023-01-04T17:04:00.002+00:002023-01-04T17:04:36.852+00:00The Great Dune Trilogy: A Review<p></p><div class="separator" style="clear: both; text-align: center;"><b><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGIVmwE8O-E_XVJwhllGFS5GKDgcdevyivC9ANnWVquQ1Xd-DoH8uEB04qpzlYIB0cOg70Te22psgmf6WhTph0tZ6KESqetv3gXCk2trYAfI9_sX69FN9n5fi6BeOGhRfXdIo76c_-NVpzEr6gIAQf4m2rqi28a6Ai-4ZofUR4dgiOlX31DYnOZ79r/s400/dune.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="264" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGIVmwE8O-E_XVJwhllGFS5GKDgcdevyivC9ANnWVquQ1Xd-DoH8uEB04qpzlYIB0cOg70Te22psgmf6WhTph0tZ6KESqetv3gXCk2trYAfI9_sX69FN9n5fi6BeOGhRfXdIo76c_-NVpzEr6gIAQf4m2rqi28a6Ai-4ZofUR4dgiOlX31DYnOZ79r/s320/dune.jpg" width="211" /></a></b></div><b>The Great Dune Trilogy</b><br />Frank Herbert<br />ISBN-10: <span>0575070706<br /><br />I remember distinctly reading Dune in 1992 after seeing the 1980s film. In fact I can still picture myself lying on a bed in a holiday cottage in a small French village near Carcassonne reading the book. I went on to read Dune Messiah, but couldn’t get into Children of Dune. I tried it again several years later, but still couldn’t get into it. Dune has been on my list to reread for a while. When searching for Dune in the Amazon Kindle store the trilogy came up as one book, so I decided to read all three straight through and I’m glad I did!<br /><br />There’s no getting away from the fact that Dune is a great story. I discovered recently that it’s two stories glued together and it shows. 
The first half of the book has lots of details and then there appears to be a large gap in the story, which at least one of the films attempted to fill, and then you get the end of the story. I don’t really like the way Frank Herbert explains what’s going to happen at the beginning and then that’s what happens, with a small twist at the end. I’d rather be kept in suspense.<br /><br />Dune Messiah is a great little story, but it mostly leaves science fiction behind in favour of feudalism and politics. This is also where I think Herbert starts getting indulgent. Most of us love well developed characters, but I found myself in their heads far too much. The twelve-year gap in the story between Dune and Dune Messiah, during which a whole Jihad rages across the galaxy, isn’t done justice.<br /><br />I mostly enjoyed Children of Dune too, but it was very much more of the same. I didn’t see the end coming, which was a good surprise, but I don’t think I liked it.<br /><br />It will be interesting to see how God Emperor of Dune shapes up, especially as it’s set several thousand years in the future.<br /><br /></span><p></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-6067855835187546432023-01-04T08:30:00.001+00:002023-01-04T08:30:58.210+00:00Duplicate Data in Microservices<h2 style="text-align: left;">So You Are Uncomfortable with Duplicate Data? </h2><p>If, like me, you’ve spent a reasonable amount of your career working with relational databases, where data is rationalised to avoid duplication, the idea of duplicating data across microservices is probably anathema to you. <br /><br />Even if you’ve worked with a NoSQL database like MongoDB, where data is often duplicated across the documents, you probably still struggle with the idea of a service keeping a copy of data owned by another service. <br /><br />Discomfort with duplication doesn’t need to come from databases. 
The Don't Repeat Yourself (DRY) principle of software engineering states that "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system". Even the process of Test Driven Development (TDD) includes a step for refactoring to remove duplication as part of the cycle.<br /><br />As software developers we are programmed to detest duplication in all its forms.<br /><br />It’s OK - I have felt your pain. As soon as you come to terms with the idea that duplicating data across microservices is the best way to make your microservices more robust, independent and decoupled from other services, your pain will go - forever.</p><h2 style="text-align: left;">Making Microservices More Robust and More Independent</h2><p>A microservices architecture consists of a collection of different services which each provide a well defined, loosely coupled and independent business function. You can find out more at: <a href="https://microservices.io/">https://microservices.io/</a>. 
Let’s have a look at an example from part of a previous project of mine, an app for finding local cafes that serve great tea, called Find My Tea.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjW45Su39YeigxVZJkb2YAQ4-4Zfn233Yp6yNqvPY9nxMsyJeYUNI2lkwOTnQ6C-_UtJRZgVT0VJ8ZeZ9DUwRYvLKwT56XSM5en-mwAQIewblEdIM6WLdmEhaFjIK6ux_pU3ZZc3i2QCUX3T7sVY3LjBIgDmeg80RdJ9-GLp_WeP61E2uFcOZU8fOCC/s343/fmt-location-brand.drawio.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="211" data-original-width="343" height="197" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjW45Su39YeigxVZJkb2YAQ4-4Zfn233Yp6yNqvPY9nxMsyJeYUNI2lkwOTnQ6C-_UtJRZgVT0VJ8ZeZ9DUwRYvLKwT56XSM5en-mwAQIewblEdIM6WLdmEhaFjIK6ux_pU3ZZc3i2QCUX3T7sVY3LjBIgDmeg80RdJ9-GLp_WeP61E2uFcOZU8fOCC/s320/fmt-location-brand.drawio.png" width="320" /></a></div><br />Find My Tea had a handful of microservices, but two of the most important ones were the Location service, which managed the locations (cafes, restaurants, etc.) where great tea could be found, and the Brand service, which managed the brands of tea served by the locations. Managing locations and brands consists of creating, reading, updating and deleting (CRUD) them. The Location service was the Single Source of Truth for locations and the Brand service for brands.<br /><br />In the first iteration, when the app requested a location, the Location service looked up the location in its database, found the IDs of the brands served by the location and then requested the details of those brands from the Brand service. The Brand service then looked up the brands in its database and passed them back to the Location service. The Location service then enriched the location with the brand details and passed it back to the app.<br /><br />It is arguable that the more independent a microservice is, the more robust it becomes. 
Here we have a very clear dependency on the Brand service by the Location service when performing a location lookup request. If the Brand service is down or otherwise unavailable, either the brands are not returned with the location or, depending on how the location service is designed to handle partial failure, the entire request fails. There is also the potential latency of inter-service communication and two database lookups, although in a simple request such as this, it is likely to be negligible.<br /><br />To remove this dependency the Location service can keep a copy of the data maintained by the Brand service up-to-date in its own database, so that when a location is requested, a call to the Brand service is not necessary. This does not make the Brand service redundant as it is still required to maintain the brand data and remains the Single Source of Truth for brands. The copy of the brand data kept in the Location service, although updatable from the Brand service, is effectively a read-only copy. The advantage is that if the Brand service is unavailable the location request will still succeed and the brand data will be present in the response.<br /><p></p><h2 style="text-align: left;">Distribute the Data</h2><p style="text-align: left;">However; this does beg the question of how to keep the brand data up-to-date in the Location service. Could this be a source of coupling? I’ve seen this done in two ways, but there are other approaches too.<br /><br />One way is to have the Location service poll the Brand service every so often to get brand data and update it in its own database. There are a number of drawbacks with this approach. We all know polling is evil. In the case where you have multiple instances of the polling service, you have to specify one instance as the one that does the polling, or all of the instances could be polling for the data unnecessarily and all at the same time. 
The data in the Brand service may get updated between polls, meaning that the data in the Location service is out of date for a period of time - whatever method of data synchronisation you use, there will always be an element of Eventual Consistency. You either need to devise a clever mechanism for determining which brands have been updated since the last poll or always send back all of the data, resulting in potentially large responses. The polling approach requires the Location service to know where to find the Brand service, creating an unnecessary dependency. It is also required to handle the error caused by the Brand service being down or unreachable. The polling approach doesn’t tend to scale very well for all of these reasons.<br /><br />The approach I favour is to use a message broker. When a brand is created, updated or deleted, the Brand service can put a message onto a topic with only the details of that change. The Location service can listen to a queue, which is subscribed to the topic, and update its database with just that brand when a message is received. There is no polling necessary. Message brokers are usually very fast and the amount of time the Location service would be out of date is likely to be negligible. The Location service only needs to know where to find the queue it is listening to. When there are multiple instances of the Location service, the queue can be configured to deliver each message only to the first instance which requests it. An added advantage is that the Brand service only needs to know where to find the topic. It doesn’t need to know anything about the Location service, or any other services, which may want to consume the messages via a queue subscribed to the topic. 
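The Location service’s side of this can be sketched as a small message handler. The message shape (an action plus the brand details) and the function name are my own assumptions for illustration; in a real system the raw message would arrive via the broker’s client library rather than as a plain string.

```python
import json

# The Location service's local, effectively read-only copy of brand data.
local_brands = {}

def handle_brand_message(raw_message):
    """Apply one brand change event to the local copy.

    Assumed message shape (illustrative, not from the project):
    {"action": "created" | "updated" | "deleted", "brand": {...}}
    """
    message = json.loads(raw_message)
    brand = message["brand"]
    if message["action"] == "deleted":
        local_brands.pop(brand["id"], None)
    else:
        # "created" and "updated" are both an upsert into the local copy.
        local_brands[brand["id"]] = brand

# Simulate messages arriving from the queue subscribed to the topic.
handle_brand_message(json.dumps(
    {"action": "created", "brand": {"id": "br-1", "name": "Assam Gold"}}))
handle_brand_message(json.dumps(
    {"action": "updated", "brand": {"id": "br-1", "name": "Assam Reserve"}}))
handle_brand_message(json.dumps(
    {"action": "deleted", "brand": {"id": "br-1"}}))
```

Each message carries only the single brand that changed, which is what keeps the messages small compared with polling for the whole data set.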
Of course, both the sending and receiving services are still slightly coupled by the format of the message, and the data contract potentially becomes as important as any API contract.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHoIFoNw9nSNCOXjIqz9_xNtVRYMj_navJjgazd0OWvJd60NKiNdwnS3h7BdJKjSK4_mxOhHT5PoeuJu9ui7mN5slXv19cDDEmnwXDp6s6lpT7JVwTBMUto3dtY_R3WCZKbbjDMzXiwKQPKm2XfsDzx8GNa_WWS9bShP3u8RlaXxMXqWOgJzMaCjPI/s563/fmt-location-brand-messaging.drawio.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="217" data-original-width="563" height="173" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHoIFoNw9nSNCOXjIqz9_xNtVRYMj_navJjgazd0OWvJd60NKiNdwnS3h7BdJKjSK4_mxOhHT5PoeuJu9ui7mN5slXv19cDDEmnwXDp6s6lpT7JVwTBMUto3dtY_R3WCZKbbjDMzXiwKQPKm2XfsDzx8GNa_WWS9bShP3u8RlaXxMXqWOgJzMaCjPI/w450-h173/fmt-location-brand-messaging.drawio.png" width="450" /></a></div><p style="text-align: left;"></p><h2 style="text-align: left;">More Robust, Independent and Loosely Coupled</h2><p>It can be as simple as that. Keeping a copy of data in one service which is owned and maintained by another can make both services more robust and independent. Distributing that data via a message broker keeps the services loosely coupled (although not entirely decoupled), keeps the data up-to-date and keeps the messages small.<br /><br />As with most things in software development, maintaining a local copy of data which is managed elsewhere is a tradeoff. As a software developer you must consider, for example, the security concerns which come with duplicating data, especially if it is considered personal data. You must also consider the complexity of keeping the data up-to-date. For example, when does the data expire or become invalid? Does the data need to be versioned? 
Does the order of applied updates need to be taken into account?<br /><br />I hope it goes without saying that in almost every other context duplication should still be avoided, detested and possibly even hated. However, it should also be clear that the tradeoff of duplicating data in microservices can make for better microservices.<br /><br />Much of my early understanding of microservices, including the advantages of sharing data and some of the possible ways to do it, came from <a href="https://www.manning.com/books/microservices-patterns" rel="nofollow" target="_blank">Microservices Patterns by Chris Richardson</a> (ISBN-13: 978-1617294549). If you’re interested in learning more about microservices, I would strongly recommend giving it a read. The rest has come from trial and error, failure, eventual success and quite a lot of arguing with colleagues. <br /><br /><br /></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-47907285305437728832022-10-04T20:03:00.001+01:002022-10-04T20:03:30.546+01:00A Review: Eversion by Alastair Reynolds<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmxHhFw1EBawxx_VOlgvCC-CxJplNT6kWw98nXxJ7heIrGVFS37CmADn_fJi__F5iqthYWJAmpkqnaBq9Mzb_ZSzHMI2EUO33r7BvOUIrHgsKW2qGDrzQhtvkZB4BI109FVrV7yMDiZr22n98uVecLFXSxua0ba8e-hkL62zegskBJSqYOPUjov6_7/s597/58727132.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="597" data-original-width="384" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmxHhFw1EBawxx_VOlgvCC-CxJplNT6kWw98nXxJ7heIrGVFS37CmADn_fJi__F5iqthYWJAmpkqnaBq9Mzb_ZSzHMI2EUO33r7BvOUIrHgsKW2qGDrzQhtvkZB4BI109FVrV7yMDiZr22n98uVecLFXSxua0ba8e-hkL62zegskBJSqYOPUjov6_7/s320/58727132.jpg" width="206" /></a></div><p></p><h2 style="text-align: 
left;">Eversion</h2><p>by Alastair Reynolds<br />ISBN-13: 978-0575090781<br /><br />There’s little to no sci-fi in the first 30% of this book and you’d be forgiven for thinking Reynolds was just indulging himself in a seafaring romp. There’s only a hint of sci-fi up to about halfway through, when it develops into Groundhog Day. I didn’t enjoy this. I put the book down for a week or two until I forced myself to pick it up again and finish it. I couldn’t put the second half down!<br /><br />There isn’t the usual Alastair Reynolds scope, but that doesn’t matter as there is plenty of his trademark exploration and discovery and a fantastic plot twist I didn’t see coming. By far my favourite character is Ada for her sharp sense of humour and general attitude to life. The other characters are convincing and, for once, there’s a good ending, even if it is a little corny. <br /><br /></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-90944023759464110712022-09-20T07:50:00.002+01:002022-09-20T07:53:28.518+01:00ARD & Winterfylleth at the Bread Shed<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpgjgf2F29Pyux7m28mzLHsP8i_zzXxcsXO54_ALtNyJ5smNakNwK6Y9GClia3u8uTCkPk5xnyS-vvENGb1bGdQD7EY3T8K9um5JI4Af8xEpq0x7KJp-nPtsRH_AOe4tpzgyCvXQFVq0LS2PCB5HzefiikFqSPS6xy0FT2nwYyabKj7IHiOIo9p9TS/s4032/IMG_0912.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="3024" data-original-width="4032" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpgjgf2F29Pyux7m28mzLHsP8i_zzXxcsXO54_ALtNyJ5smNakNwK6Y9GClia3u8uTCkPk5xnyS-vvENGb1bGdQD7EY3T8K9um5JI4Af8xEpq0x7KJp-nPtsRH_AOe4tpzgyCvXQFVq0LS2PCB5HzefiikFqSPS6xy0FT2nwYyabKj7IHiOIo9p9TS/s320/IMG_0912.jpg" width="320" /></a></div><h3 style="text-align: left;">ARD</h3><p>Following the 
nightmare which is parking in Manchester and getting a meal on a Saturday night without booking, we walked into the <a href="https://www.bread-shed.co.uk/manchester" target="_blank">Bread Shed</a> just as <a href="https://ardnorthumbria.bandcamp.com/" target="_blank">ARD</a> were getting going, minus my vinyl and CDs for signing! The masterpiece which is Take Up My Bones was instantly recognisable, as were composer and multi-instrumentalist Mark Deeks and fellow Winterfylleth band mate Chris Naughton, both on guitar. The latter was centre stage, where surely Deeks should have been? <br /><br />From the off the band, who were put together to perform an album which was never intended to be performed live, were a little loose, with the drums too prominent and the guitars not clear enough. There appeared to be a lot of retuning necessary, especially from Chris and the lead guitarist, who appeared hidden away a lot of the time. This didn’t really detract from the enjoyment of the incredible compositions from the album. By the time the final 10 minutes, consisting of Only Three Shall Know, came along something had changed: the band was as tight as anything and I wished they could have started again from the beginning. 
45 minutes had flown by and I’ll certainly go and see them again.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkWSH_JCxokQON9PvLTDeWK6tI-igEE5-ia5vKCCQNSPwvlwuxBWb4Mr0m4CSjbMdWChmZcbkt9ZJZhNtitW4SpH9oRJjJvcDNEVPO7wcGdLBCCQrUsLP9Xd9nuYz-5qVsui2A9WGycN62r6fSy1WZG7asrvMvz_2MY9e-DcMJ30WA31hi8Lq4Y8CN/s4032/IMG_0910.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="3024" data-original-width="4032" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkWSH_JCxokQON9PvLTDeWK6tI-igEE5-ia5vKCCQNSPwvlwuxBWb4Mr0m4CSjbMdWChmZcbkt9ZJZhNtitW4SpH9oRJjJvcDNEVPO7wcGdLBCCQrUsLP9Xd9nuYz-5qVsui2A9WGycN62r6fSy1WZG7asrvMvz_2MY9e-DcMJ30WA31hi8Lq4Y8CN/s320/IMG_0910.jpg" width="320" /></a></div><p></p><h3>Winterfylleth</h3><br />I think I’ve seen <a href="https://winterfylleth.bandcamp.com/" target="_blank">Winterfylleth</a> four times now, including the set which became their live album recorded at Bloodstock and, earlier this year, supporting Emperor at <a href="https://paulgrenyer.blogspot.com/2022/05/a-review-incineration-fest-2022-metal.html" target="_blank">Incineration Fest</a>. They <i>never</i> disappoint. <br /><br />Winterfylleth are one of those bands that are so consistent with their music, without being boring or repetitive, that it doesn’t matter what they play or how familiar I am with the songs, it’s just incredible to listen to. Having said that, disappointingly, they didn’t play A Valley Thick With Oaks, which is my favourite. Who can resist singing along “In the heart of every Englishman…”? However, I did come away with a new favourite in Green Cathedral!<br /><br />We only got an hour, but at least they didn’t bugger about going off and coming back for an encore. There were old songs, new songs and songs never before played live. 
Loved every second of it and, for the first time for me, the final song wasn’t preceded with “Sadly time is short and our songs are long, so this is our last one.” Until next time!<br /><br /> <p></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-72853014932733765162022-09-09T17:03:00.000+01:002022-09-09T17:03:10.317+01:00Glory! Hammer!<p></p><div class="separator" style="clear: both; text-align: center;"><b><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLoSpm9ijAaZnl2lVVujIjflvWWCkBz9QWxfkwUuIrgucvGfM-5PTE9ojcNDPi_u4qxcBLryhe0jBziztlf3e8Hr6CzAYH7KeKyKmTvmFn07srayXOYLMuTCXlIAGt90Dkj4Ny_SDVvdURxvuGEao4vP32sReD_RlbkB5ef2rhiJ8aShIK3ovuSY4C/s4032/IMG_0856.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="3024" data-original-width="4032" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLoSpm9ijAaZnl2lVVujIjflvWWCkBz9QWxfkwUuIrgucvGfM-5PTE9ojcNDPi_u4qxcBLryhe0jBziztlf3e8Hr6CzAYH7KeKyKmTvmFn07srayXOYLMuTCXlIAGt90Dkj4Ny_SDVvdURxvuGEao4vP32sReD_RlbkB5ef2rhiJ8aShIK3ovuSY4C/s320/IMG_0856.jpg" width="320" /></a></b></div><b>Glory! Hammer! were fantastic! </b><br /><br />They played well and were lots of fun, as you’d expect. I mean, who doesn’t like a gig to start with a cardboard cutout of Tom Jones and Delilah playing on the PA? The band were all dressed up - it must have been very hot - and playing their parts.<br /><br />I did find that some of the gaps between songs and the interplay with the audience felt a little too Steel Panther. It was too frequent, superfluous and added time to a set which could have been shorter. <br /><br />Sozos Michael is a phenomenal singer and makes it seem effortless and perfect. I’m a big fan of widdly guitar and it doesn’t stand out as much on record as it did live, which was a really nice surprise. 
<br /><br />It’ll be great to see them again when the promised new album is out and they tour again.<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiaqPohBQ1T7OWEZaRxEVu_Q4XH-2xk9jEMeP_sZDgTUOKnqxKexmmhvEsEh6fOcElWYzlE31hIOpDiOFmUTcXKxhLcHr0omWENEM1slpMp4lVQEEZ52SMo4Qx_kJNzmkgQnzu89jGlFGTAeKMfNngMFJs58K-sP0bxFEbtPEbFpLaW9xjfkqr2CSk/s4032/IMG_0854.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="3024" data-original-width="4032" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiaqPohBQ1T7OWEZaRxEVu_Q4XH-2xk9jEMeP_sZDgTUOKnqxKexmmhvEsEh6fOcElWYzlE31hIOpDiOFmUTcXKxhLcHr0omWENEM1slpMp4lVQEEZ52SMo4Qx_kJNzmkgQnzu89jGlFGTAeKMfNngMFJs58K-sP0bxFEbtPEbFpLaW9xjfkqr2CSk/s320/IMG_0854.jpg" width="320" /></a></div><br /><p><br /></p><p><br /></p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-22081725385640279442022-09-04T15:40:00.000+01:002022-09-04T15:40:03.709+01:00A review of React Cookbook: Recipes for Mastering the React Framework<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNF8b0TU-R-mpzHmGDDYGJd_fGIwbnuFaU8AV8ixAyvD0U4wPMHLV-hc5S71YPiBOI64Faw3q5BlW2O29V57DaXOzNotW507Bv2myPwARaweo-gEqvZBIx33y7KeC8obey_UD-56tt_0VqwTudimI5nWqJ4ObnQlPdK-HgOlej91jm9SsgSj9enSlz/s499/react-cook-book.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="499" data-original-width="381" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNF8b0TU-R-mpzHmGDDYGJd_fGIwbnuFaU8AV8ixAyvD0U4wPMHLV-hc5S71YPiBOI64Faw3q5BlW2O29V57DaXOzNotW507Bv2myPwARaweo-gEqvZBIx33y7KeC8obey_UD-56tt_0VqwTudimI5nWqJ4ObnQlPdK-HgOlej91jm9SsgSj9enSlz/s320/react-cook-book.jpg" 
width="244" /></a></div><b>React Cookbook: Recipes for Mastering the React Framework</b><p></p><p><b></b>by David Griffiths and Dawn Griffiths<br />ISBN: 978-1492085843<br /></p><p>This is a book of about 100 recipes across 11 sections. The sections range from the basics, such as creating React apps, routing and managing state, to the more involved topics such as security, accessibility and performance.<br /><br />I was especially pleased to see that the section on creating apps looked at create-react-app, nextjs and a number of other getting-started tools and libraries, rather than just sticking with create-react-app.<br /><br />I instantly liked the way each recipe laid out the problem it was solving and the solution, and then had a discussion on different aspects of the solution. It immediately felt a bit like a patterns book. For example, after describing how to use create-react-app, the discussion section explains in more depth what it really is, how it works, how to use it to maintain your app and how to get rid of it.<br /><br />Like a lot of React developers, the vast majority of the work I do is maintaining existing applications, rather than creating new ones from scratch. I frequently forget how to set up things like routing from scratch and would usually reach for Google. 
However, with a book like this I can see myself reaching for the easy-to-find recipes again and again.</p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-17908788297073782152022-05-21T19:16:00.000+01:002022-05-21T19:16:10.033+01:00Chasm City<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3SRNo1ZfN9BxxguONJOLpOlL60Qfx1UJkugYsbAF-s2IqwYsTBiG-lJSXf6kFvTjKeiaQBK7gCDNK9qrBgQL9FoI6jZOx9WSKZ1y4nfEJvepxLafbb_cqevV6txm1BEg6IbwQTGEmkRRiwGi4V8lqKr-ApTfY8db-9l1q28UrBH24jqgO8Kgk8qSp/s1000/chasmcity.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="654" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3SRNo1ZfN9BxxguONJOLpOlL60Qfx1UJkugYsbAF-s2IqwYsTBiG-lJSXf6kFvTjKeiaQBK7gCDNK9qrBgQL9FoI6jZOx9WSKZ1y4nfEJvepxLafbb_cqevV6txm1BEg6IbwQTGEmkRRiwGi4V8lqKr-ApTfY8db-9l1q28UrBH24jqgO8Kgk8qSp/s320/chasmcity.jpg" width="209" /></a></div><h2 style="text-align: left;">Chasm City</h2><p>Alastair Reynolds<br />ISBN-13: 978-0575083158</p><p>Following the announcement of the release of Inhibitor Phase and then Elysium Fire, I’ve been rereading some of the previous Revelation Space novels to pick up the thread. First time around I found Chasm City a dark story and it was no different the second time, but I got so much more out of it. I also remember losing the thread towards the end the first time, but not this time!<br /><br />As with most of the series, the thread of the main story is inconsequential to the main Revelation Space arc. It’s the other aspects of the story, which tie up with other Revelation Space events, that make this such a fantastic book. 
By the time I read the last page I knew that Sky’s Edge was named after the edge Sky Haussmann had over the other ships in the flotilla which settled the planet. I knew that the war had started between the ships of the flotilla and what they were fighting about. I knew how the Melding Plague had got to Chasm City and how it was discovered and spread. I knew that Sky had met Khouri, who is an important character in the main trilogy. And more, much more! I wish I knew how the Melding Plague came to be though.<br /><br />I read the second half of the book in about two weeks. I just couldn’t put it down!</p>Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-12291039973179401302022-05-08T16:49:00.001+01:002022-05-08T17:35:08.431+01:00A Review: Incineration Fest 2022 - Metal is back!<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnDZGLFkG3tJl8362MN2-Ii3O18cj17vsCelY0Ubjuon9EZFVItjXhp94D8zJS-7WoGOq0FvNErcrFcc1DqT2rLvnS1cCWz_96W_pjemDC9pkTQZdSFLQOgbns269xO8R-zVTdgDm9oRRVaJIkyBsGVJ_gpn892O79G7FTT23W8VUbD6v-4nw6w3Jm/s4032/IMG_9863.JPG" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="4032" data-original-width="3024" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnDZGLFkG3tJl8362MN2-Ii3O18cj17vsCelY0Ubjuon9EZFVItjXhp94D8zJS-7WoGOq0FvNErcrFcc1DqT2rLvnS1cCWz_96W_pjemDC9pkTQZdSFLQOgbns269xO8R-zVTdgDm9oRRVaJIkyBsGVJ_gpn892O79G7FTT23W8VUbD6v-4nw6w3Jm/s320/IMG_9863.JPG" width="240" /></a></div>Overall I really enjoyed Incineration Fest and would go again if the line-up is right for me. 
What was really great was seeing metallers back at a gig with no restrictions and doing what we do best!<br /><br /><h2>Winterfylleth </h2><p>I completely fell in love with Winterfylleth when they played Bloodstock on the mainstage and even more so when they released the set as a live album. They are incredible and totally deserved to be opening proceedings at the Roundhouse for Incineration Fest. Actually, they deserved to be much higher up the bill. They’re a solid outfit, played what I wanted to hear and ended, as I always think of them ending from the live album, with Chris saying this is the last song “as time is short and our songs are long!” I need to see them do a headline set in a venue with a great PA soon.</p><h2>Tsjuder</h2>Tsjuder was the wildcard for me. I didn’t really know them and had heard only a few things on Spotify before, although what I heard was really good. I had no idea I was going to be blown away. They sounded incredible from the first note, which was even more impressive given that they are only a three-piece and the PA in the Roundhouse wasn’t turning out to be great for definition. <br /><br /><h2>Bloodbath</h2>Bloodbath was really the reason I was at Incineration Fest. I’d missed them at Bloodstock ten years before as one of my sons was being born and I hadn’t had a chance to see them until now. Of course now Nick Holmes (Paradise Lost) rather than Mikael Åkerfeldt (Opeth) was on lead vocals.<br /><br />I was very, very excited and from the moment I heard that trademark crunching guitar sound I was even more excited. They played for a full hour. Unlike the Black Metal bands on the bill there was more riffing and solos and a slightly different drum sound.<br /><br />They’re an odd band to watch. For reasons I don’t understand, the bass player and two guitarists would often turn their backs to the audience to face the drummer. 
The band didn’t seem to interact much with each other on stage and even less so with Nick.<br /><br />Nick’s deadpan humour was present when he did speak to the audience. He introduced the band as being from Sweden, then added from Halifax almost as an afterthought! During the set he admitted he couldn’t see and dispensed with his sunglasses as they’d apparently been a good idea backstage. After breaking the microphone he enquired if it would be added to his bill at the end of the night.<br /><br /><h2>Emperor</h2>Emperor hasn’t released any new material (that I know of) since 2001 and, if I’m honest, I barely listen to them beyond the live album these days. I’ve seen them at least three times before, the first time being in 1999 in a small club in Bradford on my birthday - it doesn’t get much better than that. I’m more of a fan of Ihsahn’s solo stuff these days and I still really enjoy Samoth’s Zyklon whenever I play it. Emperor, not so much anymore.<br /><br />They played for the full ninety and for the most part were solid, as you might expect. Whether or not Faust plays with them is of no consequence to me and I certainly didn’t need the covers they played with him towards the end of the set. 
There was lots I knew and lots I enjoyed, but I wouldn't make an effort to see Emperor again.<br /><br /><br /><br /><br /><br /><br /><br />Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0tag:blogger.com,1999:blog-4548789926995192649.post-38893009548168455422022-04-19T12:49:00.001+01:002022-04-19T12:59:36.843+01:00Devin Townsend at the Royal Albert Hall (again)<p dir="ltr" id="docs-internal-guid-b7d8a1c9-7fff-5263-113b-6faf7a9d7cf0" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvRraTUTc-vSM5w4xZbSxPp-L_NJaAXr6rltYjH6d_Mt0OqbyWO0tEYUTNwoVC2zEFj7-Dp5n9EqD8FEK52BP4cwHPsZJPYE1QBboEpl6oemjMM1iThAiW27N7EkhY7VRgNrxPGrN45j5xLmXO5E5_bFftsYTWS6OTugENSR-2npd96xSVlj2Wi_Lb/s4032/IMG_9745.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="3024" data-original-width="4032" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvRraTUTc-vSM5w4xZbSxPp-L_NJaAXr6rltYjH6d_Mt0OqbyWO0tEYUTNwoVC2zEFj7-Dp5n9EqD8FEK52BP4cwHPsZJPYE1QBboEpl6oemjMM1iThAiW27N7EkhY7VRgNrxPGrN45j5xLmXO5E5_bFftsYTWS6OTugENSR-2npd96xSVlj2Wi_Lb/s320/IMG_9745.jpg" width="320" /></a></p><span style="font-family: Arial;"><b>Leprous</b><br /><br />There’s an obvious pull for me towards Leprous due to the association with <a href="https://en.wikipedia.org/wiki/Ihsahn" target="_blank">Ihsahn</a> and prog, but rock bands generally do little for me these days. I listened to a little of <a href="https://en.wikipedia.org/wiki/Aphelion_(Leprous_album)" target="_blank">Aphelion</a> before the gig, but it didn’t grip me. <br /><br />They’re an odd live band and some of the time the cello player looked a bit out of place when he was without his cello. The sharing of the keyboards among various band members, often in the same song, was also weird. 
The singer was wearing a waistcoat and doing some very odd dancing, and his voice can grate. For a prog band, the lack of any guitar or keyboard lead breaks also seemed odd.<br /><br />However, I quite enjoyed Leprous!<br /><br /><br /><b>Devin Townsend</b><br /><br />We’d only seen Devin Townsend a few months ago (<a href="https://www.bloodstock.uk.com/events/boa-2021/stages" target="_blank">in the summer at Bloodstock</a>), but my wife loves him so we went again. We should have gone the night before as he played loads of songs we knew, in contrast to the night we went, where he played nothing we knew! Most of it, I am reliably informed, was from the <a href="https://en.wikipedia.org/wiki/Ocean_Machine:_Biomech" target="_blank">Ocean Machine</a> and <a href="https://en.wikipedia.org/wiki/Infinity_(Devin_Townsend_album)" target="_blank">Infinity</a> albums.<br /><br />Devin still plays brilliantly and it was great to see him again with the session musicians he’d teamed up with for his Bloodstock performance. He creates a fantastic wall of sound and engages with the crowd like few others. I’m sure we’ll go and see him again, after all we’ve not heard <a href="https://www.youtube.com/watch?v=Mf4_LB32M6Q" target="_blank">Hyperdrive</a> live yet!</span><br />Paul Grenyerhttp://www.blogger.com/profile/18212226926099615757noreply@blogger.com0