~/.bashrc
Add this to your .bashrc file in order to add the current git branch name to your Bash prompt:
    parse_git_branch() {
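A complete version of such a function could look like the following sketch (the sed expression is one common variant, an assumption, not necessarily the original post's exact code):

```shell
# Print the current git branch, wrapped in parentheses, or nothing
# when the current directory is not inside a git repository.
parse_git_branch() {
  git branch 2>/dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}

# Embed it in the prompt: user@host dir (branch)$
export PS1="\u@\h \W\$(parse_git_branch)\$ "
```

The `2>/dev/null` redirection keeps the prompt clean outside git repositories.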
I already wrote about automatically testing Bash scripts in this post: if you missed it, reading it can be useful to get the context.
Moving a step forward, when you test your Bash scripts you may be interested both in mocking standard input and in spying on standard output (and standard error): here is how.
You can check an assertion about the produced output simply by exploiting command substitution: write a test case that collects standard output (into the output variable) and makes an assertion about its content.
    test_spying_stdout () {
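A minimal sketch of such a test case (the command under test is a stand-in):

```shell
test_spying_stdout () {
  # Command substitution captures everything the command writes to stdout
  local output
  output=$(printf 'hello\n')   # stand-in for the script under test
  [ "$output" = "hello" ]      # assertion on the captured content
}
```

The test case succeeds (exit code 0) exactly when the assertion holds.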
Simply by adding some redirection you can do the same for standard error:
    test_spying_stderr () {
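A sketch of the stderr variant: the redirections swap the streams so that command substitution captures standard error instead of standard output (the command under test is again a stand-in):

```shell
test_spying_stderr () {
  local errors
  # 2>&1 points stderr at the capture, then 1>/dev/null discards stdout
  errors=$( { echo 'boom' 1>&2; } 2>&1 1>/dev/null )
  [ "$errors" = "boom" ]
}
```

The order of the two redirections matters: `2>&1` must come first, while stdout still points at the command substitution.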
If you need to implement a test invoking a script that gets its input from stdin, either through redirection or through the read command, you can provide mocked input by supplying a hardcoded version of stdin through the <<EOF syntax:
    test_mocking_stdin () {
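A minimal sketch, using `read` against a here-document:

```shell
test_mocking_stdin () {
  # The here-document replaces stdin with well-known, hardcoded input
  local name
  read -r name <<EOF
Alice
EOF
  [ "$name" = "Alice" ]
}
```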
With

    cat <<EOF | your_command

you're instructing the shell to use the text stream provided after the command invocation and before the closing EOF token as stdin: now your_command can read from stdin, getting the hardcoded, well-known input you want to base your test case on.
Having reached this point, I think we can extract a slightly reworked version of the script I showed in the previous post into a reusable tool we can rely on to run our scripts' test cases. With little imagination I called it bash-unit and published it here.
Simply put, it allows you to launch

    bash-unit [test_cases_dir]

in order to execute the test suite: you can find full installation instructions and sample test cases in the README file.
Kubernetes (K8S) has been a trending topic for a few years now. If you are approaching it, you need a way to test what you're learning - the usual way for beginners is minikube, an out-of-the-box solution that sets up a single-node K8S cluster you can use for learning purposes, typically through virtualization (though deployment on containers and on bare metal is supported too).
If you want to experiment with a production-like multi-node cluster, you have to find another solution - typically you end up using a cloud provider with a free subscription tier, like Okteto, or consuming your free initial credit on something like GKE, EKS, or AKS.
In the past three years I’ve explored another approach, installing a K8S cluster on a group of VMs running on my physical machine.
I initially tried installing and configuring everything I needed from scratch - a very interesting way of learning, but a very annoying way to proceed if you need a running cluster in minutes: you need to choose and install the OS, pick and install a container runtime (e.g. containerd), install the official K8S packages (kubelet, kubeadm), disable swap (for real!), check and fix firewall rules, configure a cgroupdriver, restart a bunch of system daemons, … then you are ready to set up the cluster, starting a primary and at least one worker node, choosing and installing a network plugin, a metrics plugin, an ingress plugin… easy, right?
If you want to play with the cluster without entering the maze of system installation and configuration, the way to go is to set up your cluster through a Kubernetes distribution - something like MicroK8s or K3s.
And if you're an automation addict like me, maybe you end up with a set of parameterizable scripts you can use to set up a cluster with a handful of CLI commands: this is my version of the game - you can try it by opening a terminal and launching
    git clone https://gitlab.com/pietrom/vagrant-microk8s-cluster.git
Configuration (cluster name, VMs' IPs, …) is available by editing variables in the Vagrantfile.
The provided solution works with Vagrant and any virtualization provider supported by the vagrant box generic/ubuntu2004.
Good luck, and enjoy playing with your brand new K8S cluster!
More about the Proxy class introduced by ES6: by providing an apply trap function in the handler passed to Proxy's constructor, we can intercept and modify calls to the target function we pass to the same constructor.
Exploiting this feature, we can e.g. implement a very simple memoize function, which returns a wrapper function that calls the original function and caches its return value, avoiding recalculation - this can be useful when such calculation is time- or money-expensive.
    function memoize(fn) {
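A minimal sketch of such a memoize built on the apply trap (the JSON.stringify cache key is an assumption: it only works for serializable arguments):

```javascript
function memoize(fn) {
  const cache = new Map();
  return new Proxy(fn, {
    // The apply trap intercepts every call to the proxied function
    apply(target, thisArg, argumentsList) {
      const key = JSON.stringify(argumentsList);
      if (!cache.has(key)) {
        cache.set(key, Reflect.apply(target, thisArg, argumentsList));
      }
      return cache.get(key); // cached result, no recalculation
    }
  });
}

let calls = 0;
const slowSquare = memoize(x => { calls += 1; return x * x; });
console.log(slowSquare(4), slowSquare(4), calls); // 16 16 1
```

The second invocation hits the cache, so the wrapped function runs only once.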
The same approach works when the function we need to memoize is a member of an object and accesses sibling members of the object itself during the calculation:
    let container = {
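A sketch of that case: the thisArg parameter of the apply trap preserves access to the object's sibling members (memoize is re-declared here so the snippet is self-contained; all names are illustrative):

```javascript
function memoize(fn) {
  const cache = new Map();
  return new Proxy(fn, {
    apply(target, thisArg, argumentsList) {
      const key = JSON.stringify(argumentsList);
      if (!cache.has(key)) {
        // Forwarding thisArg keeps `this` bound to the containing object
        cache.set(key, Reflect.apply(target, thisArg, argumentsList));
      }
      return cache.get(key);
    }
  });
}

let computations = 0;
const container = {
  factor: 3,
  scale: memoize(function (x) { // a regular function, so `this` is the container
    computations += 1;
    return x * this.factor;     // sibling member accessed during the calculation
  })
};
console.log(container.scale(5), container.scale(5), computations); // 15 15 1
```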
EcmaScript 6 (ES6) introduces the Proxy class, which we can use to straightforwardly implement design patterns like (obviously) proxy, decorator, and similar.
The only thing we need is to create an instance of Proxy, providing the proxied object/function and a handler object containing hook methods (the official docs call them traps) that we can use to intercept and modify the proxied object's behaviour.
For example, we can intercept and modify reads of object properties by providing a get(target, prop, receiver) function in the handler object, or we can intercept and modify calls to a function by providing an apply(target, thisArg, argumentsList) function in the handler object.
The list of the supported traps is available here.
Here we use the set trap to simply implement a readonly factory function, which receives an object and returns a new object wrapping the original behind a read-only proxy:
    function readonly(target) {
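A sketch of such a factory: the set trap swallows every write, and returning true reports the write as "successful", so no TypeError is thrown even in strict mode (names are illustrative):

```javascript
function readonly(target) {
  return new Proxy(target, {
    set(obj, prop, value) {
      return true; // ignore the write, leave the target untouched
    }
  });
}

const config = readonly({ host: 'localhost', port: 8080 });
config.port = 9999;       // silently ignored
console.log(config.port); // 8080
```

A stricter variant could instead throw from the trap to make illegal writes fail loudly.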
I like (automated) testing very much: whether I'm writing C# or Java/Kotlin code, whether I'm studying a new language or that new library, I like to take a test-first approach or, at the very least, cover with tests the code I (or someone else) have already written.
My day-to-day activities typically involve technical stacks that support testing very well: JUnit (for JVM languages), xUnit and NUnit (on the .NET platform), Jasmine, Jest, and Mocha (when I write JavaScript/TypeScript code, whether client- or server-side)… all widely known and used testing frameworks/libraries, with first-class support in IDEs and text editors and CLI-ready runners.
Occasionally (but not too occasionally) though, I need to write some shellish code: typically Bash scripts that automate boring and repetitive tasks: setting up a new Gradle/Maven/whatever-you-want project from scratch, adding one more module to it, cleaning up a codebase by removing generated binaries, and so on.
What about the idea of testing such scripts automatically, or even of developing them according to a test-driven approach?
I have been looking around and experimenting with solutions to this problem: at the very least, what we need is something similar to the CLI runners of the widely adopted testing frameworks I mentioned earlier - a runner that ideally
Surprisingly (but maybe not too surprisingly), it's not particularly difficult to write such a runner script, exploiting the declare command and its ability to list the functions currently defined in the script.
Given that list, we can select (by convention) the functions representing test cases (e.g. functions whose name starts with test_
), executing them and collecting their result (exit code), providing a report to the user.
Finally, the runner exits with zero only when all test cases have been performed successfully.
So, show me the code:
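A minimal sketch of such a runner, discovering test functions via `declare -F` (the sample test cases and all names are illustrative, not the published bash-unit code):

```shell
#!/usr/bin/env bash
# Two sample test cases, following the test_ naming convention.
test_addition () { [ $((2 + 2)) -eq 4 ]; }
test_string ()   { [ "abc" = "abc" ]; }

failures=0
# declare -F prints "declare -f <name>" for every defined function:
# keep only the names matching the convention and run each one.
for fn in $(declare -F | awk '{print $3}' | grep '^test_'); do
  if "$fn"; then
    echo "PASS $fn"
  else
    echo "FAIL $fn"
    failures=$((failures + 1))
  fi
done
echo "failures=$failures"
# A real runner would end with: exit $failures
```

The final exit status is zero only when every test case succeeds, which is exactly what a CI pipeline needs.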
Each test case is implemented through a function:
    test_a () {
Each assertion can be something as simple as an invocation of the test command, as in the previous examples, but it can also be something more complicated: a complex check of the content of a generated file, a query against a database, a ping over the network… any task for which a command exists can back a test case, by formulating an assertion on the command's output or exit status.
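For instance, a test case asserting on a generated file's content might look like this sketch (the file generator is a stand-in for the script under test):

```shell
test_generated_file () {
  local tmp status
  tmp=$(mktemp)
  printf 'version=1.2.3\n' > "$tmp"   # stand-in for the generator under test
  grep -q '^version=' "$tmp"          # the assertion is grep's exit status
  status=$?
  rm -f "$tmp"                        # clean up before reporting
  return $status
}
```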
Here you can find a very, very simple CI/CD pipeline configuration that calls the runner just shown for each push on each branch of the codebase's repository: this way you can adopt a TDD approach, getting feedback from your CI infrastructure.
I recently wrote about implementing in Kotlin a very simple DSL for expressing distances. Here is its Scala version:
    object Main {
I very much like the fluency with which you can define a micro-DSL when you're lucky enough to be writing Kotlin code: e.g., let's implement a very, very simple DSL to express distances:
    fun main() {
You can run the previous code here.
Microservice - the most overused word of the last ten years in software engineering…
We all love microservices - in words if nothing else.
But are we really talking about microservices? Is this new distributed system made up of microservices, or is it a distributed monolith? And what's the difference?
Certainly we have all heard phrases like these:
We could continue, but… you get the point: not everything we regularly call a microservice really is one: maybe they are services, but they are surely not so micro.
Each of us undoubtedly has a very personal, opinionated list of characteristics that a true microservice should not exhibit - this is mine (in increasing order of severity), at least at the time of writing: please, don’t call them “microservices” if