I already wrote about testing bash scripts automatically in this post: if you missed it, reading it first can be useful to get the context.
A step forward
Moving a step forward, when you test your bash scripts you may be interested both in mocking standard input and in spying on standard output (and standard error): here is how.
Spying standard output and error
You can write a test case that checks an assertion about produced output simply by exploiting command substitution: the test case collects standard output (into an output variable) and makes an assertion about its content.
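A minimal sketch of such a test case might look like the following (it reuses /proc/crypto from the stderr example below; the grep pattern is just an assumption about its content):

test_spying_stdout () {
    # Capture the command's standard output through command substitution
    output=$(cat /proc/crypto)
    # Assert on the captured content
    echo "$output" | grep "driver" > /dev/null
}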
By simply adding some redirection, you can do the same for standard error:
test_spying_stderr () {
    error=$(cat /proc/crypto no-file.txt 2>&1 > /dev/null)
    echo "$error" | grep "no-file.txt: No such file or directory" > /dev/null
}
Mocking standard input
If you need to implement a test invoking a script that gets input from stdin, either through redirection or through the read command, you can provide mocked input by supplying a hardcoded version of stdin through the <<EOF (here-document) syntax:
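A minimal sketch (your_command stands for the script under test; the mocked lines and the assertion are placeholders of mine):

test_mocking_stdin () {
    # Feed a hardcoded stdin to the command under test via a here-document
    result=$(your_command <<EOF
first mocked input line
second mocked input line
EOF
)
    # Assert on the output produced from the mocked input
    echo "$result" | grep "expected output" > /dev/null
}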
With this syntax you're instructing the shell to use the text stream provided after the command invocation and before the closing EOF token as stdin: your_command can now read from stdin, getting the hardcoded, well-known input you want to base your test case on.
bash-unit
Having reached this point, I think we can extract a slightly reworked version of the script I showed in the previous post into a reusable tool we can rely on to run our scripts' test cases. With little imagination, I called it bash-unit and published it here.
Simply put, it allows you to launch
bash-unit [test_cases_dir]
in order to execute the test suite: you can find full installation instructions and sample test cases in the README file.
Kubernetes (K8S) has been a trending topic for a few years now. If you are approaching it, you need a way to test what you're learning - the usual way for beginners is to use minikube, an out-of-the-box solution that sets up a single-node K8S cluster you can use for learning purposes, typically through virtualization (but deployment on containers and bare metal is supported too).
If you want to experiment with a production-like multi-node cluster, you have to find another solution - typically you end up using a cloud provider supporting a free subscription, like okteto, or consuming your free initial credit on something like GKE, EKS, or AKS.
In the past three years I’ve explored another approach, installing a K8S cluster on a group of VMs running on my physical machine.
I initially tried installing and configuring everything I needed from scratch - a very interesting way to learn, but a very annoying way to proceed if you need a running cluster in minutes: you need to choose and install the OS, pick and install a container runtime (e.g. containerd), install the official K8S packages (kubelet, kubeadm), disable the swap (for real!), check and fix firewall rules, configure the cgroup driver, restart a bunch of system daemons, … then you are ready to set up the cluster, starting a primary and at least one worker node, choosing and installing a network plugin, a metrics plugin, an ingress plugin… easy, right?
If you want to play with the cluster without entering the maze of system installation and configuration, the way to go is to set up your cluster through a Kubernetes distribution - something like MicroK8s or K3s.
And if you're an automation addict like me, maybe you end up with a set of parameterizable scripts you can use to set up a cluster with a handful of CLI commands: this is my version of the game - you can try it by opening a terminal and launching
git clone https://gitlab.com/pietrom/vagrant-microk8s-cluster.git
cd vagrant-microk8s-cluster
vagrant up mercury0
vagrant up mercury1
vagrant up mercury2
...
Configuration (cluster name, VMs' IPs, …) is available by editing variables in the Vagrantfile.
The provided solution works with Vagrant and a virtualization provider supported by the vagrant box generic/ubuntu2004.
Good luck and enjoy playing with your brand new K8S cluster!!
More about the Proxy class introduced by ES6: by providing an apply trap function in the handler passed to Proxy's constructor, we can intercept and modify calls to the target function we pass to the same constructor. Exploiting this feature, we can e.g. implement a very simple memoize function, which returns a wrapper function that calls the original function and caches its return value, avoiding recalculation - this can be useful when such a calculation is time- or money-expensive.
function sum(a, b) {
  console.log(`${a} + ${b}`)
  return a + b
}
sum(10, 20) // 10 + 20
sum(10, 20) // 10 + 20
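A minimal sketch of a memoize function based on the apply trap might look like this (the JSON-based cache key is my own simplifying assumption):

function memoize(fn) {
  const cache = new Map()
  return new Proxy(fn, {
    apply: function(target, thisArg, argumentsList) {
      // Build a cache key from the arguments (naive but enough for this sketch)
      const key = JSON.stringify(argumentsList)
      if (!cache.has(key)) {
        // Call the original function only on cache miss
        cache.set(key, Reflect.apply(target, thisArg, argumentsList))
      }
      return cache.get(key)
    }
  })
}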
let mem = memoize(sum)
mem(10, 20) // 10 + 20
mem(10, 20) // no output
sum(100, 200) // 100 + 200
The same approach works when the function we need to memoize is a member of an object and accesses sibling members of the object itself during the calculation:
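A sketch of this scenario, reusing the memoize function above (the object and its members are hypothetical examples of mine):

const calculator = {
  factor: 3,
  scale(x) {
    console.log(`scaling ${x}`)
    return x * this.factor // accesses a sibling member through 'this'
  }
}

calculator.scale = memoize(calculator.scale)
calculator.scale(10) // scaling 10
calculator.scale(10) // no output, cached result returned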
ECMAScript 6 (ES6) introduces the Proxy class, which we can use to straightforwardly implement design patterns like (obviously) proxy, decorator, and similar. The only thing we need is to create an instance of Proxy, providing the proxied object/function and a handler object containing hook methods (the official docs call them traps) that we can use to intercept and modify the proxied object's behaviour.
For example, we can intercept and modify access to an object's properties by providing a get(target, prop, receiver) function in the handler object, or we can intercept and modify calls to a function by providing an apply(target, thisArg, argumentsList) function in the handler object.
The list of the supported traps is available here.
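As a quick illustration of the get trap, here is a minimal sketch that logs every property read (the traced function name and the sample object are my own, not from the original post):

function traced(target) {
  return new Proxy(target, {
    get: function(target, prop, receiver) {
      // Log the property access, then delegate to the original object
      console.log(`reading '${prop}'`)
      return Reflect.get(target, prop, receiver)
    }
  })
}

const user = traced({ name: 'Pietro' })
user.name // logs: reading 'name'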
Here we use the set trap to implement a simple readonly factory function, which receives an object and returns a new object wrapping the original one behind a read-only proxy:
function readonly(target) {
  return new Proxy(target, {
    set: function(target, prop, receiver) {
      throw `Can't set '${prop}': this is a read-only object!`
    }
  })
}
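A short usage example of the factory above (the wrapped object is just an illustration of mine):

const config = readonly({ env: 'production' })
console.log(config.env) // 'production' - reads still work
config.env = 'test'     // throws: Can't set 'env': this is a read-only object!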
I like (automated) testing very much: whether I'm writing C# or Java/Kotlin code, or studying a new language or that new library, I like to take a test-first approach or, at the very least, cover with tests the code I have (or someone else has) already written.
My day-to-day activities typically involve technical stacks that support testing very well: JUnit (for JVM languages), xUnit, NUnit (on the .NET platform), Jasmine, Jest, Mocha (when I write JavaScript/TypeScript code, whether client- or server-side), … all of these are widely known and used testing frameworks/libraries, with first-class support from IDEs and text editors and with CLI-ready runners.
Occasionally (but not too occasionally) though, I need to write some shellish code: typically Bash scripts that automate boring and repetitive tasks: setting up a new Gradle/Maven/whatever-you-want project from scratch, adding one more module to it, cleaning up a codebase by removing generated binaries, and so on.
What about the idea of testing such scripts automatically, or even of developing them according to a test-driven approach? I have been looking around and experimenting in search of a solution to this problem: at the very least, what we need is something similar to the CLI runners of the widely adopted testing frameworks I mentioned earlier - a runner that ideally
we can launch from the CI/CD pipeline in order to execute all defined test cases
returns a non-zero exit code if one or more test cases fail
prints a summary of the failed test cases
requires no changes if one more test case is added to the list
Surprisingly (but maybe not too much), it's not particularly difficult to write such a runner script, exploiting the declare command and its ability to provide the list of the functions currently available in the script. Given that list, we can select (by convention) the functions representing test cases (e.g. functions whose name starts with test_), execute them and collect their results (exit codes), providing a report to the user. Finally, the runner exits with zero only when all test cases have completed successfully.
# By convention, test cases are defined in .sh files located in the 'test' directory
# (or its subdirectories)
for t in $(find test -name '*.sh') ; do
    . "$t"
done

# Get all available functions whose name starts with 'test_'
test_cases=$(declare -F | sed 's/declare -f //' | grep '^test_')

total=0
failures=0
failed_test_cases=()

# Executes test cases, tracing
# - the total count of executed test cases
# - the count of failed test cases
# - the names of failed test cases
for tc in $test_cases ; do
    echo "Executing ${tc}..."
    $tc
    if [ $? -ne 0 ] ; then
        failures=$(expr ${failures} + 1)
        failed_test_cases+=(${tc})
    fi
    total=$(expr ${total} + 1)
done

# Prints report
echo "Test suite completed: total test cases ran: ${total}. Failures: ${failures}"

if [ $failures -ne 0 ] ; then
    echo "Failed test cases:"
    for ftc in "${failed_test_cases[@]}" ; do
        echo "  ${ftc}"
    done
    # Makes the pipeline fail if there are test failures
    exit 1
fi
exit 0
Each test case is implemented through a function:
test_a () {
    return 0 # should pass
}

test_b () {
    test 1 -ge 1 # should pass
}

test_c () {
    test -d /var/tmp # should pass
}

test_d () {
    test -f /etc # should fail
}
Each assertion can be something as simple as an invocation of the test command, as in the previous examples, but it can also be something more complicated, like a check on the content of a generated file, a query to a database, a ping over the network… any task for which a command exists can be used to implement a test case, by formulating an assertion on the command's output or exit status.
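For instance, a test case asserting on the content of a generated file might look like this (the script name, the output file and the expected line are hypothetical placeholders, not from the original post):

test_report_contains_total () {
    # Run the script under test, discarding its output
    ./generate-report.sh > /dev/null
    # Assert on the content of the file it is supposed to generate
    grep -q "TOTAL: 42" report.txt
}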
Here you can find a very, very simple CI/CD pipeline configuration that calls the runner just shown for each push on each branch of the codebase's repository: this way you can adopt a TDD approach, getting feedback from your CI infrastructure.
I very much like the fluency with which you can define a micro-DSL when you're lucky enough to write Kotlin code: e.g., let's implement a very, very simple DSL to express distances:
data class Distance(val m: Double) {
    // Operator overloading made simple and OO-friendly:
    // nothing to do with the similar, static-method based C# feature!
    operator fun plus(that: Distance) = Distance(this.m + that.m)
}
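To get the fluent, DSL-like feeling, we can build extension properties on top of Distance - a sketch of mine, assuming meters and kilometers as units (the names m and km are not necessarily those of the original post):

// Hypothetical extension properties turning plain numbers into distances
val Int.m: Distance get() = Distance(this.toDouble())
val Int.km: Distance get() = Distance(this.toDouble() * 1000)

fun main() {
    val total = 3.km + 500.m
    println(total) // Distance(m=3500.0)
}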
Microservice - the most overused word of the last ten years in software engineering… We all love microservices - in words if nothing else. But are we really speaking about microservices? Does this new distributed system really consist of microservices, or is it a distributed monolith? And what's the difference?
Certainly we have all heard phrases like these:
Please, create a microservice that does this and that and that and that… and that…
Well, here you need two microservices: the first one should create this table and read from it, the second one should consume that message and write its content into the same table…
Let's deploy these five microservices in order to make the new feature available: beware that you should deploy service A first, then B and C, then D, and only when the first four services have been deployed will you be able to deploy service E…
We could continue, but… you get the point: not all of what we regularly call microservices really are microservices: maybe they are services, but for sure they are not so micro. Each of us undoubtedly has a very personal, opinionated list of characteristics that a true microservice should not exhibit - this is mine (in increasing order of severity), at least at the time of writing: please, don't call them "microservices" if
they are developed by the same team
they are forced to share the same tech stack
they are forced to share libraries, e.g. for contract definitions
they are forced to be updated/deployed at the same time
they share code (not as external libraries), even infrastructural code
they share code implementing business logic (a special case of the previous one, but the most dangerous)
they share the database (coupling in space)
they call each other through synchronous APIs (coupling in time)
the delivery of a new feature always requires coordinated changes to more than one service (functional coupling)
In my daily work I usually deal with a large number of terminal windows or tabs; I find it convenient to have a way to distinguish them from one another at a glance, so I looked for a way to automatically change their background color when the terminal starts. Different terminals (e.g. Terminator vs XFCE4-Terminal vs…) support different color schemes and enable different options, but I finally found a bash-only solution, which works fine whatever terminal I use.
The solution
Bash supports changing the background color through special output sequences: writing such a sequence to the terminal can be used, for instance, to set the background color to #ff0000, #0000ff, or #000000.
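Based on the echo invocation used in the full script below, such sequences look like this (a sketch; it relies on the same escape sequence the script uses):

echo -ne "\e]11;#ff0000\a"   # red background
echo -ne "\e]11;#0000ff\a"   # blue background
echo -ne "\e]11;#000000\a"   # black background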
So, all I need is a way to
choose a (background) color based on TTY id
apply the chosen color to the background through a command like those above
do both things every time a new bash instance is launched
The first problem can be solved through the tty command, which outputs something like
$ tty
/dev/pts/3
So I can obtain the tty number by executing tty | tr -d [a-zA-Z/]. Given this number, I can select a color from an array and then use it to change the background.
Adding to my PATH a script named change-background-color and calling it from .bashrc allows the background color to be chosen automatically whenever I open a bash instance.
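The hook in .bashrc can be as simple as the following line (a sketch, assuming the script is already on the PATH and executable):

# in ~/.bashrc
change-background-color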
Full code (with explanatory comments)
Bonus: my final implementation of the background color changing script allows two different usages:
you can simply issue change-background-color, cyclically choosing the color from a finite set depending upon the tty number, or
you can provide a color symbolic name as parameter, using something like change-background-color red or change-background-color olive.
#!/bin/bash
# change-background-color - this file is supposed to be on the $PATH

# Declares known colors as an associative array
declare -A colorsByName
colorsByName[red]=550000
colorsByName[black]=000000
colorsByName[blue]=000066
colorsByName[gray]=333333
colorsByName[purple]=440044
colorsByName[sugar]=004444
colorsByName[olive]=444400
colorsByName[green]=005500
colorsByName[brick]=773311
colorsByName[azure]=4444ff
colorsByName[orange]=c0470e
colorsByName[lightgray]=666666

# Turns known colors into an index-based array, too
declare -a colorsByIndex
n=0
for key in "${!colorsByName[@]}" ; do
    colorsByIndex[$n]=${colorsByName[$key]}
    n=$(expr $n + 1)
done

if [ $# -eq 1 ] ; then
    # Gets color by name
    color=${colorsByName[$1]}
else
    theTty=$(tty | tr -d [a-zA-Z/])
    # Calculates color index as tty_number % known_colors_count
    i=$(expr ${theTty} % ${#colorsByName[@]})
    # Gets color by index
    color=${colorsByIndex[${i}]}
fi

if [ -n "$color" ] ; then
    echo -ne "\e]11;#${color}\a"
fi
Now the only remaining question is choosing a set of colors that don't offend the eye… ;-)
I like to define a simple range function in my JavaScript projects, in order to easily adopt a functional approach when I need to work with integer intervals:
function range(start, end) {
  const right = end || start
  const left = end && start || 0
  return Array.from({length: right - left}, (x, i) => left + i)
  // Alternative implementation:
  // return Array(right - left).fill(0).map((x, i) => left + i)
}
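A couple of usage examples (my own illustration of the two supported call styles):

range(5)    // [0, 1, 2, 3, 4]
range(3, 7) // [3, 4, 5, 6]
range(3, 7).map(x => x * x) // [9, 16, 25, 36]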