if [ "$color_prompt" = yes ]; then PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\W\[\033[01;33m\]$(parse_git_branch)\[\033[00m\]\$ ' else PS1='${debian_chroot:+($debian_chroot)}\u@\h:\W$(parse_git_branch)\$ ' fi
I already wrote about testing bash scripts automatically in this post: if you missed it, it can be useful to read it first to get the context.
A step forward
Moving a step forward, when you test your bash scripts you may be interested both in mocking standard input and in spying on standard output (and standard error): here is how.
Spying on standard output and error
You can write a test case that checks an assertion about the produced output simply by exploiting command substitution: collect standard output into a variable (here called output), then make an assertion about its content.
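For example, a minimal test case along these lines (a simple echo stands in for the script under test):

test_spying_stdout () {
  output=$(echo "Hello, bash-unit!")
  echo "$output" | grep "bash-unit" > /dev/null
}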
By simply adding some redirection, you can do the same for standard error:
test_spying_stderr () {
  error=$(cat /proc/crypto no-file.txt 2>&1 > /dev/null)
  echo "$error" | grep "no-file.txt: No such file or directory" > /dev/null
}
Mocking standard input
If you need to implement a test invoking a script that gets input from stdin, either through redirection or through the read command, you can provide mocked input by supplying a hardcoded version of stdin through the <<EOF (here-document) syntax:
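A sketch of this technique (your_command stands for the script under test, and the grep pattern is just a placeholder for your real assertion):

test_mocking_stdin () {
  result=$(your_command <<EOF
first mocked input line
second mocked input line
EOF
)
  echo "$result" | grep "expected output" > /dev/null
}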
you’re instructing the shell to use the text stream provided after the command invocation and before the closing EOF token as stdin: your_command can now read from stdin, getting the hardcoded, well-known input you want to base your test case on.
bash-unit
At this point, I think we can extract a slightly reworked version of the script I showed in the previous post into a reusable tool we can rely on to run our scripts’ test cases. With little imagination, I called it bash-unit and published it here.
Simply put, it allows you to launch
bash-unit [test_cases_dir]
in order to execute the test suite: you can find full installation instructions and sample test cases in the README file.
Kubernetes (K8S) has been a trending topic for a few years now. If you are approaching it, you need a way to test what you’re learning - the usual way for beginners is to use minikube, an out-of-the-box solution that sets up a single-node K8S cluster you can use for learning purposes, typically through virtualization (though container and bare-metal deployments are supported).
If you want to experiment with a production-like multi-node cluster, you have to find another solution - typically you end up using a cloud provider with a free subscription, like okteto, or consuming your initial free credit on something like GKE, EKS, or AKS.
In the past three years I’ve explored another approach, installing a K8S cluster on a group of VMs running on my physical machine.
I initially tried installing and configuring everything I needed from scratch - a very interesting way of learning, but a very annoying way to proceed if you need a running cluster in minutes: you need to choose and install the OS, pick and install a container manager (e.g. containerd), install the official K8S packages (kubelet, kubeadm), disable the swap (for real!), check and fix firewall rules, configure the cgroup driver, restart a bunch of system daemons, … then you are ready to set up the cluster, starting a primary and at least one worker node, choosing and installing a network plugin, a metrics plugin, an ingress plugin… easy, right?
If you want to play with the cluster without entering the maze of system installation and configuration, the way to go is to set up your cluster through a Kubernetes distribution - something like MicroK8s or K3s.
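For instance, with MicroK8s joining a second node roughly boils down to the following (the IP and token in the join command are placeholders printed by add-node):

# on every node
sudo snap install microk8s --classic
# on the primary node: prints a join command containing a one-time token
microk8s add-node
# on each worker node: paste the printed command, something like
# microk8s join 192.168.1.10:25000/<token>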
And if you’re an automation addict like me, maybe you end up with a set of parameterizable scripts you can use to set up a cluster with a handful of CLI commands: this is my version of the game - you can try opening a terminal and launching
git clone https://gitlab.com/pietrom/vagrant-microk8s-cluster.git
cd vagrant-microk8s-cluster
vagrant up mercury0
vagrant up mercury1
vagrant up mercury2
...
Configuration (cluster name, VMs’ IPs, …) is available by editing variables in the Vagrantfile.
The provided solution works with Vagrant and any virtualization provider supported by the generic/ubuntu2004 Vagrant box.
Good luck and enjoy playing with your brand new K8S cluster!
More about the Proxy class introduced by ES6: providing an apply trap function in the handler passed to Proxy‘s constructor, we can intercept and modify calls to the target function we pass to the same constructor. Exploiting this feature, we can e.g. implement a very simple memoize function, which returns a wrapper function that calls the original function and caches its return value, avoiding recalculation - this can be useful when the calculation is time- or money-expensive.
function sum(a, b) {
  console.log(`${a} + ${b}`)
  return a + b
}
sum(10, 20) // 10 + 20
sum(10, 20) // 10 + 20
let mem = memoize(sum)
mem(10, 20) // 10 + 20
mem(10, 20) // no output
sum(100, 200) // 100 + 200
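A minimal memoize producing this behaviour could look like the following sketch, based on the apply trap; the JSON-serialized cache key is a simplifying assumption that only works for plain, serializable arguments:

function memoize(fn) {
  const cache = new Map()
  return new Proxy(fn, {
    apply: function(target, thisArg, argumentsList) {
      // Naive cache key: fine for plain, serializable arguments only
      const key = JSON.stringify(argumentsList)
      if (!cache.has(key)) {
        cache.set(key, Reflect.apply(target, thisArg, argumentsList))
      }
      return cache.get(key)
    }
  })
}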
The same approach works when the function we need to memoize is a member of an object, accessing sibling members of the object itself during the calculation:
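Since the sketch above forwards thisArg through Reflect.apply, it covers this case too; the counter object here is purely illustrative:

const counter = {
  factor: 3,
  scale(x) {
    console.log(`scaling ${x}`)
    return x * this.factor // accesses a sibling member through 'this'
  }
}

counter.scale = memoize(counter.scale)
counter.scale(10) // scaling 10
counter.scale(10) // no output, cached result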
ECMAScript 6 (ES6) introduces the Proxy class, which we can use to straightforwardly implement design patterns like (obviously) proxy, decorator and the like. The only thing we need is to create an instance of Proxy, providing the proxied object/function and a handler object containing hook methods (the official doc calls them traps) that we can use to intercept and modify the proxied object’s behaviour.
For example, we can intercept and modify access to an object’s properties by providing a get(target, prop, receiver) function in the handler object, or we can intercept and modify calls to a function by providing an apply(target, thisArg, argumentsList) function in the handler object.
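For instance, a minimal get trap that logs every property access could look like this (names are illustrative):

const plain = { answer: 42 }
const logged = new Proxy(plain, {
  get: function(target, prop, receiver) {
    console.log(`reading '${String(prop)}'`)
    return Reflect.get(target, prop, receiver)
  }
})

logged.answer // reading 'answer' -> 42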
The list of the supported traps is available here.
Here we use the set trap to implement a simple readonly factory function, which receives an object and returns a new object wrapping the original behind a read-only proxy:
function readonly(target) {
  return new Proxy(target, {
    set: function(target, prop, value) {
      throw `Can't set '${prop}': this is a read-only object!`
    }
  })
}
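Usage could look like this (illustrative):

const config = readonly({ env: 'production' })
console.log(config.env) // 'production'
config.env = 'dev'      // throws: Can't set 'env': this is a read-only object!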
I like (automated) testing very much: whether writing C# or Java/Kotlin code, whether studying a new language or some new library, I like to take a test-first approach or, at the very least, cover with tests the code I’ve (or someone else has) already written.
My day-to-day activities typically involve technical stacks that support testing very well: JUnit (for JVM languages), xUnit, NUnit (on the .NET platform), Jasmine, Jest, Mocha (when I write JavaScript/TypeScript code, whether client- or server-side), … all these are widely known and used testing frameworks/libraries, with first-class support in IDEs and text editors and CLI-ready runners.
Occasionally (but not that occasionally), though, I need to write some shellish code: typically Bash scripts that automate boring and repetitive tasks: setting up a new Gradle/Maven/whatever-you-want project from scratch, adding one more module to it, cleaning up a codebase by removing generated binaries, and so on.
What about the idea of testing such scripts automatically, or even of developing them according to a test-driven approach? I have been looking around and experimenting with solutions to this problem: at the very least, what we need is something similar to the CLI runners of the widely adopted testing frameworks I mentioned earlier - a runner that ideally:
- we can launch from the CI/CD pipeline in order to execute all defined test cases
- if one or more test cases fail:
  - returns a non-zero exit code
  - prints a summary of the failed test cases
- requires no changes if one more test case is added to the list
Surprisingly (but maybe not too much), it’s not particularly difficult to write such a runner script, exploiting a feature of the declare command: its ability to list the functions currently available in the script. Given that list, we can select (by convention) the functions representing test cases (e.g. functions whose names start with test_), execute them, collect their results (exit codes), and provide a report to the user. Finally, the runner exits with zero only when all test cases have completed successfully.
# By convention, test cases are defined in .sh files located in the 'test' directory
# (or its subdirectories)
for t in $(find test -name '*.sh') ; do
  . "$t"
done

# Get all available functions whose name starts with 'test_'
test_cases=$(declare -F | sed 's/declare -f //' | grep '^test_')

total=0
failures=0
failed_test_cases=()

# Execute test cases, tracking
# - the total count of executed test cases
# - the count of failed test cases
# - the names of failed test cases
for tc in $test_cases ; do
  echo "Executing ${tc}..."
  $tc
  if [ $? -ne 0 ] ; then
    failures=$(expr ${failures} + 1)
    failed_test_cases+=(${tc})
  fi
  total=$(expr ${total} + 1)
done

# Print report
echo "Test suite completed: total test cases ran: ${total}. Failures: ${failures}"

if [ $failures -ne 0 ] ; then
  echo "Failed test cases:"
  for ftc in "${failed_test_cases[@]}" ; do
    echo "  ${ftc}"
  done
  # Make the pipeline fail if there are test failures
  exit 1
fi
exit 0
Each test case is implemented through a function:
test_a () {
  return 0 # should pass
}

test_b () {
  test 1 -ge 1 # should pass
}

test_c () {
  test -d /var/tmp # should pass
}

test_d () {
  test -f /etc # should fail
}
Each assertion can be something like an invocation of the test command, as in the previous examples, but it can also be something more complicated: a check on the content of a generated file, a query to a database, a ping over the network… any task for which a command exists can be used to implement a test case, by formulating an assertion on the command’s output or exit status.
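For instance, a test case asserting on the content of a generated file could look like this (the script and file names are hypothetical):

test_report_contains_total () {
  ./generate-report.sh > /dev/null 2>&1
  grep "TOTAL" build/report.txt > /dev/null
}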
Here you can find a very, very simple CI/CD pipeline configuration that calls the runner just shown for each push on each branch of the codebase’s repository: this way you can adopt a TDD approach, getting feedback from your CI infrastructure.