I like (automated) testing very much: whether I'm writing C# or Java/Kotlin code, studying a new language or trying out a new library, I like to take a test-first approach or, at the very least, cover with tests the code I (or someone else) have already written.
My day-to-day activities typically involve technical stacks that support testing very well: JUnit (for JVM languages), xUnit and NUnit (on the .NET platform), Jasmine, Jest, Mocha (when I write JavaScript/TypeScript code, whether client- or server-side)… all of these are widely known and used testing frameworks/libraries, with first-class support in IDEs and text editors and CLI-ready runners.
Occasionally (but not too occasionally) though, I need to write some shell-ish code: typically Bash scripts that automate boring and repetitive tasks, like setting up a new Gradle/Maven/whatever-you-want project from scratch, adding one more module to it, cleaning up a codebase by removing generated binaries, and so on.
What about the idea of testing such scripts automatically, or even of developing them according to a test-driven approach?
I have been looking around and experimenting to find a solution to this problem: at the very least, what we need is something similar to the CLI runners of the widely adopted testing frameworks I mentioned earlier - a runner that, ideally:
- can be launched from the CI/CD pipeline in order to execute all defined test cases
- if one or more test cases fail:
  - returns a non-zero exit code
  - prints a summary of the failed test cases
- requires no changes when a new test case is added to the list
Surprisingly (but maybe not that much), it’s not particularly difficult to write such a runner script, exploiting the declare built-in and its ability to provide the list of the functions currently defined in the script.
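For instance, the following one-liner prints the names of all currently defined functions (declare -F emits one "declare -f <name>" line per function; awk just extracts the name):

```bash
# Print the name of every function currently defined in the shell.
declare -F | awk '{print $3}'
```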
Given that list, we can select (by convention) the functions representing test cases (e.g. functions whose name starts with test_), execute them, collect their results (exit codes), and provide a report to the user.
Finally, the runner exits with zero only when all test cases have been performed successfully.
So, show me the code:
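Here is a minimal sketch of what such a runner can look like; the overall technique (discovering test_ functions via declare and checking their exit codes) is the one just described, while details like the report format and variable names are just illustrative choices:

```bash
#!/usr/bin/env bash

# ... test_ functions are defined (or sourced) here ...

failed=()

# Discover every function whose name starts with "test_" and run it;
# the function's exit status decides whether the test case passed.
for test_case in $(declare -F | awk '{print $3}' | grep '^test_'); do
  if "$test_case"; then
    echo "PASS: $test_case"
  else
    echo "FAIL: $test_case"
    failed+=("$test_case")
  fi
done

# Report a summary and exit non-zero if at least one test case failed.
if (( ${#failed[@]} > 0 )); then
  echo "${#failed[@]} failed test case(s): ${failed[*]}"
  exit 1
fi

echo "All test cases passed."
```

Since the loop discovers test cases by name, adding a new test_ function requires no change to the runner itself, which is exactly the last requirement in the list above.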
Each test case is implemented through a function:
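A minimal example, assuming the script under test is supposed to have generated a build.gradle file (the file name here is purely illustrative):

```bash
# Example test case: the assertion is the exit status of the test command.
test_a () {
  # assume the script under test should have generated a build.gradle file
  test -f build.gradle
}
```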
Each assertion can be something like an invocation of the test command, as in the previous examples, but it can also be something more complicated, like a check on the content of a generated file, a query to a database, a ping over the network… any task for which a command exists can be used to implement a test case, by formulating an assertion on the command’s output or exit status.
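As a couple of purely illustrative examples, an assertion can just as well be a check on a file's content or a network probe; the file, module, and host names below are made up:

```bash
# Assert that the generated settings file references the new module.
test_settings_references_new_module () {
  grep -q 'new-module' settings.gradle
}

# Assert that a host is reachable: the assertion is ping's exit status.
test_host_is_reachable () {
  ping -c 1 example.com > /dev/null
}
```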
Here you can find a very simple CI/CD pipeline configuration that calls the runner just shown on every push to every branch of the codebase’s repository: this way you can adopt a TDD approach, getting feedback from your CI infrastructure.