Distances micro-DSL - the Scala version

I recently wrote about implementing in Kotlin a very simple DSL for expressing distances. Here is its Scala version:

object Main {
  import Distance._

  def main(args: Array[String]): Unit = {
    val marathon = 42.km + 195.m + 30.cm
    println("Marathon " + marathon)
  }
}

case class Distance(m: Double) {
  def +(that: Distance) = Distance(this.m + that.m)
}

object Distance {
  implicit class IntDistanceExtension(val value: Int) {
    def m = new Distance(value.toDouble)

    def km = new Distance(value.toDouble * 1000)

    def cm = new Distance(value.toDouble / 100)
  }
}

The beauty of Kotlin - Episode 0

I very much like the fluency with which you can define a micro-DSL when you’re lucky enough to write Kotlin code: e.g., let’s implement a very, very simple DSL to express distances:

fun main() {
    val marathon = 42.km + 195.m + 30.cm
    println("Marathon = ${marathon}")
}

// Extension properties: Java/C# developer, can you exploit any similar feature? ;-)
val Int.km: Distance
    get() = Distance(this.toDouble() * 1000)

val Int.m: Distance
    get() = Distance(this.toDouble())

val Int.cm: Distance
    get() = Distance(this.toDouble() / 100)

data class Distance(val m: Double) {
    // Operator overloading made simple and OO-friendly:
    // nothing to do with C#'s similar, static-method based feature!
    operator fun plus(that: Distance) = Distance(this.m + that.m)
}

You can run the previous code here.

Please, don't call them microservices

Microservice - the most overused word of the last ten years in software engineering…
We all love microservices - in words if nothing else.
But are we really speaking about microservices? Is this new distributed system made of microservices, or is it a distributed monolith? And what’s the difference?

Certainly we have all heard phrases like these:

  • Please, create a microservice that does this and that and that and that… and that…
  • Well, here you need two microservices: the first one should create this table and read from it, the second one should consume that message and write its content into the same table…
  • Let’s deploy these five microservices in order to make the new feature available: beware that you should deploy service A first, then B and C, then D, and only when the first four services have been deployed will you be able to deploy service E…

We could continue, but… you get the point: not everything we regularly call a microservice really is one: maybe they are services, but they are surely not so micro.
Each of us undoubtedly has a very personal, opinionated list of characteristics that a true microservice should not exhibit - this is mine (in increasing order of severity), at least at the time of writing: please, don’t call them “microservices” if

  1. they are developed by the same team
  2. they are forced to share the same tech stack
  3. they are forced to share libraries, e.g. for contract definitions
  4. they are forced to be updated/deployed at the same time
  5. they share code (not as external libraries), even infrastructural one
  6. they share code implementing business logic (a special case of the previous one, but the most dangerous)
  7. they share the database (coupling in space)
  8. they call each other through synchronous APIs (coupling in time)
  9. the delivery of a new feature always requires coordinated changes to more than one service (functional coupling)
  10. … … …

Automatically customize your terminal's background color

The problem

In my daily work I usually deal with a large number of terminal windows or tabs; I feel it’s convenient to have a way to distinguish them one from the other at a glance, so I looked for a way to automatically change their background color when terminal starts.
Different terminals (e.g. Terminator vs XFCE4-Terminal vs…) support different color schemes and enable different options, but I finally found a bash-only solution, which works fine whatever terminal I use.

The solution

Bash supports changing background color through special output sequences: something like

echo -ne "\e]11;#ff0000\a"
echo -ne "\e]11;#0000bb\a"
echo -ne "\e]11;#000000\a"

can for instance be used to set the background color to #ff0000, #0000bb, or #000000.
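These sequences can be wrapped in a tiny helper; a minimal sketch (the set_bg name is my own invention, and printf is used instead of echo -ne for portability):

```shell
# Hypothetical helper: emit the OSC 11 sequence for a given hex color
set_bg () {
    printf '\033]11;#%s\007' "$1"
}

set_bg ff0000   # red background
set_bg 000000   # back to black
```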

So, everything I need is a way to

  • choose a (background) color based on TTY id
  • apply the chosen color to background through a command like those above
  • do both things every time a new bash instance is launched

The first problem can be solved through the tty command, which outputs something like

$ tty
/dev/pts/3

So I can obtain the tty number by executing tty | tr -d 'a-zA-Z/'.
Given this tty number, I can select a color from an array, then use it to change background.

Adding to my path a script named change-background-color and calling it in .bashrc allows the background color to be chosen automatically whenever I open a bash instance.

Full code (with explanatory comments)

Bonus: my final implementation of the background color changing script allows two different usages:

  • you can simply issue change-background-color, cyclically choosing the color from a finite set depending upon the tty number, or
  • you can provide a color symbolic name as parameter, using something like change-background-color red or change-background-color olive.
# .bashrc
change-background-color

# change-background-color (provided the file is in the $PATH)
#!/bin/bash
# Declares known colors as an associative array
declare -A colorsByName
colorsByName[red]=550000
colorsByName[black]=000000
colorsByName[blue]=000066
colorsByName[gray]=333333
colorsByName[purple]=440044
colorsByName[sugar]=004444
colorsByName[olive]=444400
colorsByName[green]=005500
colorsByName[brick]=773311
colorsByName[azure]=4444ff
colorsByName[orange]=c0470e
colorsByName[lightgray]=666666

# Turns known colors into an index-based array, too
declare -a colorsByIndex
n=0
for key in "${!colorsByName[@]}" ; do
    colorsByIndex[$n]=${colorsByName[$key]}
    n=$(expr $n + 1)
done

if [ $# -eq 1 ] ; then
    # Gets color by name
    color=${colorsByName[$1]}
else
    theTty=$(tty | tr -d 'a-zA-Z/')
    # Calculates color index as tty_number % known_colors_count
    i=$(expr $theTty % ${#colorsByName[@]})
    # Gets color by index
    color=${colorsByIndex[${i}]}
fi

if [ -n "$color" ] ; then
    echo -ne "\e]11;#${color}\a"
fi

Now the only remaining question is choosing a set of colors that don’t offend the eye… ;-)


Implement a range function in JavaScript

I like to define a simple range function in my JavaScript projects, in order to easily adopt a functional approach when I need to work with integer intervals:

function range(start, end) {
    const right = end || start
    const left = end && start || 0
    return Array.from({length: right - left}, (x, i) => left + i)
    // Alternative implementation:
    // return Array(right - left).fill(0).map((x, i) => left + i)
}

Basic usage:

range(0, 10)
// [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

range(5, 10)
// [5, 6, 7, 8, 9]

range(7)
// [0, 1, 2, 3, 4, 5, 6]
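The payoff comes when combining the result with the standard array combinators; a small usage sketch (range repeated here so the snippet is self-contained):

```javascript
function range(start, end) {
    const right = end || start
    const left = end && start || 0
    return Array.from({length: right - left}, (x, i) => left + i)
}

// Derived arrays without explicit loops
const squares = range(1, 5).map(x => x * x)
console.log(squares) // [1, 4, 9, 16]

const total = range(1, 11).reduce((acc, x) => acc + x, 0)
console.log(total) // 55
```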

Please, don't use I in interface names

The context

People working in software development share a variety of idiosyncrasies about whatever they do when coding: they have very - very - strong opinions and habits about code formatting, naming conventions, codebase organization… and they love to “fight” about them.

It is often a question of development platform and community: for example, Java and C# are very similar languages (each one adding, day after day, features inspired by the other’s) but Java and C# code tends to be formatted in slightly different ways with regard to the placement of braces.

Although I know people who make a religious war out of brace placement, I’m convinced it’s only a matter of aesthetics - so imho you can format your code as you prefer: as long as you are consistent throughout the entire codebase, your choice is the right choice.

The issue

On the other hand, there is a community-related convention habit - or better, a set of similar community-related convention habits - regarding interface naming that I really dislike: people living and working in the .Net ecosystem, Microsoft followers, raised on bread and C#, tend to name interfaces starting with I: IPeopleRepository, IAuthenticationProvider, IThis and IThat.

It’s just another convention, you can say, as harmless as widespread: as long as you are consistent throughout the entire codebase, you can do as you prefer.

I totally disagree with the last sentence above: I think calling interfaces by I is not (only) a convention (sure it is!), but is a design error, too.
Some might say: But the .Net standard library adopts and promotes this convention. So what? Is it fair because everyone does it? Is it fair because Microsoft does it? I think an error is an error, no matter how many people - or who - commit it. But let me clarify why I think this is a very poor design choice (naming is design, isn’t it?).

Violating DRY

Code like

public interface IPeopleRepository {
...
}

violates the DRY - Don’t Repeat Yourself - principle: indeed, if you change your code, moving this architectural component from interface to class, you have two things to change: the language keyword and the type name (so you must change the type name throughout all its usages, too).

Violating SRP

Code like

public interface IPeopleRepository {
...
}

violates the SRP - Single Responsibility Principle: indeed, the type name has two reasons to change: you need to change it if you change its semantics (say you prefer renaming it to [I]PersonRepository) and you need to change it if you want to move from interface to class.

Poor modelling

public class List : IList {
...
}

is perhaps the poorest and most inadequate naming choice I’ve ever seen in my almost twenty years of experience as a programmer; it communicates nothing about the differences between interface and implementation:

  • what is the peculiarity of List as an implementation of IList?
  • are there other implementations of IList in the .Net standard library? How do they differ from List?

The Java way of naming things here is undoubtedly better and full of information: the name of the interface, List, describes the role played in the code by objects referred to by a List reference; ArrayList, LinkedList, and so on describe which implementation flavour the specific class is based on (e.g. giving programmers information about the time- and memory-related behaviour of instances).

I think this is the way we should name things in our code: I like to name interfaces trying to describe the abstract role played by runtime instances, and classes trying to describe the concrete implementation choices adopted: e.g. I prefer SqlServerPeopleRepository : PeopleRepository over PeopleRepository : IPeopleRepository, HttpWeatherForecastGateway : WeatherForecastGateway over WeatherForecastGateway : IWeatherForecastGateway, and so on.
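As a sketch of this naming style in Java (all names are hypothetical): the interface describes the abstract role, the class name describes the implementation choice:

```java
import java.util.List;

// The interface names the role played by its instances...
interface PeopleRepository {
    List<String> findAllNames();
}

// ...while each class names the concrete implementation choice
class InMemoryPeopleRepository implements PeopleRepository {
    public List<String> findAllNames() {
        return List.of("Ada", "Alan");
    }
}
```

A SqlServerPeopleRepository or a CachingPeopleRepository would then read naturally alongside it, each name carrying real information.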

Breaking uniformity

So far I have discussed the reasons why you shouldn’t prefix interface names with I from a design-related point of view.

But conventions are only conventions (I disagree about this specific habit, as I said above, but let’s face it), even when they seem like design errors, and whatever convention you adopt, you should adopt it evenly throughout your codebase: uniformity is a widely accepted best practice, when it comes to formatting convention, naming convention, code organization, …

So: you name interfaces starting with I and classes starting with C, don’t you? And enumerations (wait: you don’t use enums in C#, do you?) starting with E.
And you name variables starting with prefixes remarking their type: s for strings, i for ints, d for doubles…

public class CMyClass : IMyInterface {
    public void Foo(string sFirstParam, int iSecondParam, DateTimeOffset dtoThirdParam) {
        ...
    }
}

Are you naming things this way? No? Only interfaces starting with I? Ok, you’re breaking uniformity, adopting a partial convention (and partial conventions aren’t conventions at all).

Conclusion

So: if you think design principles are an important and useful driving force when you’re writing code; if you think names should communicate something to the reader, and differences in names should communicate differences between things, allowing the reader to easily understand the moving parts of your code; if you think uniformity is a good quality of a codebase, even when it comes to adopting conventions… you should not use I as a mandatory, common prefix for interface names.

Bonus track

The same considerations surely apply to other similar naming conventions: many (Java) frameworks suggest naming your interfaces and implementations MyService and MyServiceImpl; many people are used to naming AbstractSomething their implementation of an [I]Something interface; others like to have asynchronous method names end with Async - and so on.
Whenever you include a syntactical detail (I for interfaces, C or Impl for classes, Abstract for abstract things) in a name, you’re facing a variant of the discussed problem: you’re violating DRY (repeating the syntactical detail both in syntax and name), you’re violating SRP (giving at least two responsibilities to the name), you’re likely to adopt an incoherent naming convention, and you’re modelling your domain the wrong way.

Please, don't use enums in C#

Enumerated types

Enumerated (discrete) types are a powerful modelling tool for software developers: they allow them to explicitly state all and only the permitted values a variable can hold, with the guarantee that

  • no invalid values can be passed into a function, and
  • conditionals (switch/case or pattern-matching based) depending on enumerated types can be recognized as exhaustive by compilers.

This is strictly true for the enums you can define in languages like Scala (sealed traits and case objects), Kotlin (enums) or even the old, mistreated Java (enums), but it is only an unkept, misleading promise for C#’s enums.

The problem (or “The C# way to enums”)

Defining an enum in C# is indeed only syntactic sugar you can leverage to define related, “namespaced” integer constants:

public enum Ordinal {
    First = 1, Second = 2, Third = 3
}

is in essence only a shortcut for

public static class Ordinal {
    public const int First = 1;
    public const int Second = 2;
    public const int Third = 3;
}

I’m not saying the compiler produces the same output - I’m saying in both cases you can refer to something like Ordinal.Second in order to get an int constant whose value is 2.

Issue #1

No way to define a method, say

void DoSomething(Ordinal o) {
    Console.WriteLine($"Ordinal value is {o:D}");
}

that prevents callers from passing invalid values in:

DoSomething(Ordinal.First);
DoSomething((Ordinal)500);

is definitely valid code (from the compiler’s point of view) producing the following output:

Ordinal value is 1
Ordinal value is 500 // WTF??? Value not present in Ordinal declaration...

Issue #2

No way to rely on the compiler to check the exhaustiveness of conditionals: you can indeed write

public int Foo(Ordinal o) => o switch {
    Ordinal.First => 1,
    Ordinal.Second => 2,
    Ordinal.Third => 3,
};

but the compiler gives you a warning like The switch expression does not handle all possible inputs (it is not exhaustive), even though all the values defined by the enum are explicitly treated; in order to avoid this inappropriate warning you must add a fourth, never used branch to the switch:

_ => throw new Exception("Unexpected value")

(or you can return a special value, if you like code smells ;-)…).

So, C#’s enums are syntactic sugar for int constants, and defining a method parameter of type Ordinal is no different from defining it of type int (yes, you can define an enum having byte or long or ${other integral type} as its underlying representation (see here), but… you get the idea).

From a modelling point of view, C#’s enums are a very poor feature, which does not allow developers to define true enumerated (discrete) types: so… please, don’t use them, or at least don’t use them as if they were real enumerated types.

The right way

The right way to model enumerated/discrete types in C# is imho to adopt a pattern I first read about in 2004, in the enlightening book Hardcore Java:

public sealed class Ordinal {
    public int Value { get; }
    private Ordinal(int value) { Value = value; }
    public static readonly Ordinal First = new Ordinal(1);
    public static readonly Ordinal Second = new Ordinal(2);
    public static readonly Ordinal Third = new Ordinal(3);
}

This does not solve the problem of exhaustiveness checking, but it models a true discrete type, allowing only the intended values to be used where Ordinal parameters are required.

A variant of this pattern, based on inheritance, allows developers to attach polymorphic behaviour to enumeration cases.

Bonus (or “The Java way to enums”)

By the way, this is how enum implementations in Java (since 2004!!!) and Kotlin work: they are essentially syntactic sugar for the pattern above, allowing true enumerated/discrete type modelling:

public enum Ordinal {
    First {
        @Override
        void doSomething() {
            // Behaviour for case First
        }
    }, Second {
        @Override
        void doSomething() {
            // Behaviour for case Second
        }
    }, Third {
        @Override
        void doSomething() {
            // Behaviour for case Third
        }
    };

    abstract void doSomething();
}

Java supports exhaustiveness checking out of the box, too:

void doSomething(Ordinal o) {
    var value = switch (o) { // compiler's error: 'switch' expression does not cover all possible input values
        case First -> 1;
        case Second -> 2;
    };
}

void doSomething(Ordinal o) {
    var value = switch (o) { // no errors, no warning
        case First -> 1;
        case Second -> 2;
        case Third -> 3;
    };
}

LSP: an opinionated discussion

Liskov’s Substitution Principle (LSP for friends) is one of the five SOLID Principles - maybe the most misunderstood.

According to Wikipedia, it states that
Let P(x) be a property provable about objects x of type T. Then P(y) should be true for objects y of type S where S is a subtype of T.

More informally, the idea behind this principle is that we should not violate the contract published by the T supertype when we use or extend it.

I think it’s worth analyzing this idea deeply, in order to explain both classical and less trivial ways to violate the principle.

Generally speaking, we can try to classify LSP violations into three main classes:

  • Bad Client: the principle is violated due to the usage of the supertype
  • Bad Child: the principle is violated due to a crooked subtype implementation
  • Poor Modelling: the principle is violated due to the usage of a (general) type to model (less general) domain concepts

So, let’s show many examples of violations belonging to the three classes.

Bad Client

The first example of LSP violation I would like to talk about is a classical one: a bad client of a type hierarchy can break LSP by downcasting a reference to a specific, hardcoded subtype:

public <T> T lastElementOf(Collection<T> input) {
    var theList = (List<T>)input;
    return theList.isEmpty() ? null : theList.get(theList.size() - 1);
}

Callers of the method lastElementOf believe they can invoke it passing any instance of any concrete implementation of the Collection interface, but calls passing anything other than instances of types implementing the List subinterface will fail systematically: lastElementOf is a bad client for the Collection type hierarchy, because not all of Collection’s subtypes are fully substitutable for the supertype when it comes to invoking the method.
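For contrast, here is a sketch of a well-behaved client: it relies only on what Collection actually promises (iteration), so every subtype remains substitutable:

```java
import java.util.Collection;

class GoodClient {
    // Works for any Collection implementation: sets, lists, queues...
    static <T> T lastElementOf(Collection<T> input) {
        T last = null;
        for (T element : input) {
            last = element;
        }
        return last; // null when the collection is empty, like the original
    }
}
```

The price is linear time, but a generic Collection never promised positional access in the first place.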

A subtle variation of this LSP violation, about which I have already written here, involves two unrelated interfaces: here the cast assumes that the actual parameter type implements both interfaces, breaking BadInterfaceDowncastingClient’s contract - the method below is therefore a bad client for the FrontEndContext interface.

public interface FrontEndContext {}

public interface BackEndContext {}

public class MyContext : FrontEndContext, BackEndContext {}

public class ABoundaryService {
    public void BadInterfaceDowncastingClient(FrontEndContext ctx) {
        var context = (BackEndContext)ctx;
        doSomethingWith(context);
    }
}

It must be said that LSP violations of the bad client class are not very common in code written by experienced developers (though I once found something very similar to the last example in code written by a self-styled software architect).

Bad Child

The second class of LSP violations worth mentioning is the one I like to call bad children: the violation consists in a subtype badly implementing the contract stated by the supertype.
The typical example of this class of violations is that of a Square class extending Rectangle in a way that violates some supertype invariant (e.g. the idea that width and height can be changed independently), leading to surprising behaviour.
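That textbook case can be sketched in a few lines of Java: Square preserves its own invariant (width == height) by silently breaking the supertype’s implicit promise that the two dimensions vary independently:

```java
class Rectangle {
    protected int width, height;
    void setWidth(int w) { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    // Keeping width == height surprises clients written against Rectangle
    @Override void setWidth(int w) { width = w; height = w; }
    @Override void setHeight(int h) { width = h; height = h; }
}
```

A client holding a Rectangle reference that sets width to 2 and height to 3 expects area 6, but gets 9 when the instance is actually a Square.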

A less didactic and more realistic example is the following, where the InMemoryBin<T> implementation of the Bin<T> interface subtly breaks the contract of addForever(T item):

public interface Bin<T> {
    void addForever(T item);
}

public class InMemoryBin<T> implements Bin<T> {
    private static final int MAX_SIZE = 50;
    private int currentIndex = -1;
    @SuppressWarnings("unchecked")
    private T[] items = (T[]) new Object[MAX_SIZE];

    public void addForever(T item) {
        currentIndex = (currentIndex + 1) % MAX_SIZE;
        items[currentIndex] = item;
    }
}

The method required by the interface clearly requires added elements to be kept forever, but the implementation uses a capped data structure to store references to the added items. So, when a client adds the (MAX_SIZE+1)-th item to the InMemoryBin, the first item added disappears from the collection: InMemoryBin.addForever is not really forever, and the described class acts as a bad child for the Bin supertype, hence it is not fully substitutable for it.
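A sketch of an honest child (ListBackedBin is a hypothetical name; the interface is repeated to keep the snippet self-contained): an unbounded backing store actually keeps the addForever promise:

```java
import java.util.ArrayList;
import java.util.List;

interface Bin<T> {
    void addForever(T item);
}

// An unbounded backing store: no item is ever silently dropped
class ListBackedBin<T> implements Bin<T> {
    private final List<T> items = new ArrayList<>();

    public void addForever(T item) {
        items.add(item);
    }

    public int size() {
        return items.size();
    }
}
```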

A third way to violate LSP when writing a subtype of an interface or a superclass is to implement a method misrepresenting its intended purpose: the classic example is that of a class implementing the toString() method (better: overriding the Object.toString() base method) to construct not only a textual representation of an object, but a business-meaningful one.
The toString() method is generally intended as a way to describe an object for logging and debugging purposes, but it’s not uncommon to find code like the following, which overrides and uses it to implement some functional requirement:

public class SqlQuery {
    public SqlQuery(String tableName) { ... }
    public void addStringFilter(String fieldName, String operator, String value) { ... }
    public void addIntFilter(String fieldName, String operator, int value) { ... }
    ...
    @Override
    public String toString() { // Maybe the method should be named 'buildSql()' or 'toSql()'?
        return "select * from " + tableName + " where " + buildWhereClause();
    }
}

I wrote that the toString() method is generally intended as a way to describe an object for logging and debugging purposes, and sure, you can object that this is a very opinionated sentence. No doubt in part it is, but… what about the name of the method? It is toString, not toSql nor something like toHtml or toUiMessage: this method is intended to generate a String representation of an object, and String is a very unstructured, general-purpose concept. About the idea of representing Strings with a specific structure by defining custom types, please read the next section - the same reasoning applies to the choice of method names; in one sentence: if the method name promises a String-returning implementation, you should return a real String, with all its invariants… and a Sql query definitely isn’t one.

Sadly, this nuance of LSP bad child violation is a very common one, even in code written by experienced developers.

Poor Modelling

So far, so good.
The last class of LSP violations I think it is interesting to talk about is a bit different from bad client and bad child, since it does not involve any subclassing: the violation resides in the misuse of an existing (usually very general-purpose) type from a modelling point of view: let me call it poor modelling.

This may seem like a provocation, and it certainly is in part, but I think that whenever you are using a general-purpose type (typically: String) to represent data like email addresses or credit card numbers all around your code… you’re violating the Liskov Substitution Principle - if not in its formal definition, at least in its general meaning.

Representing an email address as a String, without defining a dedicated EmailAddress type ensuring the invariants that should hold for such a value, is not only a naive modelling error (from the point of view of domain driven design you should have no doubt about this); it’s not only very uncomfortable and error prone (what about mistakenly swapping two String values, the first one representing an email address and the second one holding a credit card number?); it violates the contract of the String class, too, because the very general-purpose String is intended to exhibit behaviours (invariants) that are simply not valid (they are conversely wrong!) for an email address (or a credit card number).
If you are not completely convinced: what about concatenating two Strings? Is the resulting value still a valid String? Of course it is!! Can the same be said about concatenating two email addresses? What about keeping only the first ten characters of an existing String? It results in a valid String, of course, but the same is in general not true for a part of an email address.

So… you should model email addresses and credit card numbers (and user IDs and VAT codes and Sql queries and… well, you got the point) not only to be a good DDDer, nor just to let the compiler statically help you avoid errors using those values. You should not use unwrapped general-purpose types to represent your domain’s concepts, if only to respect the LSP’s spirit: not only subtypes, but also values should be fully substitutable for the super (or general-purpose) type; if your values are subject to restrictions (in value domain or in behaviour/invariants) with respect to the chosen general-purpose type, you are in my humble opinion violating LSP due to poor modelling.
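To sketch the idea (with a deliberately naive validation rule, just for illustration): a dedicated wrapper type checks its invariants once, at construction time, and exposes none of String’s inappropriate operations:

```java
// Hypothetical minimal wrapper: the invariant lives in one place
final class EmailAddress {
    private final String value;

    EmailAddress(String value) {
        // Deliberately naive check, just to show the idea
        if (value == null || !value.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+")) {
            throw new IllegalArgumentException("Not a valid email address: " + value);
        }
        this.value = value;
    }

    String asString() {
        return value;
    }
}
```

No concat, no substring: the operations that make sense for a String but not for an email address are simply not available.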

"Refactoring" a constant-time method into linear time

Recently, I had the (dis-)pleasure to stumble upon a coding horror created by a colleague of mine. When I told Pietro about it, he graciously asked me to write a post about it. So, here we go!

If you know Java and haven’t lived with your head under a rock for the past 7-odd years, you surely know about streams. We all know and love streams, right? Well, what I love even more than streams is applying my judgement and thinking whether it is or isn’t a good idea to use one.

Take this simple and innocent-looking piece of code, for example:

int lastElement(int[] array) {
    if (array.length == 0) {
        throw new RuntimeException("Array is empty");
    }
    return array[array.length - 1];
}

It doesn’t get any simpler than that.

But if you just want to use streams everywhere, you might be tempted to convert it as follows:

int lastElement(int[] array) {
    return Arrays.stream(array)
        .reduce((first, second) -> second)
        .orElseThrow(() -> new RuntimeException("Array is empty"));
}

Spot the difference? You just converted a constant-time array access into a linear-time scan!

This is not necessarily an issue with streams (the same coding horror can be achieved with a good old-fashioned for loop, of course), but it just serves to prove that:

  • applying your judgement is better than blindly using the new shiny API
  • it is important to always consider the complexity (both in time and space) of your code.
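To be fair to streams: when the operation genuinely has to visit every element, linear time is inherent and a stream costs nothing in complexity; a hypothetical counter-example:

```java
import java.util.Arrays;

class StreamOk {
    // Linear time is inherent here, so a stream adds no complexity penalty
    static int sumOfSquares(int[] array) {
        return Arrays.stream(array).map(x -> x * x).sum();
    }
}
```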

Needless to say, the pull request that contained this change was NOT approved!

Functional shell: a minimal toolbox

I already wrote a post about adopting a functional programming style in Bash scripts. Here I want to explore how to build a minimal, reusable functional toolbox for my bash scripts, avoiding redefinition of base functional bricks whenever I need them.

So, in short: I wish I could write scripts (say use-functional-bricks.sh) like the following

#!/bin/bash
double () {
    expr $1 '*' 2
}

square () {
    expr $1 '*' $1
}

input=$(seq 1 6)
square_after_double_output=$(map "square" $(map "double" $input))
echo "square_after_double_output $square_after_double_output"

sum() {
    expr $1 '+' $2
}

sum=$(reduce 0 "sum" $input)
echo "The sum is $sum"

referring to “globally” available functions map and reduce (and maybe others, too) without re-writing them everywhere they are needed and without being bound to external script invocation.

The way I think we can solve the problem relies on three interesting features available in bash:

  • export functions from scripts (through export -f)
  • execute scripts in the current shell’s environment, through the source command
  • execute scripts when bash starts

So I wrote the following script (say functional-bricks.sh):

#!/bin/bash
map () {
    f=$1
    shift
    for x
    do
        $f $x
    done
}
export -f map

reduce () {
    acc=$1
    f=$2
    shift
    shift
    for curr
    do
        acc=$($f $acc $curr)
    done
    echo $acc
}
export -f reduce

and added the following line at the end of my user’s ~/.bashrc file:

1
. ~/common/functional-bricks.sh

and… voilà!: now map and reduce implemented in functional-bricks.sh are available in all my bash sessions - so I can use them in all my scripts!
And because seeing is believing… if I launch the script use-functional-bricks.sh defined above, I get the following output:

square_after_double_output 4
16
36
64
100
144
The sum is 21
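The toolbox invites further bricks; for instance, a hypothetical filter written in the same style (not part of the original scripts) could be added to functional-bricks.sh:

```shell
#!/bin/bash
# Keeps only the values for which the predicate command succeeds
filter () {
    p=$1
    shift
    for x
    do
        $p $x && echo $x
    done
}
export -f filter

# Example predicate
is_even () {
    [ $(expr $1 % 2) -eq 0 ]
}

filter is_even $(seq 1 6)   # prints 2, 4 and 6, one per line
```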