Functions as first-class citizens: the shell-ish version

The idea of composing multiple functions together, passing one or more of them to another as parameters (generally referred to as using higher-order functions), is a pattern I'm very comfortable with, since about ten years ago I read the very enlightening book Functional Thinking: Paradigm Over Syntax by Neal Ford. The main idea behind that book is that you can adopt a functional mindset programming in any language, whether it supports functions as first-class citizens or not. The examples in the book are mostly written in Java (version 5 or 6), a language that has supported (something similar to) functions as first-class citizens only since version 8. As I said, it's more a matter of mindset than anything else.

So: a few days ago, during a lab session of the Operating Systems course, while waiting for the solutions written by the students, I wondered if it was possible to take a functional approach, composing functions (or something similar...), in a (bash) shell script.

(More in detail: the problem that triggered my thinking about this topic was how to reuse a (not so) complicated piece of code, involving searching for files and iterating over them, in two different use cases that differed only in the action applied to each file.)

My answer was Probably yes!, so I tried to write some code and ended up with the solution below.

The main point, IMHO, is that just as in a language supporting functions as first-class citizens the bricks to put together are functions, in (bash) scripts the minimal bricks are commands: generally speaking, a command can be a binary or a script, but functions defined in (bash) scripts can be used as commands, too. After making this mental switch, it's not particularly difficult to find a (simple) solution:

action0.sh - An action to be applied to each element of a list

#!/bin/bash
echo "0 Processing $1"

action1.sh - Another action to be applied to each element of a list

#!/bin/bash
echo "1 Processing $1"

foreach.sh - Something similar to the List<T>.ForEach(Action<T>) method of the .NET standard library (it's actually a higher-order program)

#!/bin/bash
# First argument: the command to apply; remaining arguments: the items to iterate over
action=$1
shift
for x
do
    "$action" "$x"
done

main.sh - The main program, reusing foreach's logic in several cases by passing different actions to the higher-order program

#!/bin/bash
./foreach.sh ./action0.sh $(seq 1 6)
./foreach.sh ./action1.sh $(seq 1 6)

./foreach.sh ./action0.sh {A,B,C,D,E}19
./foreach.sh ./action1.sh {A,B,C,D,E}19

Following this approach, you can apply different actions to a bunch of files without duplicating the code that finds them, as in the sketch below... and you do so by applying a functional mindset to bash scripting!
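For example, here is a minimal sketch of the file-based use case that triggered this post (assuming file names without spaces or special characters, since foreach.sh relies on word splitting; the '*.log' pattern is just an example):

#!/bin/bash
# Reuse foreach.sh to apply two different actions to the same set of files,
# without duplicating the find logic
files=$(find . -name '*.log')
./foreach.sh ./action0.sh $files
./foreach.sh ./action1.sh $files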

In the same way, it is possible to implement something like the classic map higher-order function using functions in a bash script:

double () {
    expr $1 '*' 2
}

square () {
    expr $1 '*' $1
}

map () {
    f=$1
    shift
    for x
    do
        echo $($f $x)
    done
}

input=$(seq 1 6)
double_output=$(map "double" $input)
echo "double_output --> $double_output"
square_output=$(map "square" $input)
echo "square_output --> $square_output"
square_after_double_output=$(map "square" $(map "double" $input))
echo "square_after_double_output --> $square_after_double_output"

square_after_double_output, as expected, contains the values 4, 16, 36, 64, 100, 144.
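Pushing the idea one step further, we can even define a compose higher-order function combining two commands (a sketch along the same lines, reusing double and square from above; compose is my own name for it):

compose () {
    # apply $2 first, then $1, to the single value $3: compose f g x = f(g(x))
    $1 $($2 $3)
}

composed=$(compose double square 3)
echo "compose double square 3 --> $composed"  # prints 18: (3 * 3) * 2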

In conclusion... no matter what language you are using: using it functionally, composing bricks and higher-order bricks together, is just a matter of mindset!

Set Of Responsibility and IoC

The original post was published here.

I recently read Pietro's post about a possible adaptation of the chain of responsibility pattern: the "set of responsibility". It is very similar to its "father", because each Handler handles the responsibility for a Request, but in this case it doesn't propagate the responsibility check to other handlers. There is responsibility, without the chain!

In this article I'd like to present the usage of this pattern with an IoC container, where the Handlers aren't added to the HandlerSet list but are provided by the container. In this way you can add a new responsibility to the system by simply adding a new Handler to the container, without changing other parts of the implemented code (e.g. the HandlerSet), in full compliance with the open-closed principle.

For coding I'll use the Spring Framework (Java), because it has a good IoC container and provides a set of classes to work with it. The inversion of control principle and dependency injection are first-class citizens in the Spring Framework.

Here is the UML class diagram, with three responsibilities X, Y, Z, and a brief description of the adopted solution.

Class diagram

@Component
public class XHandler implements Handler {
    @Override
    public Result handle(Request request) {
        return ((RequestX) request).doSomething();
    }

    @Override
    public boolean canHandle(Request request) {
        return request instanceof RequestX;
    }
}

The @Component annotation on XHandler tells Spring to instantiate an object of this type in the IoC container. The HandlerManager interface defines the dispatcher contract:

public interface HandlerManager {
    Result handle(Request request) throws NoHandlerException;
}
@Service
public class CtxHandlerManager implements HandlerManager {

    private ApplicationContext applicationContext;

    @Value("${base.package}")
    private String basePackage;

    @Autowired
    public CtxHandlerManager(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Override
    public Result handle(Request request) throws NoHandlerException {
        Optional<Handler> handlerOpt = findHandler(request);
        if (!handlerOpt.isPresent()) {
            throw new NoHandlerException();
        }
        Handler handler = handlerOpt.get();
        return handler.handle(request);
    }

    private Optional<Handler> findHandler(Request request) {
        ClassPathScanningCandidateComponentProvider provider = createComponentScanner();

        for (BeanDefinition beanDef : provider.findCandidateComponents(basePackage)) {
            try {
                Class<?> clazz = Class.forName(beanDef.getBeanClassName());
                Handler handler = (Handler) this.applicationContext.getBean(clazz);
                // find the handler responsible for the request
                if (handler.canHandle(request)) {
                    return Optional.ofNullable(handler);
                }
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            }
        }
        return Optional.empty();
    }

    private ClassPathScanningCandidateComponentProvider createComponentScanner() {
        ClassPathScanningCandidateComponentProvider provider
                = new ClassPathScanningCandidateComponentProvider(false);
        provider.addIncludeFilter(new AssignableTypeFilter(Handler.class));
        return provider;
    }
}

CtxHandlerManager works like a Handler dispatcher. Its handle method finds the appropriate Handler and calls its handle method, which invokes the doSomething method of the Request.

In the findHandler method I use Spring's ClassPathScanningCandidateComponentProvider with an AssignableTypeFilter for the Handler class. I call findCandidateComponents on a base package (whose value is set by the @Value Spring annotation) and, for each candidate, the canHandle method checks the responsibility. And that's all!

In the Sender class, the HandlerManager implementation (CtxHandlerManager) is injected by the Spring IoC container via autowiring:

@Service
public class Sender {
    @Autowired
    private HandlerManager handlerProvider;

    public void callX() throws NoHandlerException {
        Request requestX = new RequestX();
        Result result = handlerProvider.handle(requestX);
        ...
    }
}

This solution lets you add a new responsibility by simply creating a new Request implementation and a new Handler implementation to manage it, as in the sketch below. By applying the @Component annotation to the Handler you allow Spring to autodetect the class for dependency injection when annotation-based configuration and classpath scanning are used. At the next application startup the class is provided by the IoC container, and it is reached simply by invoking the HandlerManager.
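For example, handling a hypothetical new RequestW would only require a new component shaped like XHandler above (RequestW and WHandler are names of my own invention, just for illustration):

@Component
public class WHandler implements Handler {
    @Override
    public Result handle(Request request) {
        // delegate to the new Request subtype
        return ((RequestW) request).doSomething();
    }

    @Override
    public boolean canHandle(Request request) {
        // this handler is responsible only for RequestW instances
        return request instanceof RequestW;
    }
}

No existing class (neither CtxHandlerManager nor Sender) needs to change: the classpath scan finds the new bean at startup.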

In the next post I'd like to present a possible implementation of a component factory, using the set of responsibility pattern in conjunction with another interesting pattern: the builder pattern.

Happy coding!

Set of responsibility

The original post was published here.

So, three and a half years later… I’m back.

According to Wikipedia, the chain-of-responsibility pattern is a design pattern consisting of a source of command objects and a series of processing objects. Each processing object contains logic that defines the types of command objects it can handle; the rest are passed to the next processing object in the chain.

In some cases, I'd like to benefit from the flexibility allowed by this pattern without being tied to the chain-based structure, e.g. when there is an IoC container involved: the Handlers in the pattern all have the same interface, so it's difficult to leave their instantiation to the IoC container.

In such a scenario I use a variation of the classic chain of responsibility: there are still responsibilities, of course, but there is no chain out there.

I like to call my variation set of responsibility (or list of responsibility, see below for a discussion about this, or selectable responsibility). The structure is the one that follows (C# code):

interface Handler {
    Result Handle(Request request);

    bool CanHandle(Request request);
}

class HandlerSet {
    IEnumerable<Handler> handlers;

    HandlerSet(IEnumerable<Handler> handlers) {
        this.handlers = handlers;
    }

    Result Handle(Request request) {
        return this.handlers.Single(h => h.CanHandle(request)).Handle(request);
    }
}

class Sender {
    HandlerSet handler;

    Sender(HandlerSet handler) {
        this.handler = handler;
    }

    void FooBar() {
        Request request = ...;
        var result = this.handler.Handle(request);
    }
}

One interesting scenario in which I've applied this pattern is when the Handlers' input type Request hides a hierarchy of different subclasses and each Handler implementation is able to deal with a specific Request subclass. When polymorphism is not a viable option, e.g. because those classes come from an external library and are not under our control, or because they aren't the best place to implement the processing logic in, we can use set of responsibility to clean up the horrible code that follows:

class RequestX : Request {}

class RequestY : Request {}

class RequestZ : Request {}

class Sender {
    object result = null;

    void FooBar() {
        Request request = ...;

        if (request is RequestX) {
            result = HandleX((RequestX)request);
        } else if (request is RequestY) {
            result = HandleY((RequestY)request);
        } else if (request is RequestZ) {
            result = HandleZ((RequestZ)request);
        }
    }
}

We can't avoid the use of the is and cast operators, but we can hide them behind a polymorphic interface, adopting a design that conforms to the open-closed principle:

class Sender {
    HandlerSet handler;

    Sender(HandlerSet handler) {
        this.handler = handler;
    }

    void FooBar() {
        Request request = ...;
        var result = this.handler.Handle(request);
    }
}

class HandlerX : Handler {
    bool CanHandle(Request request) => request is RequestX;

    Result Handle(Request request) {
        return HandleX((RequestX)request);
    }
}

class HandlerY : Handler {
    bool CanHandle(Request request) => request is RequestY;

    Result Handle(Request request) {
        return HandleY((RequestY)request);
    }
}

class HandlerZ : Handler {
    bool CanHandle(Request request) => request is RequestZ;

    Result Handle(Request request) {
        return HandleZ((RequestZ)request);
    }
}

Adding a new Request subclass is now only a matter of adding a new implementation of the Handler interface (say, HandlerAA), without the need to touch existing code.

In cases like the one explained above I use the name set of responsibility to stress the idea that exactly one handler of the set can handle a single, specific request (that's why I use the handlers.Single(...) method in the HandlerSet implementation).

When the order in which the handlers are tested matters, we can adopt a different selection strategy, replacing Single with an order-sensitive choice such as First: in this case I like to call the pattern variation list of responsibility.

When more than one handler can handle a specific request, we can think of variations of this pattern that select all applicable handlers (i.e. those handlers whose CanHandle method returns true for the current request) and apply each of them to the incoming request. Both variations are sketched below.
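A minimal sketch of these two variations, built on the same Handler interface shown above (the class and method names here are mine, not part of the pattern):

class FlexibleHandlerSet {
    IEnumerable<Handler> handlers;

    FlexibleHandlerSet(IEnumerable<Handler> handlers) {
        this.handlers = handlers;
    }

    // "List of responsibility": handlers are tested in order,
    // and the first one able to handle the request wins
    Result HandleFirst(Request request) =>
        this.handlers.First(h => h.CanHandle(request)).Handle(request);

    // Broadcast variation: every applicable handler processes the request
    IEnumerable<Result> HandleAll(Request request) =>
        this.handlers.Where(h => h.CanHandle(request))
                     .Select(h => h.Handle(request))
                     .ToList();
}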

So, we have decoupled the set/list/chain-processing logic from the concrete Request processing logic, leaving them free to vary independently, according to the single responsibility principle: an advantage we would not have had adopting the original chain of responsibility pattern...

Embrace change

So it is: three months ago I moved to a new job, switching from a big company to a little, agile one, after more than eight years of distinguished service :-).
Furthermore, I switched from a Java and JEE-centric technological environment to a richer and more varied one, yet .NET and C# oriented.
So, my Java Peanuts will maybe become C# Peanuts (or Node.js Peanuts, who knows...) or, more generally, Programming Peanuts. For the moment I'm still planning a little post series about my way from Java to .NET, so... if you are interested... stay tuned!

Seven things I really hate in database design

  1. Common prefix in all table names
    eg: TXXX, TYYY, TZZZ, VAAA, VBBB - T stands for Table, V stands for View
    eg: APPXXX, APPYYY, APPZZZ - APP is an application name
  2. Common prefix in all field names in every table
    eg: APPXXX.XXX_FIELD_A, APPXXX.XXX_FIELD_B, APPXXX.XXX_FIELD_C
  3. Fields with the same meaning and different names (in different tables)
    eg: TABLE_A.BANK_ID, TABLE_B.BK_CODE
  4. Fields with the same logical type and different physical types
    eg: TABLE_A.MONEY_AMOUNT NUMBER(20,2)
    TABLE_B.MONEY_AMOUNT NUMBER(20,0) - value * 100
    TABLE_C.MONEY_AMOUNT VARCHAR(20) - value * 100 as char
  5. No foreign keys nor integrity constraints at all - by design
  6. Dates (or generally structured data types) represented with generic rather than specific types
    eg: TABLE_A.START_DATE NUMBER(8,0) - yyyymmdd as int
    eg: TABLE_B.START_DATE VARCHAR(8) - yyyymmdd as char
  7. (possible only in presence of 6.) Special values for semantic corner cases which are syntactically invalid
    eg: EXPIRY_DATE = 99999999 - represents the "never expires" case,
    but... IT'S NOT A VALID DATE!!! Why not 99991231?

Mocking static methods and the Gateway pattern

This post was originally posted here.

A year ago I started to use mocking libraries (e.g. Mockito, EasyMock, ...), both to learn something new and for testing purposes in hopeless cases.
Briefly: such a library makes it possible to dynamically redefine the behaviour (return values, thrown exceptions) of the methods of the class under test's collaborators, in order to run tests in a controlled environment. It even makes it possible to check behavioural expectations on mock objects, in order to test the class under test's interactions with its collaborators. A minimal sketch of both capabilities follows.
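Here is a small, self-contained sketch using Mockito against a plain java.util.List (mock, when, thenReturn and verify are all standard Mockito APIs; the example itself is mine, just for illustration):

import static org.mockito.Mockito.*;
import java.util.List;

public class MockingSketch {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // redefine the behaviour of a method: get(0) now returns "Duke"
        List<String> mocked = mock(List.class);
        when(mocked.get(0)).thenReturn("Duke");
        System.out.println(mocked.get(0)); // prints "Duke"

        // check behavioural expectations: fails if get(0) was never called
        verify(mocked).get(0);
    }
}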
A few weeks ago a colleague asked me: “[How] can I mock a static method, maybe using a mocking library?”.
In detail, he was looking for a way to test a class whose code used a static CustomerLoginFacade.login(String username, String password) method provided by an external API (a custom authentication API from a customer enterprise).
His code looked as follows:

public class ClassUnderTest {
...
public void methodUnderTest(...) {
...
// check authentication
if(CustomerLoginFacade.login(...)) {
...
} else {
...
}
}
}

but the customer's authentication provider was not accessible from the test environment: hence the main (but not the only: test isolation, performance, ...) reason to mock the static login method.

A quick search in the magic world of mocking libraries revealed that:

  • EasyMock supports static method mocking using extensions (e.g. Class Extension, PowerMock)

  • JMock doesn’t support static method mocking

  • Mockito (my preferred [Java] mocking library at the moment) doesn't support static method mocking, because Mockito prefers object orientation and dependency injection over static, procedural code that is hard to understand and change (see the official FAQ). The same position appears in a JMock-related discussion, too. PowerMock provides a Mockito extension that supports static method mocking.

So, thanks to my colleague, I will analyze the more general question: “How can I handle external / legacy APIs (e.g. static methods acting as service facades) for testing purposes?”. I can identify three different approaches:

  • mocking by library: we can use a mocking library supporting external / legacy API mocking (e.g. class mocking, static method mocking), as discussed earlier

  • mocking by language: we can rely on the features of a dynamically typed programming language to dynamically change the external / legacy API implementation / behaviour. E.g. the login problem discussed earlier can be solved in Groovy style, using the features of a language fully integrated with the Java runtime:

CustomerLoginFacade.metaClass.'static'.login = {
    return true;
};

Such an approach can be successfully used when CustomerLoginFacade.login's client code is Groovy code, but not for plain old Java client code.

  • Architectural approach: mocking by design. This approach refers to a general principle: hide every external (concrete) API behind an interface (i.e. code against interfaces, not against concrete implementations). This principle is commonly known as the dependency inversion principle.
    So, we can solve my colleague's problem this way: first, we define a login interface:
public interface MyLoginService {
    public abstract boolean login(final String username, final String password);
}

Then, we refactor the original methodUnderTest code to use the interface:

public class ClassUnderTest {
    private MyLoginService loginService;

    // Collaborator provided by constructor injection (see here for
    // a discussion about injection styles)
    public ClassUnderTest(final MyLoginService loginService) {
        this.loginService = loginService;
    }
    ...
    public void methodUnderTest(...) {
        ...
        // check authentication
        if (loginService.login(...)) {
            ...
        } else {
            ...
        }
    }
}

So, for testing purposes, we can simply inject a fake implementation of the MyLoginService interface:

public void myTest() {
    final ClassUnderTest cut = new ClassUnderTest(new FakeLoginService());
    cut.methodUnderTest(..., ...);
    ...
}

where FakeLoginService is simply:

public class FakeLoginService implements MyLoginService {
    public boolean login(final String username, final String password) {
        return true;
    }
}

and the real, production implementation of the interface looks simply like this:

public class RealLoginService implements MyLoginService {
    public boolean login(final String username, final String password) {
        return CustomerLoginFacade.login(username, password);
    }
}

Ultimately, the interface defines an abstract gateway to the external authentication API: by changing the gateway implementation, we can set up a testing environment fully decoupled from the real customer's authentication provider.
IMHO, I prefer the last mocking approach: it's more object oriented, and after all... my colleague once called me the most OO person he knows :-). I find this approach cleaner and more elegant: it's built only upon common features of programming languages and relies neither on external libraries nor on testing-oriented dynamic language features.
In terms of design, too, I think it's a more readable and more reusable solution to the problem, allowing a clearer identification of the responsibilities of the various pieces of code: MyLoginService defines an interface, and every implementation represents a way to fulfil it (a real-life (i.e. production) implementation versus the fake one).
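Incidentally, once the gateway interface is in place, a mocking library can also build the fake for you, instead of a hand-written FakeLoginService (a sketch using standard Mockito calls: mock, when, thenReturn, anyString):

public void myMockedTest() {
    // create and configure the fake via Mockito instead of writing it by hand
    final MyLoginService loginService = mock(MyLoginService.class);
    when(loginService.login(anyString(), anyString())).thenReturn(true);

    final ClassUnderTest cut = new ClassUnderTest(loginService);
    cut.methodUnderTest(..., ...);
    ...
}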

However, method mocking (by library or by language, it doesn't matter) is a very useful technique in certain specific situations, too, especially when the code that suffers from static dependencies (ClassUnderTest in our example) is legacy code, designed with no testing in mind, and is ultimately out of the developer's control.
[Incidentally: the solution adopted by my colleague was exactly the one I proposed (i.e. mocking by design).]

Credits: thanks to Samuele for giving me cause to analyze such a problem (and for our frequent and ever-interesting design-related discussions).
Thanks to my wife for her valuable support in writing in my pseudo-English.

How to automatically test Java Console

Some weeks ago, during a workroom lesson at university, I faced a typical TDD-addict dilemma: how can I test-drive the development of a console-based Java application?
The main problem is clearly how to automatically interact with the application, which relies on System.in for user input and on System.out for user output.
You can use the System.setIn and System.setOut methods, of course, but IMHO this is a dirty solution to the console-interaction testability problem, which can be solved in a cleaner way by applying the dependency inversion principle, ubiquitous in test-driven design: rather than directly referencing the concrete System.in and System.out streams (the reference is concrete because it's direct, not because it points to a concrete class: InputStream is actually an abstract class), the console-based application should reference some abstraction that encapsulates the standard I/O stream dependency, for example a Scanner (for user input) and a PrintStream (for user output: again, a direct reference to System.out is concrete because it's direct, not because it points to something concrete).
So, the application behaviour can be encapsulated in a class having a constructor like this:

public HelloApp(Scanner scanner, PrintStream out)

The application's main method instantiates such a class and invokes a method that triggers the application logic, simply providing a Scanner and a PrintStream that wrap the standard I/O streams:

public static void main(String[] args) {
    Scanner scanner = new Scanner(System.in);
    scanner.useDelimiter(System.getProperty("line.separator"));

    HelloApp app = new HelloApp(scanner, System.out);
    app.run();
}
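The post doesn't show HelloApp itself; here is a minimal sketch of an implementation consistent with the snippets shown here (the real one lives in the linked repository and may differ):

import java.io.PrintStream;
import java.util.Scanner;

public class HelloApp {
    private final Scanner scanner;
    private final PrintStream out;

    public HelloApp(Scanner scanner, PrintStream out) {
        this.scanner = scanner;
        this.out = out;
    }

    public void run() {
        out.println("Welcome to HelloApp!");
        boolean more = true;
        while (more) {
            String name = scanner.next();      // read a name
            out.println("Hello, " + name + "!");
            more = "y".equals(scanner.next()); // "y" continues, anything else stops
        }
    }
}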

Testing code, however, can provide HelloApp with testing-oriented instances of Scanner and PrintStream:

final Scanner scanner = new Scanner("Duke y Goofy y Donald n");
scanner.useDelimiter(" ");

ByteArrayOutputStream outputBuffer = new ByteArrayOutputStream();
PrintStream out = new PrintStream(outputBuffer);

final HelloApp app = new HelloApp(scanner, out);
app.run();

final String output = outputBuffer.toString();
// Assertions about outputBuffer content:
assertTrue(output.startsWith("Welcome to HelloApp!"));

So, we have gracefully decoupled the application logic from console-based user interaction, providing a solid foundation for automated application testing and, even more satisfying for the TDD addicted, for test-driven development.

A complete example can be found here: https://bitbucket.org/pietrom/automatically-testing-the-console

Code repository can be cloned using git:
git clone https://bitbucket.org/pietrom/automatically-testing-the-console.git

This post was originally published here on 21/05/2012.

How to find a class in a JAR directory using shell scripting

This post was originally published here.

The biggest problems in J2EE application deployment often come from classloader hierarchies and potential overlaps between server-provided and application-specific libraries. Searching for classes through collections of JARs is therefore often the main activity in identifying and fixing classloader issues.
This is surely a tedious and repetitive task: so, here's a shell script you can use to automate the traversal of a JAR collection and the analysis of the jar command's output, searching for a pattern provided as a script parameter.

Credits: Thanks to sirowain for parameter check and return code related contributions.

#!/bin/bash
# Commonly available under GPL 3 license
# Copyleft Pietro Martinelli - javapeanuts.blogspot.com
if [ -z "$1" ]
then
    echo "Usage: $0 <pattern>"
    echo "jar tf's output will be tested against the provided <pattern> in order to select matching JARs"
    exit 1
else
    jarsFound=""
    for file in $(find . -name "*.jar"); do
        echo "Processing file ${file} ..."
        out=$(jar tf ${file} | grep ${1})
        if [ "${out}" != "" ]
        then
            echo "  Found '${1}' in JAR file ${file}"
            jarsFound="${jarsFound} ${file}"
        fi
    done

    echo ""
    echo "Search result:"
    echo ""

    if [ "${jarsFound}" != "" ]
    then
        echo "${1} found in"
        for file in ${jarsFound}
        do
            echo "- ${file}"
        done
    else
        echo "${1} not found"
    fi
    exit 0
fi

This script is available on github.com:
https://github.com/pietrom/javapeanuts-shell-utils/blob/master/find-jar.sh

Never executed - Never tested

The code samples I'll publish in this post are not fakes: they come from real code, released into production.
And they are not only brilliant samples of never-tested code: they are samples of never-executed code!!! Indeed, these code snippets contain execution paths which always - always! - fail. Read on to believe...

Sample #1 - NullPointerException at each catch execution

MyClass result = null;
try {
    result = callMethod(...);
} catch (Exception e) {
    // result is still null here, so this always throws NullPointerException...
    result.registerException(e);
}

Sample #2: ArrayIndexOutOfBoundsException at each catch execution

try {
    result = callSomeMethod(...);
} catch (Exception e) {
    String[] messages = new String[3];
    messages[0] = ... ;
    messages[1] = ... ;
    messages[2] = ... ;
    // index 3 is out of bounds for a 3-element array:
    // this always throws ArrayIndexOutOfBoundsException...
    messages[3] = ... ;
    throw new CustomException(messages);
}

Sample #3: ClassCastException whenever the if condition is satisfied

public class AClass {
    ...
    public void aMethod(final Object obj) {
        ...
        if (!(obj instanceof InterfaceXYZ)) {
            // obj is NOT an InterfaceXYZ here, so this cast
            // throws ClassCastException (for any non-null obj)
            final InterfaceXYZ xyz = (InterfaceXYZ) obj;
            ...
        }
        ...
    }
}