Please, don't use I in interface names

The context

People working in software development share a variety of idiosyncrasies about the way they code: they have very, very strong opinions and habits about code formatting, naming conventions, codebase organization… and they love to “fight” about them.

It is often a question of development platform and community: for example, Java and C# are very similar languages (each one regularly adding features inspired by the other's…) but Java and C# code tends to be formatted slightly differently with regard to brace placement.

Although I know people who make a holy war out of brace placement, I'm convinced it's only a matter of aesthetics - so you can imho format your code as you prefer: as long as you are consistent throughout the entire codebase, your choice is the right choice.

The issue

On the other hand, there is a community-related naming habit - or better, a set of similar community-related naming habits - regarding interface naming that I really dislike: people living and working in the .NET ecosystem, Microsoft followers, raised on bread and C#, tend to name interfaces starting with I: IPeopleRepository, IAuthenticationProvider, IThis and IThat.

It's just another convention, you may say, as harmless as it is widespread: as long as you are consistent throughout the entire codebase, you can do as you prefer.

I totally disagree with the last sentence above: I think prefixing interface names with I is not (only) a convention (sure it is!), but a design error, too.
Some might say: but the .NET standard library adopts and promotes this convention. So what? Is it right because everyone does it? Is it right because Microsoft does it? I think an error is an error, no matter how many people - or who - make it. But let me clarify why I think this is a very poor design choice (naming is design, isn't it?).

Violating DRY

Code like

public interface IPeopleRepository {
...
}

violates the DRY (Don't Repeat Yourself) principle: indeed, if you change your code by moving this architectural component from interface to class, you have two things to change: the language keyword and the type name (so you must also change the type name throughout all its usages).
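For instance, with a hypothetical IPeopleRepository, promoting the abstraction from interface to class forces two edits in the declaration alone - and a rename of the type at every usage site:

// Before: both the keyword and the I prefix say "this is an interface"
public interface IPeopleRepository {
    // members elided
}

// After: the keyword and the type name both change,
// and every usage of IPeopleRepository must be renamed as well
public class PeopleRepository {
    // members elided
}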

Violating SRP

Code like

public interface IPeopleRepository {
...
}

violates SRP (the Single Responsibility Principle): indeed, the type name has two reasons to change: you need to change it if you change its semantics (say you prefer renaming it to [I]PersonRepository) and you need to change it if you want to move from interface to class.

Poor modelling

public class List : IList {
...
}

is perhaps the poorest and most inadequate naming choice I've ever seen in my almost twenty years of experience as a programmer; it communicates nothing about the differences between interface and implementation:

  • what is the peculiarity of List as an implementation of IList?
  • are there other implementations of IList in the .NET standard library? How do they differ from List?

The Java way of naming things here is undoubtedly better and full of information: the name of the interface, List, describes the role in the code of objects referred to by a List reference; ArrayList, LinkedList, and so on describe the implementation flavour the specific class is based on (e.g. giving programmers information about the time- and memory-related behaviour of instances).

I think this is the way we should name things in our code: I like to name interfaces trying to describe the abstract role played by runtime instances, and classes trying to describe the concrete implementation choices adopted: e.g. I prefer SqlServerPeopleRepository : PeopleRepository over PeopleRepository : IPeopleRepository, HttpWeatherForecastGateway : WeatherForecastGateway over WeatherForecastGateway : IWeatherForecastGateway, and so on.
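A minimal sketch of this naming style (Person and the repository members are just illustrative placeholders):

// The interface name describes the abstract role played at runtime...
public interface PeopleRepository {
    Person FindById(int id);
}

// ...while the class name describes the concrete implementation choice
public class SqlServerPeopleRepository : PeopleRepository {
    public Person FindById(int id) {
        return null; // SQL Server-specific data access would live here
    }
}

public class Person { }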

Breaking uniformity

So far I have discussed, from a design-related point of view, reasons why you shouldn't name interfaces by prefixing them with I.

But conventions are only conventions (I disagree with this specific habit, as I said above, but let's face it), even when they seem like design errors, and whatever convention you adopt, you should adopt it evenly throughout your codebase: uniformity is a widely accepted best practice when it comes to formatting conventions, naming conventions, code organization, …

So: you name interfaces starting with I and classes starting with C, don't you? And enumerations (wait: you don't use enum in C#, do you?) starting with E.
And you name variables starting with prefixes indicating their type: s for strings, i for ints, d for doubles…

public class CMyClass : IMyInterface {
    public void Foo(string sFirstParam, int iSecondParam, DateTimeOffset dtoThirdParam) {
        ...
    }
}

Are you naming things this way? No? Only interfaces starting with I? Ok, you’re breaking uniformity, adopting a partial convention (and partial conventions aren’t conventions at all).

Conclusion

So: if you think design principles are an important and useful driving force when you're writing code; if you think names should communicate something to the reader, and differences in names should communicate differences between things, allowing the reader to easily understand the moving parts of your code; if you think uniformity is a good quality of a codebase, even when it comes to adopting conventions… then you should not use I as a mandatory, common prefix for interface names.

Bonus track

The same considerations certainly apply to other similar naming conventions: many (Java) frameworks suggest naming your interfaces and implementations MyService and MyServiceImpl; many people are used to naming their implementation of an [I]Something interface AbstractSomething; others like to give asynchronous methods names ending with Async - and so on.
Whenever you include a syntactical detail (I for interfaces, C or Impl for classes, Abstract for abstract things) in a name you're facing a variant of the problem discussed above: you're violating DRY (repeating the syntactical detail both in syntax and in the name), you're violating SRP (giving at least two responsibilities to the name), you're likely to adopt an incoherent naming convention, and you're modelling your domain the wrong way.

Additional content

Please, don't use enums in C#

Enumerated types

Enumerated (discrete) types are a powerful modelling tool for software developers: they allow them to explicitly state all and only the permitted values a variable can hold, with the guarantee that

  • no invalid value can be passed into a function, and
  • conditionals (switch/case or pattern-matching based) over enumerated types can be recognized as exhaustive by the compiler.

This is strictly true for the enums you can define in languages like Scala (case classes), Kotlin (enums) or even the old, mistreated Java (enums), but it is only an unkept, misleading promise for C#'s enums.

The problem (or “The C# way to enums”)

Defining an enum in C# is indeed only syntactic sugar you can leverage to define related, “namespaced” integer constants:

public enum Ordinal {
    First = 1, Second = 2, Third = 3
}

is in essence only a shortcut for

public static class Ordinal {
    public const int First = 1;
    public const int Second = 2;
    public const int Third = 3;
}

I’m not saying the compiler produces the same output - I’m saying in both cases you can refer to something like Ordinal.Second in order to get an int constant whose value is 2.

Issue #1

There is no way to define a method, say

void DoSomething(Ordinal o) {
    Console.WriteLine($"Ordinal value is {o:D}");
}

that prevents callers from passing invalid values into it:

DoSomething(Ordinal.First);
DoSomething((Ordinal)500);

is definitely valid code (from the compiler’s point of view) producing the following output:

Ordinal value is 1
Ordinal value is 500 // WTF??? Value not present in Ordinal declaration...

Issue #2

There is no way to rely on the compiler to check the exhaustiveness of conditionals: you can indeed write

public int Foo(Ordinal o) => o switch {
    Ordinal.First => 1,
    Ordinal.Second => 2,
    Ordinal.Third => 3,
};

but the compiler gives you a warning like The switch expression does not handle all possible inputs (it is not exhaustive), even though all the values defined by the enum are explicitly handled; in order to avoid this inappropriate warning you must add a fourth, never-used branch to the switch:

_ => throw new Exception("Unexpected value")

(or you can return a special value, if you like code smells ;-)…).

So, C#'s enums are syntactic sugar for int constants, and defining a method parameter of type Ordinal is no different from defining it of type int (yes, you can define an enum having byte or long or ${other integral type} as its underlying representation, but… you got the idea).
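For instance, the underlying representation can be narrowed to a byte, but the members are still plain integral constants:

public enum Ordinal : byte {
    First = 1, Second = 2, Third = 3
}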

From a modelling point of view, C#'s enums are a very poor feature, which does not allow developers to define true enumerated (discrete) types: so… please, don't use them, or at least don't use them as if they were true discrete types.

The right way

The right way to model enumerated/discrete types in C# is imho to adopt a pattern I first heard about in 2004, reading the enlightening book Hardcore Java:

public sealed class Ordinal {
    public int Value { get; }
    private Ordinal(int value) { Value = value; }
    public static readonly Ordinal First = new Ordinal(0);
    public static readonly Ordinal Second = new Ordinal(1);
    public static readonly Ordinal Third = new Ordinal(2);
}

This does not solve the exhaustiveness-check problem, but it models a true discrete type, allowing only the intended values to be used where Ordinal parameters are required.

A variant of this pattern, based on inheritance, allows developers to attach polymorphic behaviours to enumeration cases.
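A minimal sketch of that variant (the Describe member is purely illustrative): each case is a nested subclass carrying its own behaviour.

public abstract class Ordinal {
    public static readonly Ordinal First = new FirstOrdinal();
    public static readonly Ordinal Second = new SecondOrdinal();
    public static readonly Ordinal Third = new ThirdOrdinal();

    private Ordinal() { } // only the nested cases can derive from Ordinal

    public abstract string Describe();

    private sealed class FirstOrdinal : Ordinal {
        public override string Describe() { return "first"; }
    }

    private sealed class SecondOrdinal : Ordinal {
        public override string Describe() { return "second"; }
    }

    private sealed class ThirdOrdinal : Ordinal {
        public override string Describe() { return "third"; }
    }
}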

Bonus (or “The Java way to enums”)

By the way, this is the way Java's enum implementation (since 2004!) and Kotlin's work: they essentially provide syntactic sugar for the pattern above, allowing true enumerated/discrete type modelling:

public enum Ordinal {
    First {
        @Override
        void doSomething() {
            // Behaviour for case First
        }
    }, Second {
        @Override
        void doSomething() {
            // Behaviour for case Second
        }
    }, Third {
        @Override
        void doSomething() {
            // Behaviour for case Third
        }
    };

    abstract void doSomething();
}

Java supports exhaustiveness checks out of the box, too:

void doSomething(Ordinal o) {
    var value = switch (o) { // compiler's error: 'switch' expression does not cover all possible input values
        case First -> 1;
        case Second -> 2;
    };
}

void doSomething(Ordinal o) {
    var value = switch (o) { // no errors, no warnings
        case First -> 1;
        case Second -> 2;
        case Third -> 3;
    };
}

LSP: an opinionated discussion

Liskov’s Substitution Principle (LSP for friends) is one of the five SOLID Principles - maybe the most misunderstood.

According to Wikipedia, it states that
Let P(x) be a property provable about objects x of type T. Then P(y) should be true for objects y of type S where S is a subtype of T.

More informally, the idea behind this principle is that we should not violate the contract published by the T supertype when we use or extend it.

I think it’s worth analyzing this idea deeply, in order to explain both classical and less trivial ways to violate the principle.

Generally speaking, we can try to classify LSP violations into three main classes:

  • Bad Client: the principle is violated due to the usage of the supertype
  • Bad Child: the principle is violated due to a crooked subtype implementation
  • Poor Modelling: the principle is violated due to the usage of a (general) type to model (less general) domain concepts

So, let's look at some examples of violations belonging to each of the three classes.

Bad Client

The first example of LSP violation I would like to talk about is a classical one: a bad client of a type hierarchy can break LSP by downcasting a reference to a specific, hardcoded subtype:

public <T> T lastElementOf(Collection<T> input) {
    var theList = (List<T>) input;
    return theList.isEmpty() ? null : theList.get(theList.size() - 1);
}

Callers of the method lastElementOf believe they can invoke it with any instance of any concrete implementation of the Collection interface, but calls passing anything other than instances of types implementing the List subinterface will systematically fail: lastElementOf is a bad client of the Collection type hierarchy because not all Collection subtypes are fully substitutable for the supertype when it comes to invoking this method.

A subtle variation of this LSP violation, which I have already written about here, involves two unrelated interfaces: here the cast assumes that the actual parameter type implements both interfaces, breaking BadInterfaceDowncastingClient's contract - the method below is therefore a bad client of the FrontEndContext interface.

public interface FrontEndContext {}

public interface BackEndContext {}

public class MyContext : FrontEndContext, BackEndContext {}

public class ABoundaryService {
    public void BadInterfaceDowncastingClient(FrontEndContext ctx) {
        var context = (BackEndContext)ctx;
        doSomethingWith(context);
    }
}

It must be said that LSP violations belonging to the bad client class are not very common in code written by experienced developers (though I did happen to find something very similar to the last example in code written by a self-styled software architect).

Bad Child

The second class of LSP violations worth mentioning is the one I like to call bad children: the violation consists of a subtype badly implementing the contract stated by the supertype.
The typical example of this class of violations is that of a Square class extending Rectangle in a way that violates some supertype invariant (e.g. the idea that width and height can be changed independently), leading to surprising behaviour.
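For completeness, a minimal sketch of that classic example (field and method names are the usual textbook ones):

public class Rectangle {
    protected int width;
    protected int height;

    public void setWidth(int width) { this.width = width; }
    public void setHeight(int height) { this.height = height; }
    public int area() { return width * height; }
}

public class Square extends Rectangle {
    @Override
    public void setWidth(int width) {
        this.width = width;
        this.height = width; // surprising side effect for clients expecting Rectangle behaviour
    }

    @Override
    public void setHeight(int height) {
        this.width = height;
        this.height = height;
    }
}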

A less didactic and more realistic example can be the following, where the InMemoryBin<T> implementation of the Bin<T> interface subtly breaks the contract of addForever(T item):

public interface Bin<T> {
    void addForever(T item);
}

public class InMemoryBin<T> implements Bin<T> {
    private static final int MAX_SIZE = 50;
    private int currentIndex = -1;
    private final Object[] items = new Object[MAX_SIZE]; // arrays of T cannot be instantiated directly in Java

    @Override
    public void addForever(T item) {
        currentIndex = (currentIndex + 1) % MAX_SIZE;
        items[currentIndex] = item;
    }
}

The method required by the interface clearly requires added elements to be kept forever, but the implementation uses a capped data structure to store references to the added items. So, when a client adds the (MAX_SIZE+1)-th item to the InMemoryBin, the first added item disappears from the collection: InMemoryBin.addForever is not really forever, and the described class acts as a bad child of the Bin supertype, hence it is not fully substitutable for it.

A third way to violate LSP when writing a subtype of an interface or a superclass is to implement a method misrepresenting its intended purpose: the classic example is that of a class implementing the toString() method (better: overriding the Object.toString() base method) in order to construct not only a textual representation of an object, but a meaningful one from a business perspective.
The toString() method is generally intended as a way to describe an object for logging and debugging purposes, but it's not uncommon to find code like the following, which overrides and uses it to implement some functional requirement:

public class SqlQuery {
    private String tableName;
    public SqlQuery(String tableName) { ... }
    public void addStringFilter(String fieldName, String operator, String value) { ... }
    public void addIntFilter(String fieldName, String operator, int value) { ... }
    ...
    @Override
    public String toString() { // Maybe the method should be named 'buildSql()' or 'toSql()'?
        return "select * from " + tableName + " where " + buildWhereClause();
    }
}

I wrote that the toString() method is generally intended as a way to describe an object for logging and debugging purposes, and sure, you can object that this is a very opinionated statement. No doubt it partly is, but… what about the name of the method? It is toString, not toSql nor something like toHtml or toUiMessage: this method is intended to generate a String representation of an object, and String is a very unstructured, general-purpose concept. About the idea of representing strings with a specific structure by defining custom types, please read the next section - the same reasoning applies to the choice of method names. In one sentence: if the method name asks for a String-returning implementation, you should return a real String, with all its invariants… and a SQL query definitely isn't one.

Sadly, this nuance of LSP bad child violation is a very common one, even in code written by experienced developers.

Poor Modelling

So far, so good.
The last class of LSP violations I think is interesting to talk about is a bit different from bad client and bad child, due to the fact that it does not involve any subclassing: the violation resides in the misuse of an (usually very general-purpose) existing type from a modelling point of view: let me call it poor modelling.

This may seem like a provocation, and it certainly is in part, but I think that whenever you are using a general-purpose type (typically: String) to represent data like email addresses or credit card numbers all around your code… you're violating the Liskov Substitution Principle - if not in its formal definition, at least in its general meaning.

Representing an email address as a String, without defining a dedicated EmailAddress type that ensures the invariants that should hold for such a value, is not only a naive modelling error (from a domain-driven design point of view you should have no doubt about this); it's not only very uncomfortable and error prone (what about mistakenly swapping two String values, the first one representing an email address and the second one holding a credit card number?); it violates the contract of the String class, too, because the very general-purpose String is intended to exhibit behaviours (invariants) that are simply not valid (they are plainly wrong!) for an email address (or a credit card number).
If you are not completely convinced: what about concatenating two Strings? Is the resulting value still a valid String? Of course it is! Can the same be said about concatenating two email addresses? What about keeping only the first ten characters of an existing String? It results in a valid String, of course, but the same is in general not true for a part of an email address.
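A minimal sketch of such a dedicated type (the validation rule is deliberately naive, just to illustrate the idea):

public final class EmailAddress {
    private final String value;

    private EmailAddress(String value) {
        this.value = value;
    }

    public static EmailAddress of(String raw) {
        if (raw == null || !raw.matches("[^@\\s]+@[^@\\s]+")) {
            throw new IllegalArgumentException("Not a valid email address: " + raw);
        }
        return new EmailAddress(raw);
    }

    @Override
    public String toString() {
        return value; // textual representation, for logging and debugging
    }
}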

So… you should model email addresses and credit card numbers (and user IDs and VAT codes and SQL queries and… well, you got the point) as dedicated types not only to be a good DDDer, nor only to let the compiler statically help you avoid errors when using those values. You should avoid unwrapped general-purpose types for your domain's concepts to respect the LSP's spirit, too: not only subtypes, but also values should be fully substitutable for the super (or general-purpose) type; if your values are subject to restrictions (in value domain or in behaviour/invariants) with respect to the chosen general-purpose type, you are in my humble opinion violating LSP due to poor modelling.

"Refactoring" a constant-time method into linear time

Recently, I had the (dis-)pleasure to stumble upon a coding horror created by a colleague of mine. When I told Pietro about it, he graciously asked me to write a post about it. So, here we go!

If you know Java and haven’t lived with your head under a rock for the past 7-odd years, you surely know about streams. We all know and love streams, right? Well, what I love even more than streams is applying my judgement and thinking whether it is or isn’t a good idea to use one.

Take this simple and innocent-looking piece of code, for example:

int lastElement(int[] array) {
    if (array.length == 0) {
        throw new RuntimeException("Array is empty");
    }
    return array[array.length - 1];
}

It doesn’t get any simpler than that.

But if you just want to use streams everywhere, you might be tempted to convert it as follows:

int lastElement(int[] array) {
    return Arrays.stream(array)
            .reduce((first, second) -> second)
            .orElseThrow(() -> new RuntimeException("Array is empty"));
}

Spot the difference? You just converted a constant-time array access into a linear-time scan!

This is not necessarily an issue with streams (the same coding horror can be achieved with a good old-fashioned for loop, of course), but it just serves to prove that:

  • applying your judgement is better than blindly using the new shiny API
  • it is important to always consider the complexity (both in time and space) of your code.

Needless to say, the pull request that contained this change was NOT approved!

Functional shell: a minimal toolbox

I already wrote a post about adopting a functional programming style in Bash scripts. Here I want to explore how to build a minimal, reusable functional toolbox for my bash scripts, avoiding redefinition of base functional bricks whenever I need them.

So, in short: I wish I could write a script (say use-functional-bricks.sh) like the following

#!/bin/bash
double () {
    expr $1 '*' 2
}

square () {
    expr $1 '*' $1
}

input=$(seq 1 6)
square_after_double_output=$(map "square" $(map "double" $input))
echo "square_after_double_output $square_after_double_output"

sum() {
    expr $1 '+' $2
}

sum=$(reduce 0 "sum" $input)
echo "The sum is $sum"

referring to “globally” available functions map and reduce (and maybe others, too) without re-writing them everywhere they are needed and without being bound to external script invocation.

The way I think we can solve the problem relies on three interesting features available in bash:

  • export functions from scripts (through export -f)
  • execute scripts in the current shell’s environment, through source command
  • execute scripts when bash starts

So I wrote the following script (say functional-bricks.sh):

#!/bin/bash
map () {
    f=$1
    shift
    for x
    do
        $f $x
    done
}
export -f map

reduce () {
    acc=$1
    f=$2
    shift
    shift
    for curr
    do
        acc=$($f $acc $curr)
    done
    echo $acc
}
export -f reduce

and added the following line at the end of my user’s ~/.bashrc file:

. ~/common/functional-bricks.sh

and… voilà: now map and reduce implemented in functional-bricks.sh are available in all my bash sessions - so I can use them in all my scripts!
And because seeing is believing… if I launch the script use-functional-bricks.sh defined above, I get the following output:

square_after_double_output 4
16
36
64
100
144
The sum is 21

Functional way of thinking: higher order functions and polymorphism

I think higher order functions are the functional way to polymorphism: just as in an OO language you can write a generic algorithm referring to an interface, through which you can plug specific behaviour into the generic algorithm, you can follow the same “plug something specific into something generic” advice by writing a higher order function referring to a function signature.

Put another way, function signatures are the functional counterpart of OO interfaces.

This is a very simple concept with big implications for how you can design and organize your code. So, I think the best way to metabolize this concept is to get your hands dirty with higher order functions, in order to become familiar with thinking in terms of functions that consume and return (other) functions.

For example, you can try to reimplement simple higher order functions from some library like lodash, ramdajs or similar. What about implementing an after function that receives an integer n and another function f and returns a new function that invokes f when it is invoked for the n-th time?

function after(n, f) {
    return function() {
        n--
        if (n === 0) {
            f()
        }
    }
}

You can use it like this:

const counter = after(5, () => console.log('5!'))
counter()
counter()
counter()
counter()
counter() // Writes '5!' to the console

So you have a simple tool for counting events, reacting to the n-th occurrence (and you honored the Single Responsibility Principle, too, separating the counting responsibility from the business behaviour implemented by f). Each invocation of after creates a scope (more technically: a closure) for subsequent executions of the returned function - the value of n, or of variables defined in the lexical scope of after's invocation, is no different from the instance fields you can use in a class implementing an interface.
Generalizing this approach, you can implement subtle variations of the after function: you can for example write an every function that returns a function that calls the f parameter of the every invocation every n times:

function every(n, f) {
    let m = n
    return function() {
        m--
        if (m === 0) {
            m = n
            f()
        }
    }
}
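An illustrative usage, mirroring the counter example above:

const onEveryThird = every(3, () => console.log('three more!'))
onEveryThird()
onEveryThird()
onEveryThird() // Writes 'three more!' to the console
onEveryThird()
onEveryThird()
onEveryThird() // Writes 'three more!' again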

This is my way to see functional composition through higher order functions: another way to plug my specific, business-related behaviour into a generic - higher order - piece of code, without reimplementing the generic algorithm the latter implements.

Bonus track: what is the higher order behaviour implemented by the following function?

function canYouGuessMyName (items, f) {
return items.reduce((acc, curr) => ({ ...acc, [f(curr)]: (acc[f(curr)] || []).concat([curr]) }), {})
}


Functions as first-class citizens: the shell-ish version

The idea of composing multiple functions together, passing one or more of them to another as parameters, generally referred to as using higher order functions, is a pattern I'm very comfortable with, since I read, about ten years ago, the very enlightening book Functional Thinking: Paradigm Over Syntax by Neal Ford. The main idea behind this book is that you can adopt a functional mindset programming in any language, whether it supports functions as first-class citizens or not. The examples in that book are mostly written in Java (version 5 or 6), a language that has supported (something similar to) functions as first-class citizens only since version 8. As I said, it's more a matter of mindset than anything else.

So: a few days ago, during a lab of the Operating Systems course, while waiting for the solutions written by the students, I was wondering if it is possible to take a functional approach, composing functions (or something similar…), in a (bash) shell script.

(More in detail: the problem that triggered my thinking about this topic was “how to reuse a (not so) complicated piece of code involving searching for files and iterating over them in two different use cases that differed only in the action applied to each file”.)

My answer was Probably yes!, so I tried to write some code and ended up with the solution below.

The main point is - imho - that while in a language supporting functions as first-class citizens the bricks to be put together are functions, in a (bash) script the minimal bricks are commands: generally speaking, a command can be a binary or a script - but functions defined in (bash) scripts can be used as commands, too. After making this mental switch, it's not particularly difficult to find a (simple) solution:

action0.sh - An action to be applied to each element of a list

#!/bin/bash
echo "0 Processing $1"

action1.sh - Another action to be applied to each element of a list

#!/bin/bash
echo "1 Processing $1"

foreach.sh - Something similar to the List<T>.ForEach(Action<T>) method of the .NET standard library (it's actually a higher order program)

#!/bin/bash
action=$1
shift
for x
do
    $action $x
done

main.sh - The main program, reusing foreach's logic in multiple cases, passing different actions to the higher order program

#!/bin/bash
./foreach.sh ./action0.sh $(seq 1 6)
./foreach.sh ./action1.sh $(seq 1 6)

./foreach.sh ./action0.sh {A,B,C,D,E}19
./foreach.sh ./action1.sh {A,B,C,D,E}19

Following this approach, you can apply different actions to a bunch of files, without duplicating the code that finds them… and you do so applying a functional mindset to bash scripting!

In the same way it is possible to implement something like the classic map higher order function using functions in a bash script:

double () {
    expr $1 '*' 2
}

square () {
    expr $1 '*' $1
}

map () {
    f=$1
    shift
    for x
    do
        echo $($f $x)
    done
}

input=$(seq 1 6)
double_output=$(map "double" $input)
echo "double_output --> $double_output"
square_output=$(map "square" $input)
echo "square_output --> $square_output"
square_after_double_output=$(map "square" $(map "double" $input))
echo "square_after_double_output --> $square_after_double_output"

square_after_double_output, as expected, contains values 4, 16, 36, 64, 100, 144.

In conclusion… no matter what language you are using: using it functionally, composing bricks and higher order bricks together, it’s just a matter of mindset!

Set Of Responsibility and IoC

The original post was published here.

I recently read Pietro's post about a possible adaptation of the chain of responsibility pattern: the “Set of responsibility”. This is very similar to its “father” because each Handler handles the responsibility for a Request, but in this case it doesn't propagate the responsibility check to other handlers. There is responsibility without a chain!

In this article I'd like to present the usage of this pattern with an IoC container, where the Handlers aren't added to the HandlerSet list but provided by the container. In this way you can add a new responsibility to the system by simply adding a new Handler to the container, without changing other parts of the implemented code (e.g. the HandlerSet), in full compliance with the open-closed principle.

For the code I'll use the Spring Framework (Java), because it has a good IoC container and provides a set of classes to work with it. The inversion of control principle and dependency injection are first-class citizens in the Spring Framework.

Here is the UML class diagram with 3 responsibilities X, Y, Z and a brief description of the adopted solution.

Class diagram

@Component
public class XHandler implements Handler {
    @Override
    public Result handle(Request request) {
        return ((RequestX) request).doSomething();
    }

    @Override
    public boolean canHandle(Request request) {
        return request instanceof RequestX;
    }
}

The @Component annotation on XHandler tells Spring to instantiate an object of this type in the IoC container.

public interface HandlerManager {
    Result handle(Request request) throws NoHandlerException;
}

@Service
public class CtxHandlerManager implements HandlerManager {

    private ApplicationContext applicationContext;

    @Value("${base.package}")
    private String basePackage;

    @Autowired
    public CtxHandlerManager(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Override
    public Result handle(Request request) throws NoHandlerException {
        Optional<Handler> handlerOpt = findHandler(request);
        if (!handlerOpt.isPresent()) {
            throw new NoHandlerException();
        }
        Handler handler = handlerOpt.get();
        return handler.handle(request);
    }

    private Optional<Handler> findHandler(Request request) {
        ClassPathScanningCandidateComponentProvider provider = createComponentScanner();

        for (BeanDefinition beanDef : provider.findCandidateComponents(basePackage)) {
            try {
                Class<?> clazz = Class.forName(beanDef.getBeanClassName());
                Handler handler = (Handler) this.applicationContext.getBean(clazz);
                // find the handler responsible for the request
                if (handler.canHandle(request)) {
                    return Optional.ofNullable(handler);
                }
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            }
        }
        return Optional.empty();
    }

    private ClassPathScanningCandidateComponentProvider createComponentScanner() {
        ClassPathScanningCandidateComponentProvider provider
                = new ClassPathScanningCandidateComponentProvider(false);
        provider.addIncludeFilter(new AssignableTypeFilter(Handler.class));
        return provider;
    }
}

CtxHandlerManager works like a Handler dispatcher. The handle method finds the Handler and calls its handle method which invokes the doSomething method of the Request.

In the findHandler method I use Spring's ClassPathScanningCandidateComponentProvider with an AssignableTypeFilter on the Handler class. I call findCandidateComponents on a base package (the value is set by the @Value Spring annotation) and for each candidate the canHandle method checks the responsibility. And that's all!

In the Sender class the HandlerManager implementation (CtxHandlerManager) is injected by the Spring IoC container via autowiring:

@Service
public class Sender {
    @Autowired
    private HandlerManager handlerProvider;

    public void callX() throws NoHandlerException {
        Request requestX = new RequestX();
        Result result = handlerProvider.handle(requestX);
        ...
    }
}

This solution lets you add a new responsibility simply by creating a new Request implementation and a new Handler implementation to manage it. By applying the @Component annotation to the Handler you allow Spring to autodetect the class for dependency injection when annotation-based configuration and classpath scanning are used. On application restart, this class can be provided by the IoC container and instantiated simply by invoking the HandlerManager.
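For example, a hypothetical new RequestY responsibility would only require a new handler like the following, with no changes to CtxHandlerManager or to the existing handlers:

@Component
public class YHandler implements Handler {
    @Override
    public Result handle(Request request) {
        return ((RequestY) request).doSomething();
    }

    @Override
    public boolean canHandle(Request request) {
        return request instanceof RequestY;
    }
}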

In the next post I'd like to present a possible implementation of a component factory using the set of responsibility pattern in conjunction with another interesting pattern, the builder pattern.

Happy coding!

Set of responsibility

The original post was published here.

So, three and a half years later… I’m back.

According to Wikipedia, the chain-of-responsibility pattern is a design pattern consisting of a source of command objects and a series of processing objects. Each processing object contains logic that defines the types of command objects that it can handle; the rest are passed to the next processing object in the chain.

In some cases, I'd like to benefit from the flexibility allowed by this pattern without being tied to the chain-based structure, e.g. when there is an IoC container involved: the Handlers in the pattern all have the same interface, so it's difficult to leave their instantiation to the IoC container.

In such a scenario I use a variation of the classic chain of responsibility: there are still responsibilities, of course, but there is no chain out there.

I like to call my variation set of responsibility (or list of responsibility - see below for a discussion about this - or selectable responsibility) - the structure is the one that follows (C# code):

interface Handler {
    Result Handle(Request request);

    bool CanHandle(Request request);
}


class HandlerSet {
    IEnumerable<Handler> handlers;

    HandlerSet(IEnumerable<Handler> handlers) {
        this.handlers = handlers;
    }

    Result Handle(Request request) {
        return this.handlers.Single(h => h.CanHandle(request)).Handle(request);
    }
}

class Sender {
    HandlerSet handler;

    Sender(HandlerSet handler) {
        this.handler = handler;
    }

    void FooBar() {
        Request request = ...;
        var result = this.handler.Handle(request);
    }
}

One interesting scenario in which I've applied this pattern is the case where the Handlers' input type Request hides a hierarchy of different subclasses and each Handler implementation is able to deal with a specific Request subclass: when using polymorphism is not a viable way, e.g. because those classes come from an external library and are not under our control, or because they aren't the best place to implement the processing logic in, we can use set of responsibility in order to clean up the horrible code that follows:

class RequestX : Request {}

class RequestY : Request {}

class RequestZ : Request {}

class Sender {
    Result result = null;

    void FooBar() {
        Request request = ...;

        if (request is RequestX) {
            result = HandleX((RequestX)request);
        } else if (request is RequestY) {
            result = HandleY((RequestY)request);
        } else if (request is RequestZ) {
            result = HandleZ((RequestZ)request);
        }
    }
}

We can't avoid using the is and cast operators, but we can hide them behind a polymorphic interface, adopting a design that conforms to the open-closed principle:

class Sender {
    HandlerSet handler;

    Sender(HandlerSet handler) {
        this.handler = handler;
    }

    void FooBar() {
        Request request = ...;
        var result = this.handler.Handle(request);
    }
}

class HandlerX : Handler {
    bool CanHandle(Request request) => request is RequestX;

    Result Handle(Request request) {
        return HandleX((RequestX)request);
    }
}

class HandlerY : Handler {
    bool CanHandle(Request request) => request is RequestY;

    Result Handle(Request request) {
        return HandleY((RequestY)request);
    }
}

class HandlerZ : Handler {
    bool CanHandle(Request request) => request is RequestZ;

    Result Handle(Request request) {
        return HandleZ((RequestZ)request);
    }
}

Adding a new Request subclass is now only a matter of adding a new implementation of the Handler interface, without the need to touch existing code.

In cases like the one just explained I use the name set of responsibility to stress the idea that only one handler of the set can handle a single, specific request (that's also why I use the handlers.Single(...) method in the HandlerSet implementation).

When the order in which the handlers are tested matters, we can adopt a different selection strategy than Single(...) (e.g. picking the first matching handler): in this case I like to call the pattern variation list of responsibility.

When more than one handler can handle a specific request, we can think of variations of this pattern that select all applicable handlers (i.e. those handlers whose CanHandle method returns true for the current request) and apply them to the incoming request.
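A minimal sketch of that variation (the BroadcastHandlerSet name is mine, just for illustration):

class BroadcastHandlerSet {
    IEnumerable<Handler> handlers;

    BroadcastHandlerSet(IEnumerable<Handler> handlers) {
        this.handlers = handlers;
    }

    IEnumerable<Result> Handle(Request request) {
        return this.handlers
            .Where(h => h.CanHandle(request))
            .Select(h => h.Handle(request))
            .ToList();
    }
}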

So, we have decoupled the set/list/chain-processing logic from the concrete Request processing logic, letting them vary independently, according to the Single Responsibility Principle - an advantage we would not have had adopting the original chain of responsibility pattern…

Embrace change

So it is: three months ago I started a new job, switching from a big company to a little, agile one, after more than eight years of distinguished service :-).
Furthermore: I switched from a Java and JEE-centric technological environment to a richer and more varied one - albeit .NET and C# oriented.
So, my Java Peanuts may in the future become C# Peanuts (or Node.js Peanuts, who knows…) or, more generally, Programming Peanuts: for the moment I'm planning a little post series about my way from Java to .NET, so… if you are interested… stay tuned!