Saturday, April 16, 2016

Darren on the JVM: Intro to TDD Part 1: Some Ideas, Principles & Techniques


A while ago now I applied for a Java role at a certain newspaper,
having heard they were Agile and proponents of TDD. After a successful
initial telephone screening interview, I was issued a programming
exercise - Battleships! These exercises serve to reveal a candidate's ability to

  • Digest a problem description
  • Extract requirements and define scope
  • Identify expected program behaviour
  • but above all else… WRITE CODE!

In my case, I was given the opportunity to demonstrate my aptitude for TDD, as their required approach to everyday programming.

SPOILER: I did not get the job :(

There is some personal irony in this; unfortunately, I was not given any constructive feedback either.
TDD may to some be old-hat or second-nature, but it is pervasive and still very much a component of many interview processes - which many, many candidates still fail - a sign that it remains a real concern for hiring teams.

In this short series of posts I hope to help those newcomers ramp up and secure those much-desired jobs, and perhaps to offer existing TDD'ers - and indeed TDD-interviewers - some added perspective.

In this blog I will demonstrate how I do Test Driven Development, with an approach - I call it ‘The Introspective Approach’ - that helps me get going when I get stuck.

In subsequent posts in the series I will use a variant of the Battleships game, run from the command-line, as the context for this learning exercise.
Battleships provides a good context as the game is quite familiar and, from a programming point of view, the inputs are easy to define and outputs easy to capture.

The complete exercise spec, along with the solution code will be available on GitHub; the code excerpts wherever included will link back to the relevant file and version there.

First the Fundamentals

Let's boil down what testing and programming in general are; I offer:

A program is just a function over some data; Given some input, a program will deterministically produce some output.

I say ‘deterministic’ as computers in general are well-behaved and we have dominion over all inputs, so basically the same inputs will produce the same outputs, ALWAYS!
~… although, what we don't know may… segmentation fault, NPE, BSoD :D #geek-humour~

Testing is merely defining the behaviour of a program - by stating your expectations for its outputs given a set of inputs - and verifying this behaviour holds true. The task then is to state the minimal set of expectations representative of all realistic input-output pairs, in the context of the business-domain or production-environment.

Thus programming is the task of implementing the most concise yet general solution, for which that expected behaviour is verified.
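To make that concrete with a trivial, hypothetical program-as-function:

```scala
// The 'program': deterministic - same input, same output, ALWAYS
def shout(input: String): String = input.toUpperCase + "!"

// The 'test': a minimal set of expectations over representative inputs
assert(shout("fire") == "FIRE!")
assert(shout("") == "!")
```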

Some Informal Guidelines… Techniques within Techniques

When TDD’ing, I use a few techniques to keep me on track and help me decide what to do next; these techniques are:

  • Defer-and-Delegate
  • Mock
  • Expand-to-Contract (Precursor to Refactor)
  • Refactor

The order isn’t meant to be prescriptive, but if you have a go you’ll see it emerges naturally; let’s look at how these techniques are related and facilitate one another.

Defer and Delegate

Typically a program has a single point of entry, e.g. the public static void main(String[] args) method, but this is seldom the best place to contain all the application logic or make all the decisions.

At the crux of it is the Single Responsibility principle

This technique advocates breaking down the program input and thus breaking down the problem space. If you defer any decision making over any part of the program input as late as possible, ironically this helps you to progress with the solution.

When delegating as well, you decompose the problem and decision making and push those responsibilities off to other components; you end up with a bunch of classes collaborating - some orchestrating, some decision making, some consensus building - to form the system.

If you focus on testing one part of the system - its one decision or responsibility - then you can fix the behaviour of its collaborators to how you need them, in order to support implementing that decision logic or fulfilling that responsibility.

Of course, this ‘fixing of behaviour’ is mocking.

Mock

For this exercise I've used Mockito and ScalaMock, but in general a mocking library allows you to mock or stub at least the public API of a component.

This advocates the Design by Interface technique, which in turn supports/facilitates the High Cohesion, Low Coupling and Interface Segregation principles.

If we delegate certain decision making to component dependencies, for each we define a public API with which to interact with that component, and that public API can be mocked.

We know the part of the input that must be submitted to the component and we know what we need back - mocking allows us to install that behaviour for some or all of the duration of the test. Furthermore, we can specify that only that input will produce that particular output, or we can fix it so that the output we want is always produced, regardless of input.
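In Mockito terms, that looks something like this (the Validator collaborator is a made-up name, not part of the exercise):

```scala
import org.mockito.Mockito.{mock, when}
import org.mockito.ArgumentMatchers.any

// Hypothetical collaborator for a Battleships-style solution
trait Validator {
  def isValid(coordinate: String): Boolean
}

val validator = mock(classOf[Validator])

// Only this particular input produces this output...
when(validator.isValid("A1")).thenReturn(true)

// ...or fix it so the output is always produced, regardless of input:
// when(validator.isValid(any[String]())).thenReturn(true)
```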

Mocking provides quite a lot of flexibility like that, but it’s best not to get too carried away.
Generally it's not good to recursively mock mocks returning other mocks. That said, mocking factories - as you will see in the code - solves the problem of not being able to instantiate an interface that has no implementation or constructor to call.

Expand-to-Contract

A former boss once taught me this one; as an Agile coach and mentor, he observed me trying to go from no-code straight to the perfect-code. I have since observed many colleagues and clients do the same thing, that is:

  1. Assume omniscience
  2. Think ourselves super clever and don’t need to go through the motions
  3. By pure genius we can just materialise that perfect-code at the first attempt, BOOOM!

Nah! We are mere mortals - the compiler and tests will never let us forget this :D

To get an understanding of what we are doing, we should first write code objectively (read: with focus), letting if-else and switch statements expand and for- and while-loops nest as deep as needed, duplicating freely to create a first-cut solution that works; this may seem counter intuitive and look a little silly, but hopefully it makes the problem space easier to understand and reveals whether you have covered all the bases.

I want to say the trick here is to make duplication really big and obvious.

Once it’s working, THEN refactor! AND don’t forget to revisit the nested loops and review their performance.
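A contrived sketch of the expand-then-contract cycle (hypothetical, not from the exercise):

```scala
// First cut: expanded, with the duplication left big and obvious
def describe(row: Int, col: Int): String = {
  if (row < 0) "out of bounds"
  else if (row > 9) "out of bounds"
  else if (col < 0) "out of bounds"
  else if (col > 9) "out of bounds"
  else "on the board"
}

// Once the tests pass, contract under refactoring:
def describeRefactored(row: Int, col: Int): String =
  if ((0 to 9).contains(row) && (0 to 9).contains(col)) "on the board" else "out of bounds"
```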

Refactor

Refactoring is more an activity than a technique; there are a tonne of books on the topic and I won't delve too deep into it here. I'll just propose that refactoring serves to simplify the code: make it more concise, readable, maintainable, debuggable - simple!

One of the main techniques for refactoring is to hunt down and eliminate code duplication; this technique transcends paradigm boundaries (i.e. OO, to functional, to procedural, etc) and is particularly relevant for testing as…

Every line of code written is a line of code that must be tested

In other words, make less work for yourself and reduce your maintenance and testing overhead.

So those are the techniques; nothing really new, yet quite common to not see them done ;P
Also, if you’ve ever heard someone mention the term ‘Design by Testing’, these techniques are what they were talking about… or the outcome of these techniques, anyway.

The Introspective Approach

In my early years of practicing Agile and TDD specifically, I would often struggle to find a starting point.

You may, for instance, be given a grand vision for some system or a solution, or get a bright idea, but somehow you have to bridge the gap between that vision and your first TDD unit-test. As always when trying to achieve something, the first step is the hardest! So I started using this Introspective Approach to overcome that inertia and get the TDD-ball rolling.

Essentially you defer defining the system-, component-, class-, object- or subject-under-test and sort-of start in the first-person as the test itself and let the implementation develop and emerge from there.

You start with the highest level requirement and the highest level expectation i.e. the program input and the program output. Then define a single public method to consume that input and produce that output.

If designed well, any kind of test should be a black-box test, in which context public methods are the only means of interaction - this should always give us 100% coverage over any private methods.

Next Up

As mentioned at the start, the follow-ups to this post in the series are:

Intro to TDD Part 2: Battleships, Java-style
A walkthrough of these techniques with concrete examples in Java.
Intro to TDD Part 3: Battleships, Scala-style
A walkthrough of these techniques with concrete examples in Scala.

Again, the context for both walk-throughs is the Battleships game, which will be specified by a few example scenarios each containing input data and expected output results.

As of this writing, the implementations for both are complete and the posts will (read: should) follow shortly.

Stay tuned!

Thursday, January 07, 2016

Darren on the JVM: Scala: Null Coalescing Operator for Scala

Null (Safe|Dereferencing|Coalescing) Operator

I stumbled on to this concept a while ago when researching the many cool features of Scala. I had come across it before, as I'll mention, but not so concretely. There is a wealth of information on SO and Wiki, which again leaves me mystified as to why Java (my roots) does not support this.

Anyway, what I offer here - which btw is not new - is slightly different to the normal NCO as found in many other languages, where it is a supported language feature.

Note, NCO is supported implicitly by a wider set of languages, i.e. those where every value or expression has a truth value: any value can be coerced to True or False, after which you defer to logical operators. I found this to be the case in ActionScript (I used to be a Flex developer) and it is also the case with Bash, where booleans are applicable.

Having read a few SO posts and seeing what others have put out there, I figured I would have a go at implementing a NCO, seeing what features of Scala can be utilized and how far I can take it.

The aim is to get from this

to this

And just for emphasis, compare the amount of horizontal scrolling you have to do to read those code snippets.
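The two snippets are roughly of this shape (an illustrative sketch with made-up names; the real ones are in the gist):

```scala
// Before: defensive null-checking to reach a nested value (hypothetical model)
case class C(value: String); case class B(c: C); case class A(b: B)

def valueBefore(a: A): String =
  if (a != null && a.b != null && a.b.c != null) a.b.c.value
  else null

// After (the shape we are aiming for - the operator is defined later in the post):
// def valueAfter(a: A): Option[String] = Option(a) ? (_.b) ? (_.c) ? (_.value)
```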

Maybe Null, Maybe Not… It’s Optional

Scala offers Option, Try and Either as ways to program with null and exceptions. Using these isn't quite the same as programming defensively, as the null-check is done automatically under-the-hood. However, like most things Scala, it is a noteworthy paradigm shift… that is, going from Null Programming to Option Programming.

While Option, its various composition functions and pattern matching are cool for handling null, given checking for null is so common in OO (or at least Java), it’s a little surprising that there isn’t seamless support for crossing over from the Null-world to the Option-world in Scala (… presumably seeking to succeed where Java fails).

There is the PartialFunction[A, B].lift which helps, but it only goes one-level deep: the benefit is lost if you are required to use a library with a complex Option-less object model; think of some legacy Java library that does not make use of Options, you may as well stick to defensive programming with null-checks.

Extension Methods a.k.a. Implicits

Thanks to Scala’s extension methods i.e. implicit type conversions, the crossover from null to Option is made very easy.
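The embedded snippet amounts to an implicit conversion along these lines (a sketch; E stands for any entity type, as in the post):

```scala
import scala.language.implicitConversions

// Lift any value of an arbitrary type E into an Option via Option.apply,
// so a null reference crosses over to None automatically
implicit def toOption[E](e: E): Option[E] = Option(e)

val legacy: String = null
val maybe: Option[String] = legacy // None, via the implicit conversion
```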

While we haven’t defined any extension methods explicitly, any function of Option is now available to clients of E.

We could at this point use the familiar composition functions on Option, but we want some nice operator syntactic sugar… it’s all about the sugar!

So taking the idea a step further, we introduce an intermediate wrapper class - a facade if you like - to provide these operators and translate them onto Option composition functions.

So let’s start again with

What we produce is a very concise way to dereference null and chain a sequence of HOFs together: with this we can drill down into the object graph, evaluating to the targeted value or expression, or to None if null is encountered anywhere along the way.
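Here is a minimal sketch of that wrapper (I have assumed the name NSCOps from later in the post; the gist holds the full version, and the object model here is made up):

```scala
implicit class NSCOps[A](option: Option[A]) {
  // '?' drills one level into the object graph: apply f when a value is present,
  // lifting any null result to None so the chain continues safely
  def ?[B](f: A => B): Option[B] = option.flatMap(a => Option(f(a)))
}

// Hypothetical object graph with nullable references:
case class Address(city: String)
case class User(address: Address)

val user = User(null)
val city: Option[String] = Option(user) ? (_.address) ? (_.city) // None - null en route
```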

Here is an excerpt from a ScalaTest spec I wrote while exploring all this:

Here you see an example each for passing functions and passing lambdas (anonymous functions); the latter turns out a little unfortunate with all the programming punctuation.

Nonetheless it works! At this stage we are free to match { ... } or map { ... } and so on.

We can extend NSCOps to include a whole raft of convenience operators.

One very cool addition would be performing value checks, that is, for values other than null

How cool is that! All operators continue down the chain if the option is not None and:

  • ?~ the filter expression evaluates to true
  • ?? the non-None value matches the by-name (lazy) operand
  • ?! the non-None value does not match the by-name (lazy) operand
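Sketched against Option's own combinators, those operators could look like this (my assumed translation; the gist is authoritative):

```scala
implicit class NSCValueOps[A](option: Option[A]) {
  def ?~(p: A => Boolean): Option[A] = option.filter(p)              // continue if the filter holds
  def ??(expected: => A): Option[A] = option.filter(_ == expected)   // continue if equal to the by-name operand
  def ?!(unwanted: => A): Option[A] = option.filterNot(_ == unwanted) // continue if not equal
}

Option(42) ?~ (_ > 40) // Some(42)
Option(42) ?? 41       // None
Option(42) ?! 41       // Some(42)
```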

The by-name operators are probably more useful after the Immutable paradigm shift where case classes are the norm, for which == does the right thing.

Since originally posting, I have made edits to reference the published gist, which you can see in its entirety here.

PLEASE take note of the non-contiguous line numbers here in the post; the Gist integration I use lets you select line-number ranges to show, but does not make this obvious to the reader - ellipses would have been nice. Please copy-paste from the gist.

Wednesday, January 06, 2016

Darren on the JVM: Scala: Timely Type-Classes Brings Rails-style Date Arithmetic

For those of you who have ever done any Ruby development on Rails super-charged with ActiveRecord (as I recall back from 2008), you would have been blessed with the most intuitive date-arithmetic facility known to programming-man.

At that time in my life, Java was my main bread-and-butter and I longed for this simplicity… I am sure we all did.

Why it did not exist was simple: primitives such as int and String had these binary operators, as supported by the Java language, but of course Objects like java.util.Date did not; to get them would obviously require a change to the language, but where would you draw the line? What would be the default java.lang.Object behaviour in an obj1 + obj2 = ??? situation?

Then along came Scala: read '…a change to the language' becomes '…change the language'

Writing Without Punctuation

Before getting into the main meat of this post, let’s digress for a moment to recall another cool and important feature of Scala.

We know binary operators in Scala are just single-parameter functions in disguise. So flipping that on its head, we can use single-parameter functions like operators:

"Scala" concat "Is" concat "The" concat "Bees" concat "Knees"

is exactly the same as

"Scala".concat("Is").concat("The").concat("Bees").concat("Knees")
Try them both in sbt console

There are some quirks and special steps to be taken if intermixing alphanumeric characters and symbols in your function names, but you get the gist.

So basically Scala allows us to remove punctuation around function invocations, when those functions are single-parameter functions; you can still remove the . (dot) in invocations of functions with multiple parameters, but the parentheses are still required (…why? Off topic!)
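For instance, runnable in sbt console:

```scala
// The dot can go for multi-parameter functions too, but the parentheses stay:
val s = "Scala" substring (0, 3) // same as "Scala".substring(0, 3)
```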

Now, exposing binary operators for what they are, it stands to reason they can be overridden… or more importantly, can be defined where once they were not.

Extension Methods a.k.a. Implicits

Extension methods are a language feature I discovered when consulting on a C# project in Germany many years ago. I remember thinking how cool they were back then and still think they are awesome now in Scala.

Other languages support this feature in their own form too: Ruby has monkey-patching and JavaScript has prototype augmentation. I suppose if you are really desperate for this facility in Java, one can introduce Aspects.

Of course in Scala they are merely an application of implicits, or more accurately, the use of implicit type-conversion.

// Runnable in 'sbt console'
import java.util.Date

implicit class SuperfluousDate(date: Date) {

  def someSuperfluousFunction = date.toString
}

case class UnimaginableDate(date: Date) {

  def someUnimaginableFunction = date.toString
}

implicit def toTheUnimaginable(date: Date) = UnimaginableDate(date)

val now = new Date

now.someSuperfluousFunction  // extension method via the implicit class
now.someUnimaginableFunction // via the implicit def conversion

As with C# and Extension Methods, one needs to be careful with how implicit type-conversion is applied, so as to preserve the semantic/conceptual/behavioural integrity of the class being extended.

Type Classes a.k.a. Implicits²

Type classes are a powerful feature that allows Scala developers to define an interface to capture the behaviour to be added to a class. That type-class can then be used to declare dependencies and expectations in other classes.

An out-of-the-box example of this is Scala’s Ordering[T] trait: collections have both min() and max() functions, for example; they only work for the collection, say of type List[A], if there is an implementation of Ordering[A] in implicit scope.
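For instance, with a hypothetical Candidate type:

```scala
case class Candidate(score: Int)

// min()/max() only work for List[Candidate] once an Ordering[Candidate] is in implicit scope
implicit val byScore: Ordering[Candidate] = Ordering.by(_.score)

List(Candidate(2), Candidate(5), Candidate(3)).max // Candidate(5)
```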

Note type-classes are typically defined as traits and are always parametric, that is, they are defined with a parameter type, [T]; they typically have many - at least one, to be useful - concrete implementations that are case-objects and specify [T].
This is a key difference from the application of simple extension methods, which require a concrete class, as above, that is not a case-object and can work without parameter types.

The function definitions therein also differ, where it is typical in type-classes to define the first parameter as the this object of the function.

// ...continued
import scala.concurrent.duration.FiniteDuration

trait Chrono[T] {

  def now: T
  def toMillis(t: T): Long
  def fromMillis(millis: Long): T
  def -(t: T, duration: FiniteDuration): T
  def +(t: T, duration: FiniteDuration): T
}

implicit case object DateChrono extends Chrono[Date] {

  def now = fromMillis(System.currentTimeMillis())
  def toMillis(date: Date) = date.getTime
  def fromMillis(millis: Long) = new Date(millis)
  def -(date: Date, duration: FiniteDuration): Date = new Date(date.getTime - duration.toMillis)
  def +(date: Date, duration: FiniteDuration): Date = new Date(date.getTime + duration.toMillis)
}

As you can see, the type-class Chrono[T] is parametric in [T] and the concrete case-object implementation, DateChrono, specifies [T] to be [Date].

The usefulness of this is perhaps still not apparent

// ...continued
import scala.concurrent.duration.{FiniteDuration, DAYS}

val later = DateChrono.+(now, FiniteDuration(10, DAYS))


Implicits * (1 + Implicits)

Now we combine the basic implicits, as used to implement extension methods, with the advanced implicits used to define type-classes, to give us the ActiveRecord-style date-arithmetic sugary goodness

// ...continued
implicit class ChronoArithmetic[T](t: T)(implicit ev: Chrono[T]) {

  def plus(duration: FiniteDuration) = ev.+(t, duration)
  def +(duration: FiniteDuration) = plus(duration)
  def minus(duration: FiniteDuration) = ev.-(t, duration)
  def -(duration: FiniteDuration) = minus(duration)
}

now + FiniteDuration(10, DAYS)

and more sugar

// ...continued
val days20 = FiniteDuration(20, DAYS)

now + days20

and yet more SUGGGAAAARRR

// ...continued
import scala.concurrent.duration._

now + 30.days

How cool is that!

Perhaps there was a bit of a leap there: deciphering ChronoArithmetic, it offers an implicit conversion of instances of type [T] but only where there is an implementation of Chrono[T] available in implicit scope.

It doesn’t end there…

Testing With ChronoTime

Now you may wonder how this applies to testing: have you ever been in the situation where you wanted to set when now is?

Say in a batch of time-sensitive tests, by virtue of covering time-sensitive production code, where you must have the test code and the production code reliably agree on when now is, down to the microsecond.

You can’t monkey around with the system time as that will break with parallelized testing… and probably break a whole bunch of other things.

Joda-Time offers a mechanism to do this with DateTimeUtils.setCurrentMillisFixed(long), made thread-safe with DateTimeUtils.setCurrentMillisProvider(DateTimeUtils.MillisProvider millisProvider) … which is cool if you are already using Joda-Time and can ensure that all past and future requests for the current time are made via DateTimeUtils.currentTimeMillis() and not the typical System.currentTimeMillis().

An old colleague of mine faced this exact problem and enhanced the implementation with a clock!

// ...continued
trait Clock {
  def millis: Long
}

Now this is essentially the same as DateTimeUtils.MillisProvider. Originally, the API was updated such that an instance of this Clock was explicitly passed around wherever now was needed.

// ...continued... but hold off pasting this in
trait Chrono[T] {
  def now(clock: Clock) = fromMillis(clock.millis)
  // ...the rest of Chrono[T] as before
}

def rightNow[T](clock: Clock)(implicit ev: Chrono[T]): T = ev.now(clock)
It seemed a shame to litter the API with the introduction of this new Clock class, but that was easily remedied by making it implicit also.

// ...continued... go ahead and paste this in (update Chrono[T])
trait Chrono[T] {
  def now(implicit clock: Clock) = fromMillis(clock.millis)
  // ...the rest of Chrono[T] as before
}

def rightNow[T](implicit ev: Chrono[T], clock: Clock): T = ev.now

The implementation of now(...) is now (ha) completely provided by Chrono[T]; it is no longer an abstract method and can be removed from all type-class implementations, such as DateChrono.

Notice how implicit parameters are automatically in implicit scope for invocations made inside the function body; this has the effect that implicit parameters get passed along.
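For example, continuing with the Clock introduced above (the fixed value here is just for illustration):

```scala
// (Clock as defined earlier in the post)
trait Clock { def millis: Long }

// An implicit parameter is itself in implicit scope inside the function body,
// so it flows through nested calls without being mentioned explicitly
def inner(implicit clock: Clock): Long = clock.millis
def outer(implicit clock: Clock): Long = inner // clock passed along implicitly

implicit val fixed: Clock = new Clock { def millis = 42L }
```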

Another quick digression to bring to mind another cool feature of Scala and application of it.

Have Your Slice of Cake

In Scala, it is very common to implement the Cake pattern and compose the execution environment with layers, whether that be the production execution environment or the test execution environment.

The Cake pattern is made possible by Scala’s Traits language feature.
In some ways Scala Traits are similar to Aspects in Java. That said, I have only ever seen Aspects used to augment behaviour; Scala Traits can mixin behaviour AND mixin state - the latter along with Scala’s self-type are utilised in implementing the Cake pattern.

The Cake pattern and layering is really beneficial in that it provides an environment and object-graph that is type-checkable at compile-time; compare that to the dynamic nature of Spring-powered dependency-injection – you know if the Scala + Cake compiles, all dependencies are satisfied.

Execution Environments

Let's introduce a Clock implementation that defers to the system time.

// ...continued
trait ClockProvider {

  implicit val clock: Clock
}

case object SystemClock extends Clock {
  def millis = System.currentTimeMillis()
}

trait SystemClockProvider extends ClockProvider {

  implicit val clock = SystemClock
}
This implementation can be used to compose an object graph in a production environment

// ...continued
class Main extends SystemClockProvider {
  self: ClockProvider =>

  val now: Date = rightNow[Date]
}

val then = new Main().now

val later = then + 20.days

For testing, we can provide an alternate implementation that always returns the same value

// ...continued
class Production(implicit clock: Clock) {

  def get5SecondsLater = rightNow[Date] + 5.seconds
}

case object FixedClock extends Clock {
  def millis = {
    0 // since Unix Epoch, 1st January 1970
  }
}

// (assumed definition - the original snippet elides FixedClockProvider;
// it mirrors SystemClockProvider)
trait FixedClockProvider extends ClockProvider {
  implicit val clock = FixedClock
}

class FixedClockTest extends FixedClockProvider {
  self: ClockProvider =>
  val later5Seconds = rightNow[Date] + 5.seconds
  val prod = new Production

  assert(prod.get5SecondsLater == later5Seconds, "FAIL")
}

new FixedClockTest

Here’s a negative test using the SystemClock

// ...continued
class SystemClockTest extends SystemClockProvider {
  self: ClockProvider =>
  val later5Seconds = rightNow[Date] + 5.seconds
  val prod = new Production

  val prodLater = prod.get5SecondsLater
  assert(prodLater != later5Seconds, "FAIL")
}

new SystemClockTest

That’s about it.

Feel free to copy-paste from here; there is now also a Gist, including a ScalaTest spec, available here.