The problem: You’ve got a bunch of text replacements you’d like to make in strings. Practical application, if you need one: You’re generating messages to be read by a speech-to-text system and there are certain words or names that come up often that it doesn’t pronounce very well, so you’d like to replace them with alternate “phonetic” spellings.

This example is in Javascript but the concepts are broadly applicable. The list of replacements you want to make is stored in key-value pairs, the key being what to replace and the value being what to replace it with: we have them in a Javascript object. Your favorite language’s equivalent might be a Hash, a Dictionary, a Map<String, String>, etc.

Here’s a totally reasonable imperative-OO solution one might come up with:1

class MessageTransformer {
  constructor(replacements) {
    this.replacements = replacements;
  }

  transform(text) {
    for (let str in this.replacements) {
      text = text.replace(
        RegExp(`\\b${str}\\b`, 'gi'),
        this.replacements[str]
      );
    }
    return text;
  }
}
To use this you’d create a MessageTransformer instance during the initialization of your program like const transformer = new MessageTransformer(replacements) and then use it to transform your message like const fixedMessage = transformer.transform(message).

Now, I have a slightly funny history with functional programming. In college I learned some Scheme and thought FP was just about the coolest thing ever, but almost nobody was using it in industry in those days; Java didn’t even have lambdas yet. Then I got a job where I was writing quite a bit of ActionScript, and upon discovering that, being an ECMAScript dialect, it has closures, I went on to write some of the worst wannabe-Lisp-in-ActionScript ever, and had great fun doing it. More recently, however, I was traumatized at a previous workplace by “pure” FP Scala. That’s a great community to hang out in if you really want to get your impostor syndrome fired up. I now require trigger warnings for terminology like “Kleisli arrow” and “final tagless.” I’m undergoing a long, slow recovery. But I got some good things out of it, like an appreciation for immutability. And looking at this solution made that particular spot in my brain itch a little: we keep reassigning new values to the text variable (some languages won’t even let you do this to function parameters, not that you can’t get around it easily enough with a local variable). And then this came to me – or rather, the idea for it did; it took some work to get the actual code right:

function messageTransformer(replacements) {
  return Object.keys(replacements).map(str =>
    text => text.replace(
      RegExp(`\\b${str}\\b`, 'gi'),
      replacements[str]
    )
  ).reduce((acc, f) => text => f(acc(text)), _ => _);
}

There’s actually two significant and completely independent refactors applied here, relative to the first version. First is that I moved away from using a class. The function accepts your replacements object as a parameter, analogous to the constructor, and returns a function that does the transformations, analogous to the method. In my head I think of this as the “objects are just a poor-man’s closures/closures are just a poor-man’s objects” pattern, after something I heard in that class where I learned Scheme. It could probably use a shorter name. It changes the usage syntax a bit: to initialize it, you’d go const transform = messageTransformer(replacements) and then use it like const fixedMessage = transform(message), or if you want to do both all in one go, const fixedMessage = messageTransformer(replacements)(message). This is potentially a pretty handy pattern you can use anytime you might create a class with only one public method.
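To make the usage concrete, here’s the finished function applied to a hypothetical replacement (the Worcester example data is mine, invented for illustration):

```javascript
// The messageTransformer from above, repeated so this snippet runs standalone:
function messageTransformer(replacements) {
  return Object.keys(replacements).map(str =>
    text => text.replace(RegExp(`\\b${str}\\b`, 'gi'), replacements[str])
  ).reduce((acc, f) => text => f(acc(text)), _ => _);
}

// Hypothetical replacement data:
const replacements = { "Worcester": "Wooster" };

// Initialize once, use many times:
const transform = messageTransformer(replacements);
console.log(transform("Welcome to Worcester")); // "Welcome to Wooster"

// Or do both in one go:
console.log(messageTransformer(replacements)("Welcome to Worcester"));
```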

The second and weirder refactor is that I replaced looping through the replacements and assigning the result of performing each replacement back to the variable with… something else. It has two parts and they are a “map” and a “reduce”. You might have heard of MapReduce during the Big Data craze a few years back. This is literally the same concept, but with small data, and turns out it’s a really common FP pattern. “Map” can mean taking a collection of something and turning it into a collection of something else by applying the same function to each element2; “reduce” would then mean taking that collection and reducing it down to one value, for example summing a list of numbers, or even just counting how many things are in the list.

In the map stage, each item in replacements (or more precisely, the array of replacements’ keys) is mapped to a function that performs that replacement. By the end of it we have an array of functions. Each function in that array is analogous to one iteration of the for loop in the first version.
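To see just the map stage in isolation, here’s a sketch with a single hypothetical replacement. Note that nothing gets replaced yet – all we’ve built is an array of functions:

```javascript
// Hypothetical example data:
const replacements = { "Des Moines": "Duh Moyn" };

// The map stage: each key becomes a function that performs that replacement.
const fns = Object.keys(replacements).map(str =>
  text => text.replace(RegExp(`\\b${str}\\b`, 'gi'), replacements[str])
);

console.log(fns.length);                   // one function per replacement: 1
console.log(fns[0]("I love Des Moines"));  // "I love Duh Moyn"
```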

The reduce stage rolls all those functions into one single function by “folding” function composition over it. This is the conceptually densest part of this whole thing, but I’ll try my best. Imperatively speaking, it sort of loops through the array of functions and adds them all together, so that by the end we have a kind of chain of functions that the text gets piped through, but all in one function. How does this work?

Any time you have some function that accepts two parameters of some type and returns something of that same type – its type signature is of the form (A, A) => A – you can use that function over a whole list of As by what’s called reducing or folding: successively applying that function to each item in the collection and the “so far” value. To analogize to the example of summing a list of numbers: if you were doing it with a for-loop, each time through the loop you add the next number in the list to the sum so far; to do the same thing with reduce, you just give it a function that does that, (x, y) => x + y. Given a function that adds two numbers, reduce can use it to add a whole bunch of numbers. Give reduce a function that returns the larger of two numbers and it can use it to find the largest of several numbers. Give reduce a function that ignores the item and adds one to the running tally, (n, _) => n + 1, and you’ll end up with a count of how many numbers were in the list. And so on.
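Here are those three reduces spelled out, over an arbitrary list of numbers:

```javascript
const nums = [5, 3, 9, 2];

// Summing: the (A, A) => A function is plain addition.
const sum = nums.reduce((x, y) => x + y, 0);                   // 19

// Largest: the function returns the bigger of its two arguments.
const max = nums.reduce((x, y) => (y > x ? y : x), -Infinity); // 9

// Counting: ignore the item and add one to the running tally.
const count = nums.reduce((n, _) => n + 1, 0);                 // 4
```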

Function composition is what it’s called when you make a new function out of two functions – all the new function has to do is pass its argument to one of the two functions and then pass the result of that on to the other one. This is especially easy to do with two functions whose parameters and return values are all the same type – that is, they both have a type signature of the form B => B (the actual letter doesn’t matter, but I don’t want to get too confusing by re-using A). If you have two such functions f and g, the composition of them is x => g(f(x)). You can write a function to do it: compose(f, g) = x => g(f(x)). Anything about this sound familiar?
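A tiny sketch of compose, with two made-up string functions:

```javascript
// compose(f, g): apply f first, then pass its result to g.
const compose = (f, g) => x => g(f(x));

// Hypothetical B => B functions, where B is string:
const exclaim = s => s + "!";
const shout = s => s.toUpperCase();

const excited = compose(exclaim, shout);
console.log(excited("hello")); // "HELLO!"
```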

Yes! You can “compose” the concepts of the previous two paragraphs! A function that accepts two functions and returns the same kind of function, like our compose, is yet another example of a function of the form (A, A) => A – it just so happens that its parameters are functions too – A = B => B – making function composition yet another thing we can use in a reduce! And that’s exactly what we’ve done in the reduce above: (acc, f) => text => f(acc(text)), that is, given two functions acc and f, return a function that takes text and returns f(acc(text)). Since we’re working with a collection of B => B functions (where B means strings in our example), with function composition we can roll them all up into one single B => B function.

Oh, but there’s still that other weird-looking parameter I’ve been glossing over, _ => _. That’s just a function that takes one parameter and returns it unchanged. _ is a valid Javascript identifier, so this is just a little style choice on my part. But why does this need to be here? Because a reduce needs a starting value. In yet another analogy to summing a list of numbers with a loop: you need to initialize the sum to 0 before starting the loop. _ => _ is actually a pretty special function to FP heads: they call it the “identity function.” It comes in handy for just this sort of thing, because the identity function is to function composition what 0 is to adding numbers. There’s some scary FP terminology for this that’s hopefully about to be a bit less scary if I can explain it well enough.
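You can see why the identity function is the right starting value by feeding the reduce an empty list of replacements – the result is a transformer that changes nothing:

```javascript
// The identity function: takes one argument, returns it untouched.
const identity = _ => _;

// Reducing an empty array just yields the starting value, so the
// transformer for zero replacements is a do-nothing function.
const noop = [].reduce((acc, f) => text => f(acc(text)), identity);
console.log(noop("unchanged")); // "unchanged"
```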

It’s a concept borrowed from abstract algebra, a term for things that you can put two of together and get the same kind of thing, and hence, things you can reduce with: they’re called monoids. For example, to speak yet again of summing numbers, they say in algebra that “the set of real numbers under addition forms a monoid.” A monoid is made up of a set, analogous to a type in programming; a binary operation on that set (“binary” in the sense that it has two operands), analogous to our (A, A) => A; and an identity element. The identity element is the member of A where, when used in the binary operation, the result is equal to the other argument – like how in real numbers, x + 0 == 0 + x == x.3 In the same way, functions like B => B form a monoid under function composition, with the identity function, often called i, as its identity element, because composing some f with i gets you, for all practical purposes, the same function: i(f(x)) == f(i(x)) == f(x).4
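The identity laws are easy to check directly; here’s a sketch with a made-up f:

```javascript
const i = x => x;
const compose = (f, g) => x => g(f(x));

// A hypothetical B => B function (B = string):
const f = s => s.trim();

// Left and right identity: composing with i changes nothing.
console.log(compose(i, f)(" hi ") === f(" hi ")); // true
console.log(compose(f, i)(" hi ") === f(" hi ")); // true
```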

Anyway, to get back to our little example, I thought this functional version turned out pretty slick, it’s a neat way to conceptualize the problem, the code is really succinct and clean, and there’s no mutability to think about. I decided to write up a post about it for the benefit of functional programming fans who might appreciate it the way I do, and for folks who are in the early stages of exploring functional programming for whom all this explanation might be educational.

Now, I hear a couple of you in the back of the room there grumbling that this implementation is probably terrible on memory usage. There’s something to that. If your list of replacements is big enough (and that would probably have to be pretty big), this thing could overflow the stack. It’s a common issue with highly functional programming style, because you build just about everything out of functions, and function calls use stack space. That said, you know what they say about micro-optimizations. And besides, that sort of thing is quite dependent on the language implementation. If nothing else, this was a cool illustration of the power of functional abstractions.

Implementations for pure functional languages, and others that make a priority of enabling use of functional features, have ways of dealing with this stack usage issue. You’ve probably heard of the optimization of tail-recursion, which can be generalized to tail-calls. This is when a function calls another function as the last thing it does before itself returning to its caller. If function A’s last thing to do is to call function B (which in the case of recursion is the same function as A), and it has no further work to do afterwards, then as soon as B’s stack frame is popped off, A’s will be too, so there wasn’t much point in keeping A’s stack frame around. Tail-call optimization basically consists of popping off that stack frame pre-emptively. When this can be done for a recursive function, the resulting memory usage is essentially the same as for having used a loop. Downsides can include losing debugging information (think stack traces). There’s also a technique called trampolining that I don’t understand very well, except that it somehow results in the memory allocation happening on the heap instead.
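I don’t claim to fully understand trampolining, but here’s a minimal sketch of my understanding of it (all the names here are mine): a trampolined function returns a thunk representing the next step instead of making the tail call itself, and a driver loop “bounces” through the thunks, so the pending work lives in a loop variable rather than in stack frames.

```javascript
// Driver: keep calling as long as we get back a thunk (a zero-argument function).
const trampoline = fn => (...args) => {
  let result = fn(...args);
  while (typeof result === 'function') {
    result = result(); // one "bounce" = one step; the loop replaces the call stack
  }
  return result;
};

// A tail-recursive countdown rewritten to return thunks instead of recursing directly.
const countdown = n => (n <= 0 ? "done" : () => countdown(n - 1));

const run = trampoline(countdown);
console.log(run(100000)); // "done" – deep enough to blow a normal call stack
```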

Anyway, hope this was interesting.

  1. The only thing slightly esoteric here is maybe the regular expression stuff. \b is just a regex thing that matches a “boundary” – the position at the start or end of a whole word, including where a word butts up against the beginning or end of the string or a line – since we want to replace whole words. The flags gi stand for “global” (replace all occurrences, not just the first one found) and “insensitive” (to letter case).

  2. Technically the “context” in which a map occurs doesn’t have to be that it’s a collection, there are lots of other uses; but collections are commonly most people’s first introduction to some of these FP concepts. To give you some idea what other things map can operate on, the then method of a Promise is also basically a map; you give it a function that accepts A and returns B, and it uses it to turn a Promise with A in it into a Promise with B in it. 

  3. There’s a whole lot of other nuances I’m glossing over that you’re likely to run into and understand eventually. For instance, there is such a thing as a monoid without an identity element, except it’s called a semigroup (In fact I’ve made reference to one earlier, I’ll leave it as a challenge for the reader to spot it). And sometimes the order of arguments to the operation matters; when it doesn’t, you have a commutative monoid, like with numbers under addition, but sometimes it does, like with strings under concatenation, and in some of those type of cases you have elements that are only a left identity or right identity… it’s a whole deep and fascinating branch of mathematics that’s totally worth exploring further but I’m trying to keep this article from going off the rails, and this footnote is mostly here for the benefit of the Well-Actually Brigade. 

  4. I’m intentionally using notation here aimed at programmers rather than proper mathematical notation, don’t @ me. 


I’m just going to come out and say it. I hate parameter aligning and I think it looks like crap. Especially for functions with long names. Nothing should be indented that far in the first place, but so much the less so when it’s just suddenly shunted over 20+ spaces rather than the product of a series of increasingly indented lines.

This is really just an extension of my distaste for long lines. They tire my eyes out. There is science about this. It’s why newspapers and magazines are printed in columns. But also, when reading code, I should have to use my horizontal scroll bar as little as possible. I used to be hardcore about an 80 column limit but with some programming languages that gets very restricting, and then there are the matters of long names, literal strings, or long complex type annotations, second parameter lists, implicits, and so on. There should probably be some maximum to which only certain exceptions are allowed but I don’t have a definite number offhand. But if I have to scroll horizontally to even see the arguments you’re passing instead of thinking I’m looking at blank lines, I’m going to be annoyed. And if I can’t fit your code within the width of my screen, you should really re-evaluate your life choices. Preferably I should be able to read it fairly easily when I have two emacs buffers or Intellij editors up side-by-side, subject to choice of a reasonable font size.

This is one of those dumb holy-war issues. People are irrationally attached to their coding styles, which themselves are a set of irrational aesthetic preferences onto which people subconsciously hang ideas about their identity and artistry. I think this is mostly a product of insecurity. As for parameter-aligning, as far as I can guess, it’s an idea people get from Lisp, and being a Lispy thing, it makes people feel smart. (Programmers do an awful lot of terrible things for that reason, like writing needlessly complex code where they should be abstracting something so it’s more readable, or gratuitously using esoteric language features or idioms. I used to do a hell of a lot of this kind of thing.) Well a lot of ideas people have gotten from Lisp have been bad ideas, and this would be one of them.

If your argument/parameter list is long enough that you don’t want to put it all on the same line with the function name and whatever else, it’s fine to split it over more lines. I quite like one parameter per line, especially with case classes, but if the names are short I don’t mind grouping a few together on the same line either – more commonly so for arguments at a function call than for parameters at the function declaration. But you should usually start it by line-breaking after the opening paren of the list and then just indent it a normal indentation amount. It still reflects that you’re continuing from the previous line, but now you won’t have to realign them all everywhere if you change the function’s name. People seem to feel weird about line-breaking after an opening paren even though they have no qualms about doing so after an opening curly. Well, if this was Lisp they would all be parens, so stop worrying.

There’s something to be said for having consistent style in a codebase that’s been worked on by several people, and getting it right by a standard should be as easy to do as possible; ideally, it should be possible to do automatically using something like scalafmt instead of leaving it up to the capricious whims and error-proneness of humans.

An indentation scheme should reflect code structure, shouldn’t be too fiddly to accommodate changes, and should be easy to enforce with a static analyzer. It should not take up an inordinate amount of a coder’s time. Thus I propose a simple scheme that I do not strictly follow myself as yet, but which I think has quite a bit of potential. Indentation should be a number of spaces (this works with tabs too, but not with mixing them together, which you should never do anyway) determined by a simple linear function y = mx + b, where b is some constant indentation level to start with (probably 0 in most cases); m, also a constant, is your tab width (two spaces, or three, or whatever is the common idiom of the language or decided on by team, project, or company); and x, the variable, is the number of expression delimiters (i.e. parens, brackets, begin/end pairs) left unclosed as of the start of the line. (This doesn’t include things that delimit literals, such as the quotes around strings.)

This does have some potential to look a little odd in places where x is more than one greater than on the preceding line, and it doesn’t account for case or if branches that don’t have brackets around them (consider them to have imaginary brackets maybe? Some companies’ standards just say to always use the brackets), but generally it allows you in such cases to put the closing delimiters on separate lines and reflect the depth of structure they are closing without having any of them end up at the same indentation level. Closing delimiters that are at the start of a line should end up aligned with the start of the line they were opened on pretty easily this way.
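Putting numbers on it, with m = 2 and b = 0 (a hypothetical snippet; I’m reading the scheme as not counting a closing delimiter that itself begins the line, which I believe matches the intent of the alignment claim above):

```javascript
const config = {   // 0 delimiters open at line start -> column 0
  server: {        // 1 open ({) -> 2 spaces
    port: 8080,    // 2 open -> 4 spaces
  },               // closer begins the line; aligns with the line that opened it
};
```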

That’s all I have on that, but I’m interested in feedback. Naturally, none of this applies to Lisp dialects which tend to have their own established conventions and I imagine Lispers would have all sorts of reasons for hating this idea, but I don’t much care what those grey neckbeards think. (Maybe Clojure people would be more willing to give it a try but I don’t promise it will look good, I haven’t tried it in any Lispy syntaxen, which as we all know are weird anyway.) What I like about this method is its simplicity and relative lack of special cases or aesthetic judgement calls, which make it ideal for an automatic formatter.

Like many beginning Scala programmers, I was exposed to the Cake Pattern early on and told that this is how you do dependency injection in Scala. Coming from the Ruby world I thought it looked like an awfully heavy-weight method, but of course I didn’t know any other way yet. Right away I was placed on a project in which the Cake pattern was apparently very much in use, a CMS built on Play.

I was tasked with adding a sitemap feature, such that when the path /sitemap.xml was requested, a sitemap of the site would be rendered. This seemed straightforward enough. I would just need to pull some data about the site’s currently published pages from the database and massage it into some pretty straightforward XML. This being Play, I started with a controller, and right away knew I’d need to pull in whatever code pulls pages from the database, which was pretty easy to find. I soon found I would also want to pull in a trait for looking at the contents of the HTTP request. Again, no big deal.

trait SitemapController extends Controller
    with SiteRequestExtractorComponent
    with PageRepositoryComponent {

  def sitemap = {
    // the magic happens...
  }
}
Simple enough, until I tried to compile:

[error] /Users/chuckhoffman/dev/cms/app/controllers/SitemapController.scala:48: illegal inheritance;
[error]  self-type controllers.SitemapController.type does not conform to's selftype with models.auth.UserRepositoryComponent with models.auth.GroupRepositoryComponent with models.approval.VersionApprovalComponent with
[error]   with CmsPageModule

Hm. Looks like somebody used that Cake pattern thingy to inject dependencies into CmsPageModule having to do with users, user “groups,” and approval of new content. That probably has to do with who can do what kind of updating of pages, so even though that isn’t relevant to what I’m after since I only want to read page data, not update it, it still seems reasonable. I’ll just find the right traits that satisfy those three things – even though I’m not really using them here – and add withs for them and all should be good.

One little snag, I guess… it turns out that those traits were “abstract”, which meant grepping through the code to find the correct “implementations,” which turned out to be UserRepositoryComponentPostgres, GroupRepositoryComponentPostgres, and MongoVersionApprovalComponent. (This is a common sort of thing to do, since one often wants to mock out the database for tests.) Took a while to track them down, but eventually I did. So surely I should be able to just add those three withs to the SitemapController, add the imports of them to the top of the file, and now we’re off and running, yeah?

[error] /Users/chuckhoffman/dev/cms/app/controllers/SitemapController.scala:48: illegal inheritance;
[error]  self-type controllers.SitemapController.type does not conform to's selftype with models.auth.UserRepositoryComponent with models.auth.GroupRepositoryComponent with models.approval.VersionApprovalComponent with
[error]   with CmsPageModule
[error]        ^
[error] /Users/chuckhoffman/dev/cms/app/controllers/SitemapController.scala:51: illegal inheritance;
[error]  self-type controllers.SitemapController.type does not conform to models.approval.MongoVersionApprovalComponent's selftype models.approval.MongoVersionApprovalComponent with with models.treasury.TreasuryModule with models.auth.UserRepositoryComponent with models.auth.GroupRepositoryComponent with with com.banno.utils.TimeProviderComponent
[error]   with MongoVersionApprovalComponent
[error]        ^

Oh. Looks like there’s now some kind of dependency here being enforced between pages and something having to do with email; also, versions, in addition to depending on pages, users, groups, and that same email thing again, also depend on… treasuries? Huh?

Plainly there’s a design problem here, because I’m now being forced to mix in traits having to do with treasuries (these are bank websites) into a controller that makes a sitemap. At this point, however, I don’t know Scala well enough to pull off the refactoring this needs with all these self-types in the way. So off I go to find more traits to mix in to satisfy those self-types. Then those traits turn out to have self-types forcing the mixin of even more traits, and so on.

After a day and a half of work, I finally had a working SitemapController.scala file containing about ten lines of actual “pulling web pages data from the database” and “building some XML,” and a couple dozen lines of mostly irrelevant withs and imports just so the bastard would compile.

It’s Time We Had A Talk About What A “Dependency” Is

Consider this: given two modules (in the general sense of “bunch of code that travels together”, so Scala traits and objects, class instances, Ruby modules, and so forth, all apply) A and B, having, let’s say, a dozen functions each, if one of the functions in A calls one of the functions in B, does that make B a dependency of A?

I’ll save you the suspense. No, it does not. Or at least, not that fact alone. In fact, laying aside the concern that a dozen functions might be too many for one module anyway, it’s clear that the dependency is between those two functions, not the whole modules they happen to be in. Which suggests that that one function in that module is responsible for some functionality that may not be all that relevant to what the other eleven are for. In other words, you have a case of poor cohesion.

To the extent that we promote the Cake pattern to new Scala programmers before they have a handle on what good design in Scala looks like, I believe we’re putting the cart before the horse. The cake pattern, or more generally, cake pattern-inspired self-typing, takes your bad design and enlists the compiler to help cram it down others’ throats. Couple this with the fact that a lot of new Scala programmers think that (1) because I’m writing Scala, I’m doing functional programming; (2) functional programming is the wave of the future and OO is on its way out; therefore (3) the past couple decades of software design thinking, coming as it does from the old OO world, has no relevance to me – and we get situations like my humble little sitemap feature.

Cake-patterned code – especially badly cake-patterned code, which has been the majority of cake-patterned code I’ve seen, and which isn’t surprising given the pattern’s complexity (literally nobody I’ve talked to seems to quite completely “get” it, myself included) – is needlessly difficult to refactor. That’s not just because of the high number of different modules and “components” involved, or because you have to very carefully pick apart all the self-types (especially when those have even more withs in them). It’s also because you frequently find yourself wanting to move some function A while making sure it can still call some function B, and B turns out to be very difficult to find, let alone move. It might be in some trait extended by the module A is in, or in some trait extended by one of those, or some trait extended by one of those, and so on, to the point where B could be almost anywhere in your project or any library it uses; and likewise, anywhere in there could easily be completely different functions with the same name. All this just so that you can get the compiler to beat the next developer who has to maintain this code over the head with errors if he doesn’t extend certain traits in certain places, despite the fact that the compiler is already perfectly good at knowing whether you’re trying to call a function that isn’t in scope.

To make matters worse, most folks’ introduction to functional programming these days still consists of pretty basic Lisp or Haskell use, throwing all your program’s functions into one global namespace with no real modularization. It’s no surprise, then, if they see either the cake pattern or trait inheritance in general as simply a way of cramming more stuff into one namespace. Old Rails hands will hear echoes of concerns – or more generally, the Ruby antipattern of “refactoring” by shoving a bunch of seemingly-sorta-related stuff out of the way into a module (it makes your files shorter on average, but doesn’t necessarily improve your design any).

Cohesion and coupling, separation of concerns, connascence, even things like DCI, these things still matter in Scala and in any of today’s rising functional or mostly-functional programming languages – or for that matter, any programming language that gives you the ability to stick related things together, which is pretty much all the useful ones. (I posit that DCI may be especially relevant to the Scala world as it seems like it would play nicely with anemic models based on case classes.)

I hate to keep harping on my Ruby past, but I heartily recommend Sandi Metz’s book Practical Object-Oriented Design in Ruby. Scala is really just kind of like a verbose, statically-typed Ruby plus pattern matching, when you think about it. Both combine OO and functional concepts, both have “single-inheritance plus mixins” multiple-inheritance; heck, even implicit conversions are just a way better way of doing what refinements are trying to do.

Ultimately though, the cake pattern has the same problem as used to be pointed to about those other “patterns” when they were all the rage: people learned the patterns early on, and started using them everywhere because they thought that was how you’re supposed to program now. They ended up with overly convoluted designs because they were wedging patterns in where they weren’t necessary or didn’t make sense, rather than first understanding the reasons the patterns existed, reaching for the patterns only when they found themselves facing the design puzzles the patterns are intended for.

For all the talk on the interbutts about TDD and related topics, it sure seems like as a working programmer I run into a startling number of projects – a great majority, really – that either have no tests, or have old useless tests that were abandoned long ago; and a startling number of developers who still don’t write any tests at all, let alone practice a TDD style of work. It’s as if as an industry we’re all putting up a big front about how important testing and TDD are to us, but then when the fingers hit the keyboard, it’s all lies. That’s probably not really the case; rather, test-infected developers are a small but vocal minority – developers that test tend to also be the kinds of developers that blog, make podcasts, present at conferences, write books, and so on, but these happen to be only a sadly small percentage of all the developers out there cranking out code. But this minority has been talking about testing for what, a decade now at least? So why hasn’t the portion of developers seriously testing grown faster?

Once you’ve got going with TDD, or even just a little automated testing, and have come to rely on it, one of the most frustrating things is to find yourself having to collaborate with others who have not, and have no interest in it. You really don’t want to be left with an only partly-tested system, while these other developers on your project make changes that break your tests with impunity. The path of least resistance is to fall back in line with the rest of your team and go back to what one of my professors back at UNI referred to as the “compile-crap” cycle – a loop of: add or change some code, try to compile it, say “crap” when it fails, repeat. (For interpreted languages, substitute running the application and trying to “use” it in place of the compile step, so maybe call it the “run-crap” cycle.) This friction may well be one of the biggest factors slowing the adoption of TDD; but the fewer developers are testing, the more it will happen, so it’s also an effect. It’s a vicious feedback loop.

Then there’s maintenance, and/or working with “legacy” code, without tests, or with bad tests. Many a project is written with no tests ever – just banging out code in a run-crap loop.

Others start out with tests, but somewhere during the development process something changes and the team reverts back to run-crap. Why do they do this? It may be that members of the development team have been swapped out for some, shall we say, ahem, “cheaper” ones; this might happen when the product is launched and comes to be seen as in “maintenance” phase, but it also happens earlier on. Or it may be that the developers reverted to comfortable old habits in the face of schedule pressure from management – after all TDD can be slower in the short-term, especially when you’re new at it, and it’s easy to lose focus on careful discipline in favor of short-term speed (or at least the appearance thereof) when the management is breathing down your neck or freaking out at you.

In any case, the eventual result is either no tests, or tests that are no help because most of them are failing because they express requirements that have since changed – which might be even worse than no tests at all; it can look like the best way to deal with it is to just nuke the whole suite.

But then what? Touching on how TDD informs design, it’s well established that code written without TDD is likely to have a design that is much harder to write tests for, with lots more coupling and dependency snarls. As requests for bug fixes and new features come in for such a system, how do you work on it in a test-driven manner? Stopping the world long enough to retrofit a complete suite of difficult-to-write tests isn’t feasible, and chances are there’s no documentation you can consult when you hit all those ambiguities in what some code should be doing, so you’re likely not to even know what exactly to test for – one definition of “legacy code” being code whose requirements have been lost. Practicing TDD on greenfield projects is relatively obvious; but the vast majority of development time is spent in maintenance, and legacy/maintenance is “advanced” TDD. I’m probably not telling you anything you don’t already know. Michael Feathers’ book Working Effectively With Legacy Code is the authoritative source on the subject, but if it’s not feasible to halt work long enough to Test All The Things, then is it feasible to halt work long enough to read a book, especially if you’re a painfully slow ADHD-stricken reader like myself? Yet again, it’s much easier to go back to the good old irritating-but-familiar run-crap loop.

It’s clear that as an industry we only stand to benefit by spreading the good word of TDD far and wide. The more it’s being done, the better. But the factors I’ve just outlined present very real obstacles to its adoption. It’s a long-term project of raising awareness and educating the developer public. Meanwhile, what can you as an individual developer do? For starters, if you really want to do TDD but are stuck in a job where everyone’s oblivious to the concept, it’s probably not worth your time trying to force that kind of sea change on your own. You’re swimming against a torrent. My advice? Find a company that’s as serious about it as you are, and go work there instead.

I myself don’t even consider my work to be test-driven. I’m a believer in TDD, and I make the best, sincerest attempts at it I can relative to the time and energy constraints within which I am working. I certainly don’t consider myself an enlightened TDD guru. I even come out and say just that right in the introduction to my résumé. What’s that, you’re supposed to talk yourself up in a résumé and make yourself sound like the answer to all a company’s prayers so that you get the job? I don’t believe in that. I’m hoping to score a gig working with test-driven developers but I don’t want to be expected to be perfect at it from day one if such a company hires me; I want such a job because I know I have a lot to learn and am looking for advantageous situations in which to learn. It pains me that such honesty should seem radical, but in my experience, the pains that come from getting oneself into the wrong situations are worse.

Developers also tend to be a prickly lot with a healthy distrust of dogma. And sometimes the practices of what I might call “strong” or “pure” TDD can feel like a dogma, especially when delivered in a kind of hellfire-and-brimstone way a la your average Bob Martin conference talk. I don’t care for the idea that you cannot be considered a professional developer if you don’t practice TDD (and by whose standard/definition of TDD anyway?).

As I have begun to view it, TDD isn’t something you just start doing and are able to do all of it flawlessly from the get-go. Among the many concepts and tools you’ll need in order to be able to completely test-drive all parts of a system, there are things like the delicate art of mocking, how to fake a logged-in user, how to make a unit test truly isolated, how to mock a collaborator without making the test useless, what different kinds of tests there are, and a lot of subjective experience-based intuition about what tools and techniques are best suited for what kinds of tests and situations. It can all feel really daunting.

Especially in the context of web applications, and then especially when you’re working with a framework such as Rails, there’s a big learning curve, one that I think would be better viewed as a long process of continual improvement. There will be difficulties along the way, but in the meantime you still have to get work done and people are still paying you. To say you can’t call yourself a professional until you’ve already mastered every aspect of TDD feels, frankly, insultingly elitist. You have to crawl before you can walk before you can run before you can fly. Doing some testing still beats the pants off not doing any. I don’t think agile development processes were ever meant to be dogmatic. The processes should be flexible, adaptable, pragmatic – just like the code you hope to write when you use TDD to guide the design.

The problem so far is that too seldom is TDD presented in this way. Instead it’s usually framed as, you’re either TDD or you’re not. (And by the way what constitutes TDD is a constantly moving target.) That way of looking at TDD isn’t going to help you or anybody else adopt it. All it does is feed into your impostor syndrome.

I think it’s worth reminding oneself that guys like Corey Haines took years to get that good at a totally test-driven style. I mean just watch that video. He’s test-driving every little piece of a Rails application totally outside-in, that is, starting with the “outermost” layers, what the user sees, the GUI, the views, and working inward towards the hard candy database center. There are so many points where he shows techniques for isolating the piece he’s working on, hacks to circumvent the coupling inherent in Rails’s architecture in order to get Rails to let him keep working at an upper level of the application instead of bombing out with an error about some lower-level piece not existing yet. Techniques that I just don’t think I would be able to absorb by rote, that he seems to have arrived at on his own through leaps of intuition and experience that I don’t see myself being able to duplicate. It’s quite beautiful but even though I know he wants to sell these videos, I concluded that this wasn’t going to work for me. We all gotta find our own way, I guess.

That kind of outside-in TDD approach is very much in-vogue right now, though. And another thing that’s very in-vogue at the moment, and a very useful guiding concept, is the Rails Testing Pyramid. The tl;dr of it is that your unit tests are the most important, and should be the type of test you have the most of; and as you look up the Rails stack each kind of test is slower and more integrated and rests on the foundation of those below it.

The mosaic of types of tests you might use in a Rails application is larger than they present in that article, and I think several of them can be grouped together in the “service tests” category, but you can see approximately where they would live in the pyramid relative to each other – in order starting from the bottom: unit tests, model tests, controller tests, request tests, helper tests, view tests, client-side/javascript tests (which might be a whole other pyramid actually), and finally acceptance tests/features. As you go up the pyramid in this way, you also find that the tools and techniques become more advanced, or at least are usually assumed to be and presented as such: testing literature usually begins with unit tests, and Rails-oriented testing literature usually begins with what the Rails community has traditionally called “unit tests,” which are tests at the model layer – tests that might be integrated with related models and tied to the database, or might be totally isolated from both, depending on how well you’ve gotten the hang of the higher-level skills of mocking and isolating from the database.

But here’s what I realized a while ago: when you put the outside-in approach together with the Rails Testing Pyramid, the implication is that you are building a pyramid top-first.

Does that even make sense? I mean, I realize we’re talking about software here, not big blocks of stone. It’s a metaphor, but I think there’s useful insight to be gotten from metaphors. The Agile and XP literature says so too.

You’ve got a pyramid of your own to build: your repertoire of testing skills. And if building a pyramid top-first seems counterintuitive, building all of it at once certainly should.

All your favorite TDD gurus had to have started somewhere – probably with a few simple unit or model tests just like most of us probably did. If you get too attached to an ideal of TDD enlightenment, it can be discouraging. Better to keep TDD in mind as a guiding principle, an ideal, then just start testing. As you progress, keep a sharp eye on ways to get more test-driven – places where more testing, new kinds of tests, new techniques and tools, can help you be more confident in your code with more ease. Tackle learning those as you feel yourself become ready for them.

I recently had this idea for a presentation that would bring together concepts from Testivus with a sprinkling of Buddhist philosophy. The saying “if you meet the Buddha on the road, kill him” seemed apt, but I wouldn’t want to be misinterpreted as advocating anyone’s murder.

I think it can be pretty easy to sell developers on some kind of automated testing. There’s a big win right away in that you can spend more time writing useful code and less time filling out the same web form over and over like a trained monkey. That’s already going to make you more productive and your day more enjoyable. Traditionally the introduction to testing has been at the unit test level, but I almost wonder whether it would be better, now that there are good tools for it, to start from full-stack acceptance tests right away and go as far with that as you can. You may end up with slow, very coarse-grained tests this way (and it’s for this reason that so many testing advocates will tell you it’s wrong), but at least they will exercise most of the system and you will catch defects and regressions you were likely to miss otherwise. Of course any developer/team working in this way will end up experiencing some pain when the test uncovers a bug but can’t pinpoint where in the system it is originating; but that’s a good pain point to have if it can be turned into a motivation to dig into those deeper levels of testing.

Convincing developers to test shouldn’t be as hard as it looks like it’s been made. It’s time to simplify the pitch: Testing is a path to reduce suffering. You will be learning it forever.

Occasionally I surprise myself and end up feeling a desire to write about it and toot my own horn a little bit. What better place to do that than on a professional blog at least part of the purpose of which is to show prospective employers or clients that I’m good at stuff?

I’m pretty good, I guess

note: personal background jabber, skip this section at will

I’m largely self-taught in the area of databases and SQL. The only course I ever took on the subject was a quarter-length database class, circa 1999, at Hamilton College (since bought up by Kaplan, I think) as part of their two-year IT degree program. It used Microsoft Access and was very beginner-level and I think I might have been out sick on joins day. Later when pursuing my Computer Science degree I avoided the databases course out of dislike for the professor who taught it; the alternative course to meet the same requirement had more to do with text indexing, information theory – search-engine kind of stuff – and oddly enough, the course taught and used an open-source multi-dimensional hierarchical database and MUMPS compiler developed by the course’s professor (multi-dimensional databases are quite good at storing and comparing things like vectors of the occurrences of hundreds of different words in a bunch of textual articles). So, yes, I learned MUMPS in college instead of SQL. Actually, you can download and make-install the C++ code for the MUMPS compiler we used yourself, which compiles MUMPS into C++, if you ever get a wild urge to do such a thing. In fact, I’d recommend it to my fellow programming language nerds, especially those interested in old, obscure, or just plain weird languages. At the very least you’ll have a little fun with it; and I believe MUMPS is even still in use in some corners of the health care industry, so you’d be picking up a skill that’s in some demand yet increasingly difficult to hire for. (While you’re at it, check out Dr. O’Kane’s MUMPS book and his rollicking, action-packed novel.)

At my first real programming job, I started out coding in Actionscript 2.0, but when a particular developer left the company, someone was needed to take over server-side development in PHP, so I took it upon myself to learn PHP, and, as it turned out, also ended up needing to learn SQL and relational databases. I read a PHP book or two and a whole lot of blogs, but mostly just dove right in to the existing code and gradually made sense out of it. Eventually I was working back and forth between Actionscript and PHP pretty regularly. That kind of pick-it-up-as-needed approach is pretty much how I roll, though it’s hard to explain this kind of adaptability to recruiters who are looking to basically keyword-match your experience against a job description, which can be a real drag if you’re the type of person who craves new experiences. At UNI I had been the kind of student who made a point of taking the more theoretical computer-sciencey courses, on the rationale that things like programming languages are certain to change in the future, but they will most likely continue to build on the same underlying theory dating at least as far back as good ol’ Alan Turing. I would say that approach has paid off well for me in the years since. My first boss described me in a LinkedIn endorsement as being capable of working in multiple programming languages simultaneously, “something which drives most of us insane.”

But I digress (often). Like I said starting out this post, sometimes I still surprise myself. When I pull off something new or just more complex than I’m used to, it feels good, and I like to share it, not just to strut about, but also because I am sure others are out there trying to solve similar problems, and also to give credit to others whose work I drew on to arrive at my solution. And like I said, my SQL skills are largely the product of a few old blog posts and experience so I was pretty stoked at what I pulled off this week.

The assignment

I was given the task of populating a “related articles” part of a page on a news website. Naturally the first thing I thought we needed to hash out was how the system should conclude that two articles are related. After some discussion we arrived at this idea: we would score two articles’ relatedness based on:

  • The number of keyword tags they have in common (this was the same site using acts_as_taggable_on from which I drew this recent post)
  • The number of retailers they have in common (Article HABTM Retailer)
  • How close or far apart their published_at timestamps are (in months)

How this turns out to be slightly difficult

This sounds perfectly reasonable, even like it would be pretty easy to express in an OO/procedural kind of way in Ruby or any other mainstream programming language. But once this site gets a long history of articles, it’s likely that looping or #map-ing through all of them to work this out is going to get far too time- and memory-intensive to keep the site running responsively.
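To make the cost concrete, here’s a minimal brute-force sketch of that approach in plain Ruby. The Article struct, the helper names, and the even weighting of the three factors are all hypothetical stand-ins, not the real models:

```ruby
require 'date'

# Stand-in for the real ActiveRecord model; just enough fields to score with.
Article = Struct.new(:id, :tags, :retailers, :published_at)

# Score one pair: shared tags plus shared retailers, minus months apart.
def relatedness(a, b)
  shared_tags      = (a.tags & b.tags).size
  shared_retailers = (a.retailers & b.retailers).size
  months_apart     = ((a.published_at - b.published_at).abs / 30).round
  shared_tags + shared_retailers - months_apart
end

# The naive approach: compare the current article against every other
# article -- linear work per request, quadratic site-wide.
def related_to(article, all_articles, n = 5)
  (all_articles - [article])
    .sort_by { |other| -relatedness(article, other) }
    .first(n)
end
```

Fine for a few dozen articles; with thousands, every pairwise score gets recomputed on every page view, which is exactly the problem.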

Another alternative is to store relatedness scores in a database table and update them only when they need to change; we could hook into Rails’s lifecycle callbacks like after_save so that when an article is created or saved, we insert or update a record for its relatedness to every other article. That still sounds intensive but we could at least kick off a background worker to handle it. However, I got the feeling that there was potential for errors caused by overlooking some event that would warrant recalculating this table, or missing some pairs.

And there was still another wrinkle to work out: the relatedness scores pertain to pairs of articles, and those pairs should be considered un-ordered: the concept of article A’s relatedness to article B is identical to B’s relatedness to A. I don’t know if any databases have an unordered tuple data type and even if they did whether ActiveRecord would know how to use it. It seems wasteful and error-prone to maintain redundant records so as to have the pairings both ways around. Googling about for good ways to represent a symmetrical matrix in a SQL database didn’t bear much fruit. So it would probably be best to enforce an ordering (“always put the article with the lower ID first” seems reasonable). But then this means to look up related articles, we need to find the current article’s ID in one of two association columns, rather than just one, and then use the other column to find the related article. I’m pretty sure ActiveRecord doesn’t have a way to express this kind of thing as an association. Which is too bad, because ideally, if possible, we’d like to get the relatedness scores and related articles in the form of a Relation so that we can chain other operations like #limit or #order onto it. (Possibly we could write it as a scope with a lambda and give the model a method that passes to that, but I’m still not sure we would get a Relation rather than an Array. The point at which ActiveRecord’s magic decides to convert from one to the other is something I find myself constantly guessing on, guessing wrong, and getting confused and annoyed trying to come up with a workaround.) But so it goes.
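That “lower ID first” convention is easy to capture in a tiny helper, sketched here in plain Ruby (the method name is hypothetical):

```ruby
# Normalize an unordered pair of article IDs to a canonical ordering,
# so that (A, B) and (B, A) always map to the same key.
def canonical_pair(id_a, id_b)
  raise ArgumentError, 'an article is never related to itself' if id_a == id_b
  id_a < id_b ? [id_a, id_b] : [id_b, id_a]
end
```

A score table (or an in-memory Hash) keyed on the canonical pair then stores each pairing exactly once, with no redundant reversed rows.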

Any way we look at this, it looks like we’re going to be stuck writing some pretty serious SQL “by hand”.

I’m not going to show my whole solution here, but you probably don’t need all of it anyway. I think the most useful bit of it to share is the shared-tags calculation.

Counting shared tags in SQL

acts_as_taggable_on has some methods for matching any (or all) of the tags on a list, and versions of this that are aware of tag contexts (the gem supports giving things different kinds/contexts of tags, which I’m not going into here but it’s a cool feature). So obviously you can call #tagged_with using an Article’s tag list to get Articles that share tags with it, but the documentation doesn’t mention anything about ordering the results according to how many tags are matched, or even finding out that number. Well, here’s the SQL query I arrived at that uses acts_as_taggable_on’s taggings table to build a list of article pairs and counts of their shared tags. One nifty thing about it is that it involves joining a table to itself. To do this, you have to alias the tables so that you can specify which side of the join you mean when specifying columns, otherwise you’ll either get an ambiguous column name error or you’ll just get confused. You’ll see I’ve also added a condition in the join that the “first” id be lower than the “second,” forcing an ordering to the ID pairs so as to eliminate duplicate/reversed-order rows and also eliminate comparing any article with itself, since we don’t care to consider an article related to itself. (Also, the way this is written Article pairings with no shared tags won’t be returned at all. Maybe try a left join if you want that.)

select
  first.taggable_id as first_article_id,
  second.taggable_id as second_article_id,
  count(first.tag_id) as shared_tags
from taggings as first
join taggings as second
  on first.tag_id = second.tag_id and
  first.taggable_type = second.taggable_type and
  first.taggable_id < second.taggable_id
where first.taggable_type = 'Article'
group by first_article_id, second_article_id

Add a and (first_article_id = 23 or second_article_id = 23) to the where clause here – the parentheses matter, since and binds more tightly than or – and you’ll get just the rows pertaining to article 23. Add an order by shared_tags desc and the rows will come back with the highest shared-tag counts, the “most related,” at the top. If you’re looking to know the number of shared acts_as_taggable_on tags among your articles or whatever other model you have, here you are.

Building a leaning tower of SQL

So, for the other two relatedness factors, I did a similar query to this against the articles_retailers table to count shared retailers, and another on articles to compute the number of months apart that pairs of articles were published to the site. Each query used the same “first id less than second id” constraint. Then I pulled the three queries together as subqueries of one larger query, joining them by first_article_id and second_article_id, and added a calculated column whose value was the shared tags count plus the shared retailers count minus the months-apart count, and called this their score – a heuristic, arbitrary measure of “how related” each pairing of articles is. (The coalesce function came in mighty handy here. Despite its esoteric-sounding name, all it does is exchange a null value for something else you specify, like you might do with || in Ruby – so coalesce(shared_tags, 0) returns 0 if shared_tags is null, or otherwise returns whatever shared_tags is, for example.)
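The coalesce-to-|| analogy holds up well in plain Ruby; here’s a sketch of the score arithmetic with that null handling (method and argument names hypothetical):

```ruby
# Mirrors the calculated column:
#   coalesce(shared_tags, 0) + coalesce(shared_retailers, 0)
#     - coalesce(months_apart, 0)
# In Ruby, (value || 0) plays the role of coalesce(value, 0): nils from
# a failed join become zeroes instead of poisoning the arithmetic.
def relatedness_score(shared_tags, shared_retailers, months_apart)
  (shared_tags || 0) + (shared_retailers || 0) - (months_apart || 0)
end
```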

As you are probably picturing in your head, the resulting master relatedness-score query is huge. It took me a good couple hours at a MySQL command-line prompt composing the subqueries and overall query a little bit at a time. It felt awesome. But still: the result was one seriously big glob of SQL. (Incidentally iTerm2 acted up in a really weird way when I tried pasting these large blocks of code into it, but not when I was SSHed into a remote server; if this rings a bell to you, drop me a line.) I’m going to spare you the eye-bleeding caused by seeing the whole thing. You’re going to drop that big nasty thing in the middle of some ActiveRecord model? Yikes!

Views to the rescue

In a forum thread where I was looking for help on the implementation of all this, Frank Rietta suggested I consider using a database view. To be perfectly honest, I hadn’t used a view in years, if ever. I didn’t even think MySQL had them (yes, I’m using MySQL, don’t judge) – maybe some older version I used in the past didn’t and they’ve been added since? At first I wasn’t sure how this could help me, but then Frank wrote this excellent blog post on the subject. I read it, and the more I thought about it, the better the idea sounded.

Basically, a view acts like a regular database table, at least when it comes to querying it with a select. But underneath it’s based on some query you come up with of other tables and views. You can’t write to it, but it provides you with a different “view” of your data by what I would describe as “abstracting a query.” And because the view can be read from like any other table, it can also act as the table behind an ActiveRecord model (at least, until you try to #save to it). Go read Frank’s post so I don’t have to recap it here. You’ll be glad you did.

The great advantage of using a view to hold the relatedness scoring is that I don’t have to think about writing Ruby code to maintain the table of relatedness scores, I don’t have to think about background jobs or hooking into ActiveRecord lifecycle callbacks to maintain the data or any of that – the database itself keeps this “table” updated. Any time the tables it depends on change, it changes right along with them automatically. Plus it gets the big hairy SQL query out of my Ruby code where it won’t distract or confuse anyone; and it handles the issue of making sure first_article_id is always lower than second_article_id because that’s expressed right in the query it’s based on.

So that settles it, I create a view out of my big relatedness-scoring query and an ActiveRecord model over top of it! Only one problem, and it turned out to be pretty minor: as I mentioned, my big relatedness query involved a join over three subqueries. It turns out that in MySQL, views can’t contain subqueries in their from clause. Perhaps they can in other database engines – I would not be surprised – but not in MySQL. The workaround for this is to create views for the subqueries and query those views. Honestly, that probably makes the SQL read more easily anyway. On the other hand, I ended up creating four views. That was definitely the longest Rails migration I have ever written, by far.
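To give a flavor of the shape this takes (a sketch with hypothetical view and column names, not my actual migration), the shared-tags query becomes its own named view, and the master view then joins the named views in place of subqueries:

```sql
-- Each former subquery becomes its own named view...
create view article_shared_tags as
  select first.taggable_id as first_article_id,
         second.taggable_id as second_article_id,
         count(first.tag_id) as shared_tags
  from taggings as first
  join taggings as second
    on first.tag_id = second.tag_id
   and first.taggable_type = second.taggable_type
   and first.taggable_id < second.taggable_id
  where first.taggable_type = 'Article'
  group by first_article_id, second_article_id;

-- ...and the master view joins the views instead of inlining subqueries.
create view article_relations as
  select t.first_article_id,
         t.second_article_id,
         coalesce(t.shared_tags, 0)
           + coalesce(r.shared_retailers, 0)
           - coalesce(m.months_apart, 0) as score
  from article_shared_tags as t
  left join article_shared_retailers as r
    on r.first_article_id = t.first_article_id
   and r.second_article_id = t.second_article_id
  left join article_months_apart as m
    on m.first_article_id = t.first_article_id
   and m.second_article_id = t.second_article_id;
```

Note this sketch only shows the structure; driving the joins off the tags view alone would drop pairs that share retailers but no tags, so the real thing has to start from a complete list of the candidate pairs.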

The models and other miscellaneous thoughts

So, now that I have a table called article_relations that contains pairs of Article id’s and their relatedness scores, I can give it a model like this:

class ArticleRelation < ActiveRecord::Base
  belongs_to :first_article,  class_name: 'Article'
  belongs_to :second_article, class_name: 'Article'

  def other_article(source)
    [first_article, second_article].find { |a| a != source }
  end

  # Records backed by a database view can't be saved, so mark them read-only.
  def readonly?
    true
  end
end
And give the Article model a couple methods like this:

  def article_relations
    ArticleRelation.where(
      'first_article_id = ? or second_article_id = ?', id, id
    ).order('score desc')
  end

  def related_articles
    article_relations.map { |r| r.other_article(self) }
  end

Or something to this effect. You’ll likely want to have your view only contain records where the score is above 0, for instance, or give the above methods an optional parameter to use in a limit so you can limit the number of related articles you show.

Which reminds me, speaking of #limit… as I alluded to before, it would be great if I could do things like @article.related_articles.limit(10) here but I can’t. This bugs me a little bit, because it means that some of my queries to the Article class are going to call #limit and others will have to pass the limit as a parameter, or slice the array like [0..9] or something, so I have code where doing the “same” thing reads completely differently. (I am also unfortunate enough to still be working with Rails 2 regularly, where limit goes in an options hash. It appears if you try that syntax in Rails 3, it just ignores it.) There are other gems like punching_bag where this itches at me a little as well (not to mention, I’d like to be able to give my model a method or scope with a name more appropriate to my domain such as popular or hot and have that delegate to most_hit). I think this might just be a product of the usual leakiness of ORM abstractions and I’ll just have to get over it.

One caveat that should be pointed out is that Rails’s generation of schema.rb doesn’t handle views “properly” and probably can’t be made to when you think about it, or depending on what you think the proper thing for it to do would be. Rails will dump the structure of your views out as regular tables, so if you use rake db:schema:load you’ll get tables rather than views with all their cool magic. At this point it’s probably a good idea to uncomment that config.active_record.schema_format = :sql line in your application.rb configuration file, which will make rake db:migrate spit out a structure.sql file instead of schema.rb, and get rid of schema.rb altogether.

Another thing worth considering, depending on the complexity of your view(s), is whether to make them materialized views. A materialized view is a view that’s backed by a physical table that gets updated as needed (note that MySQL doesn’t support these natively, so you’d have to maintain the backing table yourself). It’s more efficient to query but a little slower to update, so the effects of a change to one of the tables it depends on might not be reflected right away, but this may be a worthwhile trade-off to make.

Join me next time when I talk about technical debt or something like that.