The following was written June 1, 2020, and originally posted to mental health-related internal chat rooms at the company where I worked at the time

Yesterday was the last day of Mental Health Awareness Month.

Yesterday would also have been Wesley Willis’s 57th birthday.

If you’re from the Alternative Nation Generation, you may remember Wesley Willis as something of a curiosity, maybe a bit of a joke, and a distastefully exploitative one at that; the sort of thing that would never pass in today’s social climate and should probably be left where we left it. A mentally ill street musician from Chicago (to most, with the word “musician” enclosed in irony-quotes) whom somebody seemingly allowed to start releasing albums and playing clubs for amusement’s sake, sparking a weird 1990s fad with a rather long tail on it.

The truth has a lot more to it, though, and today I want to talk about what that story obscures and misses, the things that Wesley Willis fans and friends-of-friends like myself knew. I want to shine a light on the man Wesley Willis was. The man who faced adversity after adversity and never lost the beautiful soul that inspired friends, brought joy to masses of music nerds, and ultimately made a way for himself to live authentically in a world all but designed against him. Whose sincerity and enthusiasm made him an effortlessly savvy self-promoter and brought him opportunities and agency denied to so many others like him. Who worked hard at doing what he both loved and needed, and radiated love as he did it. It surprised no one that his time on Earth was short, but he made the absolute most of it.

Two documentary films have been made about Mr. Willis, and of the two, one still has a bit of that snicker-at-the-freak smell on it. The one I’d like to invite people to check out if they can find it, Wesley Willis’s Joyrides, focuses on his life story and personal accounts from his closest friends and also gives space to his no less substantial visual arts career. While many have found Wesley Willis’s compositional methods primitive and maddeningly repetitive, there is much depth to be found for those willing to grant time and attention to his oeuvre. In this moment I would particularly like to highlight those songs in which Wesley Willis dealt frankly with his experiences with mental illness: “Chronic Schizophrenia” and “Outburst” from “Greatest Hits Vol. 1” are easy to find but there are many more.

On June 1, 2020, I can say not only that the world is a bit cooler a place for having had Wesley Willis in it, but that we are fortunate, downright lucky, that Wesley Willis never met the fate of George Floyd or so many other Black men, and neurodivergent Black men especially, throughout our country’s history.

RIP Wesley Willis, you are still loved like a milkshake.

It’s been weird times lately for me, and for many others. If it hasn’t been weird times for you, that must be some nice bubble you’re living in.

I write this on a Sunday. Friday was my last day as an employee of The Des Moines Forge Of Pillar Part Of Industry X.0 By Accenture. Fortunately I’m a techbro and had a job offer the same day at a slightly higher salary. I don’t think I’m going to take it.

I happened to rewatch Atari: Game Over yesterday. In it, Howard Scott Warshaw, developer of not only the ill-fated E.T. game but also such unquestionable classics as Yars’ Revenge, reflected on how his brief experience at Atari kind of ruined him for subsequent job prospects. It made him expect better, want more, be less willing to compromise for the sake of going along with the typical corporate environment; not just because of the wild party culture of Atari in the early 1980s, but just as much because of the heights of creative achievement that he and his co-workers reached together, enabled by an openness in which they could bring their whole selves to work.

When I came on at Pillar, it was in the process of being acquired by Accenture. Accenture is the biggest corporation you’ve never heard of: it employs literally half a million people globally across all sorts of consulting and professional-services sectors. I knew all this; it was brought up in my interviews. I seem to have a knack for coming into companies at these moments; my hiring at Banno, my previous job, came at much the same juncture in its acquisition by Jack Henry & Associates. And just like with Banno, the message that went out in Pillar was that what the acquirer saw in the acquiree, the thing to which it attributed the smaller company’s success and what made it such an attractive acquisition target, was its great culture; that the big company’s intention was to learn from that culture and spread the lessons out to the larger organization; and that with that, we were going to rock the world and it was going to be great.

I’ve had a fraught relationship with employment all my legally-employable life. The workplace was always just an even more repressive corner of an already repressive world where I never quite belonged and suffered because of it, yet if I didn’t find a way to slot myself into it, I would never be able to support myself or a family. Pillar brought me out of the shell I’d built around myself since I was a child, and showed me my viewpoint was valid and valued in a way I didn’t realize was possible. It took time for me to adjust and really begin to open up, and I still feel like that process was interrupted in the middle, but I am so overwhelmingly grateful to the people I worked alongside at the Forge who helped that happen. It was a culture that delighted in being eccentric, and that spirit made us powerful, and because of it we did awesome work. Pillar showed me that I could be seen as more than just a human resource or a code monkey, that I could be valued as a fully three-dimensional human being with all my flaws and foibles and ambitions and silly jokes and wild ideas and nerdy interests.

A week ago Friday though, Accenture told me something very different. Even after all the apparently sincere and ground-breaking diversity and inclusion rhetoric I’d heard, and what I took to be so many refreshingly bullshit-free statements from CEO Julie Sweet, ultimately the message Accenture gave me was that I was expendable: that literally some opaque process running in some cloud, triggered by a fluctuation in a stock price or some such thing, could go and query an employee database according to some algorithm, based off data like how many of my work hours were directly billable to an already-won client, and by those results could decide my professional fate, and that that was just totally normal and the way things are done. Because that is evidently the world we are in now.

And this may be what sucks the most: after Pillar, I don’t think I can bring myself to settle for anything less… but what more, really, is out there?

Last fall, I had the great honor, made possible by Accenture even, of being paid a normal work day’s salary to be a guest lecturer for Hour of Code, which is to say that I, along with several of my Forge co-workers, spent a day helping to guide public-school classrooms full of middle-school kids through some fun coding exercises, where they could learn a little about what I do for a living. One of the biggest things that struck me that day was how much more diverse those classrooms were than the rooms I typically spent a workday in. To be slightly flippant about it, there were actually Black kids there. And Hispanic kids. There was a Muslim girl (I’m assuming a little bit here – she was brown-skinned and wore a hijab) who was joyously competing with her Black friend next to her over who could complete the exercises quicker. I remember thinking that if I’m in a position to influence hiring decisions in ten years or so, I sure hope these kids apply.

There were also kids who were totally not into it, who thought all this techie stuff was bullshit, like the Black girl who aspired to be a journalist. I saw her point of view as no less valid, because kids like her are used to seeing white dudes with glasses like me doing this stuff, not people like them, and also because journalism is totally important and we need it. And yet, we’re talking about language here, which is all code really is, and even if you don’t make software your career, understanding how it works is only going to be more crucial to your success in this society going forward. I didn’t quite have the guts to say what I really wanted to say, because I wasn’t the real teacher and it wasn’t my class. But that was the beginning of some notions in my head that I have higher purposes to serve than just building another accounts-payable system.

I don’t know what my future holds but I don’t think I’m likely to just take another plain old tech job. The kind of thing I really want to do involves sharing the joy of creativity that comes with my profession and helping young people find their way into it who might otherwise give up on it before they get the chance; because I nearly did too, and that’s a whole other story. Pillar was the first place I worked that tried to truly treat that kind of ambition as part of the job, and whatever your company is, if you can’t get down with that, I don’t think I can fuck with you anymore.

Fortunately, I have maybe a couple months’ worth of financial cushion in which to try to sort this all out. That’s not a privilege I ever had before.

Like I said at the top of this rant, these are weird times. Hunter S. Thompson is famously credited with saying, “When the going gets weird, the weird turn pro.” I’ve been weird my whole life and this is my time to shine.

The problem: You’ve got a bunch of text replacements you’d like to make in strings. A practical application, if you need one: you’re generating messages to be read aloud by a text-to-speech system, and there are certain words or names that come up often that it doesn’t pronounce very well, so you’d like to replace them with alternate “phonetic” spellings.

This example is in JavaScript, but the concepts are broadly applicable. The list of replacements you want to make is stored in key-value pairs, the key being what to replace and the value being what to replace it with; here we have them in a JavaScript object. Your favorite language’s equivalent might be a Hash, a Dictionary, a Map<String, String>, etc.
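For concreteness, here’s the sort of replacements object I have in mind. The entries are entirely made up for illustration; yours would be whatever your particular speech engine mangles:

const replacements = {
  // key: the word the speech engine mangles; value: a spelling it
  // pronounces more acceptably (all hypothetical examples)
  'Des Moines': 'Duh Moyn',
  'Worcestershire': 'Wooster-sher',
  'SQL': 'sequel',
};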

Here’s a totally reasonable imperative-OO solution one might come up with:1

class MessageTransformer {
  constructor(replacements) {
    this.replacements = replacements;
  }

  transform(text) {
    // Apply each replacement in turn, reassigning the result back to text.
    for (let str in this.replacements) {
      text = text.replace(
        RegExp(`\\b${str}\\b`, 'gi'),
        this.replacements[str]);
    }
    return text;
  }
}

To use this you’d create a MessageTransformer instance during the initialization of your program like const transformer = new MessageTransformer(replacements) and then use it to transform your message like const fixedMessage = transformer.transform(message).
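Concretely, with the hypothetical replacements from above:

const transformer = new MessageTransformer(replacements);

transformer.transform('Our Des Moines team prefers SQL.');
// => 'Our Duh Moyn team prefers sequel.'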

Now, I have a slightly funny history with functional programming. In college I learned some Scheme and thought FP was just about the coolest thing ever, but almost nobody was using it in industry in those days; Java didn’t even have lambdas yet. Then I got a job where I was writing quite a bit of ActionScript, and upon discovering that, being an ECMAScript dialect, it has closures, I went on to write some of the worst wannabe-Lisp-in-ActionScript ever, and had great fun doing it. More recently, however, I was traumatized at a previous workplace by “pure” FP Scala. That’s a great community to hang out in if you really want to get your impostor syndrome fired up. I now require trigger warnings for terminology like “Kleisli arrow” and “final tagless.” I’m undergoing a long, slow recovery. But I got some good things out of it, like an appreciation for immutability. And looking at this solution made that particular spot in my brain itch a little: we keep reassigning new values to the text variable (some languages won’t even let you do this to function parameters, not that you can’t get around it easily enough with a local variable). And then this came to me – or rather, the idea for it did; it took some work to get the actual code right:

function messageTransformer(replacements) {
  return Object.keys(replacements).map(str =>
    // One string => string function per replacement...
    text => text.replace(
      RegExp(`\\b${str}\\b`, 'gi'),
      replacements[str])
  // ...composed together into a single string => string function.
  ).reduce((acc, f) => text => f(acc(text)), _ => _);
}

There are actually two significant and completely independent refactors applied here, relative to the first version. The first is that I moved away from using a class. The function accepts your replacements object as a parameter, analogous to the constructor, and returns a function that does the transformations, analogous to the method. In my head I think of this as the “objects are just a poor man’s closures/closures are just a poor man’s objects” pattern, after something I heard in that class where I learned Scheme. It could probably use a shorter name. It changes the usage syntax a bit: to initialize it, you’d go const transform = messageTransformer(replacements) and then use it like const fixedMessage = transform(message); or if you want to do both all in one go, const fixedMessage = messageTransformer(replacements)(message). This is a potentially pretty handy pattern you can use any time you might otherwise create a class with only one public method.
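To illustrate the pattern in isolation with a made-up example (Greeter here is purely hypothetical): any single-public-method class collapses into a function that returns a function.

class Greeter {
  constructor(greeting) { this.greeting = greeting; }
  greet(name) { return `${this.greeting}, ${name}!`; }
}
new Greeter('Hello').greet('world'); // => 'Hello, world!'

// The same thing as a closure:
const greeter = greeting => name => `${greeting}, ${name}!`;
greeter('Hello')('world'); // => 'Hello, world!'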

The second and weirder refactor is that I replaced looping through the replacements and assigning the result of performing each replacement back to the variable with… something else. It has two parts: a “map” and a “reduce”. You might have heard of MapReduce during the Big Data craze a few years back. This is literally the same concept, but with small data, and it turns out it’s a really common FP pattern. “Map” can mean taking a collection of something and turning it into a collection of something else by applying the same function to each element2; “reduce” would then mean taking that collection and reducing it down to one value, for example summing a list of numbers, or even just counting how many things are in the list.

In the map stage, each item in replacements (or more precisely, each of the keys of replacements) is mapped to a function that performs that replacement. By the end of it we have an array of functions. Each function in that array is analogous to one iteration of the for loop in the first version.
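To visualize the intermediate result, here’s roughly what the map stage hands to the reduce, given the hypothetical replacements from earlier:

// Conceptually, after the map stage we have an array of
// string => string functions, one per replacement:
const fns = [
  text => text.replace(/\bDes Moines\b/gi, 'Duh Moyn'),
  text => text.replace(/\bWorcestershire\b/gi, 'Wooster-sher'),
  text => text.replace(/\bSQL\b/gi, 'sequel'),
];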

The reduce stage rolls all those functions into one single function by “folding” function composition over it. This is the conceptually densest part of this whole thing, but I’ll try my best. Imperatively speaking, it sort of loops through the array of functions and adds them all together, so that by the end we have a kind of chain of functions that the text gets piped through, but all in one function. How does this work?

Any time you have some function that accepts two parameters of some type and returns something of that same type – its type signature is of the form (A, A) => A – you can use that function over a whole list of As by using what’s called reducing or folding; it’s done by successively applying that function to each item in the collection and the “so far” value. To analogize to the example of summing a list of numbers: if you were doing it with a for-loop, each time through the loop you’d add the next number in the list to the sum so far; to do the same thing with reduce, you just give it a function that does that, (x, y) => x + y. Given a function that adds two numbers, reduce can use it to add a whole bunch of numbers. Give reduce a function that returns the larger of two numbers and it can use it to find the largest of several numbers. Give reduce a function that ignores its second argument and just adds 1 to the running value (starting from 0) and you’ll end up with a count of how many things were in the list. And so on.
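Here are those three reduces spelled out in JavaScript, as a quick sketch of the examples just described:

const nums = [3, 1, 4, 1, 5];

// Sum: the (A, A) => A function here is addition.
nums.reduce((x, y) => x + y, 0);          // => 14

// Largest: the function returns the bigger of its two arguments.
nums.reduce((x, y) => Math.max(x, y));    // => 5

// Count: ignore each item and add 1 to the running value.
nums.reduce((count, _) => count + 1, 0);  // => 5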

Function composition is what it’s called when you make a new function out of two functions – all the new function has to do is pass its argument to one of the two functions and then pass the result of that on to the other one. This is especially easy to do with two functions whose parameters and return values are all the same type – that is, they both have a type signature of the form B => B (the actual letter doesn’t matter, but I don’t want to get too confusing by re-using A). If you have two such functions f and g, the composition of them is x => g(f(x)). You can write a function to do it: compose(f, g) = x => g(f(x)). Anything about this sound familiar?
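In actual JavaScript that looks like the following. Note this is the left-to-right order, apply f first and then g, matching the order the reduce above uses:

// Left-to-right composition: the result applies f first, then g.
const compose = (f, g) => x => g(f(x));

const double = x => x * 2;
const addOne = x => x + 1;

compose(double, addOne)(10); // => 21: (10 * 2) + 1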

Yes! You can “compose” the concepts of the previous two paragraphs! A function that accepts two functions and returns the same kind of function, like our compose, is yet another example of a function of the form (A, A) => A – it just so happens that its parameters are functions too – A = B => B – making function composition yet another thing we can use in a reduce! And that’s exactly what we’ve done in the reduce above: (acc, f) => text => f(acc(text)), that is, given two functions acc and f, return a function that takes text and returns f(acc(text)). Since we’re working with a collection of B => B functions (where B means strings in our example), with function composition we can roll them all up into one single B => B function.
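Concretely, the reduce in messageTransformer is just folding our compose over the array of replacement functions. Here’s a small sketch with some made-up string => string functions:

const compose = (f, g) => x => g(f(x)); // same compose as above

const shout = s => s.toUpperCase();
const exclaim = s => s + '!';
const ellipsis = s => s + '...';

// Folding compose over the array rolls the three functions into one.
// (The _ => _ starting value is explained next.)
const all = [shout, exclaim, ellipsis].reduce(compose, _ => _);

all('hey'); // => 'HEY!...'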

Oh, but there’s still that other weird-looking parameter I’ve been glossing over: _ => _. That’s just a function that takes one parameter and returns it unchanged. _ is a valid JavaScript identifier, so this is just a little style choice on my part. But why does it need to be here? Because a reduce needs a starting value. In yet another analogy to summing a list of numbers with a loop: you need to initialize the sum to 0 before starting the loop. _ => _ is actually a pretty special function to FP heads: they call it the “identity function.” It comes in handy for just this sort of thing, because the identity function is to function composition what 0 is to adding numbers. There’s some scary FP terminology for this that’s hopefully about to be a bit less scary, if I can explain it well enough.

It’s a concept borrowed from abstract algebra: a term for things that you can put two of together and get the same kind of thing, and hence, things you can reduce with. They’re called monoids. For example, to speak yet again of summing numbers, they say in algebra that “the set of real numbers under addition forms a monoid.” A monoid is made up of a set, analogous to a type in programming; a binary operation on that set (“binary” in the sense that it has two operands), analogous to our (A, A) => A; and an identity element. The identity element is the member of the set which, when used in the binary operation, yields a result equal to the other argument – like how in real numbers, x + 0 == 0 + x == x.3 In the same way, functions of type B => B form a monoid under function composition with the identity function, often called i, as the identity element, because composing some f with i gets you, for all practical purposes, the same function: i(f(x)) == f(i(x)) == f(x).4
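A quick sketch of both of those monoids in code (the particular functions are made up for illustration):

// Strings under concatenation form a monoid; '' is the identity element.
const concat = (a, b) => a + b;
concat('abc', '') === concat('', 'abc'); // both are just 'abc'

// string => string functions form a monoid under composition;
// the identity function is the identity element.
const compose = (f, g) => x => g(f(x));
const i = x => x;
const f = s => s + '!';
compose(f, i)('hi') === compose(i, f)('hi'); // both are 'hi!'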

Anyway, to get back to our little example, I thought this functional version turned out pretty slick, it’s a neat way to conceptualize the problem, the code is really succinct and clean, and there’s no mutability to think about. I decided to write up a post about it for the benefit of functional programming fans who might appreciate it the way I do, and for folks who are in the early stages of exploring functional programming for whom all this explanation might be educational.

Now, I hear a couple of you in the back of the room there grumbling that this implementation is probably terrible on memory usage. There’s something to that. If your list of replacements is big enough (and that would probably have to be pretty big), this thing could overflow the stack. It’s a common issue with highly functional programming style, because you build just about everything out of functions, and function calls use stack space. That said, you know what they say about micro-optimizations. And besides, that sort of thing is quite dependent on the language implementation. If nothing else, this was a cool illustration of the power of functional abstractions.

Implementations of pure functional languages, and of others that make a priority of enabling functional style, have ways of dealing with this stack-usage issue. You’ve probably heard of the optimization of tail recursion, which generalizes to tail calls. This is when a function calls another function as the last thing it does before itself returning to its caller. If function A’s last act is to call function B (which, in the case of recursion, is the same function as A), and A has no further work to do afterwards, then as soon as B’s stack frame is popped off, A’s will be too, so there wasn’t much point in keeping A’s stack frame around. Tail-call optimization basically consists of popping off that stack frame pre-emptively. When this can be done for a recursive function, the resulting memory usage is essentially the same as if you’d used a loop. Downsides can include losing debugging information (think stack traces). There’s also a technique called trampolining that I don’t understand very well except that it somehow results in the memory allocation happening on the heap instead.
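I won’t pretend to explain trampolining properly, but for the curious, the usual shape of one looks something like this (a minimal sketch, not production code):

const trampoline = fn => (...args) => {
  // Keep calling as long as we get back a thunk (a zero-argument
  // function meaning "more work to do"). Each step's stack frame is
  // gone before the next begins; state lives in closures on the heap.
  let result = fn(...args);
  while (typeof result === 'function') {
    result = result();
  }
  return result;
};

// A "recursive" countdown that returns thunks instead of recursing:
const countdown = n => (n <= 0 ? 'done' : () => countdown(n - 1));

trampoline(countdown)(1000000); // => 'done', without blowing the stack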

Anyway, hope this was interesting.

  1. The only thing slightly esoteric here is maybe the regular expression stuff. \b is just a regex thing that matches a “boundary,” that is, a word boundary, since we want to replace whole words; it matches the position between a word character and a non-word character, including at the beginning or end of the string. The flags gi stand for “global” (replace all occurrences, not just the first one found) and “insensitive” (to letter case). 

  2. Technically the “context” in which a map occurs doesn’t have to be a collection; there are lots of other uses, but collections are commonly most people’s first introduction to some of these FP concepts. To give you some idea of what else map can operate on: the then method of a Promise is also basically a map; you give it a function that accepts an A and returns a B, and it uses it to turn a Promise with an A in it into a Promise with a B in it. 

  3. There are a whole lot of other nuances I’m glossing over that you’re likely to run into and understand eventually. For instance, there is such a thing as a monoid without an identity element, except it’s called a semigroup (in fact I’ve made reference to one earlier; I’ll leave it as a challenge for the reader to spot it). And sometimes the order of arguments to the operation matters; when it doesn’t, you have a commutative monoid, like with numbers under addition, but sometimes it does, like with strings under concatenation, and in some such cases you have elements that are only a left identity or a right identity… it’s a whole deep and fascinating branch of mathematics that’s totally worth exploring further, but I’m trying to keep this article from going off the rails, and this footnote is mostly here for the benefit of the Well-Actually Brigade. 

  4. I’m intentionally using notation here aimed at programmers rather than proper mathematical notation, don’t @ me. 


I’m just going to come out and say it: I hate parameter aligning and I think it looks like crap, especially for functions with long names. Nothing should be indented that far in the first place, and even less so when it’s just suddenly shunted over twenty-plus spaces rather than arriving there through a series of increasingly indented lines.

This is really just an extension of my distaste for long lines. They tire my eyes out; there is science about this, and it’s why newspapers and magazines are printed in columns. But also, when reading code, I should have to use my horizontal scroll bar as little as possible. I used to be hardcore about an 80-column limit, but with some programming languages that gets very restrictive, and then there are the matters of long names, literal strings, long complex type annotations, second parameter lists, implicits, and so on. There should probably be some maximum to which only certain exceptions are allowed, but I don’t have a definite number offhand. But if I have to scroll horizontally to even see the arguments you’re passing, instead of thinking I’m looking at blank lines, I’m going to be annoyed. And if I can’t fit your code within the width of my screen, you should really re-evaluate your life choices. Preferably I should be able to read it fairly easily with two emacs buffers or IntelliJ editors up side-by-side, subject to a reasonable choice of font size.

This is one of those dumb holy-war issues. People are irrationally attached to their coding styles, which are themselves a set of irrational aesthetic preferences onto which people subconsciously hang ideas about their identity and artistry. I think this is mostly a product of insecurity. As for parameter-aligning, as far as I can guess, it’s an idea people get from Lisp, and being a Lispy thing, it makes people feel smart. (Programmers do an awful lot of terrible things for that reason, like writing needlessly complex code where they should be abstracting something so it’s more readable, or gratuitously using esoteric language features or idioms. I used to do a hell of a lot of this kind of thing.) Well, a lot of the ideas people have gotten from Lisp have been bad ideas, and this is one of them.

If your argument/parameter list is long enough that you don’t want to put it all on one line with the function name and whatever else, it’s fine to split it over multiple lines. I quite like one parameter per line, especially with case classes, but if the names are short I don’t mind grouping a few together on the same line either – more commonly for arguments at a function call than for parameters at the function declaration. But you should usually start by line-breaking after the opening paren of the list and then indenting just one normal indentation level. It still reflects that you’re continuing from the previous line, but now you won’t have to realign everything if you change the function’s name. People seem to feel weird about line-breaking after an opening paren even though they have no qualms about doing so after an opening curly. Well, if this were Lisp they would all be parens, so stop worrying.
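To make that concrete (all the names here are made up, with stubs so the snippet actually runs):

// Stubs so the example runs; the names are hypothetical.
const someVeryLongFunctionName = (a, b, c) => [a, b, c];
const firstArgument = 1, secondArgument = 2, thirdArgument = 3;

// What I'm complaining about: aligning to the opening paren.
const aligned = someVeryLongFunctionName(firstArgument,
                                         secondArgument,
                                         thirdArgument);

// What I'm suggesting: break after the paren, indent one normal level.
const suggested = someVeryLongFunctionName(
  firstArgument,
  secondArgument,
  thirdArgument);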

There’s something to be said for having consistent style in a codebase that’s been worked on by several people, and getting it right by a standard should be as easy to do as possible; ideally, it should be possible to do automatically using something like scalafmt instead of leaving it up to the capricious whims and error-proneness of humans.

An indentation scheme should reflect code structure, shouldn’t be too fiddly to accommodate changes, and should be easy to enforce with a static analyzer. It should not take up an inordinate amount of a coder’s time. Thus I propose a simple scheme that I do not strictly follow myself as yet, but which I think has quite a bit of potential. Indentation should be a number of spaces (this works with tabs too, but not with mixing the two, which you should never do anyway) determined by a simple linear function y = mx + b, where b is some constant indentation level you start with (probably 0 in most cases); m, also a constant, is your tab width (two spaces, or three, or whatever is the common idiom of the language or decided on by team, project, or company); and x, the variable, is the number of expression delimiters (i.e. parens, brackets, begin/end pairs) left unclosed as of the start of the line. (This doesn’t include things that delimit literals, such as the quotes around strings.)
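To illustrate with a contrived snippet, taking m = 2 and b = 0, and assuming a closing delimiter at the start of a line doesn’t count toward x (which is what makes closers line up with their opening lines):

function outer(callback) { // x = 0: nothing open yet
  return callback(         // x = 1: the function's curly is open
    [                      // x = 2: curly + paren
      1, 2, 3,             // x = 3: curly + paren + bracket
    ],                     // x = 2 again: aligns with the [ line
  );                       // x = 1 again: aligns with the callback( line
}                          // x = 0 again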

This does have some potential to look a little odd in places where x is more than one greater than on the preceding line, and it doesn’t account for case or if branches that don’t have brackets around them (consider them to have imaginary brackets maybe? Some companies’ standards just say to always use the brackets), but generally it allows you in such cases to put the closing delimiters on separate lines and reflect the depth of structure they are closing without having any of them end up at the same indentation level. Closing delimiters that are at the start of a line should end up aligned with the start of the line they were opened on pretty easily this way.

That’s all I have on that, but I’m interested in feedback. Naturally, none of this applies to Lisp dialects which tend to have their own established conventions and I imagine Lispers would have all sorts of reasons for hating this idea, but I don’t much care what those grey neckbeards think. (Maybe Clojure people would be more willing to give it a try but I don’t promise it will look good, I haven’t tried it in any Lispy syntaxen, which as we all know are weird anyway.) What I like about this method is its simplicity and relative lack of special cases or aesthetic judgement calls, which make it ideal for an automatic formatter.

Like many beginning Scala programmers, I was exposed to the Cake pattern early on and told that this is how you do dependency injection in Scala. Coming from the Ruby world I thought it looked like an awfully heavyweight method, but of course I didn’t know any other way yet. Right away I was placed on a project in which the Cake pattern was apparently very much in use: a CMS built on Play.

I was tasked with adding a sitemap feature, such that when the path /sitemap.xml was requested, a sitemap of the site would be rendered. This seemed straightforward enough. I would just need to pull some data about the site’s currently published pages from the database and massage it into some pretty straightforward XML. This being Play, I started with a controller, and right away knew I’d need to pull in whatever code pulls pages from the database, which was pretty easy to find. I soon found I would also want to pull in a trait for looking at the contents of the HTTP request. Again, no big deal.

trait SitemapController extends Controller
    with SiteRequestExtractorComponent
    with PageRepositoryComponent {

  def sitemap = {
    // the magic happens...
    Ok(sitemapXML)
  }
}

Simple enough, until I tried to compile:

[error] /Users/chuckhoffman/dev/cms/app/controllers/SitemapController.scala:48: illegal inheritance;
[error]  self-type controllers.SitemapController.type does not conform to models.page.CmsPageModule's selftype models.page.CmsPageModule with models.auth.UserRepositoryComponent with models.auth.GroupRepositoryComponent with models.approval.VersionApprovalComponent with models.email.EmailServiceComponent
[error]   with CmsPageModule

Hm. Looks like somebody used that Cake pattern thingy to inject dependencies into CmsPageModule having to do with users, user “groups,” and approval of new content. That probably has to do with who can do what kind of updating of pages, so even though it isn’t relevant to what I’m after, since I only want to read page data, not update it, it still seems reasonable. I’ll just find the right traits that satisfy those three things – even though I’m not really using them here – and add withs for them, and all should be good.

One little snag, I guess… it turns out that those traits were “abstract,” which meant grepping through the code to find the correct “implementations,” which turned out to be UserRepositoryComponentPostgres, GroupRepositoryComponentPostgres, and MongoVersionApprovalComponent. (This is a common sort of thing to do, since one often wants to mock out the database for tests.) It took a while to track them down, but eventually I did. So surely I should be able to just add those three withs to the SitemapController, add the imports at the top of the file, and now we’re off and running, yeah?

[error] /Users/chuckhoffman/dev/cms/app/controllers/SitemapController.scala:48: illegal inheritance;
[error]  self-type controllers.SitemapController.type does not conform to models.page.CmsPageModule's selftype models.page.CmsPageModule with models.auth.UserRepositoryComponent with models.auth.GroupRepositoryComponent with models.approval.VersionApprovalComponent with models.email.EmailServiceComponent
[error]   with CmsPageModule
[error]        ^
[error] /Users/chuckhoffman/dev/cms/app/controllers/SitemapController.scala:51: illegal inheritance;
[error]  self-type controllers.SitemapController.type does not conform to models.approval.MongoVersionApprovalComponent's selftype models.approval.MongoVersionApprovalComponent with models.page.PageModule with models.treasury.TreasuryModule with models.auth.UserRepositoryComponent with models.auth.GroupRepositoryComponent with models.email.EmailServiceComponent with com.banno.utils.TimeProviderComponent
[error]   with MongoVersionApprovalComponent
[error]        ^

Oh. Looks like there’s now some kind of dependency here being enforced between pages and something having to do with email; also, versions, in addition to depending on pages, users, groups, and that same email thing again, also depend on… treasuries? Huh?

Plainly there’s a design problem here, because I’m now being forced to mix in traits having to do with treasuries (these are bank websites) into a controller that makes a sitemap. At this point, however, I don’t know Scala well enough to pull off the refactoring this needs with all these self-types in the way. So off I go to find more traits to mix in to satisfy those self-types. Then those traits turn out to have self-types forcing the mixing-in of even more traits, and so on.

After a day and a half of work, I finally had a working SitemapController.scala file containing about ten lines of actual “pulling web pages data from the database” and “building some XML,” and a couple dozen lines of mostly irrelevant withs and imports just so the bastard would compile.

It’s Time We Had A Talk About What A “Dependency” Is

Consider this: given two modules A and B (in the general sense of “a bunch of code that travels together,” so Scala traits and objects, class instances, Ruby modules, and so forth all apply), each having, let’s say, a dozen functions: if one of the functions in A calls one of the functions in B, does that make B a dependency of A?

I’ll save you the suspense. No, it does not. Or at least, that fact alone does not. In fact, leaving aside the concern that a dozen functions might be too many for one module anyway, it’s clear that the dependency is between those two functions, not the whole modules they happen to live in. Which suggests that that one function in A is responsible for some functionality that may not be all that relevant to whatever the other eleven are for. In other words, you have a case of poor cohesion.

To the extent that we promote the Cake pattern to new Scala programmers before they have a handle on what good design in Scala looks like, I believe we’re putting the cart before the horse. The cake pattern – or more generally, cake-pattern-inspired self-typing – takes your bad design and enlists the compiler to help cram it down others’ throats. Couple this with the fact that a lot of new Scala programmers think (1) because I’m writing Scala, I’m doing functional programming; (2) functional programming is the wave of the future and OO is on its way out; and therefore (3) the past couple decades of software design thinking, coming as it does from the old OO world, have no relevance to me – and we get situations like my humble little sitemap feature.

Cake-patterned code, and especially badly cake-patterned code (which has been the majority of cake-patterned code I’ve seen – not surprising given the pattern’s complexity; literally nobody I’ve talked to seems to completely “get” it, myself included), is needlessly difficult to refactor. That’s partly because of the sheer number of modules and “components” involved, and because you have to very carefully pick apart all the self-types (especially when those have even more withs in them). But it’s also because you frequently find yourself wanting to move some function A while making sure it can still call some function B, and B turns out to be very difficult to find, let alone move. It might be in some trait extended by the module A is in, or in some trait extended by one of those, or some trait extended by one of those, and so on, to the point where B could be almost anywhere in your project or any library it uses – and anywhere in there could just as easily be completely different functions with the same name. All this so that the compiler can beat the next developer who maintains this code over the head with errors if he doesn’t extend certain traits in certain places, despite the fact that the compiler is already perfectly good at knowing whether you’re trying to call a function that isn’t in scope.

To make matters worse, most folks’ introduction to functional programming these days still consists of pretty basic Lisp or Haskell use, throwing all your program’s functions into one global namespace with no real modularization. It’s no surprise, then, if they see either the cake pattern or trait inheritance in general as simply a way of cramming more stuff into one namespace. Old Rails hands will hear echoes of “concerns” – or, more generally, the Ruby antipattern of “refactoring” by shoving a bunch of seemingly-sorta-related stuff out of the way into a module (it makes your files shorter on average, but doesn’t necessarily improve your design any).

Cohesion and coupling, separation of concerns, connascence, even things like DCI, these things still matter in Scala and in any of today’s rising functional or mostly-functional programming languages – or for that matter, any programming language that gives you the ability to stick related things together, which is pretty much all the useful ones. (I posit that DCI may be especially relevant to the Scala world as it seems like it would play nicely with anemic models based on case classes.)

I hate to keep harping on my Ruby past, but I heartily recommend Sandi Metz’s book Practical Object-Oriented Design in Ruby. Scala is really just kind of like a verbose, statically-typed Ruby plus pattern matching, when you think about it. Both combine OO and functional concepts, both have “single-inheritance plus mixins” multiple-inheritance; heck, even implicit conversions are just a way better way of doing what refinements are trying to do.

Ultimately, though, the cake pattern has the same problem people used to point out about those other “patterns” when they were all the rage: people learned the patterns early on and started using them everywhere, because they thought that was how you’re supposed to program now. They ended up with overly convoluted designs because they were wedging patterns in where they weren’t necessary or didn’t make sense, rather than first understanding the reasons the patterns existed and reaching for them only when facing the design puzzles the patterns are intended for.