Occasionally I surprise myself and end up feeling a desire to write about it and toot my own horn a little bit. What better place to do that than on a professional blog at least part of the purpose of which is to show prospective employers or clients that I’m good at stuff?

I’m pretty good, I guess

note: personal background jabber, skip this section at will

I’m largely self-taught in the area of databases and SQL. The only course I ever took on the subject was a quarter-length database class, circa 1999, at Hamilton College (since bought up by Kaplan, I think) as part of their two-year IT degree program. It used Microsoft Access, was very beginner-level, and I think I might have been out sick on joins day. Later, when pursuing my Computer Science degree, I avoided the databases course out of dislike for the professor who taught it; the alternative course that met the same requirement had more to do with text indexing and information theory – search-engine kind of stuff – and, oddly enough, it taught and used an open-source multi-dimensional hierarchical database and MUMPS compiler developed by the course’s professor (multi-dimensional databases are quite good at storing and comparing things like vectors of the occurrences of hundreds of different words in a bunch of textual articles). So, yes, I learned MUMPS in college instead of SQL. Actually, if you ever get a wild urge to do such a thing, you can download and make-install the C++ code for the MUMPS compiler we used, which compiles MUMPS into C++. In fact, I’d recommend it to my fellow programming language nerds, especially those interested in old, obscure, or just plain weird languages. At the very least you’ll have a little fun with it; and I believe MUMPS is even still in use in some corners of the health care industry, so you’d be picking up a skill that’s in some demand yet increasingly difficult to hire for. (While you’re at it, check out Dr. O’Kane’s MUMPS book and his rollicking, action-packed novel.)

At my first real programming job, I started out coding in ActionScript 2.0, but when a particular developer left the company, someone was needed to take over server-side development in PHP, so I took it upon myself to learn PHP and, as it turned out, also ended up needing to learn SQL and relational databases. I read a PHP book or two and a whole lot of blogs, but mostly just dove right in to the existing code and gradually made sense out of it. Eventually I was working back and forth between ActionScript and PHP pretty regularly. That kind of pick-it-up-as-needed approach is pretty much how I roll, though it’s hard to explain this kind of adaptability to recruiters who are looking to basically keyword-match your experience against a job description, which can be a real drag if you’re the type of person who craves new experiences. At UNI I had been the kind of student who made a point of taking the more theoretical computer-sciencey courses, on the rationale that things like programming languages are certain to change in the future, but they will most likely continue to build on the same underlying theory dating at least as far back as good ol’ Alan Turing. I would say that approach has paid off well for me in the years since. My first boss described me in a LinkedIn endorsement as being capable of working in multiple programming languages simultaneously, “something which drives most of us insane.”

But I digress (often). Like I said starting out this post, sometimes I still surprise myself. When I pull off something new or just more complex than I’m used to, it feels good, and I like to share it, not just to strut about, but also because I am sure others are out there trying to solve similar problems, and also to give credit to others whose work I drew on to arrive at my solution. And like I said, my SQL skills are largely the product of a few old blog posts and experience so I was pretty stoked at what I pulled off this week.

The assignment

I was given the task of populating a “related articles” part of a page on a news website. Naturally the first thing I thought we needed to hash out was how the system should conclude that two articles are related. After some discussion we arrived at this idea: we would score two articles’ relatedness based on:

  • The number of keyword tags they have in common (this was the same site using acts_as_taggable_on from which I drew this recent post)
  • The number of retailers they have in common (Article HABTM Retailer)
  • How close or far apart their published_at timestamps are (in months)

How this turns out to be slightly difficult

This sounds perfectly reasonable, even like it would be pretty easy to express in an OO/procedural kind of way in Ruby or any other mainstream programming language. But once this site has a long history of articles, looping or #map-ing through all of them to work this out is likely to get way too time- and memory-intensive to keep the site running responsively.
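To put a rough number on that worry: comparing every article with every other is quadratic in the number of articles. A quick back-of-the-envelope sketch in Ruby (names are illustrative):

```ruby
# Each of n articles pairs with every other: n * (n - 1) / 2 comparisons.
def pair_count(n)
  n * (n - 1) / 2
end

pair_count(10_000)  # => 49_995_000 pairs to score on every pass
```

Ten thousand articles means nearly fifty million pairings, which is why recomputing scores on the fly gets out of hand fast.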

Another alternative is to store relatedness scores in a database table and update them only when they need to change; we could hook in to Rails’s lifecycle callbacks like after_save so that when an article is created or saved, we insert or update a record for its relatedness to every other article. That still sounds intensive but we could at least kick off a background worker to handle it. However, I got the feeling that there was potential for errors caused by overlooking some event that would warrant recalculating this table, or missing some pairs.

And there was still another wrinkle to work out: the relatedness scores pertain to pairs of articles, and those pairs should be considered un-ordered: the concept of article A’s relatedness to article B is identical to B’s relatedness to A. I don’t know if any databases have an unordered tuple data type and even if they did whether ActiveRecord would know how to use it. It seems wasteful and error-prone to maintain redundant records so as to have the pairings both ways around. Googling about for good ways to represent a symmetrical matrix in a SQL database didn’t bear much fruit. So it would probably be best to enforce an ordering (“always put the article with the lower ID first” seems reasonable). But then this means to look up related articles, we need to find the current article’s ID in one of two association columns, rather than just one, and then use the other column to find the related article. I’m pretty sure ActiveRecord doesn’t have a way to express this kind of thing as an association. Which is too bad, because ideally, if possible, we’d like to get the relatedness scores and related articles in the form of a Relation so that we can chain other operations like #limit or #order onto it. (Possibly we could write it as a scope with a lambda and give the model a method that passes self.id to that, but I’m still not sure we would get a Relation rather than an Array. The point at which ActiveRecord’s magic decides to convert from one to the other is something I find myself constantly guessing on, guessing wrong, and getting confused and annoyed trying to come up with a workaround.) But so it goes.
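That “lower ID first” convention is easy to sketch in plain Ruby (a hypothetical helper, not anything from the actual codebase):

```ruby
# Hypothetical helper: order a pair of article IDs so the lower one
# always comes first, making (A, B) and (B, A) the same row.
def normalize_pair(a_id, b_id)
  a_id < b_id ? [a_id, b_id] : [b_id, a_id]
end

normalize_pair(42, 7)  # => [7, 42]
normalize_pair(7, 42)  # => [7, 42] -- same pair, same row
```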

Any way we look at this, it looks like we’re going to be stuck writing some pretty serious SQL “by hand”.

I’m not going to show my whole solution here, but you probably don’t need all of it anyway. I think the most useful bit of it to share is the shared-tags calculation.

Counting shared tags in SQL

acts_as_taggable_on has some methods for matching any (or all) of the tags on a list, and versions of this that are aware of tag contexts (the gem supports giving things different kinds/contexts of tags, which I’m not going into here but it’s a cool feature). So obviously you can call #tagged_with using an Article’s tag list to get Articles that share tags with it, but the documentation doesn’t mention anything about ordering the results according to how many tags are matched, or even finding out that number. Well, here’s the SQL query I arrived at that uses acts_as_taggable_on’s taggings table to build a list of article pairs and counts of their shared tags. One nifty thing about it is that it involves joining a table to itself. To do this, you have to alias the tables so that you can specify which side of the join you mean when specifying columns, otherwise you’ll either get an ambiguous column name error or you’ll just get confused. You’ll see I’ve also added a condition in the join that the “first” id be lower than the “second,” forcing an ordering to the ID pairs so as to eliminate duplicate/reversed-order rows and also eliminate comparing any article with itself, since we don’t care to consider an article related to itself. (Also, the way this is written Article pairings with no shared tags won’t be returned at all. Maybe try a left join if you want that.)

select
  first.taggable_id as first_article_id,
  second.taggable_id as second_article_id,
  count(first.tag_id) as shared_tags
from taggings as first
join taggings as second
  on first.tag_id = second.tag_id and
     first.taggable_type = second.taggable_type and
     first.taggable_id < second.taggable_id
where first.taggable_type = 'Article'
group by first_article_id, second_article_id

Add a and (first.taggable_id = 23 or second.taggable_id = 23) to the where clause here and you’ll get just the rows pertaining to article 23 (the parentheses matter, since and binds more tightly than or, and note that MySQL won’t let you use the column aliases in a where clause). Add an order by shared_tags desc and the rows will come back with the highest shared-tag counts, the “most related,” at the top. If you’re looking to know the number of shared acts_as_taggable_on tags among your articles or whatever other model you have, here you are.

Building a leaning tower of SQL

So, for the other two relatedness factors, I did a similar query to this against the articles_retailers table to count shared retailers, and another on articles to compute the number of months apart that pairs of articles were published to the site. Each query used the same “first id less than second id” constraint. Then I pulled the three queries together as subqueries of one larger query, joining them by first_article_id and second_article_id, and added a calculated column whose value was the shared tags count plus the shared retailers count minus the months-apart count and call this their score – a heuristic, arbitrary measure of “how related” each pairing of articles is. (The coalesce function came in mighty handy here. Despite its esoteric-sounding name, all it does is exchange a null value for something else you specify, like you might do with || in Ruby – so coalesce(shared_tags, 0) returns 0 if shared_tags is null, or otherwise returns whatever shared_tags is, for example.)
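If it helps to see the arithmetic spelled out, here’s a pure-Ruby sketch of that heuristic (the method and argument names are mine, not the actual schema), with `|| 0` standing in for coalesce:

```ruby
# Hypothetical sketch of the relatedness heuristic described above.
# `|| 0` plays the same role here that coalesce(x, 0) plays in the SQL.
def relatedness_score(shared_tags, shared_retailers, months_apart)
  (shared_tags || 0) + (shared_retailers || 0) - (months_apart || 0)
end

relatedness_score(3, 2, 4)   # => 1
relatedness_score(nil, 2, 1) # => 1  (no shared tags at all)
```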

As you are probably picturing in your head, the resulting master relatedness-score query is huge. It took me a good couple hours at a MySQL command-line prompt composing the subqueries and overall query a little bit at a time. It felt awesome. But still: the result was one seriously big glob of SQL. (Incidentally iTerm2 acted up in a really weird way when I tried pasting these large blocks of code into it, but not when I was SSHed into a remote server; if this rings a bell to you, drop me a line.) I’m going to spare you the eye-bleeding caused by seeing the whole thing. You’re going to drop that big nasty thing in the middle of some ActiveRecord model? Yikes!

Views to the rescue

In a forum thread where I was looking for help on the implementation of all this, Frank Rietta suggested I consider using a database view. To be perfectly honest, I hadn’t used a view in years, if ever. I didn’t even think MySQL had them (yes, I’m using MySQL, don’t judge) – maybe some older version I used in the past didn’t and they’ve been added since? At first I wasn’t sure how this could help me, but then Frank wrote this excellent blog post on the subject. I read it, and the more I thought about it, the better the idea sounded.

Basically, a view acts like a regular database table, at least when it comes to querying it with a select. But underneath it’s based on some query you come up with of other tables and views. You can’t write to it, but it provides you with a different “view” of your data by what I would describe as “abstracting a query.” And because the view can be read from like any other table, it can also act as the table behind an ActiveRecord model (at least, until you try to #save to it). Go read Frank’s post so I don’t have to recap it here. You’ll be glad you did.

The great advantage of using a view to hold the relatedness scoring is that I don’t have to think about writing Ruby code to maintain the table of relatedness scores, I don’t have to think about background jobs or hooking into ActiveRecord lifecycle callbacks to maintain the data or any of that – the database itself keeps this “table” updated. Any time the tables it depends on change, it changes right along with them automatically. Plus it gets the big hairy SQL query out of my Ruby code where it won’t distract or confuse anyone; and it handles the issue of making sure first_article_id is always lower than second_article_id because that’s expressed right in the query it’s based on.

So that settled it: I’d create a view out of my big relatedness-scoring query and an ActiveRecord model over top of it! Only one problem, and it turned out to be pretty minor: as I mentioned, my big relatedness query involved a join over three subqueries, and it turns out that in MySQL, views can’t contain subqueries. Perhaps they can in other database engines (I wouldn’t be surprised), but not in MySQL. The workaround is to create views for the subqueries and have the outer view query those views. Honestly, that probably makes the SQL read more easily anyway. On the other hand, I ended up creating four views. That was definitely the longest Rails migration I have ever written, by far.
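For the sake of illustration, here’s a hedged sketch of what one of those view-creating migrations might look like, reusing the shared-tags query from earlier. The view and class names are made up; this isn’t my actual migration:

```ruby
# Hypothetical migration: wrap the shared-tags query in a database view.
class CreateSharedTagCounts < ActiveRecord::Migration
  def up
    execute <<-SQL
      create view shared_tag_counts as
      select
        first.taggable_id as first_article_id,
        second.taggable_id as second_article_id,
        count(first.tag_id) as shared_tags
      from taggings as first
      join taggings as second
        on first.tag_id = second.tag_id and
           first.taggable_type = second.taggable_type and
           first.taggable_id < second.taggable_id
      where first.taggable_type = 'Article'
      group by first_article_id, second_article_id
    SQL
  end

  def down
    execute "drop view shared_tag_counts"
  end
end
```

Repeat for the retailers and months-apart subqueries, then a fourth view joining the three, and you have the whole stack.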

The models and other miscellaneous thoughts

So, now I have a table called article_relations that contains pairs of Article id’s and their relatedness scores, I can give it a model like this:

class ArticleRelation < ActiveRecord::Base
  belongs_to :first_article,  class_name: 'Article'
  belongs_to :second_article, class_name: 'Article'

  def other_article(source)
    [first_article, second_article].find{|a| a != source}
  end

  def readonly?
    true
  end
end
And give the Article model a couple methods like this:

  def article_relations
    ArticleRelation.where(
      'first_article_id = ? or second_article_id = ?', id, id).order('score desc')
  end

  def related_articles
    article_relations.map{|r| r.other_article(self)}
  end
Or something to this effect. You’ll likely want to have your view only contain records where the score is above 0, for instance, or give the above methods an optional parameter to use in a limit so you can limit the number of related articles you show.

Which reminds me, speaking of #limit… as I alluded to before, it would be great if I could do things like @article.related_articles.limit(10) here but I can’t. This bugs me a little bit, because it means that some of my queries to the Article class are going to call #limit and others will have to pass the limit as a parameter, or slice the array like [0..9] or something, so I have code where doing the “same” thing reads completely differently. (I am also unfortunate enough to still be working with Rails 2 regularly, where limit goes in an options hash. It appears if you try that syntax in Rails 3, it just ignores it.) There are other gems like punching_bag where this itches at me a little as well (not to mention, I’d like to be able to give my model a method or scope with a name more appropriate to my domain such as popular or hot and have that delegate to most_hit). I think this might just be a product of the usual leakiness of ORM abstractions and I’ll just have to get over it.

One caveat that should be pointed out is that Rails’s generation of schema.rb doesn’t handle views properly, and arguably can’t be made to, depending on what you think the proper thing for it to do would be. Rails will dump the structure of your views out as regular tables, so if you use rake db:schema:load you’ll get tables rather than views with all their cool magic. At this point it’s probably a good idea to uncomment the config.active_record.schema_format = :sql line in your application.rb configuration file, which will make rake db:migrate spit out a structure.sql file instead of schema.rb, and get rid of schema.rb altogether.

Another thing worth considering, depending on the complexity of your view(s), is whether to make them materialized views. This is a view that’s backed by a physical table that gets updated as needed. It’s more efficient to query but a little slower to update so the effects of a change to one of the tables it depends on might not be reflected right away, but this may be a worthwhile trade-off to make.

Join me next time when I talk about technical debt or something like that.

I wanted a really fly keyword tagging input in my app that let me do what I’m already pretty used to doing with Wordpress’s tagging: auto-complete existing tags to help me maintain consistency, but also let me make up new tags on the spot.

Select2 is nice as heck, and has a tagging functionality that does just what I’m looking for and is even prettier than what Wordpress has. The section on “Tagging Support” on the website looked like pretty much exactly what I wanted, but there were a few things to iron out: Firstly, I didn’t want to have to stick all the existing tags in the javascript or in the view. Yeah it’s cool that the asset pipeline lets us do .js.erb but it just feels wrong; and that list of all the existing tags could get pretty big, so jamming it all into an HTML attribute feels even more wrong. What I wanted was that AJAXy searching autocomplete where you start typing and it fetches a list from the server and that list narrows down as you type more letters. And on top of it all, I was doing this in Active Admin in a Rails 3 app.

[image: Select2 docs screenshot]

select2-rails takes care of a good bit of that last bit, though it doesn’t do too much more than package it up for the asset pipeline. I had to wrestle with it quite a bit more, hacking bits and pieces together from different documentation, blogs, and StackOverflow threads, before everything would behave like I wanted, even though I didn’t think what I was after was particularly exotic. So naturally the right thing to do once I got it all working was to write it up here. I think even if you’re not using Active Admin, a lot of this will still help without too much adjustment, especially if you’re using Formtastic.

First off you’ll want gem 'select2-rails' and gem 'acts-as-taggable-on' in your Gemfile and bundle install’d. Then pull the select2 javascript into your app by putting //= require select2 in your active_admin.js – or application.js if you want to also have it available in non-admin parts of your app – and that same line in active_admin.css.scss. If some stuff still looks visually out of whack later on, try adding this at the end of active_admin.css.scss:

body.active_admin {
  @import 'select2';
}
So now we get into how to put this in your Active Admin form. We’ll make it an input for acts_as_taggable_on’s tag_list accessor because it does such a nice job of Doing What You Mean with very little fuss. Here’s a somewhat redacted excerpt from my app/admin/articles.rb:

form do |f|
  f.inputs do
    f.input :title
    f.input :content, as: :rich
    f.input :tag_list,
      label: "Tags",
      input_html: {
        data: {
          placeholder: "Enter tags",
          saved: f.object.tags.map{|t| {id: t.name, name: t.name}}.to_json,
          url: autocomplete_tags_path },
        class: 'tagselect' }
  end
end
As you can see, there’s quite a few attributes being given to the input’s HTML element, which Select2 will then hide and manipulate behind the scenes while presenting us the very cool tagging widget we love. The class could be whatever we want, but it’s what we’ll be using to find this element in the javascript we’ll get to momentarily.

The data hash gets placed on the input as data attributes. This is data we want to make available to said javascript. saved is for the article’s current tags, so that the widget can render those right away. Select2 expects to work with a JSON array of objects, but you’re probably wondering why I’m passing both an id and a name but setting both values to the tag’s name.

The thing is, since we’re using the tag_list accessor, we don’t really care about the tags’ IDs. I think that’s fine; after all, conceptually, a tag’s name is its identifying attribute. It would be a perfectly reasonable design for the tags database table to not have an id column at all and have name be the primary key – that would match our mental model of tags – but this is Rails, where everything has to have an id. More to the point, Select2 won’t render the tags right, or at all, if they don’t have an id attribute with something in it. But when I used the tags’ actual IDs there, the IDs were ending up among the array of tag names in the params coming in to the Rails app, causing extraneous tags to get created whose names were those IDs, and that was awful. There might be other ways around this.
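Here’s a tiny standalone Ruby illustration of that mapping, using a stand-in Struct rather than real acts_as_taggable_on tags:

```ruby
require 'json'

# Stand-in for acts_as_taggable_on tags; just enough for illustration.
Tag = Struct.new(:id, :name)
tags = [Tag.new(5, 'math'), Tag.new(7, 'science')]

# Both keys get the name, sidestepping the IDs-leaking-into-params problem:
tags.map { |t| { id: t.name, name: t.name } }.to_json
# => '[{"id":"math","name":"math"},{"id":"science","name":"science"}]'
```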

The url data attribute is there to tell Select2 where to find the remote service to look up tags in for the auto-complete. It’s up to you whether you want to set this up in another controller, what you want to name it, and so on. In my case, just keeping it simple, I added it to Active Admin’s controller in my app/admin/articles.rb, like so:

controller do
  def autocomplete_tags
    @tags = ActsAsTaggableOn::Tag.
      where("name LIKE ?", "#{params[:q]}%")
    respond_to do |format|
      format.json { render json: @tags, :only => [:id, :name] }
    end
  end
end
and correspondingly, in config/routes.rb:

get '/admin/autocomplete_tags',
  to: 'admin/articles#autocomplete_tags',
  as: 'autocomplete_tags'

What’s going on here is fairly straightforward: Select2 passes in what we’ve typed so far in the q param, and we use a SQL LIKE query to give back tags to offer in the little auto-complete list.
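In plain Ruby terms, that LIKE "q%" match behaves like a prefix filter (illustrative data, obviously not the real query):

```ruby
# Tags in the database (illustrative).
tags = %w[math matrix music history]

# What the user has typed so far (Select2's q param).
q = 'ma'

matches = tags.select { |name| name.start_with?(q) }
# => ["math", "matrix"]
```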

And now, the javascript to fire up Select2’s tag input magic. Right now I just have this tacked on the end of active_admin.js but it’s a significant enough piece of code that I’d feel justified putting it in a separate file and //= require-ing it.

$(document).ready(function() {
    $('.tagselect').each(function() {
        var placeholder = $(this).data('placeholder');
        var url = $(this).data('url');
        var saved = $(this).data('saved');
        $(this).select2({
            tags: true,
            placeholder: placeholder,
            minimumInputLength: 1,
            initSelection: function(element, callback) {
                saved && callback(saved);
            },
            ajax: {
                url: url,
                dataType: 'json',
                data:    function(term) { return { q: term }; },
                results: function(data) { return { results: data }; }
            },
            createSearchChoice: function(term, data) {
                if ($(data).filter(function() {
                    return this.name.localeCompare(term) === 0;
                }).length === 0) {
                    return { id: term, name: term };
                }
            },
            formatResult:    function(item, page) { return item.name; },
            formatSelection: function(item, page) { return item.name; }
        });
    });
});
So at the top you can see I start with a jQuery selector of that “tagselect” class I put on in the input_html option, then grab the values off those data attributes, then call select2 on the element with a whole mess of the options it accepts. The most interesting bits:

  • tags: true is the simplest way to tell Select2 this is a tagging input without having to tell it up front what tags to autocomplete.
  • minimumInputLength is how many letters we want the user to type before we start trying to suggest completions.
  • initSelection is used to set up the tagging input at the start, to get it to display what we brought in the saved data attribute.
  • ajax sets up the call to our autocomplete_tags action described before.
  • createSearchChoice is where we tell Select2 how to put the results of that call in the autocomplete list. The snarly-looking conditional here is just to filter out duplicates of tags we’ve already got picked out. As long as it’s not a duplicate, we whip up another id/name object just like we did when we set up the saved data attribute.
  • formatResult and formatSelection look for a text attribute if you don’t tell them otherwise so I’m telling them to use name.

And that’s pretty much all it takes. I had to complicate it up pretty heavily in order to see how to get it this simple, now you don’t have to. Have fun!

update 6 September 2014: Samo Zeleznik writes in:

When I create a new post with a tag that is the same as a tag that was already created prior to that, it does not save it by its name, but by its ID. So it creates a new entry in the tags table that has a unique ID, but the name of that tag is the ID of the real tag.

What I just wrote is probably a little bit confusing, so let me explain it with an example: I have a post tagged with “math” and this tag has an ID of 5. Now I create a new post and I also tag it with “math”. When I save this post, it will be tagged with 5. So it creates a new tag with a unique ID (6, for example) and names it 5 (the ID of math). Do you have any idea what could be causing this issue?

Around the same time, David Sigley tweets me with what appears to be the same issue.

As it’s been quite a while, all I could offer was that I sort of remembered having trouble with tags getting named their IDs instead of their names, and that there was some hack I had to do that I may not have done enough to point out and explain above. Later Samo sent me the StackOverflow question where he got it worked out, and the solution comports with the Ruby code above that looks like this: f.object.tags.map{|t| {id: t.name, name: t.name}}.to_json. Note how the hash/JSON has an id key and a name key, but the value of both is the tag’s name. The Javascript later does something similar: return { id: term, name: term };. Then David figured it out too. I don’t have a really clear idea of why it has to be this way, it’s a hack, but there you have it.