Several times in the not-too-distant past, I’ve seen Eugene Wallingford and others cite the declining popularity of Computer Science at colleges and universities in recent years, just when we are going to need good new programmers more than ever. All kinds of theories abound as to the causes – misplaced tech-outsourcing paranoia constantly stirred up by the media, the crash of the ’90s tech bubble, the notion that American kids now hate math, or that CS (or just the way we teach it) is too dry and mathy – and as to what might be done to reverse the trend by conveying what computer science and programming are really about, and the joy and thrill that so many of us get from them.

Let the academics keep debating it; I just realized that Why The Lucky Stiff hit on the root of the problem nearly three years ago. Go read it right now.

The best way to convey to young people the thrill of programming is to let them experience it themselves. Unfortunately, the tools that allowed my generation to do that, and that interested so many young people (including both Why and myself) in a career in computing, all but disappeared in the 1990s as machines like the Apple II and Commodore 64, with their built-in BASIC environments, gave way to the now-ubiquitous Windows PC – with no accessible programming environment to speak of, and with programming tools that are generally expensive and/or far beyond the comprehension of children.

It seemed like all through the 1980s, kids were playing with computer programming, learning computers bit by bit through their own tinkering. I believe it was that generation that actually caused the “tech boom” of the 1990s, as those young computing enthusiasts, having never stopped building their knowledge and skills, suddenly reached adulthood. You may recall how, when the newly exploded demand for tech-savvy employees outstripped the number of people with degrees in the field, companies hired a lot of people who were largely self-taught from childhoods spent as computer nerds. More importantly, many such young-adult geeks, who had grown up envisioning the promise of what computing could bring to business and to the everyday person, started companies of their own that brought about the huge burst of innovation we are still reaping the benefits of. In those days, having been one of those computer kids was worth just about as much as a college education – my best friend from high school got his first admin job at a plastics company that, like many companies at the time, had suddenly found itself with PCs on every office employee’s desk and no one around who had a clue what to do when any of them didn’t work as expected. He was an Electrical Engineering dropout with a job on the factory floor running an injection molder, and he just happened to be in the right place at the right time when something went down and he knew how to fix it. As far as I know, he’s made a pretty decent career of it ever since.

I used to spend hours as a child with other computer-enthusiast kids I got to know, trading our creations. We weren’t writing big commercial-quality applications, just a lot of simple games or things that drew cool pictures or animations. We would exchange tidbits of secret knowledge we’d managed to dig up from books, or by accident, or from other people’s programs (often complete strangers’) that had found their way onto our floppy disk collections through the grapevine of kids that hung around the Apple IIe’s at the public library. “Look what POKE -16302,0 does! Now you have more room for graphics at the bottom of the screen – the Y axis goes all the way to 191!” “If you PRINT CHR$(4) you can add any DOS command right after it and the computer will run it!” We were the Little Coders in the title of Why’s article, and while nowadays there are a lot of Little iPoders and Little TextMessagers that we didn’t have in my day, there seems to have been no such growth in the numbers of Little Coders. If anything, there are even fewer of them.

Children no longer have the easily accessible opportunities that my generation had in the 1980s to program computers at a level they can grasp – to experience the thrill of seeing their creations come to life, and to spend years growing and accumulating computing skills. The kids coming into college now are the generation after mine, the 1990s kids who saw computers everywhere and thus didn’t see them as anything novel, but who are used to using them for mundane things like playing music and sending email, rather than as creative tools. And the academics trying to figure out why kids aren’t flocking to the major like my generation did just a few years ago are mostly older than my generation, and thus came up in the age of the minicomputer, before the heyday of the likes of the Commodore and the TI-99/4A – so they don’t even know what they missed out on. Most of them found their way into CS only once they were already in college, by way of mathematics or electronics. So when Alan Kay tries to impress upon their community his ideas about the importance of bringing programming to children, they may praise him for pursuing interesting ideas, but, not having had the kind of formative experiences that Why and I did, they don’t fully understand how on track Kay really is.

My path to my present career took some detours, but I had known since middle school that I would probably end up studying Comp. Sci. and getting into a career in programming, because I had already had the chance to experience the thrill – and my family didn’t even own a computer for the first few years I was programming. Now every kid has a computer in the house, and yet programming seems less accessible to them than it has ever been in my memory. That’s the problem.

At work today, I was trying to get a certain function to work properly. The function’s function (heh) was to take a button, a DataGrid, and a number indicating a column of the DataGrid that contains boolean values – meant to indicate whether each row is “selected,” and used in tandem with a CellRenderer containing a CheckBox that lets the user select/deselect multiple rows – and to make a click of the button perform a certain operation (namely, calling a remoting method to delete the records from the database, using the value of another field as the key) on the records for which that field’s value is true, i.e. whose checkboxes are checked. Oh, but only if the user confirms the action by clicking Yes in an Alert box after clicking the button. Not so bad, right?

Tricky, however. It involves setting a function-typed property of the button (onRelease) to the function you want run when the button is clicked (or more precisely, when the mouse button is released after clicking it). The remoting method also needed to be specified – either as an argument, or defaulting to a reference to a method on the object – since I meant this to be used on multiple DataGrids in the application, viewing data from various parts of the database with different remote methods to deal with them; so I also required a function that returns the PendingCall produced by the appropriate remote method. Remoting PendingCalls in turn require two functions to be given to them, via an object called a RelayResponder, to specify the callback functions for success and failure – but these are given only by their string names, so they have to be named, class-level functions.
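
To make this concrete, here’s a minimal sketch of that wiring (leaving the Alert confirmation aside for the moment). This is not the production code: the names setupDeleteButton, makeDeleteCall, onDeleteResult, and onDeleteFault are hypothetical, and I’m assuming the standard Flash Remoting PendingCall/RelayResponder pattern and a row key field called “id”:

[code lang="actionscript"]
import mx.controls.DataGrid;
import mx.remoting.PendingCall;
import mx.rpc.RelayResponder;

private function setupDeleteButton(btn:Button, dg:DataGrid,
                                   checkColumn:Number,
                                   makeDeleteCall:Function):Void {
    var self = this; // capture the class instance for the nested function
    btn.onRelease = function() {
        // Gather the keys of the rows whose checkbox column is true.
        var checkField:String = dg.columnNames[checkColumn];
        var keys:Array = [];
        for (var i:Number = 0; i < dg.dataProvider.length; i++) {
            var row:Object = dg.dataProvider.getItemAt(i);
            if (row[checkField] == true) {
                keys.push(row.id); // "id" stands in for the real key field
            }
        }
        // makeDeleteCall returns the PendingCall from whichever remote
        // method is appropriate to this particular DataGrid's data...
        var pc:PendingCall = makeDeleteCall(keys);
        // ...and the success/failure callbacks must be named, class-level
        // functions, specified by their string names:
        pc.responder = new RelayResponder(self, "onDeleteResult", "onDeleteFault");
    };
}
[/code]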

This much I had already accomplished a couple of weeks ago – what I was trying to do today was add the Alert box to confirm the operation. An Alert is displayed by calling the static method show() on the Alert class, with arguments specifying things like the buttons and title, as well as a “handler” function or object – which Alert essentially turns into an event listener – that is run when one of the buttons on the Alert is clicked. Until you get into that handler function, you really don’t know whether the user clicked Yes or No.
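
In code, the confirmation step looks roughly like this (a sketch – doTheDeletion is a placeholder for whatever the Yes branch actually does):

[code lang="actionscript"]
import mx.controls.Alert;

// The handler only learns which button was clicked via evt.detail:
var handler:Function = function(evt:Object) {
    if (evt.detail == Alert.YES) {
        doTheDeletion(); // hypothetical stand-in for the real operation
    }
};
Alert.show("Delete the selected records?", "Confirm delete",
           Alert.YES | Alert.NO, this, handler);
[/code]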

So yeah, there are a lot of functions being thrown around here, but nothing too crazy for someone who’s had a little introduction to functional programming. ActionScript is hardly a functional language, but functions are first-class objects in it, and many libraries and components exploit this fact. So here I was, inside a superclass function that sets up this “delete-selected” button – which calls a function that has to pass a function to a function that shows an Alert box, and that function itself has to call another function that is defined in the set-up function.

My co-worker Judah thinks that nested functions are evil. I say he’s wrong: functions are the very essence of programming, and “nesting” them (using functions within other functions, and so on) multiplies their power – I would even go so far as to say that in some modern programming situations there are things you really can’t do very well (if at all) without them. But if you can find a good way to do what I just described without nesting some “anonymous” in-place functions with lexical closures – particularly given the use of library functions that ask for functions as arguments without letting you pass your own arguments to those – then by golly, please post it in the comments section.

It’s not hard to understand, however, why this pretty talented Flash developer and right decent chap that I work with might have an aversion to this kind of use of functions. We can lay aside his lack of a formal CS college education – he’s a bright and very experienced guy, and there are plenty of people walking around with CS degrees who have a hard time wrapping their heads around the idea of functions doing things to/with functions. There’s a more concrete reason: lexical scoping in ActionScript doesn’t really work the way people with prior experience in certain other programming languages would expect. Functions defined within class-level functions have a way of losing their ability to access class-level members (be those variables or functions). I’m not sure there’s a general consensus that this is a bug, but it’s my opinion, and I think Judah’s as well, that it is. Maybe ActionScript 2.0 is supposed to be using some other kind of scoping, but that would be a departure from the ECMAScript specification from which it’s supposedly derived.
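
A contrived example of the kind of thing I mean (a sketch, assuming a class along these lines):

[code lang="actionscript"]
class Example {
    private var greeting:String = "hello";

    public function setup(btn:Button):Void {
        trace(greeting); // "hello" -- class members resolve fine here
        btn.onRelease = function() {
            // Inside the nested function, "this" is the button at runtime,
            // so the class-level member is nowhere to be found:
            trace(greeting); // undefined
        };
    }
}
[/code]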

The idiomatic workaround is a class called Delegate. Delegate has basically one method (that anyone uses, at least) called create, which, given an object and a function, returns a Function (capitalized to reflect the fact that the expression does indeed return an object of type Function) that is like the argument function, except set up to run in the scope of the given object. Reaction to any scope-related issue in ActionScript has become an almost knee-jerk reach for Delegate, to the point that code is littered with Delegate.create(this, …). Personally, I think that anytime you have to wrap your code in an extra function to remind it to use the enclosing object (“this”) as its scope, something’s wrong. But then, lexical scoping is about the only scoping I know, other than Common Lisp’s dynamic variables. Anyway, it’s gotten to the point where people have adopted a “just Delegate everything” policy, which means it’s often used where it isn’t necessary. I’m not sure how much unnecessary overhead this creates – probably not much – but I submit that it makes for some unnecessary wear on the ol’ fingers.
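
For comparison, here’s the Delegate idiom applied to the same contrived Example class from above:

[code lang="actionscript"]
import mx.utils.Delegate;

class Example {
    private var greeting:String = "hello";

    public function setup(btn:Button):Void {
        // Delegate.create returns a Function that runs onButtonClick
        // with "this" bound to the Example instance:
        btn.onRelease = Delegate.create(this, onButtonClick);
    }

    private function onButtonClick():Void {
        trace(greeting); // "hello" -- members resolve again
    }
}
[/code]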

The other way of dealing with the issue is a very anti-“nested function” attitude; Flash coder forums are full of people claiming that defining a function literal within a class function is a “memory waste” and “bad form.” True, it can result in wasted memory depending on how you use it – but what Flash developers decry as “bad form,” I claim is nothing more than that perfectly normal and very common thing called a closure (classes themselves are closures – your class methods reference variables that are declared outside the functions themselves, but within the class that encloses them), and the “wasted” memory is in fact memory containing data you intend to use within the inner function. At least it should be – or else you probably don’t have sufficient cause to be using a “nested” literal function.
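
To put a face on the word, here’s about the smallest closure one can write – the local variable persists precisely because the returned function still references it:

[code lang="actionscript"]
function makeCounter():Function {
    var count:Number = 0;
    return function():Number {
        // count lives on between calls -- not "wasted" memory,
        // but the counter's working state:
        return ++count;
    };
}

var next:Function = makeCounter();
trace(next()); // 1
trace(next()); // 2
[/code]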

For an example of what I claim to be a good and proper use of “nested,” or literal, or anonymous, or in-place, functions (I should really settle on a term), see my earlier post of the numeric DataGrid sorter. It creates a function whose job is to be a listener for the DataGrid’s headerRelease event. Of course the activation object of useNumericSort() persists, attached to that listener function – the listener function needs it because it contains the columns[] array. I go yet another level, too – one of the arguments to the sort() method on the Array class is a comparison function, and since the elements to be compared in this case are rows of a dataProvider, that function needs to know the column name corresponding to the column being sorted on, and the only place it can get that data is from the listener function inside which the comparison function is being defined. Sure, if you called this function a hundred times you’d have a whole bunch of these extra activation objects… but the purpose of useNumericSort() is to be called exactly once for each DataGrid you want numeric sorting to work on – and in many cases like this one, where you’re assigning a function to one of those “on” properties, the new listener function will replace the old one, which, no longer having anything referencing it, will get garbage-collected. (That’s currently not the case with my DataGrid sorter, since it uses addEventListener, but it may be possible to fix it so that it is.) Now, if you actually need it on a hundred DataGrids, then by all means call this function a hundred times – but don’t claim that the extra memory taken up by the persistent activation objects is wasted – it’s being used!

Furthermore, I believe that the “no nested functions” policy results in a considerable loss of flexibility in your programming. It prevents you from writing reusable code that solves general problems, and instead forces your code to be geared (hard-coded?) to the project-specific instance of a general problem – such that if you want to use the same technique again, you have to re-hack it for each specific case: exactly the kind of error-prone busywork that object orientation was supposed to free us from. In other words, I’d be stuck rewriting useNumericSort() and/or the un-nested functions it uses, with a few lines changed slightly, each time I wanted a DataGrid to have numeric sorting. As written, I just paste it in, call it with the right arguments, and it should work.

So, to sum up for any Flash coders not wanting to bother with such hokey academic Comp Sci concepts as lexical closures: there is clearly a type of case where the so-called “nested function” is appropriate, and arguably necessary – when you need the inner function to be “customized” in some way according to data whose scope is the outer function (especially its arguments).

Anyway, back to the story. I had to pass the function that makes the remoting call (by calling another function that specifies just which remoting call that is) to Alert.show() as the handler function. And it turned out that, being three or four functions deep, the function that actually makes the specific remoting call annoyingly forgot an important class-level variable that it needed to send along to the remote method – unless everything was Delegated to “this” all the way down. The problem was that the function that made the remoting call and returned its resulting PendingCall also needed to accept an argument: the keys identifying the records to be deleted.

I was beside myself trying to figure out how to get this done – pass a function to Delegate but also run it with arguments – while another co-worker, Andrew, the guy responsible for all the server-side PHP I was using via Flash Remoting, sat next to me trying to help me out. Then it occurred to me to try something as a joke that I thought would amuse him: I stuck a pair of parentheses (which I now realize may not have been necessary) around the call to Delegate.create and placed the argument list right after it. “See, here’s the function, and here’s the arguments!” Grinning, Andrew replied that if this worked, it would take the “Weird prize” from C.

It worked.

Of course, it’s not hard to justify its working – the Delegate.create call does, after all, return (evaluate to) a function, and when you put a parenthesized argument list after a function name, you’re calling the function. Replace “function name” with just “function,” thus implicitly including any expression that evaluates to a function, and it makes perfect sense. It’s just that I was surprised that ActionScript, a language that had yet to get lexical scoping quite right, would sport a parser that would do something that cool. And there are plenty of well-respected languages that, for whatever reason, wouldn’t allow such a thing to go on. But then again, ActionScript is the first non-functional language I’ve worked in that treats functions as first-class data.
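
Reconstructed as a sketch (deleteRecords and keys stand in for the real method and argument), the trick looks like this:

[code lang="actionscript"]
import mx.utils.Delegate;

// Delegate.create evaluates to a Function, and a parenthesized argument
// list after any function-valued expression calls that function -- so
// this runs deleteRecords with "this" properly bound AND with arguments:
(Delegate.create(this, deleteRecords))(keys);

// As noted above, the outer parentheses turn out to be optional:
Delegate.create(this, deleteRecords)(keys);
[/code]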

Here’s where the “dangerous” part comes in… allegedly this “scope bug” is going to be fixed in the next version of ActionScript (3.0). If that happens, it would effectively break the last barrier to my writing ActionScript code structured like Lisp code – slipping in-place, single-use functions into argument lists, and generally turning the whole structure of my ActionScript coding inside out. I could thus be in danger of making future, less Lisp-aware maintainers of my code very aggravated with me if I let myself take such a capability to its logical conclusion. ;-)
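
The sort of thing I’d be tempted to write, supposing the scoping really does get fixed (a sketch – deleteRecords and selectedKeys are placeholders):

[code lang="actionscript"]
import mx.controls.Alert;

// With proper lexical scoping, the in-place handler could see the
// enclosing class's members directly -- no Delegate ceremony at all:
Alert.show("Delete the selected records?", "Confirm delete",
           Alert.YES | Alert.NO, this,
           function(evt:Object) {
               if (evt.detail == Alert.YES) {
                   deleteRecords(selectedKeys);
               }
           });
[/code]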

Okay, so that probably wouldn’t really happen. But it’s tantalizing to think about being able to write ActionScript that way.

Found myself spending entirely too much time with this earlier today (found via this post at tjic.com), and realized that, now that I’m not regularly attending or studying for classes, I haven’t been getting my recommended daily allowance of theoretical computer science and math lately. I particularly enjoyed the proof of Cayley’s formula on Wikipedia that it linked to. Should I be worried about myself when I enjoy reading combinatorial proofs? ;)

Some comments on the Epsilon-Delta post seemed to question the efficiency or usefulness of the technique, but I was taken by the beauty of this: something very mathy and theoretical (Prufer codes) being applied in a very real-world context (databases) in a potentially very useful way. For those who doubt the importance and/or “real world” relevance of all this academic theory stuff: consider how many real-world things can be represented as trees, and how you might want to use a computer, with software that normal businesses use every day, to store and operate on those representations.
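
For the curious, here’s a minimal sketch of the encoding half, assuming the tree arrives as an adjacency list mapping node labels 1..n to arrays of neighbors: repeatedly prune the smallest-labeled leaf and record its neighbor, and the resulting n-2 labels – which fit naturally into a couple of database columns – uniquely encode the tree.

[code lang="actionscript"]
function pruferEncode(adj:Object, n:Number):Array {
    // degree[v] > 0 means node v is still in the tree
    var degree:Object = {};
    for (var v:Number = 1; v <= n; v++) {
        degree[v] = adj[v].length;
    }
    var code:Array = [];
    while (code.length < n - 2) {
        // find the smallest-labeled remaining leaf
        var leaf:Number = 0;
        for (var u:Number = 1; u <= n; u++) {
            if (degree[u] == 1) { leaf = u; break; }
        }
        // record the leaf's sole remaining neighbor, then prune the leaf
        for (var i:Number = 0; i < adj[leaf].length; i++) {
            var w:Number = adj[leaf][i];
            if (degree[w] > 0) {
                code.push(w);
                degree[w]--;
                break;
            }
        }
        degree[leaf] = 0;
    }
    return code;
}

// The path 1-2-3-4 encodes as 2,3:
trace(pruferEncode({"1": [2], "2": [1, 3], "3": [2, 4], "4": [3]}, 4));
[/code]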

Of course, one could just use a network model database in the first place… ;)

It’s an often-raised problem among Flash developers that the DataGrid component’s sorting behavior leaves something to be desired. To wit, it sorts lexicographically and only lexicographically, which means that sorting on a numeric column produces something rather other than what you intended – i.e., 1, 10, 100, 11, …, 19, 2, 20, etc.

I’ve seen a few different solutions proposed for this, but without going into the details of their various shortcomings, I think I’ve done most of them one better.

[code lang="actionscript"]
private function useNumericSort(columns:Array, dg:DataGrid) {
    dg.addEventListener("headerRelease", function (evt:Object) {
        var dataGrid = evt.target;
        var data = dataGrid.dataProvider;
        var column = evt.columnIndex;
        var columnName = dataGrid.columnNames[column];
        var direction = evt.sortDirection.toUpperCase();
        var sortOptions = 0;
        var useNumeric:Boolean = false;
        for (var index in columns) {
            if (columns[index] == column) {
                useNumeric = true;
                break;
            }
        }
        if (useNumeric) {
            data.sort(function(recordA, recordB) {
                var a:Number = Number(recordA[columnName]);
                var b:Number = Number(recordB[columnName]);
                if (isNaN(a)) { a = 0; }
                if (isNaN(b)) { b = 0; }
                return (direction == 'DESC') ? (b - a) : (a - b);
            }, sortOptions);
        } else {
            sortOptions |= (direction == 'DESC') ? 2 : 0;
            sortOptions |= 1; // case-insensitive
            data.sortOn(columnName, sortOptions);
        }
    });
}
[/code]

Call this with an array of column numbers and a reference to the DataGrid to apply it to. For example, for a DataGrid called order_dg having numeric values in columns 0, 3, and 4, call it thusly:

[code lang="actionscript"]
useNumericSort([0, 3, 4], order_dg);
[/code]

Naturally, there may be some modifications for your particular needs…

*update: Found some apparent typos in this function and fixed them. Apologies to anyone who tried to use it as it appeared originally.

Now and again a stray memory of something interesting I ran across in my past will pop into my head and send me a’Googling. Oftentimes it’s something obscure, and I come up empty, wondering whether the thing actually exists, or is a product of my meandering imagination, or is simply too inaccurately recalled to be found again. Occasionally, however, a second attempt, or a bit more persistence, will turn up just the odd bit of trivia I was looking for. Those times, I want to share it, and maybe attract some new interest to it. Today that thing is the CEEMAC graphics programming language and its much-passed-around demo, Fire Organ.

Two old 5.25″ floppy disks among the hundreds in my closet bear these names, but it’s been some time since I’ve tried to see whether either of them will still boot on any of the old Apple II machines I still have around. If CEEMAC was ever implemented for any platform other than the Apple II series and its compatibles, I’m not aware of it. What I do remember is being quite impressed as a youngster with these collections of beautiful patterns of moving, changing curves and shapes. When the phenomenon called a “screen saver” first began to appear in my environment with similar moving patterns, I remember at first thinking that CEEMAC had hit the big time. I also remember once or twice stumbling upon the key command that would drop either program into a text-editor environment showing the source code for the particular piece being viewed, and being highly intrigued – but I never quite got around to learning how to make my own CEEMAC “scores.”

Anyway, I now find that someone else out there remembers CEEMAC and has put up a tribute on the web. Old-school Apple II geeks and anyone else who derives pleasure from near-forgotten gems from personal computing’s past ought to give it a look.