A celebration of code – 6 pieces of code that had an impact


Code elicits a special response from programmers. It can be enchanting, amusing or inspiring. It can also be discouraging, confusing or even evoke fear. I recently asked my colleagues which pieces of code have made the biggest impression on them. Among a multitude of answers, I picked these six examples to showcase the diverse ecosystem that programming represents. Each one is different, but all of them are equally impressive.

Lines of code, which to the uninitiated may look like mere gibberish, tell us stories about their authors and users: stories of people who are clever, practical, creative, conscientious or brave. In some cases, lives depended on these lines of code.

But why do we care so much if the code we write is elegant and clear? The computer certainly does not care. It dutifully executes any and all instructions given to it without any consideration of whether or not said instructions are smart, elegant, performant or even necessary.

But if it is only ever other humans who will glance at source code, doesn’t it make sense to write it for humans? Indeed, this is exactly the approach suggested by H. Abelson, G. Sussman and J. Sussman in Structure and Interpretation of Computer Programs. According to the authors, “programs should be written for people to read, and only incidentally for machines to execute.” The following six pieces of code are perfect examples of this principle.

How diligent software design saved the moon landing

The first example in this article is a story of extraordinary circumstances and the need for the ultimate in reliability.

Having successfully completed several suborbital and orbital manned spaceflight missions in Project Mercury, NASA aimed higher (pun intended) with the Apollo missions. With the ultimate goal of reaching lunar orbit culminating in a full-scale landing, the difficulty level skyrocketed. While it was possible to perform all of the necessary navigation calculations on the ground when flying relatively close to Earth, this would no longer be an option for the lunar missions. An in-flight computer that would fly along in the spacecraft and control it was needed, and thus the Apollo Guidance Computer (AGC) was built.

Although modest by modern standards, the AGC was an impressive work of engineering. Packed in a form factor the size of a suitcase with a modest power draw of 55 watts, it could execute some 85,000 CPU instructions per second, the equivalent of a full-blown, refrigerator-sized System/360 mainframe. Equally impressive was its programming: the custom computer came with its own assembly language, and the completed programs would be stored in core rope memory blocks, woven by hand in a manner reminiscent of weaving threads into fabric. Under the direction of Margaret Hamilton, the software created at the MIT Instrumentation Laboratory would later be called “the foundation for ultra-reliable software design”.

Now, as programmers, most of us strive to build reliable software, and surely nobody builds failing software on purpose. In the context of lunar missions, though, the reliability requirements for both software and hardware were far beyond what the vast majority of us will ever have to handle. Not only was the software stored in read-only memory, making it impossible to change over the course of a mission; even the tiniest of logic errors could leave the astronauts marooned on the moon or send them careening into outer space, never to return to Earth. It was clear that there would be no room for mistakes, yet it would be impossible to anticipate all the possible ways that things could go wrong during such a long mission. These two constraints meant that significant emphasis was placed on error handling and recovery. And things did go wrong indeed.

On the fourth day of the mission, Neil Armstrong and Buzz Aldrin separated the Apollo Lunar Module from the Command Module and began their descent towards the surface of the moon. With only 6,000 feet to go to the surface, the lunar module guidance computer began outputting the now-famous 1201 and 1202 program alarms. The real-time computer system had suddenly started receiving thousands of interrupts from the rendezvous radar that was tracking the command module. These interrupts were dutifully handled by the guidance computer, which suddenly found itself overloaded with work. While a brief lock-up would hardly be a problem for anyone tabulating an Excel spreadsheet, it would surely spell disaster for an astronaut trying to coax a spacecraft to a controlled landing on a distant celestial body.

Thankfully, the lunar module software was carefully designed and, when faced with the overload, the lightweight real-time operating system opted to discontinue processing tasks that had been designated lower priority. In such a tightly controlled system, cooperative multitasking was a viable choice, and it enabled the system to keep running critical tasks such as attitude control while discarding other operations until cycles were available again. Had it not been for the diligent work by Hamilton’s team, the mission would very likely have been something other than a resounding success. It was their work that quite literally shaped the course of history, and years later these two lines of code (EXECUTIVE.agc#L147 and EXECUTIVE.agc#208) stand as testament to this remarkable accomplishment.
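The core idea of a priority-driven executive that sheds low-priority work under overload can be illustrated with a toy sketch in Python. This is emphatically not AGC code, just a minimal model of the scheduling behavior described above; all names (Executive, submit, run_cycle) are my own invention:

```python
import heapq

class Executive:
    """Toy priority scheduler (illustrative only, not the AGC executive):
    when more jobs arrive than there is capacity, low-priority jobs are
    shed so that the critical ones keep running."""

    def __init__(self, capacity):
        self.capacity = capacity  # how many jobs fit in one cycle
        self.jobs = []            # min-heap keyed on negated priority

    def submit(self, priority, name):
        # Higher priority number = more important; negate for the min-heap.
        heapq.heappush(self.jobs, (-priority, name))

    def run_cycle(self):
        ran = []
        for _ in range(min(self.capacity, len(self.jobs))):
            _, name = heapq.heappop(self.jobs)  # most important job first
            ran.append(name)
        shed = [name for _, name in self.jobs]  # everything left is dropped
        self.jobs.clear()
        return ran, shed

# Under overload, the hypothetical radar bookkeeping job gets shed
# while guidance and attitude control keep running.
agc = Executive(capacity=2)
agc.submit(10, "attitude control")
agc.submit(9, "descent guidance")
agc.submit(1, "radar bookkeeping")
ran, shed = agc.run_cycle()
```

The design choice mirrors the story: the scheduler never refuses work up front, it simply guarantees that whatever capacity exists is spent on the highest-priority tasks.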

Fast inverse square root

“Make it work, make it right, then make it fast” is an old adage that warns about the perils of premature optimization. There are times, though, when stretching the boundaries of performance unlocks innovation.

Anyone with at least the slightest interest in computers and the Internet got their mind blown in 1999. Starring Keanu Reeves as the protagonist, The Matrix offered a bleak look into a dystopian future where computers had enslaved the human race and subjected it to a simulation so realistic that it remained completely undetectable to the human senses. In real life, however, consumer real-time computer graphics were still far from photorealism. It wasn’t for lack of skill, though, as the graphics programmers of the time worked hard to push the envelope of early OpenGL GPUs.

At that time, id Software was one of the most revered studios in the industry, famous for creating groundbreaking hit after hit, each one taking revolutionary steps in computer game graphics. In 1999, id Software released Quake III Arena, marking the switch to the newest version of their in-house engine, id Tech 3. It brought several distinguishing new features, including limited support for shaders years before such features would appear in GPUs. At the heart of the Quake III Arena rendering pipeline lay an inconspicuous yet vitally important piece of code: the fast inverse square root algorithm.

It was not until the release of the Quake III Arena source code in 2005 that knowledge of the algorithm spread more widely. Before that, it was passed on as arcane knowledge from one developer to the next, with only a handful of people aware of its existence. The original author of the algorithm remains unclear, but best efforts have traced its origins to Greg Walsh, Cleve Moler and William Kahan in the late 1980s. Yet, even though its usefulness was clearly demonstrated, the algorithm remained largely unknown until the beginning of the new millennium, the only evidence of knowledge outside the original circle of authors being a couple of archived messages on a Chinese developer forum.

The algorithm (omitting preprocessor directives and comments) is as follows:

float Q_rsqrt( float number )
{
 long i;
 float x2, y;
 const float threehalfs = 1.5F;

 x2 = number * 0.5F;
 y  = number;
 i  = * ( long * ) &y;         
 i  = 0x5f3759df - ( i >> 1 );              
 y  = * ( float * ) &i;
 y  = y * ( threehalfs - ( x2 * y * y ) );  

 return y;
}

Source

Broken down, the algorithm uses a clever bit-level trick to arrive quickly at a rough approximation of the inverse square root, and then refines it with one iteration of Newton’s method. What remains unknown, however, is how the magic constant 0x5f3759df was originally found. The number appears to have been known for decades, passed on to a select few. Since the algorithm reached a larger audience, an even better constant has been found that improves the approximation further.
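The two-step structure, bit trick followed by a Newton refinement, can be reproduced outside C. Here is a sketch in Python that uses the struct module to reinterpret the float’s bits, mirroring the pointer cast in the original (illustrative, not production code):

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) for positive x using the Quake III trick."""
    # Reinterpret the 32-bit float's bits as an unsigned integer
    # (the Python equivalent of the C code's `* ( long * ) &y`).
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The magic constant and shift produce a rough first guess.
    i = 0x5f3759df - (i >> 1)
    # Reinterpret the bits back as a float.
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One iteration of Newton's method sharpens the approximation.
    return y * (1.5 - 0.5 * x * y * y)
```

With a single Newton step the result lands within a fraction of a percent of the true value, e.g. fast_inv_sqrt(4.0) is very close to 0.5, which was plenty accurate for lighting calculations.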



The importance of the speedup provided by the algorithm is directly related to the field it is used in. To render real-time full-screen shaders, calculating angles of incidence and reflection to simulate lighting, vector calculations need to be performed millions of times per second. Before the advent of hardware-accelerated transform and lighting, these calculations were performed on the CPU, and it is exactly these calculations that the fast inverse square root sped up so significantly. Using the algorithm, Quake III Arena was able to support, in a software rendering pipeline, a number of advanced features that would only become available in hardware-accelerated form years later.

Conway’s Game of Life in APL

There comes a point in almost any programmer’s life when they take on the joyous task of building their own version of John Horton Conway’s 1970 Game of Life. It is an entertaining challenge, does not take too much time, and the results and the associated feedback are pleasing. Most beginning programmers opt for an imperative style, which offers heaps of learning opportunities as well. However, Joni, one of our senior software developers, was blown away (and had his curiosity towards functional programming piqued) after seeing the Game of Life implemented in APL.

The rules of the Game of Life are very simple. The playing field is a two-dimensional grid of cells. As the simulation advances one step at a time, a handful of rules governs whether individual cells are born, survive, or die, depending on the state of each cell’s neighbors. Out of these simple underpinnings, surprisingly complex behavior emerges: combinations of cells act together in predictable ways, and these combined patterns have been shown to be Turing complete.
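To make the rules concrete, here is a minimal step function in Python (the set-of-coordinates representation is one choice among many, not part of the original rules):

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next generation if it has exactly 3 live neighbours
    # (birth), or 2 live neighbours and is already alive (survival).
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between
# horizontal and vertical with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Running step on the blinker flips it to a vertical column, and a second step flips it back, the simplest example of the predictable combined patterns mentioned above.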

From simple gliders and oscillators, hobbyists have pushed the envelope: first to logic gates, then all the way to metapixels (which simulate pixels on a computer screen), eventually reimplementing the Game of Life itself inside the Game of Life. Over the course of several years, a group of CodeGolf StackExchange users even implemented a working, programmable CPU (with ROM, RAM and ALU units) in the Game of Life, capable of running Tetris.

If The Matrix turns out to be true, and we’re all living in a giant simulation, there is now a definite chance that it’s all built on the venerable Game of Life.

The strange beauty of Fibonacci in Haskell

A key part of programming is choosing the right level of abstraction for the problem at hand. Different programming languages offer differing levels of expression in the source code, and especially in recent years functional programming languages have risen from the mists of history to become more and more popular. The reason these languages often receive praise from their users is that once you cross the threshold of learning to think in a completely new manner (especially if your previous experience stems from imperative languages), you are rewarded with a whole new level of abstraction for expressing problems and solutions.

For our developer Juha, the author of the functional reactive programming library bacon.js, this revelation came in the form of implementing the Fibonacci sequence in Haskell. The Fibonacci sequence is yet another staple of programming puzzles, and most solutions opt for a simple for-loop. Haskell, being a pure functional programming language with lazy semantics, allows the solution to be constructed far more concisely. Considering that the mathematical definition of the Fibonacci sequence is simply Fₙ = Fₙ₋₁ + Fₙ₋₂, Haskell lets you express the sequence in a form that’s almost as terse:

fib = 0 : 1 : zipWith (+) fib (tail fib)

This produces, of course, an endless list of numbers, which in many other languages would mean an infinite loop and a programming error. Haskell, however, is a lazy language: the definition of the list and its evaluation are kept separate, which lets the programmer take only as many values as a particular use case needs. For Juha, being able to express an endless computation drove home the point that the structure and the interpretation of computer programs could be separated: “It blew my mind when I saw this the first time. Dude, you just zip the list with its own tail!”
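For comparison, the same zip-with-the-tail structure can be mimicked in Python with generators. This is a rough analogue rather than a translation: it is far less elegant, and (unlike the Haskell version, which shares the list with itself) it recomputes the sequence recursively, so it is only meant to illustrate the lazy structure:

```python
from itertools import islice

def fib():
    # The two seeds mirror `0 : 1 : ...` in the Haskell definition.
    yield 0
    yield 1
    whole, tail = fib(), fib()
    next(tail)  # drop one element: `tail` now starts from 1
    # Every further element is the sequence zipped with its own tail.
    for a, b in zip(whole, tail):
        yield a + b

# Like Haskell's `take 10 fib`: only the requested values are computed.
first_ten = list(islice(fib(), 10))
```

Nothing runs until islice starts pulling values out, which is the laziness that makes an “endless” definition usable in practice.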

Peter Norvig’s ~20 line spell checker

We humans are by no means perfect. Our fingers lumber along keyboards and touchscreens, and every so often they end up not quite where we intended them to go. Often, these mistakes are rather benign and amusing, but there are cases where they have had disastrous consequences. In 1962, the Mariner 1 probe was lost due to a typo, a spelling error that cost NASA some $18.5 million. While a spell checker would likely not have saved Mariner 1, it is no wonder that such solutions are a staple of natural language processing.

While recent years have seen strides made in the field of deep neural networks, they are still considered heavyweight tools that require a large corpus of training data and a healthy dose of CPU time to train. The results are quite impressive, but equally impressive is Peter Norvig’s handcrafted implementation, written back in 2007 on a transatlantic plane ride. For such a quick and short (some 36 lines of code) solution, this piece of software achieves a respectable ~70% accuracy.
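The heart of the approach is simple: generate every string one edit away from the misspelled word, keep the ones that are known words, and pick the one a frequency model likes best. A condensed paraphrase (not Norvig’s exact code, and trained here on a toy corpus rather than his large text file) might look like this:

```python
import re
from collections import Counter

# A toy corpus stands in for the large body of text the real model uses.
CORPUS = "the quick brown fox jumps over the lazy dog the fox"
WORDS = Counter(re.findall(r'[a-z]+', CORPUS.lower()))
LETTERS = 'abcdefghijklmnopqrstuvwxyz'

def edits1(word):
    """All strings one edit away: deletes, transposes, replaces, inserts."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in LETTERS]
    inserts = [L + c + R for L, R in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correction(word):
    # Prefer the word itself if known, else the most frequent known
    # word one edit away, else give up and return the word unchanged.
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=WORDS.get)
```

With this tiny corpus, correction('teh') comes back as 'the'; the full version adds a second edit distance and a real word-frequency model, but the skeleton is the same.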

Obviously, spelling correction is an established field of science, and Norvig’s results have since been surpassed many times over. Still, the example applies to any less-explored application of machine learning: before jumping headfirst into training a fully-fledged, complex model, it pays to experiment quickly and establish a simple baseline against which any potential improvements can be measured. Sometimes a “good enough” solution to a problem really is “the simplest thing that could possibly work”.

“Marco! Polo!”

Of course, having your mind blown is a very subjective experience. What is amazing and new to some is mundane to others. Nevertheless, many programmers can still remember the exact moment they got hooked. For one of our developers, Marko, it was when he typed these two lines into a Commodore 64 BASIC interpreter:

10 PRINT "MARKO"
20 GOTO 10

The rest is history. For a young kid, the very idea of being able to control and command a computer was exciting and new. In an instant, you turned from a consumer of games and entertainment into a producer of novel and fresh ideas. Naturally, the first stabs at writing one’s own programs were often clumsy, but interest and experience grew as, one by one, ever more complex projects came to life. It wouldn’t take long to reach the limits of C64 BASIC, but the immediacy of the computer kept opening up new opportunities.

Owing to its lack of abstraction, the Commodore 64 let you POKE and PEEK at memory locations directly, causing various effects (both wanted and unwanted, but mostly delightful). These would naturally be used to prank friends and unsuspecting store owners. Some proceeded even further and started programming in assembly language. No matter which level of knowledge you ended up at, the starting point was the same, and the system contained all the tools needed to poke at its internals without fear of breaking anything: even the worst mistreatment of memory addresses could be reverted with a flick of the power switch.

In a way we’ve taken many steps in a different direction lately: there’s more computing power than ever before available to us in the palm of our hand, yet it has become less and less possible to dig deep within its internals to understand how things really work. Tim Cook, if you’re reading this, I want my POKE back!

(Author’s note: the C64 programming scene is still alive and well)

What piece of code had an impact on you?

This article is just a small selection of what our people considered to be remarkable pieces of code. Naturally, in a field as vast as computing there are almost limitless examples that can cause a similarly dramatic expansion of grey matter. With that in mind, are there any other examples that you think should be on this list? Let us know!

Reaktor is a company that’s filled with people who love programming. We’re constantly looking for like-minded individuals, so if this article struck a chord in any way, we’d be happy to sit down and talk some more. You can either send us a freeform email at careers@reaktor.com or go to reaktor.com/careers to see what we’re doing. We look forward to hearing from you!