Stravinsky talked about creative limitation in music being useful for reaching new and different results. There have been TED talks on limitations as an upside. (LaaU?) Publications like Harvard Business Review and Forbes have written about constraints breeding creativity. Since the concept became somewhat popular, it can sometimes feel like way too much lemonade-making and not enough being pelted with a dozen lemons in the face with your eyes open.
My first serious coding was on the Atari ST home computer. It had a Motorola 68000 CPU clocked at 8MHz and exactly zero helper chips for graphics or sound. Default RAM was either 512KB or 1MB depending on the model, upgradable to 4MB. There was exactly one way to squeeze everything possible out of this limited platform, and that was coding on bare metal in assembly. Compilers for high-level languages existed, but their output was far too slow for any real-time graphics or sound processing. Both the demoscene and games scene were all about graphics and sound, because graphics and sound are awesome.
I’m not going to pretend the ST was history’s most limited platform; it was preceded by many less advanced home computers. Even so, by today’s standards, this 16-bit platform is stone age tech and now exists in actual museums. My amazing childhood superpower-granting sci-fi machine. In a museum. (Yes, I still have one. Yup, it still works. Sure, I still use it. Affirmative, I’m that guy.)
Within the hyper-competitive nerd swamp that was the OG demoscene, the most important thing was to code a faster graphics routine that moved more sprites around the screen than anyone prior. I say “sprites” but the Atari ST didn’t actually have hardware sprites. The other, equally most important thing was to make the YM2149 sound chip play more intricate music with better-sounding instruments (read: square waves) than any other geek odd enough to have both coding and composition skills.
In this pursuit, of course the hardware limitations felt annoying at times. On the last night of any given demo compo, you’d hear underslept, teary-eyed programmers scream in agony while wishing the ST had a blitter chip or more than three sound channels. Nearly, or exactly, zero percent of us knew what these frustrating limitations – what I now think of as nothing less than a border made of glorious mithril – would mean for our brains and lives in the future. We only knew flying cars were definitely going to be a thing.
I’m not here to quote Gandalf but, trust me, he’s totally into mithril. Strong, useful and beautiful. Mithril, that is.
So, how can this aggravating perimeter – whether hardware, time, budget or anything else – be not just annoyingly tough to smash through but actually something beneficial and downright gorgeous? Let’s talk about some examples of how the heavily constrained Atari ST became the best teacher ever.
As mentioned, this machine had no hardware sprites, so workarounds involving sprite masks (not the kind you find in modern game engines like Unity) and bitwise operations performed by the CPU were invented. This allowed us to render “sprites” on top of background graphics in the absence of a proper alpha channel. More importantly, it taught us logic – very useful and still relevant today.
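If you’re curious what that actually looked like, here’s a minimal C sketch of the idea (the real routines were 68000 assembly and worked on four interleaved bitplanes, but the logic is the same): AND a cut-out mask into the background, then OR the sprite data on top.

```c
#include <stdint.h>

/* Minimal sketch of the masked-sprite trick on one 16-bit word of one bitplane.
 * The mask has 0 bits where the sprite is opaque and 1 bits where it is
 * transparent. AND clears the sprite's footprint in the background,
 * OR merges the sprite pixels in. No alpha channel, just Boolean logic. */
uint16_t draw_sprite_word(uint16_t background, uint16_t sprite, uint16_t mask)
{
    return (background & mask) | sprite;
}
```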
An arguably bigger issue was the fact that these software sprites could only be rendered at 16-pixel intervals on the x-axis, because the 68000 could only write 16-bit words to even RAM addresses. (There was no dedicated video RAM; you simply set the video pointer to somewhere in regular RAM and whatever data was stored there would be shown on your CRT.)
Early on, real-time bit shifting to offset the sprite data, thereby supporting rendering at any x-axis coordinate, proved too slow. Someone’s brain spawned the idea of preshifting the sprite data and storing the original plus 15 copies in RAM, essentially in a lookup table. This obviously consumed more memory due to the many copies and the fact that the sprite data had to be stored at double width to avoid truncation. Still, it was very much worth it in the pursuit of fast graphics routines. A great benefit, but not as great as learning the relationship between computing power and storage and how the availability of one can be used to compensate for the lack of the other – very useful and still relevant today.
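Again as a hedged C sketch rather than the original assembly: preshifting a single 16-pixel sprite row might look something like the following, with each copy stored at double width so shifted-out pixels land in a second word instead of vanishing. (Real routines did this per bitplane and preshifted the mask too.)

```c
#include <stdint.h>

#define SHIFT_COUNT 16  /* one copy per x offset 0..15 */

/* Build a lookup table of preshifted copies of one 16-pixel sprite row.
 * Each entry is 32 bits wide: the row sits in the high word at offset 0,
 * and every further shift spills pixels into the low word rather than
 * truncating them. At draw time you pick table[x & 15] instead of
 * shifting in real time. */
void preshift_row(uint16_t row, uint32_t table[SHIFT_COUNT])
{
    for (int shift = 0; shift < SHIFT_COUNT; shift++) {
        table[shift] = ((uint32_t)row << 16) >> shift;
    }
}
```

Sixteen copies at double width means 32 times the storage of the original row – a lot of RAM traded for the clock cycles those real-time shifts would have cost.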
I could provide many other examples, like how the lack of hardware scrolling and the 512-byte limit of the boot sector taught us to dig deep into hardware specifications and optimize code for extremely limited clock speeds and storage. Instead, I’ll just say that the usefulness of the mithril border lies in how it grows skills and minds, how it helps you think in non-obvious ways and connect dots that might otherwise remain unconnected. Those are life-long skills.
Lastly, the mithril border can be beautiful because the constraints of science and technology have, in fact, produced art that otherwise wouldn’t exist.
Today, we might love pixel art for its retro-ness or cuteness. We might love how it leaves space for our imagination to fill in the gaps, almost like it’s halfway between HD graphics and a book. Years ago, it was simply what computer graphics looked like due to hardware limitations. Artists experimented with techniques like anti-aliasing, dithering and color cycling to get as much of their imagination as possible onto a computer screen. The constraints of the hardware created a look, a vibe, an art form, and it lives on today even though it doesn’t have to. From even more ancient origins came ASCII art and ANSI art, both of which are still appreciated by many today.
Even closer to my heart, of course, is music. If you’ve heard chiptune music or if you dig some of the sounds in modern electronic music, rap or other genres, you can be thankful for the mithril border.
The Atari ST had three sound channels, meaning that only three sounds or notes could be played simultaneously. You may know that a basic, typical musical chord consists of three notes. Well, that’s all three channels consumed at once, leaving no room for a lead melody, let alone the bassline or drums. How did we get around this constraint so we could write fuller compositions? By time slicing on two levels.
On the micro level, coder-musicians discovered that the human ear can be fooled into hearing a chord if its constituent notes are arpeggiated quickly enough. That is, played one after the other in a repeating sequence. If you’ve heard someone slowly strum the strings of a guitar, you may have heard an arpeggiated chord (or something awful). Thankfully, even old computers could arpeggiate much faster than any human, so the speed was cranked up to 11 and we could now play a chord on a single channel, leaving two channels for other instruments. The resulting sound lives on today; it’s the one that makes you warm and fuzzy about the 80s and old video games when you hear it in a song.
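In spirit, the player code boiled down to something like this C sketch, where player_tick() stands in for a routine called many times per second (say, from a timer interrupt) and set_channel_frequency() is a hypothetical stand-in for poking the YM2149’s tone registers:

```c
#include <stdio.h>

/* One-channel arpeggio sketch: each tick, the single channel jumps to the
 * next note of the chord. Cycled fast enough, the ear blends the three
 * notes into one chord, freeing the other two channels. */
static const int chord_hz[3] = { 262, 330, 392 };  /* C major: C4, E4, G4 */
static int arp_step = 0;

static void set_channel_frequency(int channel, int hz)
{
    printf("channel %d -> %d Hz\n", channel, hz);  /* stub for illustration */
}

void player_tick(void)
{
    set_channel_frequency(0, chord_hz[arp_step]);  /* one channel, whole chord */
    arp_step = (arp_step + 1) % 3;                 /* cycle C -> E -> G -> ... */
}

int main(void)
{
    for (int tick = 0; tick < 9; tick++)
        player_tick();                             /* three full chord cycles */
    return 0;
}
```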
At a higher level, a single channel could be reused for different instruments. For instance, the bassline and drums could share a channel by time slicing the arrangement so that the bass stopped playing for just the amount of time it took the kick drum to play. This was nothing short of an early version of today’s sidechain compression technique in audio engineering, useful both for attaining a clear mix and as a stylistic choice that colors practically all electronic dance music.
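Conceptually, the arrangement was just a pattern like this toy C sketch – on rows where the kick hits, the bass yields the channel for that slice. (The pattern and play_on_channel() are made up for illustration; they’re not real ST player code.)

```c
#include <stdio.h>

/* Two instruments sharing one channel by time slicing: whenever the kick
 * drum plays, the bass simply doesn't – roughly the ducking effect that
 * sidechain compression produces today. */
typedef enum { REST, BASS, KICK } Event;

static const Event pattern[8] = { KICK, BASS, BASS, BASS, KICK, BASS, BASS, BASS };

static void play_on_channel(int channel, Event e)
{
    static const char *names[] = { "rest", "bass", "kick" };
    printf("row: channel %d plays %s\n", channel, names[e]);
}

int main(void)
{
    for (int row = 0; row < 8; row++)
        play_on_channel(2, pattern[row]);  /* one channel, two instruments */
    return 0;
}
```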
Yeah, the mithril border is useful and beautiful. I don’t know which skills you’ll gain from the current set of boundaries you find yourself in and I don’t know what kind of art will come from it. I only know that the old Russian composer wasn’t wrong.