For some years I’ve been collecting and writing about electronic music hardware – good, not so good, and utter garbage. I’ve had fun popping a few bubbles while handing out praise where it was really due. But I’ve arrived at a larger goal than just sniping at the low hanging piffle. I finally feel empowered – I’ve used up considerable money and time crawling around the floor and behind racks, sourcing SCSI cables and voltage adapters, and wondering why the fuck this MIDI signal went the wrong way. I have seen for myself. I am one of you.
So, in this year 2019, is music hardware really better than software?
Better? We need to set some rules. Firstly, we are talking about musical instruments being used for music. Slavishly copying a sound plucked from some antique recording is not music. Matching the shape of a waveform on an oscilloscope is not music. These are feats for athletic carnivals or synthtopia.com.
Implicit in music is a listener, and that something humane is being communicated – awe, hope, fear, something worth listening to. What does the sound of the device offer others? You may be pleased by the feel of the knobs, or the type of wood at the cheeks of the thing, but what does this do for a listener? Maybe the wood inspires you to delicate adjustments, but it’s not a violin.
The way your machine looks, how pretty the lights, the styling of it – that has no bearing on its purpose. It may as well be a Dyson vacuum cleaner or an Apple phone. Sure, it’s well designed, but that’s not specific in any way to synthesis.
We need to clarify what we mean by ‘hardware’ and ‘software’, because quite a lot of ‘hardware’ is a hybrid. A Moog Model D is all hardware. A Blofeld is software in a box. But the Arturia Origin is software that runs only on a very particular microchip – when the chips are gone, the machine is extinct. The same is allegedly true of the Access Virus. Is that hardware? Without that specific hardware, there is no sound.
I am tempted to judge that only the resulting sound matters, but I have to concede that a dedicated controller might make reaching that sound more likely.
I’d also like to say that synthesis was once a futuristic thing, a desire to hear new sounds, make new music, go places that hadn’t been heard before. That idea started to die with Tomita and is now truly dead when you’re trying to emulate some noise from 40 years ago. Synthesis, as a mainstream activity, has become terribly OLD FASHIONED.
OK, so let me start by telling you I’m selling the majority of my hardware, such is the faith I have in my answer.
The time spent racking, un-racking, cabling around the back of things, assembling A-frames etc. is like the days when people would run clothing through a mangle before hanging it on a clothesline. You can definitely run into problems with virtual studios. I have. But generally, when I visit other people’s hardware studios the damn things are NEVER FINISHED – a great excuse for why no music is being produced. In my case I’m now trying to reduce the hardware down to a single, stable, mobile rack. When you’re carefully limiting the amount of hardware (like cholesterol) so that you can actually create something – that’s the whole story right there. Are you a musician, or are you building a model railroad?
Fundamental point: the sound of hardware is often not that interesting. I’ve just sold a venerable (and very heavy) old Roland keyboard for a good profit. The reason being that if you divided the sound by the amount the damn thing weighed, you’d have no change left over for coffee. Any half-decent virtual analogue could make that noise – especially Roland’s own. Do a blind test. Can anyone really hear the difference in a piece of music? You are a musician, aren’t you?
(Some hardware is interesting. For example, I’ll keep my UltraProteus because of the weird thought process behind its operation, the SY77 because of its particular timbre and the Super Jupiter because it has a deeply exotic stomach-ache. But I’ve sold the Yamaha FS1r because as crazy as it is, the sounds it makes aren’t that great. And that’s what matters.)
I’d like to jump to the last point, the most important point. Synthesis shouldn’t be nostalgia, it should be futuristic, progressive. Why did we even start this thing? Because we wanted more than the instrumentation that we once had. But now we’re cowards emulating old safe things. I say fuck recreating the Blade Runner soundtrack when we’re in the year the story was supposed to take place. Synthesis can now pull a sound apart, make a wavetable, or an additive snapshot, change every aspect of it, build entirely new sounds from audio atoms – and people are still talking about ladder filters?
If, for example, you spend enough time with additive synthesis in Alchemy, you will find a world of experimentation. Or take a recording and hack away at it with spectral editing – that’s what I did with my records Donut and Aversion. It’s synthesis, but it’s not hiding in the last century. I can now say what I mean by ‘better’ – I mean true to the goals synthesis set out with.
Or is synthesis really like steampunk – doomed to be a paleofuture?