Progress Report on Video Synthesis

This was going to be a music class, but synthesis takes a while to explain, especially when you disagree with a large amount of what is written on the subject. For example, what defines the good ‘subtractive’ synthesisers is the additive (wave-shaping) effect of their circuitry, which makes most subtractive tutorials ‘well meaning’ in the most limp-wristed way. While I get that written up, you can study this carefully, as it will form a main part of the discussion.

Instead, here’s a progress report, for two reasons. Firstly, Kunst Kamp restarts next week and one of my jobs will be to moderate a WordPress community for the student body. This blogging of mine has actually been practice (and you thought I was only a tedious blowhard). Secondly, there’s stuff going on here that is interesting, even when incomplete.

Over the break I’ve tried to follow up my complaint from long ago – that video synthesis (as part of the zombified ‘new media’) is stuck in a time warp. It still looks as if it were 1982 and we had to build everything out of Z80 chips and Lego. The bright colours and tedious gamut of cheap effects are embarrassing and need a kick up the arse. There’s no room in 2009 for this. ENOUGH ALREADY. Nostalgia is the lowest form of art, and I don’t care if it’s new to you. Get a damn history book.

We need to develop an aesthetic, a new style, perhaps drawn from the beauty and complexity of the real world. I can’t program yet so I went looking, and found a few starting points.

First stop was E-on’s Vue 3D natural environment rendering software. This creates approximations of real nature: trees, land, water and so on. I’ve been using it since version 4, but only now have they included some important features (like wind) in the ‘costs less than a car’ version. One thing I wanted to try was to set up a tree casting a shadow on the ground and let the wind throw the leaves around (a process drawn from real life). Another scenario – water splashing against rocks. Both are naturalistic alternatives to the ‘LFO drives textures’ kind of synthesis.
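To make that contrast concrete, here’s a minimal sketch in Python of the difference between ‘LFO drives textures’ motion and something wind-like. Everything here (function names, frequencies) is my own illustration of the idea, not anything Vue actually exposes:

```python
import math

# The classic video-synthesis modulation source: a low-frequency oscillator
# (LFO) sweeping some texture parameter. The motion is perfectly periodic,
# which is exactly the "wiggling colour bars" look complained about above.
def lfo(t, freq_hz=0.5, depth=1.0):
    """Sine LFO: repeats exactly every 1/freq_hz seconds, forever."""
    return depth * math.sin(2 * math.pi * freq_hz * t)

# A crude naturalistic alternative: sum a few incommensurate slow sines,
# a stand-in for wind gusts. It drifts rather than cycles, so the motion
# reads more like weather than like an oscillator.
def gusty(t, depth=1.0):
    return depth * (0.5 * math.sin(0.13 * t)
                    + 0.3 * math.sin(0.047 * t + 1.7)
                    + 0.2 * math.sin(0.011 * t + 4.2))
```

The point of the sketch is the period: `lfo` is back where it started after two seconds, while `gusty` takes on the order of minutes to come anywhere near repeating.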

rocks test video

A couple of problems. Firstly, the software has been released too early and is currently messed up – I couldn’t even register it for some days. Secondly, this kind of rendering is horribly slow – the splashing water took 7 hours to create 10 seconds of animation. Thirdly, once you get too close to the action, the functions that simulate natural form lack sufficient resolution – it’s made for kilometres, not centimetres. You could build a library of this kind of material to recut, but it’s not performable. Interesting, but as yet too clumsy.

Next was a demo of Genetica, which is intended to create realistic textures for 3D models. It’s obviously synthetic but capable of images that are ‘rusty’, ‘sandy’, ‘watery’ and so on, somewhere near what I have imagined. A demo because the author offers animation only in the most expensive version – too much money for an experimenter. The results are a step up from ‘wiggling colour bars’ and take a lot less time than Vue, though still not real time. They’re also tuned for their intended application, and are created as repeated square tiles. I don’t want to go too far down the timed-demo path; hopefully something like it can be scripted into animation. That route is free, and predictably it’s harder to use.
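Those ‘repeated square tiles’ work because the texture wraps: the left edge matches the right and the top matches the bottom. A sketch of the general wrap-around trick, using simple value noise – my own illustration of the technique, not Genetica’s actual algorithm:

```python
import random

def tileable_value_noise(size=64, grid=8, seed=1):
    """Value noise whose lattice indices wrap, so the square tile repeats seamlessly."""
    rng = random.Random(seed)
    lattice = [[rng.random() for _ in range(grid)] for _ in range(grid)]

    def smooth(t):  # smoothstep easing between lattice points
        return t * t * (3 - 2 * t)

    img = []
    for y in range(size):
        row = []
        gy = y * grid / size
        y0, fy = int(gy), smooth(gy - int(gy))
        for x in range(size):
            gx = x * grid / size
            x0, fx = int(gx), smooth(gx - int(gx))
            # Wrapping with % grid is the whole trick: the last cell
            # interpolates back towards the first, so tiled copies join
            # without a visible seam.
            v00 = lattice[y0 % grid][x0 % grid]
            v01 = lattice[y0 % grid][(x0 + 1) % grid]
            v10 = lattice[(y0 + 1) % grid][x0 % grid]
            v11 = lattice[(y0 + 1) % grid][(x0 + 1) % grid]
            top = v00 + (v01 - v00) * fx
            bot = v10 + (v11 - v10) * fx
            row.append(top + (bot - top) * fy)
        img.append(row)
    return img
```

Tiling is also exactly why these textures look mechanical at scale – the repeat is the signature of the method.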

As time ran out I went back to an old friend, VisualJockey, which is now the free love child of ‘Mavrick’, the hardest-working bastard in live video – he recently programmed up a particle-system tool in a few weeks. VJo is a ’90s old-school VJ tool, usually dedicated to the usual pixellated purple puke, but it can be dragged towards more adult results even just by rethinking the colour scheme. There’s a texture tool which is quite powerful but static and painful to use. As is usual with free tools, there’s poor documentation. For example, it can parse an XML file so as to be driven by a database, except no one can recall the format of the XML file, the original programmer having disappeared. Half the effects are messed up. Trial and error is fun but eats up time.

The net result has been little, apart from using up all my vacation on computer shit. But I have a better idea of what it is I want to see. Fundamentals include better colour control; modulation drawn from natural phenomena that develop over minutes, not seconds; textures that don’t repeat and are perhaps drawn from a look-up table for speed; simulation of light and shade…
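The look-up table idea can be sketched in a few lines – this is my own guess at how it might work, not any existing tool’s API. Render the slow modulation offline once, so that at performance time the per-frame cost is a single indexed read:

```python
import math

FPS = 25
MINUTES = 5
N = FPS * 60 * MINUTES  # one sample per frame: five minutes of motion

# Precompute the slow curve once, offline (like rendering a clip in Vue).
# Incommensurate frequencies mean it doesn't visibly cycle within the
# table's length, though of course it loops once the table wraps.
lut = [0.5 + 0.25 * math.sin(2 * math.pi * i / N * 3)
           + 0.25 * math.sin(2 * math.pi * i / N * 7.1 + 1.0)
       for i in range(N)]

def modulation(frame):
    """Per-frame cost is one table lookup -- cheap enough for live use."""
    return lut[frame % N]
```

The trade is the same one Vue forces anyway – hours of rendering for seconds of motion – except here the expensive part happens once and the playback is performable.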

In trying to design algorithmic television, it seems that basic creative tools will have to be developed first. That’s bad news, but a sign that this is the right path to take. And smarter people than myself are also on the job. One day we will free the world from mandelbrots and dolphins.