Ambisonics: the whys and wherefores

The why: along with 360 video comes 360 sound. Ambisonic rather than surround, because it has height as well as direction – it describes a full sphere. Not much point unless you have speakers above and below your head, which makes it unlikely to become popular in the home. More likely it will be consumed on headphones as a binaural image – there are many technical problems with that, to which I’ll return.

In VR the sound image should track/rotate with the visual image. This wasn’t a concern until VR goggles came back into style – speakers remain fixed as your head turns, but a headphone image has to compensate for head movements. There are artistic issues here about composing for speakers versus headphones – obviously it’s easier to compose one fixed image (e.g. strings always to your front right) than to have the image rotate around your head.
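
The head-tracking compensation itself is just a rotation of the soundfield. A minimal sketch of a yaw rotation of a first-order B-format block in numpy (signs depend on your coordinate convention and on whether you rotate the scene or compensate for the listener):

```python
import numpy as np

def rotate_yaw_foa(w, x, y, z, angle_rad):
    """Rotate a first-order B-format signal about the vertical (Z) axis.

    w, x, y, z: numpy arrays of equal length (one block of samples each).
    angle_rad: rotation angle in radians (sign depends on coordinate
               convention and on whether you rotate the scene or
               compensate for the listener's head turn).
    W and Z are unaffected by a yaw rotation.
    """
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    x_rot = c * x - s * y
    y_rot = s * x + c * y
    return w, x_rot, y_rot, z
```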

As with all media there’s a confused plethora of formats.

[Image: spherical harmonics to degree 3 – by Dr Franz Zotter, CC BY-SA 3.0]

Orders: the number of divisions into which the sphere is cut. More divisions mean more precision – you get better imaging as you add ‘higher’ order subdivisions. Each division needs its own sound channel.

  • 0th order – mono – requires one channel
  • 1st order – front-back, left-right, up-down – requires a total of four channels
  • 2nd order – in between the 1st-order directions – requires nine
  • 3rd order – in between the 2nd-order directions – requires a total of 16 channels

… and so on
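
The pattern is that a full-sphere mix of order N needs (N+1)² channels. A minimal sketch:

```python
def ambisonic_channels(order: int) -> int:
    """Total channels needed for a full-sphere ambisonic mix of a given order."""
    return (order + 1) ** 2

for n in range(4):
    print(n, ambisonic_channels(n))   # prints 0 1, 1 4, 2 9, 3 16
```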

Most DAWs cannot handle more than 5.1 on a track: Cubase and Logic are no good; Reaper and Premiere are fine. Vegas?

The next problem is how these channels are arranged and of course different people have mucked this up. It means that you have to choose one version and stick with it, or spend your life translating.

Traditional B-Format: a 1st order, four-channel version that’s a standard. W+XYZ, where W is the omnidirectional signal (overall level) and XYZ are figure-of-eight components along a right-handed coordinate system. This becomes complex as you start adding orders.

Furse-Malham (FuMa) keeps adding letters as the orders increase. Listed per order, and lined up against the ACN numbering below so you can see how the two schemes map onto each other:

W
YZX
VTRSU

ACN (Ambisonic Channel Number) is a cleaner scheme that numbers the components by a simple sorting formula – channel = l² + l + m for degree l and index m – and is therefore extensible to any order:

0
123
45678
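
A minimal sketch of that numbering:

```python
def acn(degree: int, index: int) -> int:
    """Ambisonic Channel Number for the spherical harmonic of the given
    degree l and index m (m runs from -l to +l): ACN = l*l + l + m."""
    assert -degree <= index <= degree
    return degree * degree + degree + index

print([acn(1, m) for m in range(-1, 2)])   # 1st order -> [1, 2, 3]
print([acn(2, m) for m in range(-2, 3)])   # 2nd order -> [4, 5, 6, 7, 8]
```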

The next problem is normalisation – choose from maxN, where each component is scaled to peak at ±1; SN3D, where no component ever exceeds the level of the mono (W) signal; or N3D, which is universally louder (each degree l is boosted by √(2l+1) relative to SN3D).
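
A minimal sketch of the SN3D/N3D relationship, assuming the usual definitions:

```python
import math

def sn3d_to_n3d_gain(degree: int) -> float:
    """Gain applied to every component of degree l when converting an
    SN3D-normalised signal to N3D: N3D = SN3D * sqrt(2l + 1)."""
    return math.sqrt(2 * degree + 1)

for l in range(4):
    print(l, round(sn3d_to_n3d_gain(l), 3))   # 1.0, 1.732, 2.236, 2.646
```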

Common Versions and implementations.

UHJ is a horizontal-only, matrix-encoded version of B-Format that folds down to stereo-compatible channels. https://en.wikipedia.org/wiki/Ambisonic_UHJ_format

YouTube uses 1st order, ACN ordering and SN3D normalisation – i.e. the AmbiX convention. This tends to indicate it will become the standard. https://support.google.com/youtube/answer/6395969

The Google tool is Jump. https://vr.google.com/jump/

Oculus uses 1st order, ACN, SN3D based on AmbiX. Again, a good sign that this will be a standard. https://developer3.oculus.com/documentation/audiosdk/latest/concepts/audiosdk-features/#audiosdk-features-supported

Headphones: the ambisonic image is rendered binaurally, using head-related transfer functions that model the timing, level and spectral differences between the sound arriving at each ear. But differently shaped heads get different results, so it’s unreliable. There are databases of measured heads available in the AES69 format, which are usually averaged to an approximation. Google assumes that the head is symmetrical (probably true of Oculus as well). BR Rapture can load these files.

The home of the AES69 format is here: https://www.sofaconventions.org/mediawiki/index.php/Main_Page
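
AES69 (SOFA) files are netCDF-4 containers underneath, so you can peek inside one with the standard netCDF bindings. A minimal sketch, assuming a file following the usual SimpleFreeFieldHRIR convention (the file name here is hypothetical):

```python
from netCDF4 import Dataset  # pip install netCDF4

sofa = Dataset("some_hrtf_set.sofa")                   # hypothetical file name
hrirs = sofa.variables["Data.IR"][:]                   # measurements x ears x samples
positions = sofa.variables["SourcePosition"][:]        # azimuth, elevation, distance
fs = float(sofa.variables["Data.SamplingRate"][:][0])
print(hrirs.shape, positions.shape, fs)
```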

[Image: Zoom H2n]

Microphones: the Zoom H2n is the easiest solution but has no height information. This is the model I am using alongside my spherical camera.
https://www.zoom.co.jp/products/handy-recorder/h2n-handy-recorder

Binaural Microphones: https://www.roland.com/us/products/cs-10em/

Some sources of expertise: http://www.brucewiggins.co.uk

We have to talk about Virtual Reality.

It’s VR talk time, you’re old enough.

Let’s get this out of the way – 3D television died, Google glasses died, everything is bad, why try anything, let’s just sit around bitching. Great, thanks for the opinion. But that argument is based around a kind of success I’m not looking for. There is zero chance of, and zero interest in, my becoming “a media tycoon by getting in at ground level”. People who think that way are calculating profit and loss, not making fun things. VR will probably die; so did a heap of things we’ve enjoyed.

But you were against Google Glasses and AR! Yes that’s right. I don’t want to build something that tells me what I am seeing. I want to build entertainment. VR and AR are only related in the most superficial way.

I’m learning how to handle VR by first remaking the 2010 video for Greater Reward.


Until you actually make something it’s all theory. Then the problems come crashing in fast. Let me add my voice to the very sensible advice already out there.

  • Until you see the shot on a VR helmet you have very little idea what you are getting. The distortion moves everything away, and your instinct to make things fit the image is wrong. Something that looks OK in the preview ends up about 1cm from the viewer’s face, and that sucks.
  • The interpupillary distance really matters. If your camera has pupils 6.5cm apart then everything must be scaled to that. If it’s wrong you end up with things being kilometres in size and that hurts. There is no zoom. Prime lens only.
  • There are no edges to the frame and so you have to design in all directions. But there is a centre of attention, only about 70° wide. Your viewer is seeing a little bit at a time, and you don’t know which bit. So most scenes should be simpler than you first thought. You only place complex things where you want the viewer to look – like the way you place furniture in a room.
  • I tried tilting the camera. Nope. The viewer feels gravity pulling them down but the scene doesn’t match, so it reads as the world tilting, not the camera. It just doesn’t work. That and the previous point mean most of your camera skills are dead. So if you want the viewer to look up, you have to move something up there to get their attention. Same for most directions. You use the same tricks as stage plays: light, sound, action.
  • Editing seems OK so far but jump cuts are worse than usual. You need to move toward the new position, or pre-empt it. I’m testing fades now.
  • Technically you are making 4K but the bit the audience is seeing is about 640×480. All textures should be severely anti-aliased. Like Gaussian. No grids that will moiré with the pixels. Think 1990s computer graphics. Everything smooth.
  • Drop the light levels down, keep the lighting comfortable. You can’t use normal filters on a VR image – e.g. blur. So you have to get this right in camera.
  • I’m not seeing too many problems with motion but I am keeping it slow and steady. Also I am used to the helmet. One day maybe people will all be used to VR and some of these rules will be broken.

So is all this pain worth it? Depends on the material. I generally would steer the vast majority of film making away from VR (I wish my students would listen). You cannot perform cinema with VR. If you ever write ‘we see’ or ‘moves to’ in your scriptment do not use VR. As for Greater Reward, all the scenes are positioned in spheres of some sort and so the translation becomes possible. But what else will work – I just don’t know yet.

(Nothing here about sound design – a different post for that).

Strange Cameras for Strange Times

Too soon we have become blasé to the distortions of the current flock of optics. We pretend these are just sidesteps to the usual reality. Their peculiar qualities should be celebrated and their perversions articulated, and I am here to do just that.

The Lytro Illum. https://www.lytro.com/


Normally you point a camera, and light arrives at the lens in a wild range of angles. That forms a bright but blurry image. As you close the aperture the light is constrained to a smaller range of angles, and the image becomes coherent, while the exposure drops. The smaller the aperture the sharper the focus, and the more the camera has to work to expose the film. Hence the deep focus of Citizen Kane was a technical marvel. Now everyone is obsessed with shallow focus, because big lenses are expensive, and what better way to show you have money.

The Illum is a light field camera. Light arrives from all angles and hits one or more of hundreds of little ‘buckets’ inside. The computer notes which buckets are filled and from that calculates the angle at which the light arrived. The camera sees both the light and its direction, and from this records a perfectly focused image with depth information.

[Image: the Lytro Illum]

You can use that depth to set focus after taking the shot, to calculate a 3D image, or to slice the image over the Z plane. Probably more – there’s an SDK available for trying out ideas. But most of us will just animate the focus after the fact and think that very clever. For a while…

Sensible review.

Lytro has set small, reasonable aims for the camera and provided them. The Illum is a well built, well thought out device with a defined purpose. But that purpose is not in itself very inspiring for very long.

Pulling the focus back and forward is exciting for about an hour after which you’re putting the camera in the cupboard next to the C64. Mine came out of a discount bin, still wildly expensive compared to an equivalent DSLR (because of Australian distribution). The Illum is not a game changer, because the technology is more interesting than what you’re encouraged to do with it. So you should think about misuse.


Lytro is now onto surround video capture with an impossibly large and sexy UFO thing that photographs with 6 degrees of freedom inside a virtual space (but can it photograph itself?) I’m disappointed that they have leaped so far, when just a single lens 3D video capture would be really tops. The Illum is not able to shoot video, it maxes out at about 3fps. It might be insanely great as a stop motion camera, but no moving pictures.

The software can output its unique RAW format as a set of TIFFs with the depth as an 8-bit greyscale image. The TIFFs show the scene from a range of angles, so you’re already alert that the depth must be some compromise of all of these. The depth map has a ‘cauliflower’ texture, by which I mean it shows a lack of detail evidencing some kind of fractal or wavelet tactic.

[Image: Lytro Illum raw sample – editable depth map]

Being lossy and 8-bit, the depth is not going to give you a clean slice where an object is magically cut out from the background. Fair enough. Probably the SDK can get a cleaner image from the RAW – but I tend to think that the Illum operates at the extreme edge of the hardware. It has the brain of an advanced mobile phone – impressive – but it has to compromise greatly to get acceptable results.

My intention is to grab a whole variety of still images which I’m going to then mash together on the Z plane with some dirty and distorted depth data. It won’t be clean or realistic. It will hopefully be disturbing. You might have a pig and car sharing the same 3D space. You might like it.
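
A minimal sketch of that Z-plane mashing (the file names are hypothetical – the desktop software exports a TIFF plus an 8-bit greyscale depth map):

```python
import numpy as np
from PIL import Image

colour = np.asarray(Image.open("scene.tiff").convert("RGB"))
depth = np.asarray(Image.open("scene_depth.tiff").convert("L"))   # 0..255

# Slice the image into three rough Z bands, deliberately keeping the
# ragged 8-bit edges of the depth map.
for i, (lo, hi) in enumerate([(0, 85), (85, 170), (170, 256)]):
    mask = (depth >= lo) & (depth < hi)
    layer = np.zeros_like(colour)
    layer[mask] = colour[mask]
    Image.fromarray(layer).save(f"slice_{i}.png")
```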

The Ricoh Theta S – https://theta360.com/en/about/theta/s.html

2016 is the year where 360 cameras infest every gadget retailer the way that sports cameras did a few years ago, and 3D TVs before that. They will eventually die in large numbers. Right now they’re just touching on reasonable performance at a reasonable price, so the average enthusiast may as well have a look. That’s me.


If you’re the sort of person that takes selfies, you’ll love the Theta. Here’s the pyramids… and me! The beach front… and me! My friends and me me me me again. There being no back and front to a spherical photo, you’re always there unless you hide in a garbage bin or wear it as a hat.

[Theta sample photo – suspicious white object at 6 o’clock]

Because it has no viewfinder (what use is a viewfinder in 360º?) you are encouraged to set up a wifi link between it and a mobile phone where you can preview the effect. It works, but mobiles aren’t really set up to be field monitors, and the glare is such that you can’t see what you’re doing. So you set up the camera on a tripod, run away some distance to hide, find some shade, look at the phone and only then find that the tripod has been knocked over by some passing brat.

When I got back to the camera, the lens cover was scratched, but there seems to be no effect on the photos – I guess the cover is outside the focus area. It is not a sports camera, but akin to a toddler – it can take a fall.

Sensible.

The quality of the earlier Thetas was horrible, and at 1080p the video on the improved S is still only a quarter of the needed resolution, because it captures two circular areas inside that frame. But the photo images are big enough for my purposes, which is to decorate some VR spaces I’m building in Unity with natural light and textures.

[Theta sample photo]

Software-wise, Ricoh give you a desktop viewer (made in Adobe Air, so banned from my work computer) which connects to their gallery (which only allows very short segments of video). The video can also go up on YouTube, but ignore the instructions given on Ricoh’s site – it needs to be run through a “Video Metadata Tool” first before YouTube will see it. YouTube has a fixed viewpoint which only covers a small part of the video – so very nasty quality. I’m going to try pre-processing the video in After Effects to scale it up before encoding it.

What use is surround? Only as a means to capture an environment for more detailed images – that is, the same way you would use a stereo microphone pair to capture the sonic environment, followed by a shotgun mic for the detail. We have not previously had a crossed pair for video. The problem then is one of ‘handling noise’ – big distorted hands at the bottom of every shot. It’s as annoying as microphone handling noise.

The Theta is basically a Zoom recorder for light. For most people the Zoom recording is not the end of the creative act – only the beginning, and using the Theta as some kind of documentary device is not anywhere near to the real reason to own one.

Back on the VR rollercoaster

Here you see a CINERAMA screen, from 1952:

[Image: still from This Is Cinerama (1952)]

CINERAMA was a big hit in that year. A standard film of the time would fit in the middle panel only, so you can see what an impression it must have made. I’m interested in the kind of films that bloom with any new technology. There must be a roller coaster film. It’s likely to be the very first thing that gets shot in the new (cumbersome) format.

[Image: the standard VR roller coaster film]

And close behind are what we now know as “GoPro movies”. Sporting feats.

[Image: still from This Is Cinerama – sporting feats]

And aerobatics.

[Image: still from This Is Cinerama – aerobatics]

Once these technical demonstrations are done it’s time for a few experimental works by corporate funded artists, some stadium sports and a slowly wilting realisation that a good story works just as well on an old analogue TV, so what are you going to do? George Pal did good business with The Wonderful World of the Brothers Grimm in 1962.

[Image: The Wonderful World of the Brothers Grimm in Cinerama]

Eventually the name CINERAMA became more important than the process. By the late 60s it was a 70mm print stretched out. But the stories were better for it.

[Image: 2001: A Space Odyssey]

I would have liked to see a CINERAMA print, but standard Super Panavision was pretty cool. (My folks took me to see it soon after it came out.) This film isn’t right on analogue TV. Here’s a site devoted to wide screen film.

So instead of just going through this cycle, maybe we can think about it. What benefits can be found in the VR format? Obviously there’s interaction, but I’ll just leave that to one side for now please.

Consider this – no one can stand behind a spherical camera. There is no behind.

People working in VR are keen to point out that the frame is no longer there, and so the idea of composing an image in a frame is lost. You cannot centre the image, show something to one side – any of that. Edits are OK, but you cannot know which direction the audience is facing. “Cut to:” is rendered useless when they could be looking at the sky for all you know.

The standard of ‘reversals’ – over the shoulder conversations – is dead. Next time you watch a film count how much of it is reversals. Then realise – that’s gone, all gone.

Nausea – it’s about the hair cells in your inner ear that detect acceleration. Not movement as such – a scene shot inside a moving train is fine – but the ease-in and ease-out of a standard motivated camera move. When the eyes report motion that the inner ear can’t feel, the body prepares for emergencies and up comes lunch.

VR is about sound.

If this sounds as if VR is more problem than potential, and you are a standard film maker, please go elsewhere and let the experts take over. By experts I’m speaking about sound designers, who have dealt with 360º for a very long time. We build realistic, spatially coherent environments out of the materials of sound. We turn your head to the events you then watch. We signal that something is occupying a point, an arc, all of space. We signal that something went by, causing a Doppler shift. Our Foley gives substance to things in all directions.

VR demands that the old equation of vision first / sound second be turned on its head. Because you are not going to be able to navigate the worlds that you’re filming without first realising the air, the tone of the room, the placement of all the sound sources. If your set has three walls and relies on a particular orientation to work – you’re in deep shit.

But if you are a sound designer for film – don’t be smug, because the game is going to get considerably harder. Get a VR helmet, learn how it works. Then start to build soundscapes that work in 360º. 5.1 won’t work any more – there is no fixed front or back. Listen to your mixes as somebody would in a room. What size room? Round? Carpeted? You are suddenly required to not only capture the air, but to construct it. I can’t tell you how we are going to do that. I can tell you that if we want to get past the demonstration films it’s going to be up to you.

Improving Kubrick’s Spartacus.

Everybody says what a great film maker Stan Kubrick was. But his early films aren’t all that great. Take Spartacus: the camerawork is very wobbly. Kirk Douglas is all over the place, so I decided I would rework it to put him front and centre. Now it makes much more sense. What do you think?

(Sorry about the sound, my main VHS machine is crook, and that made editing harder. You can put your own sound track there if you want.)

Liminal Synth

As promised, a look at Studio Artist.

I think it’s part of the story that John Dalton is one of the bad old boys of DECK, the first Mac based multi-track recording system that would one day be absorbed into Studio Tools, later to be known as Pro Tools. Sounds like those days were a bit like knocking over grave stones while doing wheelies on your hot rod, so the contemplative aspect of Studio Artist could be part of a healing process. More relevant – the interface and operation feels like an elder program, & none of this Kai Krause gobbledegook. SA looks like it existed before the grand wizards of MetaCreations got their orbs together, & get off my lawn.


Studio Artist is a complicated thing, like a Tower of Babel halfway built – parts of it are lounges, parts of it are holes. It looks like the author is somewhere near to putting it together, but always has a few more loose nuts & bolts to tighten. To try to explain the complexity I’ll underestimate it, then expand the idea.

[Image: ‘Goodbye Tonsils’ – the source photo]

At the most basic level SA is a Paint Synthesiser that takes photographs and turns them into paintings by splotching brush strokes at the edges of things. Fractal Painter does this, as does Filter Forge. Along with presets, SA provides a multitude of settings for the way the paint is applied – does it start at the top? does the brush follow the edge? does it dab or stroke or mop? So many settings that working through them can be as discouraging as reading a phone book from cover to cover.

[Image: ‘Goodbye Tonsils’ through the Paint Synthesiser]

But there’s sense in this. Example: the problem with painting movies is that the usual algorithm dabs at random over the source image, which makes for 25 irritating random dabblings a second. One of the controls here forces the dabs into a regular grid which reduces the noise a little. SA doesn’t presume to decide what you might need, it just gives you one of everything.

The Image Operation mode filters the entire image with blurs and blocks and colourisations. The big difference here is that there’s no brushing – the pixels are modified as a plane. This contrasts with the interactive Warp, where you brush in spheres & waves & kaleidoscopes. Similarly the Adjust mode brushes in colour, levels & other Photoshop-style changes.

[Images: ‘Goodbye Tonsils’ through an Image Operation and through Warp]

The Texture Synthesiser modulates the entire image to produce abstractions with rhythmic distortions & colourations. It’s different to the Image Operation in that it imposes a pattern over the image, modulating it. Different again are Modularized Synthetic Graphics (MSG), which are complex chains of smaller graphical modifiers. The manual says there’s over 500 of them & then wishes them into the background, which is disappointing. It’s a difficult feature & I guess most users wouldn’t want to get into detail with it, but if you primarily bought the software as a synthesiser (as I did) you’re left scratching out the details unaided.

[Images: ‘Goodbye Tonsils’ through the Texture Synthesiser and through MSG]

If you’re keen to make synthesis in real time the DualMode Paint mode follows the brush about the drawing area, creating shapes & echoes that have a particular Yellow Submarine look to them.

The Paint Action Sequencer is really nice, because it thinks musically. The usual case for this kind of sequencer is ‘do this, then do that’. Here you have the capability to ‘do this every couple of bars & that four times’. The grid is like an array of notes, with each note being a painterly activity. So you can make melodies of these actions, if your mind can figure that out.

Animation is something that comes close to brilliance without kissing it. It’s dead easy to load up a movie & have SA perform all kinds of painting & twists & turns on the frames & save it back out again. But in my experience the way it works a frame at a time means there’s always a jangling movement over everything; it seems impossible to make something smooth & flowing. There’s a Temporal Image Operation module which tracks & flows & jumbles frames and so on & probably the secret is in there. But as I said the tower is unfinished, bits are over here & others over there & the end user is hard pressed to make it a coherent whole.

Kai Krause is revered because he would limit your options in such a way that you’d get to a good outcome early on. You’d then have to fight to get anywhere else, with Kai laughing at you. Dalton doesn’t play this game. He says, ‘here are a couple of thousand controls, see you on the other side’. Each tactic has worth, & in SA‘s case there’s the serendipity that’s been missing from software for a long time. This really is the spiritual successor of the Fairlight CVI, knobs and menus everywhere – and maybe you won’t know how you got there, but the result is a real trip.

Additional notes from John Dalton:

A few comments. The kinds of things going on under the hood of Studio Artist are much more technically sophisticated than some of the other programs you mention. And incorporate a lot of academic research results associated with how the human brain perceives visual imagery, and how that relates to artistic visual representation. Also, those other programs basically draw what we would call a single paint nib (single dab of paint), and while you can certainly do that in Studio Artist, you can also automatically draw complete paint strokes, so the automatic painting is emulating the way real paintings are generated, as opposed to just being an image processing filter effect.

The trick to generating fluid non flickering paint animation is to build temporal continuity into the paint animation. This involves constructing the Paint Action Sequence you will use to process the source movie in such a way that the paint build up taking place builds temporal continuity into the resulting paint animation output. Temporal continuity basically means that there needs to be continuity in the appearance of the painted output frames across several adjacent frame times in the output movie file. The simplest way to do this is to overdraw on top of the previous output frame, but you can get much more elaborate, which leads to all kinds of great paint animation effects.
We have some tutorials that go into how to do this in depth on our online Studio Artist documentation. Here’s one place to get started.
http://www.synthetik.com/tips/2010/01/movie-processing-strategies/
And here are 2 simple tutorials on building temporal continuity in a Paint Action Sequence.
http://synthetik.com/process-movie-tutorials-example-1/
http://synthetik.com/process-movie-tutorials-example-2/
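
A minimal sketch of that overdraw principle – not Studio Artist’s own code, just the idea of blending each newly painted frame over the previous output frame so strokes persist:

```python
import numpy as np

def paint_frame(source_frame: np.ndarray) -> np.ndarray:
    """Stand-in for whatever per-frame painting effect you apply (hypothetical)."""
    return source_frame

def process_movie(frames, persistence=0.7):
    """Overdraw each painted frame on top of the previous output frame.
    persistence: how much of the previous output survives - 0 gives the
    flickery frame-by-frame look, values near 1 give smoother animation."""
    previous = None
    for frame in frames:
        painted = paint_frame(frame).astype(np.float32)
        output = painted if previous is None else (
            persistence * previous + (1.0 - persistence) * painted)
        previous = output
        yield output.astype(np.uint8)
```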

If you look at my vimeo posts, you can see some examples of smooth non-flickering paint animation generated with Studio Artist.
http://vimeo.com/user2967756

You are right about the need for more documentation associated with MSG. Anyone interested does have the option of asking questions on the Studio Artist User Forum
http://studioartist.ning.com
which includes an MSG group. We’re very responsive to providing additional technical information to anyone who asks.
And if you look in the doc folder in your main Studio Artist folder, there is a lot of additional html documentation on MSG processors hidden in there.
And, here are some links to some MSG tips
http://www.synthetik.com/tips/tag/msg/
http://synthetik.com/tag/msg/
http://www.synthetik-studioartist.com/search/label/MSG
You can also build a paint tool that incorporates a MSG preset in the paint tool, so that provides essentially an unlimited way to expand the functionality of the paint synthesizer.

What is great about Studio Artist (in my opinion) is the synergy that occurs as you start to combine different features provided in Studio Artist, which work together to create an essentially unlimited range of different visual effects. Here’s some more information on the philosophy behind the design of Studio Artist.
http://www.synthetik-studioartist.com/2010/01/what-is-studio-artist.html

Oxygen Mask

Funny how the last post ended with the Video Cox Box. I thought that was a known reference – and was dead wrong. Obscure video equipment hasn’t the same general interest as musical equipment – everyone is well versed in Rolands and Korgs, especially in over-pricing them, but the Cox Box raises only the most feeble of online presence, and when you do find it mentioned it’ll be somebody from the old school of experimental video in Australia.

I feel like a Moonie, raised in a parallel culture. But there is such a thing:

[Image: the ‘Big Iron’ rack]

The rack thing with 9 knobs plus the bits underneath. Red, green, blue for each of 3 grey levels.

Synthetics must be the only art form where the visual is completely dominated by the sonic. I don’t fully understand why this would be; I suspect it’s related to the floating problems of abstract art (that is, butt ugliness) that I’m trying to solve.

Plug In Wastelands

Using the KVR site as a source, there are now over 5500 VST plug-ins, 2700 of them being VST instruments. If you exclude anything made with SynthEdit, the number is still 1400 – which just shows what a phenomenon SynthEdit has been.

[Image: FreeFrame logo]

You’d struggle to even find an equivalent to VST for video synthesis. Let’s use the open-source FreeFrame, as nearly all VJ software tools claim to support it. The project page mentions about 200 plug-ins; there should be more, as this very old page still lists software makers that died many years ago (Macromedia!) or have since become mainstream IT consultants. The same figure appears on IntrinsicFX’s home page and it seems almost every surviving FreeFrame plug-in comes from one or two vendors. If it weren’t for BigFug it’d be dead. Hero Alert.

This virtual tumbleweed is much the same as the SynthEdit phenomenon. Apple Computer picked up PixelShox to dominate live visuals. Binding synthesis to QuickTime was excellent marketing – everyone started to develop in Quartz Composer, killing the open source format, and once that was achieved Apple moved on to their next bit of Embrace, Extend, Extinguish. Even the people that have done well out of QC have realised that Apple has rolled on to the next bit of scorched earth and they’ll have to create something to fill the dead space. If VUO becomes a thing that’d be sweet. But you can understand why I’m not confident.

[eBay listing photo – “It goes so well with my coffee table!”]

Simon Hunt points out that the rabid interest in old audio hardware is likely a consequence of virtual instruments. That is, it was software like KORG’s Legacy Collection that inspired the surge in KORG prices, as people wanted the ‘real thing’. That would need a lot of research to decide – VST came in 1996, but it wasn’t widespread for a few years after. Certainly in the late 90s I could buy a MonoPoly for $250*, which now sells for around $1,500.

[eBay listing photo – “It matches my Persian rug!”]

Had someone created a Cox Box or a Fairlight CVI in software, would these now be equal in their mythology to the 303 Bassline? More importantly, would we now be able to enjoy the same spread of ‘looks’ as we currently enjoy ‘sounds’? How would we do this, and what format would we use? Should we make this part of the ‘Big Iron’ project?

Musty Old Castles

How many online synthesiser museums are there? More than stars in the sky or grains of sand? Then how many video synthesiser sites are there? Battered and bruised, with lava lamps half empty, AudioVisualizers is the original and the only one. There are more missing animated GIFs than you can shake a data glove at, but still nearly all the Wikipedia articles use it as the definitive reference for visual synthesis. That’s pretty worrying, and I see that part of the ‘Big Iron’ project needs to be a web site that collects that info in case it dies.

Some old-school VJ tools have lasted through the millennium bug. ArKaos is the most venerable; Resolume still kicks along. Both now have versions that address the more lucrative media server market – the projection of video clips and DMX lighting at large events like the UK Olympics. Other tools like Salvation and Visual Jockey have become media servers only, joining ones like Ventuz that always were. New contenders like VDMX are keeping the flame lit.

Still the community is nowhere near that of sound and music. Fragmentation is part of it. Video edit guys are not live visuals lads are not interaction design gals. Maybe Isadora tries to unite the latter two users.

Max/Jitter has recently gone all-out to be less inscrutable and more accessible via Vizzie, but it’s still like driving an 18-wheeler to the corner shop. Way too big and hard to steer. However the excellent adaptation of Vizzie into VizzAble by Zeal (Hero Alert) might bring Max4Live into focus as a living, breathing video equivalent to Reaktor. That’s currently my best hope for one day sharing the distinct ‘looks’ of these old video machines with everyone.

* No, I sold it again quickly because the MonoPoly is actually pretty boring.

Superfractalisticextracrappyatrocious.

[Image: a Mandelbrot set]

Deep in the neo-hippy outbreak of the early 90s, I wrote a rude article about Mandelbrots, describing them as multicoloured bird shits. I stand by that description, with one concession over two decades: the nature of fractal art is to look like multicoloured bird shit; the art is in elevating it from that nature.

That’s something I’m desperately trying to do.

Why are cassettes like Marxism?

This is part of redeeming video synthesis, which shares fractal art’s innate tendency to shitness. In the 1980s it was hard/cool to make wiggles on a video screen. In the 1990s it was hard/cool to render complex geometry on a computer screen. Once a new medium’s difficulty curve is overcome the pioneers and tinkerers move on, and, lacking any other virtue, the new wave quickly rots into excuses. As happened with punk music, abstract expressionism and telephone poetry.

In the 21st century many old things are coming back to life, but for some reason with little or no insight or refinement. The explanation is sometimes given as nostalgic purity – that only the ur-form is authentic. Cassettes and Marxism are both to be unsoiled by revision.

The way I see it, video synthesis addressed some limitations in music, but then introduced further limits which were solved by fractals, which then introduced limits which are still in need of solving. Instead of which we careened off into minimalist grey and little ticking noises. That we have arrived back at the aesthetics of 1980-something shows how little the 2000s contributed to the dialogue.

But I’m not here to go into that – this post is about positive steps I’ve made and information about what is available to the next lot of tinkerers.

First, let’s be clear about synthetic visuals. Networks of simple calculations leading to apparently complex, and therefore ‘natural’, results. A Mandelbrot is a single shape with an infinitely complex edge (actually so is a circle, that’s just not as interesting). Most of the control is in shading the exterior of the shape according to rules, such as how quickly each point escapes – a proxy for how far it is from the edge of the set.
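
A minimal sketch of that escape-time shading, assuming nothing beyond numpy (the mapping from escape values to colour is left to taste):

```python
import numpy as np

def mandelbrot_exterior(width=800, height=600, max_iter=200,
                        xmin=-2.5, xmax=1.0, ymin=-1.25, ymax=1.25):
    """Escape-time shading of the Mandelbrot exterior.
    Interior points stay 0; exterior points get a value that grows the
    longer they take to escape, i.e. the closer they sit to the edge."""
    xs = np.linspace(xmin, xmax, width)
    ys = np.linspace(ymin, ymax, height)
    c = xs[None, :] + 1j * ys[:, None]
    z = np.zeros_like(c)
    escape = np.zeros(c.shape, dtype=float)
    for i in range(max_iter):
        alive = np.abs(z) <= 2.0                   # not yet escaped
        z[alive] = z[alive] ** 2 + c[alive]
        escape[alive & (np.abs(z) > 2.0)] = i + 1  # record when a point escapes
    return escape / max_iter
```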

[Image: fractal work by Diane Cooper – not sure I like it, but she’s got the right idea]

The initial excitement in fractals was that nature – weather, flowers, landscapes – could possibly have a simple mathematical basis. The 90s neo-hippies strove to wed the apparent power of the home computer to nature and thus gain magical insight. Rendering a Mandelbrot was a contemplation of this potential (as were all things virtual). The colours were hot primaries because they had symbolic meaning à la chakras. As with Futurism, the art itself was beside the point; the manifestos were the thing.

My personal interest is in trying to take the facility of video synthesis, the organic confabulation of fractals and impose some kind of painterly discipline that will create synthetic audiovisual work that doesn’t rely on sleeve notes. I’ll then use these to demo my thesis a few years from now.

Mandelbrots are only one of many complex forms – the Julia set is less predictable and more interesting. And Perlin noise is an example of a very different recipe: boxes within boxes, each with a random shade. All are useful in breaking up synthetic images into the detailed dirt and fuzz of the real world.
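
The ‘boxes within boxes’ description is closest to layered value noise (true Perlin noise interpolates random gradients rather than random shades, but the stacked-grid idea is the same). A minimal sketch:

```python
import numpy as np

def value_noise(size=256, octaves=5, seed=0):
    """Layered value noise: grids of random shades, each finer grid added
    at half the amplitude of the previous one."""
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    amplitude = 1.0
    for o in range(octaves):
        cells = 2 ** (o + 2)                          # 4, 8, 16, ... boxes per side
        coarse = rng.random((cells, cells))
        idx = np.arange(size) * cells // size
        out += amplitude * coarse[np.ix_(idx, idx)]   # hard-edged boxes
        amplitude *= 0.5
    return out / out.max()
```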

Experiments in bird poo Abstract video.

Coming from 3D, I first tried tools for texturing 3D models, for example Genetica. These have a good palette of procedures for noise and organic patterns, as 3D has a tradition of naturalism. But Genetica is aimed at creating repeating square tiles and looping animation, so most of what I got was very cyclic and contained.

I thought about software that makes infinite 3D landscapes. In reading the history of Bryce I was reminded that it was created by Eric Wenger, who made Metasynth and a program I’d forgotten – Artmatic. Now if you wanted to plot the epicentre of neo-hippydom, Artmatic is it. Brightly coloured space vomit was the domain of Wenger, Kai Krause and MetaCreations up to the great wizard collapse of 1999. But a lead is a lead and so I came to the site where 1990 lives forever – U & I software. Holy Shit, that site.

[Screenshot of the U & I Software site – it’s NINETEEN NINETY NINE and the future is HOT GREEN]

Who would have thought that Artmatic would reach version 5? And further, that it would develop into a broad system for animating synthetic visuals? Though the interface still yearns to be run at 800×600 pixels and the iconography is inscrutable, Artmatic has taken on a potent range of outcomes beyond the original aims of the software and I can recommend it as one of the few really versatile algorithmic visual tools.


Take a space and shade it in grey scale from -x to +x and -y to +y. Distort that shading with snippets of formulas that link up in a chain of icons. Colour and light the grey scale with complex gradations. I don’t fully know how to run it. The man who wrote it doesn’t fully know how to run it. But I’ve started to get enough skill in guiding it where I want to go.
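
A toy version of that idea – my reading of it, not ArtMatic’s actual internals: shade a coordinate plane in greyscale, distort it with a chain of small formulas, then map the result through a colour gradient.

```python
import numpy as np

size = 512
y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]

field = x                                   # plain left-to-right shading
field = np.sin(6 * field + 3 * y * y)       # distort with one formula...
field = np.abs(field) ** 0.5                # ...and another
field = (field - field.min()) / (field.max() - field.min())

# crude gradient: blue through to orange
rgb = np.stack([field, field ** 2, 1 - field], axis=-1)
```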

Cross platform poo Abstract art.

So this was all very good, but while the Mac was grinding away at some watery thing I had a bunch of PCs lounging around and not earning their keep. There’s no Artmatic for PC, what could I find?

Again I started at the end & wound back through a history. I found Chaotica on this site which seemed promising, but then read that “Chaotica supports all Apophysis / Flam3 features”. What are these? It turns out that there’s been a tribe of people making fractal flames since Scott Draves specified the algorithm in 1992. For example, Apophysis is an open source tool that if anything makes Artmatic seem straightforward.

(Trigger warning – the Mac tool mentioned in this video will crash your Mountain Lion based OSX machine for some reason).

Apophysis is designed to be hard to predict – which would be fine if 90% of the results weren’t fairy floss, so I went looking for more control. I discovered someone had ported the code to After Effects in 2002, which may as well have been under the rule of Richard the Third in computing terms. No, it doesn’t run in AFX CS6. It does run in AFX 4.1, which I make no apologies for finding online – hell, it’s a wonder that 11-year-old software runs in Windows 7 at all. But what you get is a whole heap of inscrutable sliders. Not good.

After much fussing with open source I settled on buying Ultra Fractal. This is the ultimate multicoloured bird shit creator of all time, and it took a lot of Gin to bury my misgivings and hit the PayPal. But then, it’s a challenge: try to make something that isn’t found on the side of a Kombi van. And then I came up with the answer!

Garfield without Garfield.

Seeing as the goodness is all in the way you colour the field in relationship to the edge of the fractal, the answer is obvious: just don’t include the fractal. Show only the effect of the fractal in disturbing the space. The bird shit is gone, leaving an abstract colour field.

The example here is nowhere near what I want yet; I just like how it looks like one of my migraines. I still need to add grit and dirt. But it’s getting closer to something I can work with. The way Ultra Fractal handles formulas is a bit raw and nasty – lots of folders full of crowd-sourced snippets. I’ve just found the Perlin noise code and have some early results; it could be good. Meanwhile Ultra Fractal can import flames, so I can put away the 11-year-old After Effects.

This post is already way long, so maybe another time I can talk about Studio Artist – quite a different kettle of synthetics.

GoPro HERO3 Black biased review

Announcement: Resident Advisor podcast is up now. http://www.residentadvisor.net/

You can get plenty of ‘unbiased’ reviews elsewhere.

Why I bought one: so that the next time a student asks whether we can hire a Phantom camera because they need a cool slow motion effect, I can shove this at them and end the whining. By the time they realise that the GoPro is a decoy, the end of term will have got them out of my face.

Does the GoPro approach the kind of quality that the Phantom offers? Can you play basketball with the moon?

Getting one: GoPro have a great strategy going where they combine relentless spamming of your mailbox with GET A HERO3 NOW OR DIE and a complete shambles of distribution. There are no units anywhere, let alone accessories. My solution was to buy from a professional video supplier, as no one would expect them to stock it.

Unboxing: I am not a nerd so if you expect photos go to Engadget. But I can say that you should open this over a bucket so you can catch all the shit going everywhere. There’s a whole bunch of crapola in there which greatly resembles what falls out of a Transformers die cast kit. It’s not identified in the manual – because there is no manual. Once you’ve downloaded that you still don’t get told what all this crap is about.

Updating the firmware: this sucked shit. You charge up the battery, connect the camera to a computer, then visit their web site to be told You Don’t Have Java! So you visit Oracle and they say You Do Have Java, but you install it anyway then Firefox tells you Java Has Been Disabled Due To A Security Risk and Chrome says We Don’t Do Java so you end up finally trying Internet Explorer which does it, the camera turns on and off in a few seconds and you’re left wondering DID I UPDATE THE FIRMWARE OR WHAT?

So you go to their forum and read all the messages from people trying to work out WTF and it seems that the little piece of paper that tells you to update the firmware is out of date. Great work team. It’s on this visit you read the apology from the CEO and start to get worried.

Ergonomics: without a doubt one of the worst control systems I have ever used. Push a button to select from a menu, push another to OK. Sounds reasonable. Except if you’re not quick enough it will fall out of the menus. So be quick – but not so quick that you double press one of the buttons and have to go back through it all again. I’ve played Gameboy games like this except that you can actually read what’s on a Gameboy screen. The screen on the GoPro is really, really REALLY small and you’ll be looking in the box to see if they included the magnifying glass. This doesn’t compare to any camera I’ve ever used  – they were all better.

[Image: Game Boy display]

[Image: GoPro display, for comparison]

Wireless: I’ve managed to get the wireless controller working but I am truly flummoxed by the iPad app which relentlessly says YOU LOSE no matter what I do. This might be my stupid, so I will award the GoPro 1 star in that case. Update: got the iPad app working and it makes the whole thing 10x less painful. Seeing what you’re doing is the start – and being able to make settings helps too.

Quality: it overheated and froze in the first ten minutes of use, not even inside its proofing. Did they actually run QA on it? To get it going requires pulling the battery out, and given all the wrappings that’s non-trivial. Here’s a law, guys – the power button always works, no matter what.

So Having Got All That Sorted Out – How Does It Look: the GoPro is one of those tiny cameras with cheap optics, like Bloggies and iPhones. The first test I did outside was bad – really bad – because the situation called for a neutral density filter for glare, and that’s not even within the dreams of the GoPro. Framing the subject was impossible with no viewfinder of any kind and the iPad app not working. And despite selecting a narrow setting the image is a great big unusable fish eye. The recording quality is soapy. None of these things are a surprise, nor should they count against a camera that was never designed to do what I tried. It does what it’s supposed to do, which is record you falling off cliffs and stuff like that – it doesn’t serve as a general purpose camera.

[Image: GoPro test frame]

Torture test – 720p + muggy glaring day and a building that normally has straight edges. You’d normally filter it and adjust the lens and maybe there’s a way to get that out of the GP – but it’s going to take a lot of figuring out.

So the decision comes down to this: if you are someone that uses a tripod this is probably not for you. If however you are somebody who uses a suction cup or a head strap, you should wait for them to get it working for more than ten minutes without dying.

It is the ruin of all that is good and fair in this world.

Not [H.H]. That’s not causing anyone any trouble, except me.

But the VERY IDEA that you shoot a film at 48 frames per second! Peter Jackson shoots a film at double the usual frame rate, and film critics outdo themselves in hysterical stupid.

After a while my eyes adjusted, as to a new pair of glasses, but it was still like watching a very expensively mounted live TV show on the world’s largest home TV screen.

Says it all really. Making a film twice as clear turns it into … television.

The unintended side effect is that the extra visual detail gives the entire film a sickly sheen of fakeness… I was reminded of the BBC’s 1988 production of The Lion, the Witch and the Wardrobe, and not in a good way.

Television!! How lower class!!

For people shots and pans, the smoother motion of 50 fps looks more like … newer TVs to us, although we find it to be less noticeable on action shots.

… but when actors, costumes and sets appeared the clarity made every pore and flaw visible, breaking the spell of the film

 

OMG Television!!! Even worse it’s like one of those horrible Computer Games that young people seem to like!

It’s not … FILMIC!


One day I hope to take ‘filmic’ and bury it alive under a slab of concrete, along with its addiction to blur as the answer to every problem. Blur is the Valium of ‘film’ – the scratchy grain, the losses of optical printing, the blur of shallow focus, the juddering pans that 24fps has offered for the last 100 or so years. All of this keeps the image soft, and politely out of our personal space.

Of course when you run out of Valium or Heroin or whatever you fancy, the world is harsh and bright and terribly in focus. Horrible nasty real world, not abstracted by analogue media – the rumble of the turntable, the hiss of tape – ready to provide a meta level where the story can be kept at a distance.

In 2012 there are still people who want to keep it blurry and grainy so that they aren’t confronted by pores and hairs and all that nasty reality. The Dogme movement was one attempt at blowing that out of the water – it was too rough and ready, and Jackson is better equipped to strip off the bunny rug.

Of course this is also political. Never mind that television has been for quite some time the equal and in some cases the better of the cinema. That video cameras have been used for ‘film’ for about a decade. ‘Film’ is pulling rank, which is all it has left really. The smart directors like Jackson are crossing boundaries and forging a hybrid where the size of screen will no longer be compartmentalised.