Mancave Update: Arturia V Collection 5.

I once compared this software collection to the Smurfs. Let me update that slightly.

Five Arturias in Arturia Forest

It is still the Smurfs, but with a hell of a lot of gym hours.

Arturia’s V Collection has both the pride and the misfortune of being one of the first packages to emulate well-known hardware. Computers of the 2000s were not up to the task, and the required compromises pleased no one. The first time I tried to run the virtual Jupiter 8 my machine choked on the effort, while the sound came nowhere near the desired effect. I’ve never owned a JP8 – but I could hear no great advantage over no-name VSTs.

Over time things have improved. It’s not that the collection has nailed the authentic sound of Moogs and CS-80s, but it has become able to make useful, musical sounds that work in a mix much as the hardware would – not as an emulation but as a substitute. The virtual Jupiter is able to serve the same role as the hardware. Arturia stay shy of greatness by only ever designing the expected, never the unexpected.

We are told that this version has ‘a new sound engine’. There are very few details about what this means. The new manuals specifically describe circuit modelling, while the older ones made a slightly woollier version of the claim. Likely they had come up with something ‘good enough’ in the past and now have the luxury of the real thing. Compare it to U-He’s software and you’ll see it can’t be nearly as hard on the CPU as Diva, and I don’t hear a profound difference. Some key measures – such as the cross modulation in the Jupiter – seem better handled. I tried comparing the sound to the Origin, but honestly that has its own mislabelled noises and doesn’t really help. Somebody will do a real test soon.

Actually the Jupiter 8 is now a ‘Jup8’, joining the Moog Mini and Modular as unbranded instruments. Likely this follows Roland’s move into Plug Outs. The Jup8 can produce many more voices than any of the Plug Out synthesizers, so Arturia’s “TAE” will still have a different use in your music to Roland’s “ACB”.

The interfaces are now big, and they scale. The improvement is more significant than you might expect: the Modular V is usable for the first time – I’m not exaggerating – only now have I made sounds with it and enjoyed doing so. You’re likely to load up a wider range of the instruments now that you can remember where the controls are. But when you are working in a DAW the new big interfaces will block most of it. And most of them are uglier than the old hand-painted ones.

The browser is improved, for people who browse.

The big addition is supposed to be the Synclavier V. Frankly it’s more interesting as a historical document than as a sound source. There are plenty of VSTs that do that sound (for example Morphine) with less hopping back and forth between virtual wood grain and virtual green screen. I’m not dismissing it, but the highlights for me have been the Piano V and the Farfisa V.

The piano is definitely physically modelled, and it can be bent and stretched into all kinds of distortions yet still sound good. It proves that NI’s endless array of sampled pianos is fool’s gold – lots of names, no real distinction. You can tell me about superior overtones or some such, but what I want in a piano is for it to sit in a mix, and most of the NI pianos seem designed for solo play. This guy fits wherever you tweak him.

I’ve also never seen organs as an instrument for my kind of music. The Farfisa V takes the best features of the real thing – envelopes for example – and adds in more synthetic controls, including advanced additive synthesis. It’s very simple, and it sounds great in a mix.

It’s a very worthy upgrade. But if you are new to it, you have a lot of other options. Diva is better if you want to dial up realistic poly synthesizers, but U-He is not cheap in CPU or Euros. NI’s Monark is a better Minimoog. The collection is really for people who are not going to look too closely at each individual instrument, and would rather have a room full of things for a reasonable (given the quantity, yes, reasonable) price. It is for composers, not designers.

Make that sound!

I’m increasingly required to reproduce music I made a long time ago. There’s a lineage of sounds based around particular equipment sets, which I can quickly summarize starting a few years in:

  • Slab Horror – MS20s, tape.
  • Big Bigot – DX7, SH101 and AKAI sampler.
  • Bad Mood Guy – Mirage and DX7.
  • Rotund – ESQ, TX81z.
  • Cuisine – SY77 and S10 sampler.
  • Gigapus – MKS80, EPS-16 sampler, Oberheim Xpander. <– expensive!

You can see why I have re-collected some antique gear: the AKAI sampler is required for Big Bigot, for example, while Rotund For Success needs a TX81z. You can get away with similar gear for standard patches – the DX7 is well emulated. In some cases the sound is especially troublesome, and worst of all is that MKS80. Back then they were cheap; damn, they are expensive now. Oi.

Here’s a sound I am keen to make – the first part of “Tiny Wounded Bird”.

That there is pure damn MKS80. Or is it? Surely there is something you could use to get just that, but it’s not easy. Let’s look at some of the parameters.

  • Really sharp attack – much sharper than most DCO machines.
  • Cross modulation – the metallic sheen (there’s a toy sketch of this just after the list).
  • Two layers. One of them has a pitch envelope.
  • Detuned oscillators – the MKS80 has a monophonic mode that allows it.
  • Bass boost – one of Roland’s cheat machines with EQ built in.
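
For the curious, that ‘metallic sheen’ is one oscillator bending the pitch of another hard enough that the sidebands go inharmonic. Here’s a toy illustration of the principle in Python – not a patch sheet for the MKS80, and all the numbers are made up:

```python
import numpy as np

SR = 48000                      # sample rate
t = np.arange(SR) / SR          # one second of time

def crossmod(f_carrier, f_mod, depth):
    """One sine oscillator pushing the frequency of another around.
    At low depth it's vibrato; at high depth the sidebands go
    inharmonic and you get that clangorous, metallic sheen."""
    modulator = np.sin(2 * np.pi * f_mod * t)
    inst_freq = f_carrier + depth * modulator      # Hz, sample by sample
    phase = 2 * np.pi * np.cumsum(inst_freq) / SR  # accumulated phase
    return np.sin(phase)

mild = crossmod(220.0, 220.0, depth=30.0)       # still clearly a tone
metallic = crossmod(220.0, 347.0, depth=800.0)  # bell-like, inharmonic
```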

And being an early Roland machine, it comes from the time when you could set the VCA too high. The Jupiter 8 can have this fault, but in the Jupiter 6 it’s fixed (unless, as Graeme Revell told me way back in the 80s, you get it modded to be controllable). It sounds to me like it’s too high here.

So then, which cheaper alternative would you use?

  • JX anything – envelopes too slow.
  • JP8080 – it makes a really good attempt; after all, it’s a Jupiter. Turn up the bass and treble, make the two layers. It just hasn’t quite the analogue overload of the original. The V-Synth is similar but still trying.
  • System 1 – it does the analogue no problem and can manage most of the MKS80 bass. It can’t do two layers though.
  • Boutique JP08 – actually, close. But it’s busy being a Jupiter 8, and so hasn’t quite the same heft.

I’m going to try the Blofeld next. But somehow that’s just… Not Roland.

Damn this nostalgic madness.

Life Hacks for the recently widowed.

I am a widower. You got knocked down, but you get back up. Everybody finds their way again, and these are some ways I am doing it.

Set a deadline. I don’t have any strong culture or religion to work from, so I just figured a year is a good round symbolic duration. For one year I am a widower. After that I am a millionaire playboy philanthropist. From Batman to Bruce Wayne. There will be a little re-birthday just for me.

I find the worst thing is having to re-live my loss to strangers – to the police, to the bank, to immigration officers, to co-workers. Which you will have to do sometimes, but you have plenty of other happier things to talk about as well. It’s not wrong to put your loss to one side for later, it’ll always be there, but you will be stronger. Every couple of months I get sent a newsletter about suicide. I’m sure it helps some people but I hate the damn thing, it goes in the bin.

Common knowledge is to re-arrange all your furniture. Like, if a tornado hit it. Like, you are exhausted by carrying things up and down stairs and so you sleep soundly. Like, you are on a mission that allows no other intrusive thought. Like, whatever life was lived here it’s been remixed by a bad DJ. And in your new environment you can get on with your new life.

Obviously you need to put something where you-know-what happened. If you choose something ridiculous you might find yourself staring forlornly at a novelty sock drawer, and snap out of it.

Don’t just get a hobby. Become theatrically obsessed with something. I want to be the tedious Man Cave guy, not the dead wife guy. Particularly as my Man Cave jokes are a re-occupation of space in my home which would otherwise be sad.

Whenever you get the bad thoughts, go out and walk around. Look at the world. Wear out some shoes and wear out that misdirected energy.

Sleep is key. Sleep is repair. A strategically placed pillow allows you to sleep in the same positions you’ve known for the last 25 years without your knees knocking together. It should not have an anime picture on it; that’s gross.

Despite that, you’re going to age visibly. Sorry. If that worries you then lay off the grog. For me grog is not a problem, but not everyone is so lucky. Careful.

Time to reread your old Roman Stoic philosophers. No matter what they say, everyone you know is terrified that you’re going to weep all over them, and they will avoid you. Fortunately for me I did so much of that when young that I’ve worn it out. You might weep once, but you’re in charge of the tone of your friendships, and if you practice calm and acceptance then they will too. And being chill actually helps your mood.

Caution: That person at a party that reminds you of your partner has no especial insight into your loss. That’s all in your head.

You remember your partner from when you and they were young. But look in the mirror: time’s moved on. I’m pleased to say I have no great desire to shack up with a 20-year-old; may you also be free of such delusion.

Ghosts: There are only the ghosts in your head. If you find it hard to go to the toilet because you might be observed by angels, you could hold a cleansing ceremony. Some people use incense. I have preferred to unleash an endless torrent of belching and other biological sonic place markers of such might that no woman alive or dead would possibly share this space.

It’s OK to talk aloud to yourself. Or a cat. I can’t have a cat right now; I might get a robot one, which is just as oblivious as the real thing. It’s even OK to talk to the dead, because you’re really addressing some of the wiring in your own head that needs revision.

Media: I have a ritual of scanning and sorting photographs of the dead which I’ve done twice now. I think it’s because you look through them all and then you reach an end point where you can stop, and not have to look at them until much later.

If after a year you are healed and looking forward to new adventures, then you have only done what your partner would have hoped for you. You’re a living thing, not a grave marker. And at least for me – I do not believe in an afterlife, nobody is watching – it’s completely up to me to make sure that life is spent well.

Facebook

I am on Facebook.

Why are you on Facebook?

Because I am leaving my job where I have had a steady wage, to go back into the wilderness. I will need to be more approachable and take up opportunities or I will starve, and that means Facebook.

You said Severed Heads were never going to be on Facebook!

No, I said: “Neither myself or ‘Severed Heads’ are available on FACEBOOK. Do not read this blog. This blog is not the fault of my employer.” This will now change; you are allowed to read this blog. Also I will not have an employer. But “Severed Heads” more than likely will never be on Facebook, because all the people involved have moved on (and are on Facebook anyway).

What happens to this blog?

It stays the same. I am not moving to Facebook, I am simply putting my name in the ‘phone book’ so that people can find me. All my bullshit stays here. The Man Cave stays here and I might even get time to fix the rest of it.

Can’t they find you at tomellard.com?

Apparently not. I can’t risk that any more, because of the starving bit.

Does this mean that you have given up on independent web presence?

No, it means that I have another way to get people to come here. I don’t see Facebook as my stage. It’s simply a big phone directory, and being there is like listing your business in it. It’s like having a FidoNet address in 1992.

Why didn’t you do this before?

Because I had a different life path planned, on a tenure track. Most people that have contacted me recently have found me by my workplace. But that didn’t allow the creative activity I need to be happy, so I am changing everything.

 

We have to talk about Virtual Reality.

It’s VR talk time, you’re old enough.

Let’s get this out of the way – 3D television died, Google Glass died, everything is bad, why try anything, let’s just sit around bitching. Great, thanks for the opinion. But that argument is based on a kind of success I’m not looking for. There is zero chance of, or interest in, my becoming “a media tycoon by getting in at ground level”. People who think that way are calculating profit and loss, not making fun things. VR will probably die; so did a heap of things we’ve enjoyed.

But you were against Google Glass and AR! Yes, that’s right. I don’t want to build something that tells me what I am seeing. I want to build entertainment. VR and AR are only related in the most superficial way.

I’m learning how to handle VR by first remaking the 2010 video for Greater Reward.

Until you actually make something it’s all theory. Then the problems come crashing in fast. Let me add my voice to the very sensible advice already out there.

  • Until you see the shot in a VR helmet you have very little idea what you are getting. The distortion moves everything away; your instinct to make things fit the image is wrong. Something that looks OK there ends up about 1cm from the viewer’s face, and that sucks.
  • The inter-pupillary distance really matters. If your camera has pupils 6.5cm apart then everything must be scaled to that. If it’s wrong you end up with things being kilometres in size, and that hurts. There is no zoom – prime lens only. (There’s a back-of-envelope sketch of the scaling after this list.)
  • There are no edges to the frame, so you have to design in all directions. But there is a centre of attention, only about 70° in the middle. Your viewer is seeing a little bit at a time, and you don’t know which bit. So most scenes should be simpler than you first thought; you only place complex things where you want the viewer to look – like the way you place furniture in a room.
  • I tried tilting the camera. Nope. The viewer feels gravity pulling them down and the scene doesn’t, so the scene tilts, not the camera. It just doesn’t work. That and the previous point mean most of your camera skills are dead. If you want the viewer to look up, you have to move something up there to get their attention; the same goes for most directions. You use the same tricks as stage plays: light, sound, action.
  • Editing seems OK so far, but jump cuts are worse than usual. You need to move toward the new position, or pre-empt it. I’m testing fades now.
  • Technically you are making 4K, but the bit the audience is seeing is about 640×480. All textures should be severely anti-aliased – Gaussian, even – and no grids that will moiré with the pixels. Think 1990s computer graphics: everything smooth. (The sketch below includes a texture pre-blur.)
  • Drop the light levels down and keep the lighting comfortable. You can’t use normal filters on a VR image – e.g. blur – so you have to get this right in camera.
  • I’m not seeing too many problems with motion, but I am keeping it slow and steady. Also I am used to the helmet. One day maybe people will all be used to VR and some of these rules will be broken.
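
Here’s the back-of-envelope sketch promised above. The rule-of-thumb numbers are mine, not from any vendor’s documentation, and the texture file is hypothetical:

```python
from PIL import Image, ImageFilter

# Stereo disparity grows with the camera baseline, so the brain reads
# the world as LARGER when the rig's interaxial is SMALLER than a real
# pair of eyes.
HUMAN_IPD_M = 0.065   # ~6.5cm between the viewer's eyes

def perceived_scale(rig_interaxial_m):
    return HUMAN_IPD_M / rig_interaxial_m

print(perceived_scale(0.0065))  # 10.0 -> a 100m building reads as 1km
print(perceived_scale(0.65))    # 0.1  -> the world becomes a miniature

# And the texture note: pre-blur anything with fine detail so it can't
# moiré against the headset's pixels (Pillow's stock Gaussian filter).
tex = Image.open("grid_texture.png")   # hypothetical texture file
tex.filter(ImageFilter.GaussianBlur(radius=2)).save("grid_texture_soft.png")
```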

So is all this pain worth it? It depends on the material. I would generally steer the vast majority of film making away from VR (I wish my students would listen). You cannot perform cinema with VR. If you ever write ‘we see’ or ‘moves to’ in your scriptment, do not use VR. As for Greater Reward, all the scenes are positioned in spheres of some sort, and so the translation becomes possible. But what else will work – I just don’t know yet.

(Nothing here about sound design – a different post for that).

Strange Cameras for Strange Times

Too soon we have become blasé about the distortions of the current flock of optics. We pretend they are just sidesteps from the usual reality. Their peculiar qualities should be celebrated and their perversions articulated, and I am here to do just that.

The Lytro Illum. https://www.lytro.com/

Normally you point a camera and light arrives at the lens from a wild range of angles. That forms a bright but blurry image. As you close the aperture the light is constrained to a smaller range of angles, and the image becomes coherent while the exposure drops. The smaller the aperture, the sharper the focus, and the more the camera has to work to expose the film. Hence the deep focus of Citizen Kane was a technical marvel. Now everyone is obsessed with shallow focus, because big lenses are expensive, and what better way to show you have money.
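
To put rough numbers on that trade-off, here’s a back-of-envelope sketch using two standard formulas – the lens and circle-of-confusion values are illustrative only, nothing to do with any particular camera:

```python
def relative_light(n_stop):
    """Light through the aperture scales with its area, i.e. 1/N^2."""
    return 1.0 / n_stop ** 2

def hyperfocal_m(focal_mm, n_stop, coc_mm=0.03):
    """Standard hyperfocal distance H = f^2/(N*c) + f. Focus there and
    everything from H/2 to infinity is acceptably sharp."""
    return (focal_mm ** 2 / (n_stop * coc_mm) + focal_mm) / 1000.0

# Stopping a 25mm lens down from f/2 to f/11:
print(relative_light(11) / relative_light(2))  # ~0.03: about 5 stops less light
print(hyperfocal_m(25, 2))    # ~10.4m: shallow focus, cheap drama
print(hyperfocal_m(25, 11))   # ~1.9m: nearly everything sharp, Kane-style
```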

The Illum is a light field camera. Light arrives from all angles and hits one or more of hundreds of little ‘buckets’ inside. The computer notes the direction from which the buckets are filled and calculates the angle at which the light arrived. The camera sees both the light and its direction, and from this records a perfectly focused image with depth information.
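
Lytro’s actual pipeline is proprietary, but the textbook way to refocus a light field after the fact is ‘shift-and-add’: slide each sub-aperture view in proportion to its position on the lens and average. A minimal numpy sketch, assuming the RAW has already been decoded into a grid of views (the array layout is my own invention):

```python
import numpy as np
from scipy.ndimage import shift

def refocus(views, alpha):
    """Shift-and-add refocusing over a (U, V, H, W, 3) stack of
    sub-aperture views: (u, v) is the position on the lens, (h, w)
    the pixel. Each view is translated in proportion to its offset
    from the lens centre; light from the chosen focal plane then
    lines up and sums sharply, everything else smears into defocus."""
    U, V, H, W, _ = views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W, 3))
    for u in range(U):
        for v in range(V):
            out += shift(views[u, v],
                         (alpha * (u - cu), alpha * (v - cv), 0),
                         order=1)
    return out / (U * V)

# alpha = 0 reproduces the shot as captured; nudge it either way to
# pull the virtual focal plane nearer or farther.
```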

You can use that depth to set focus after taking the shot, to calculate a 3D image, or to slice the image over the Z plane. Probably more – there’s an SDK available for trying out ideas. But most of us will just animate the focus after the fact and think it very clever. For a while…

Sensible review.

Lytro has set small, reasonable aims for the camera and met them. The Illum is a well-built, well-thought-out device with a defined purpose. But that purpose is not in itself very inspiring for very long.

Pulling the focus back and forward is exciting for about an hour, after which you’re putting the camera in the cupboard next to the C64. Mine came out of a discount bin, still wildly expensive compared to an equivalent DSLR (because of Australian distribution). The Illum is not a game changer, because the technology is more interesting than what you’re encouraged to do with it. So you should think about misuse.

Lytro is now onto surround video capture with an impossibly large and sexy UFO thing that photographs with six degrees of freedom inside a virtual space (but can it photograph itself?). I’m disappointed that they have leaped so far, when just a single-lens 3D video capture would be really tops. The Illum is not able to shoot video – it maxes out at about 3fps. It might be insanely great as a stop-motion camera, but no moving pictures.

The software can output its unique RAW format as a set of TIFFs with the depth as an 8-bit grayscale image. The TIFFs show the scene from a range of angles, so you’re already alert that the depth must be some compromise of all these. It has a ‘cauliflower’ texture, by which I mean it shows a lack of detail evidencing some kind of fractal or wavelet tactic.

Being lossy and 8-bit, you are not going to get a clean slice where an object is magically cut out from the background. Fair enough. Probably the SDK can get a cleaner image from the RAW – but I tend to think that the Illum operates at the extreme edge of its hardware. It has the brain of an advanced mobile phone – impressive, but it has to compromise greatly to get acceptable results.

My intention is to grab a whole variety of still images which I’m then going to mash together on the Z plane with some dirty and distorted depth data. It won’t be clean or realistic. It will hopefully be disturbing. You might have a pig and a car sharing the same 3D space. You might like it.
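
For what it’s worth, the mash itself is only a few lines once you have the TIFF/depth pairs out of the Lytro software. A sketch of the idea – the filenames are hypothetical, and I’m assuming both shots are the same size and smaller depth values mean nearer:

```python
import numpy as np
from PIL import Image

def load(rgb_path, depth_path):
    rgb = np.asarray(Image.open(rgb_path)).astype(np.float32)
    z = np.asarray(Image.open(depth_path)).astype(np.float32)  # 8-bit grey
    return rgb, z

pig, pig_z = load("pig.tif", "pig_depth.tif")   # hypothetical exports
car, car_z = load("car.tif", "car_depth.tif")

# Dirty on purpose: quantise the depth so the slices get crude and jumpy.
pig_z = np.round(pig_z / 32) * 32
car_z = np.round(car_z / 32) * 32

# Z-buffer composite: whichever pixel claims to be nearer wins.
mask = (pig_z < car_z)[..., None]
out = np.where(mask, pig, car)
Image.fromarray(out.astype(np.uint8)).save("pig_car_mash.tif")
```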

The Ricoh Theta S – https://theta360.com/en/about/theta/s.html

2016 is the year when 360° cameras infest every gadget retailer the way sports cameras did a few years ago, and 3D TVs before that. They will eventually die in large numbers. Right now they’re just touching on reasonable performance at a reasonable price, so the average enthusiast may as well have a look. That’s me.

If you’re the sort of person that takes selfies, you’ll love the Theta. Here’s the pyramids… and me! The beach front… and me! My friends and me me me me again. There being no back and front to a spherical photo, you’re always there unless you hide in a garbage bin or wear it as a hat.

Suspicious white object at 6 o’clock

Because it has no viewfinder (what use is a viewfinder in 360°?) you are encouraged to set up a wifi link between it and a mobile phone, where you can preview the effect. It works, but mobiles aren’t really set up to be field monitors – the glare is such that you can’t see what you’re doing. So you set up the camera on a tripod, run away some distance to hide, find some shade, look at the phone, and only then find that the tripod has been knocked over by some passing brat.

When I got back to the camera the lens cover was scratched, but there seems to be no effect on the photos – I guess the cover is outside the focus area. It is not a sports camera, but akin to a toddler: it can take a fall.

Sensible.

The quality of the earlier Thetas was horrible, and at 1080p the video on the improved S is still only a quarter of the needed resolution, because it captures two circular fisheye areas inside that frame. But the photo images are big enough for my purposes, which is to decorate some VR spaces I’m building in Unity with natural light and textures.

Software-wise Ricoh give you a desktop viewer (made in Adobe Air, so banned from my work computer) which connects to their gallery (which only allows very short segments of video). The video can also go up on YouTube, but ignore the instructions given on Ricoh’s site – it needs to first be run through a “Video Metadata Tool” before YouTube will see it as 360°. YouTube has a fixed viewpoint which only covers a small part of the video – so, very nasty quality. I’m going to try pre-processing the video in After Effects to make it big before encoding it.

What use is surround? Only as a means to capture an environment for more detailed images – that is, the same way you would use a stereo microphone pair to capture the sonic environment, followed by a shotgun mic for the detail. We have not previously had a crossed pair for video. The problem then is one of ‘handling noise’ – big distorted hands at the bottom of every shot. It’s as annoying as microphone handling noise.

The Theta is basically a Zoom recorder for light. For most people the Zoom recording is not the end of the creative act, only the beginning, and using the Theta as some kind of documentary device is nowhere near the real reason to own one.

Is cinematography a biological technology? Probably not.

Here’s a thing – if you run Deleuze through a piece of cloth and strain out all the poo, you end up with some reasonable, if fairly simple, ideas. (Of course that ‘does violence to his concepts’, but I can live with that.)

So here’s one. Technology is a mechanism by which we make our actions easier and more potent. A telescope enables us to see further, a microscope allows us to see smaller, and if you’re Dziga Vertov, a camera enables us to see things more clearly. Deleuze is even more enthusiastic – cinema is a technology which entwines with our own innate technology of seeing. Unlike a microscope or a telescope, cinema is a means by which our perceptive apparatus, and therefore our mind processes, can be analyzed and re-synthesized. Cinematic language is an exercise of attention that leads to new modes of thought.

His argument is more visible now than when he was writing. When you see a child fascinated by an iPad, to the extent of being oblivious to nature around them, you may feel the sense of unease that he mentions – it’s hard to say whether the child is accelerated, sensing the game world at immense speed, or senile, unable to sense anything else. It can be both.

Where I disagree is that I do NOT see cinema as privileged in this way. When I recently bought some prescription glasses and walked home, I was appalled to find that everything was now taller (so my feet found it difficult to hit the ground) and that people far away had faces, which was extremely disturbing. That’s not equivalent to the entirety of cinema – it doesn’t have to be. I think any technology does what he describes, to some extent.

Now, a related factoid concerns episodic memory, which is stored as sensory impressions with duration, or at least sequence. If you blast the right part of the temporal lobe with electricity you can bring up auras which include sensory impressions before they have been organised into a narrative (e.g. one stimulation brought up ‘being at a wedding, throwing a bouquet’, another ‘the theme from Star Wars’ (source)). For a short while it was thought that we record everything like a video camera. But then it was noticed that memories are cinematically assembled, to the extent that you may see yourself ‘filmed’ in the third person. There’s further evidence that recall is drawn from a storyline – {me}{the beach}{ball}{hit} = embarrassing – and turned into visuals. You will note the comparison between episodic memory and film, which at first blush leads one to think that narrative film is a technical realisation of episodic memory.

But it’s always good to be sceptical. By which I mean: we’ve only been able to buzz people’s brains since there has been film. If you buzzed someone from several hundred years ago, would they see through the medium of a painted triptych? If you went back far enough, would Julian Jaynes be right, and you’d get the Voice of the Gods explaining what happened in your past? Did perspective arrive in the mind when it arrived in art? Because as much as we may be inspired by our bodies and minds to extend them, we may also train our minds and bodies to emulate mechanical systems. It could be a cycle, and Deleuze was right to worry that we have cut off the variety of existence that we could otherwise have.

The sensory impressions found in the temporal lobes are not organised the way we store files on a hard drive. Related items are spread all over the place, to the extent that if you snip out a relevant bit of flesh the memory is still retrievable, just more blurry. So how this material is ‘scored’, or even manifested, is (I guess) a matter of the installed biological software. That software could have evolved over time – in fact I would be amazed if it hadn’t. The manner in which the episodic impressions are realised, and how their causal chains are linked, cannot have been the same for all of human history.

A quick reality check is dreams, which at best guess involve random excitation of these impressions and attempts by the hippocampus to stitch them into a causal chain. Blind dreamers do not dream visually, but their dreams still involve narrative structure. The camerawork of the sighted dreamer doesn’t occur naturally – it’s more likely that camera work in film has responded to the mental mechanism, which in turn has noticed patterns in camera work and responded in kind. I am pretty sure that yes, Deleuze is right about film now, but I also think that the use of fire was once just as important, perhaps the start of animistic religion. And god knows how these images once served human thought.

Back on the VR rollercoaster

Here you see a CINERAMA screen, from 1952.

CINERAMA was a big hit that year. A standard film of the time would fit on the middle screen only, so you can see what an impression it must have made. I’m interested in the kind of films that bloom with any new technology. There must always be a roller coaster film – it’s likely to be the very first thing shot in any new (cumbersome) format.

Standard VR Roller Coaster film

And close behind are what we now know as “GoPro movies”. Sporting feats.

And aerobatics.

Once these technical demonstrations are done, it’s time for a few experimental works by corporate-funded artists, some stadium sports, and a slowly wilting realisation that a good story works just as well on an old analogue TV – so what are you going to do? George Pal did good business with The Wonderful World of the Brothers Grimm in 1962.

Eventually the name CINERAMA became more important than the process; by the late 60s it was just a 70mm print stretched out. But the stories were better for it.

I would have liked to see a CINERAMA print, but standard Super Panavision was pretty cool. (My folks took me to see it soon after it came out.) This film isn’t right on analogue TV. Here’s a site devoted to widescreen film.

So instead of just going through this cycle again, maybe we can think about it. What benefits can be found in the VR format? Obviously there’s interaction, but I’ll just leave that to one side for now, please.

Consider this – no one can stand behind a spherical camera. There is no behind.

People working in VR are keen to point out that the frame is no longer there, and so the idea of composing an image in a frame is lost. You cannot centre the image, show something to one side – any of that. Edits are OK, but you cannot know which direction the audience is facing. “Cut to:” is rendered useless when they could be looking at the sky for all you know.

The standard of ‘reversals’ – over-the-shoulder conversations – is dead. Next time you watch a film, count how much of it is reversals. Then realise: that’s gone, all gone.

Nausea – it’s about the hairs in your inner ears that detect acceleration. Not movement – a scene shot on a moving train is fine – but the ease-in and ease-out of a standard motivated camera move. When the ears detect that you’re doing something impossible, it’s time to prepare for emergencies, and up comes lunch.
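
You can see the problem in a few lines of arithmetic: a constant-velocity move has zero acceleration once underway, while the classic ease-in/ease-out curve has acceleration at both ends – which the eyes report and the ears deny. A quick numpy illustration:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500)
dt = t[1] - t[0]

linear = t                      # constant-velocity camera move
eased = 3 * t**2 - 2 * t**3     # classic ease-in/ease-out (smoothstep)

# Second derivative = the acceleration the inner ear expects to feel.
acc_linear = np.gradient(np.gradient(linear, dt), dt)
acc_eased = np.gradient(np.gradient(eased, dt), dt)

print(np.abs(acc_linear[5:-5]).max())  # ~0: nothing for the ears to miss
print(np.abs(acc_eased[5:-5]).max())   # ~6: a ramp the ears never feel
```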

VR is about sound.

If this sounds as if VR is more problem than potential – well, if you are a standard film maker, please go elsewhere and let the experts take over. By experts I mean sound designers, who have dealt with 360° for a very long time. We build realistic, spatially coherent environments out of the materials of sound. We turn your head toward the events you then watch. We signal that something is occupying a point, an arc, all of space. We signal that something went by, with a Doppler shift. Our Foley gives substance to things in all directions.

VR demands that the old equation of vision first / sound second be turned on its head. You are not going to be able to navigate the worlds you’re filming without first realising the air, the tone of the room, the placement of all the sound sources. If your set has three walls and relies on a particular orientation to work, you’re in deep shit.

But if you are a sound designer for film, don’t be smug, because the game is going to get considerably harder. Get a VR helmet and learn how it works. Then start to build soundscapes that work in 360°. 5.1 won’t work any more – there is no front/back. Listen to your mixes as somebody would in a room. What size room? Round? Carpeted? You are suddenly required not only to capture the air, but to construct it. I can’t tell you how we are going to do that. I can tell you that if we want to get past the demonstration films, it’s going to be up to you.
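
I said I can’t tell you how, but one established starting point is ambisonics: encode each source as a spherical soundfield and rotate the whole field with the listener’s head. A minimal first-order sketch in Python – FuMa-style channel weighting, and since rotation sign conventions vary between tools, treat it as illustrative:

```python
import numpy as np

def encode_b_format(mono, azimuth, elevation):
    """Place a mono signal at (azimuth, elevation), in radians, as
    first-order B-format channels W, X, Y, Z (FuMa weighting)."""
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return np.stack([w, x, y, z])

def rotate_yaw(b, yaw):
    """A head turn is just a rotation of X and Y; W and Z are untouched.
    This is what lets the mix follow the helmet rather than a screen."""
    w, x, y, z = b
    xr = x * np.cos(yaw) + y * np.sin(yaw)
    yr = -x * np.sin(yaw) + y * np.cos(yaw)
    return np.stack([w, xr, yr, z])

# A click placed hard left; turn the head 90 degrees and it swings
# around to the front.
click = np.zeros(48000); click[0] = 1.0
b = encode_b_format(click, azimuth=np.pi / 2, elevation=0.0)
b_turned = rotate_yaw(b, yaw=np.pi / 2)
```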