It’s 1AM and these thoughts have to go somewhere before I forget.
Item: I am hearing about mechanisms for storage, which is all very good, but what we need is a mechanism for retrieval. We already have people busy storing. Squirrels. Nuts. Buy all the hard drives you want, guys – that’s half the problem. Something stored and not retrieved is lost, so all your museums are only half the story. OK, artistic goal: develop a retrieval mechanism which is curatorial.
“curator.” Online Etymology Dictionary. Douglas Harper, Historian. 14 Jun. 2008.
“1362, from L. curator ‘overseer, guardian,’ from curare. Originally of minors, lunatics, etc.; meaning ‘officer in charge of a museum, library, etc.’ is from 1661.”
(Originally of minors and lunatics – fantastic. Robocurator, the Iron Man of the Nut House.)
Greater minds than mine have prepared the ground. MPEG-7 is a schema for describing the content of multimedia. The recipe was ready in 2001. Some early efforts – IBM in 2002. Dead. Some Japanese efforts up to 2005, also dead. A Java version that can produce ‘low-level descriptors’ for audio is available at SourceForge, which seems to be an effort from the University of Wollongong electrical engineering Whisper team. Low-level descriptors are not very useful – it’s like storing the waveform in an Excel spreadsheet. That strand of MPEG-7 investigation seems to have hit a wall. Note that the UOW team are now onto MPEG-21 (DRM for the ISO), which will be more lucrative. Not of interest to me.
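For the record, a low-level audio descriptor in MPEG-7 has roughly this shape – a hand-written sketch from the audio part of the standard (ISO/IEC 15938-4), with the sample values invented. You can see why it’s waveform-in-a-spreadsheet territory:

```xml
<!-- Sketch of an MPEG-7 low-level audio descriptor; values invented.
     hopSize is a mediaDuration: here, one sample every 10/1000 s. -->
<AudioDescriptor xsi:type="AudioPowerType">
  <SeriesOfScalar hopSize="PT10N1000F">
    <Raw>0.021 0.019 0.034 0.027</Raw>
  </SeriesOfScalar>
</AudioDescriptor>
```

Numbers, not meaning. No curator wants a power envelope.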
Good work over at JOANNEUM RESEARCH in Austria (their capitals, not mine). Dr. Helmut Neuschmied seems to be the big man on campus for media tagging – he’s currently in a team setting up automated search for religious symbols in motion pictures, using what I guess is partly his own “Semantic Video Annotation Suite”. The tool has scanty documentation, but after about an hour of fluffing around I managed to have it process The Great Curry House Collapse video (henceforth the GCHC) and recognise the Ch9 reporter in a few scenes. Note to self: the GCHC will be my official test video for this whole project. So, Dr. Neuschmied and JR company willing, we have a way to generate MPEG-7.
Also, BOEMIE (Bootstrapping Ontology Evolution with Multimedia Information Extraction) have just hatched a prototype MPEG-7 editor. It doesn’t look as advanced as the SVAS, but it’s more likely I can get my greasy mitts on it.
- Collect a test suite of video (including of course the GCHC video)
- Practice generating MPEG-7 XML from these. What information do we need?
- Translate the XML into a friendly database format (I vote FileMaker)
- Draft an interface that allows a useful overview of the data, so it’s easy to find video scenes
- Part one done – offer this to video museum guys
- Develop a compositional mechanism for rearranging the data
- So far so easy… here’s the hard bit…
- How to have the visual data rearrange to match the composition? That is, how to re-edit multiple videos into new output according to an XML file?
- hunch – it’s a VJ thing – generate a playlist from the tags
- another hunch – can we convert MPEG-7 XML into a Final Cut EDL? There should be a way
- Part two done – this becomes the basis of the umami project
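The “translate the XML into a friendly database format” step might look like this. A sketch only: it assumes a pared-down MPEG-7 document (real SVAS output will carry xsi:type attributes and much more); the element names – VideoSegment, MediaTimePoint, MediaDuration, FreeTextAnnotation – are from the MPEG-7 description schemes, but the sample content is invented:

```python
import xml.etree.ElementTree as ET

NS = "urn:mpeg:mpeg7:schema:2001"

# Hand-written, simplified MPEG-7 sample; values invented.
SAMPLE = """<?xml version="1.0"?>
<Mpeg7 xmlns="urn:mpeg:mpeg7:schema:2001">
  <Description>
    <MultimediaContent>
      <Video>
        <TemporalDecomposition>
          <VideoSegment id="shot1">
            <TextAnnotation>
              <FreeTextAnnotation>Ch9 reporter, exterior</FreeTextAnnotation>
            </TextAnnotation>
            <MediaTime>
              <MediaTimePoint>T00:00:05:0F25</MediaTimePoint>
              <MediaDuration>PT12S</MediaDuration>
            </MediaTime>
          </VideoSegment>
        </TemporalDecomposition>
      </Video>
    </MultimediaContent>
  </Description>
</Mpeg7>
"""

def segments_to_rows(xml_text):
    """Flatten MPEG-7 VideoSegments into (id, start, duration, notes) rows,
    ready to dump into any flat database table."""
    root = ET.fromstring(xml_text)
    rows = []
    for seg in root.iter(f"{{{NS}}}VideoSegment"):
        start = seg.findtext(f".//{{{NS}}}MediaTimePoint", default="")
        dur = seg.findtext(f".//{{{NS}}}MediaDuration", default="")
        notes = [t.text.strip() for t in seg.iter(f"{{{NS}}}FreeTextAnnotation") if t.text]
        rows.append((seg.get("id"), start, dur, "; ".join(notes)))
    return rows
```

One row per segment: once it’s that flat, any database (FileMaker included) can sort and search it.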
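And a sketch of the Final Cut EDL hunch: lay tagged segments end to end as a CMX 3600-style cut list, the plain-text format Final Cut Pro can import. The clip tuples, reel name, and 25 fps timecode are all assumptions for illustration – in the real thing the clips would come out of the tag database:

```python
def tc_to_frames(tc, fps=25):
    """'HH:MM:SS:FF' timecode to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps=25):
    """Frame count back to 'HH:MM:SS:FF'."""
    f = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

def make_edl(title, clips, fps=25):
    """clips: list of (reel, src_in, src_out) timecodes.
    Emits straight cuts, laid end to end on the record side."""
    lines = [f"TITLE: {title}", "FCM: NON-DROP FRAME", ""]
    rec = 0
    for n, (reel, src_in, src_out) in enumerate(clips, 1):
        dur = tc_to_frames(src_out, fps) - tc_to_frames(src_in, fps)
        lines.append(
            f"{n:03d}  {reel:<8} V     C        "
            f"{src_in} {src_out} {frames_to_tc(rec, fps)} {frames_to_tc(rec + dur, fps)}"
        )
        rec += dur
    return "\n".join(lines)
```

So the pipeline hunch is: query the tags, get a clip list, emit an EDL, hand it to the editor. If that holds, the re-edit problem reduces to a database query.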