In the year 2006, something I wanted to fix

While developing and testing the avs6eyes Linux kernel driver, I needed to record long and short video clips for stability testing. One of those was captured with a camera aimed out my window, recording the everyday activity outside my home. The result was pretty boring, and I thought it could be a fun exercise to spruce it up with the approximate equivalent of multiple virtual cameras doing overlapping time-lapse and long-exposure photography at the same time.

With quite a few tricks, real nice pics

Image of a bus zooming by

Simulated shutter time of about 1.2 seconds.

The long-exposure bit was done by averaging several consecutive frames, simulating keeping the shutter open for an extended time, and time-lapse was simulated by skipping frames in the stream. It worked out pretty well; the fast-paced, ghost-like appearance I was aiming for was, well, fast and ghostly.
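
For the curious, here's a minimal sketch of the idea in Bash with ImageMagick, assuming the clip has already been split into numbered frames; the filenames, window size and step length are all invented for illustration.

    # Simulate long exposure by averaging a window of consecutive frames,
    # and time-lapse by stepping through the stream. A window larger than
    # the step gives the overlapping exposures mentioned above.
    step=10; window=30; out=0
    for ((i = 1; i + window <= 2000; i += step)); do
        frames=$(seq -f "frames/frame%04g.png" "$i" $((i + window - 1)))
        convert $frames -average "$(printf 'out/frame%04d.png' "$out")"
        out=$((out + 1))
    done

Each output frame becomes the mean of 30 input frames, which is exactly what gives moving objects their ghostly smear.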

But by now, you know me! There had to be more before I considered it done!

Inspired by something, quite probably related to the works of Michel Gondry, I wanted to insert multiple versions of myself into the video. Just having a bunch of me would be boring, so I thought up a pretty advanced choreography for myself, taking out the trash and dodging the imaginary vehicles and persons in the video. It amused me to imagine what my neighbors would think I was doing out in the street, walking and jumping from point to point, tossing and handling a trash bag, but they never had to experience that. The weather conditions changed, work got in the way, I moved, and, basically, the taking-out-the-trash part never came to fruition.

The list of times and positions has been lost, but the initial video experiment has not.

I rarely ask people to contribute to my experiments. It's not that I don't think others could be of help, but rather that I don't want to waste other people's time on projects that just might become something, but most likely are destined to become eternal darlings. This time, I asked my friend Martin Persson, good with sounds and related things, if he could make a soundtrack for me. He returned the video with an incomplete, but really nice, example of what he could make of it. I like what that annoyingly squeaky background loop does to the video.

Years go by, stack like bricks

A large portion of the time required for publishing this particular darling went into digging through all the places where my abandoned projects have been archived, but after extensive searching, I found the version with his soundtrack.

I've enhanced the contrast for increased viewing pleasure, and recompressed it from the source material with more modern techniques, but apart from that, the whole thing looks pretty much like it looked in 2006.

Bonus tip: An entire exhibition of neglected darlings: Tappade sugar (in Swedish)

Ever since I first laid eyes on DKBTrace, I've wanted to do something great in the field of raytracing, something larger and grander than teapots and glass reflections. Somewhere between 2002 and 2004, I heard a track from the upcoming Studio Symbios album 'This one goes to eleven'. I thought it discreetly uneventful, and felt I could create an equally discreetly uneventful video for it, as a gift to my colleague.

The video has been under development for the last ten years or so. Time for design, tweaking and test rendering seems to have waned, bringing the project to a discreetly uneventful halt. It has resulted in a spin-off product, HTTPov, a system for distributing POV-Ray rendering jobs across several computers via HTTP, but the video itself has so far failed to materialize.
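
HTTPov's actual protocol is beside the point here, but the general shape of such a distributed client can be sketched in a few lines of Bash; the server URL, endpoints and job format below are all made up for illustration.

    # Hypothetical render client: fetch a job over HTTP, render the
    # assigned slice with POV-Ray, and post the result back.
    server="http://example.com/httpov"           # placeholder URL
    while job=$(curl -fs "$server/getjob"); do
        read -r frame startrow endrow <<< "$job" # assumed "frame start end"
        povray +Iscene.pov +W1280 +H720 +FN +K"$frame" \
               +SR"$startrow" +ER"$endrow" +Oslice.png
        curl -fs -F "frame=$frame" -F "slice=@slice.png" "$server/putjob"
    done

The +SR/+ER switches make POV-Ray render only a horizontal band of the image, which is what makes it possible to split a single frame across many machines.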

It's time to let go, and admit defeat.

I have an unfortunate tendency to want to make things just a little bit prettier than they are, and in this case it manifests itself, among other things, in bright reflections of the sun on the ground. Those require photons, and photons require massive amounts of time in POV-Ray. I'm not including them in the video, but they do look great in my test renderings. I'm letting the 'This one goes to eleven' video go in a heart-wrenchingly unfinished state.
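
To give an idea of what that entails: in POV-Ray 3.6, photon mapping is switched on by a scene fragment along these lines, and it's the photon count that drives the render times through the roof. The count below is invented, and reflective objects additionally need photons blocks of their own; consider this a sketch, not my actual scene.

    # Append a hypothetical photon block to the scene source.
    cat >> scene.pov <<'EOF'
    global_settings {
      photons { count 200000 }  // more photons: sharper glints, longer renders
    }
    EOF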

Image of the Unauthorized Aardvark

'This one goes to eleven' was originally planned to be released under a Studio Symbios sub-project called 'Ambient Aardvark', and as I, at the time, was already publishing a web comic starring an aardvark...

The central tower is there, as are the mysterious clock and the mirror high-rises. The streets beyond the city center are not. Neither are the various parts of the city containing more or less elaborate, but not reflective, buildings. The number 11 bus, destined for Eleven, is missing, as are the motion-blurred biker and the numerous clocks, all counting towards 11:11:11. The starring aardvark is nowhere to be seen, which is a pity, as it is kind of cute.

When I started working on the video, YouTube had a 10-minute limit on uploaded videos. I planned to break it up into two five-and-a-half-minute videos, with a transition from bright and sunny to dark and scary at noon. That's not there, either. I really would have enjoyed seeing that distant building break up into a few thousand bats, though. (The perceptive reader might get a picture of what I mean by wanting to make things just a little bit prettier.) Finishing this project could easily have taken another ten years, considering it took me the better part of two months just to wrap things up.

But all is not lost! I did manage to squeeze in an eleven-storey, eleven-sided tower, a few eleven-ish mirror houses, a carefully timed and executed intro and outro, and a ditto sun path, and I picked up a bit of Blender experience along the way. I learned quite a bit of Python, Bash and PHP, along with ImageMagick and GraphicsMagick, and I was able to help a few people in their need for a distributed raytracing solution. That's not too shabby for a POV-Ray project!

Image of +AM1 antialiasing artifacts (crop-6600.png)

POV-Ray 3.6 has two ways of doing antialiasing, and when objects are one pixel wide or less, they produce noticeably different results. +AM1, shown here, is less forgiving about vertical lines and sliced rendering than +AM2, which still doesn't give a perfect result, but is more consistent between slices, though not between frames. I have a distinct feeling that I'm doing something wrong.
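
Both methods are selected on the command line. As a hedged sketch of how one slice might be rendered with each (resolution, rows and threshold invented):

    # +AM1 selects non-recursive super-sampling, +AM2 adaptive recursive
    # super-sampling; +A sets the colour-difference threshold that triggers
    # extra samples, and +R the sampling depth.
    povray +Iscene.pov +W1280 +H720 +FN +SR1 +ER40 +A0.3 +AM1 +Oslice-am1.png
    povray +Iscene.pov +W1280 +H720 +FN +SR1 +ER40 +A0.3 +AM2 +R3 +Oslice-am2.png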

Finishing off a project of this size in itself generates enough material to warrant a blog post on the subject. It's one thing to create a 125-frame sliced animation that's demanding enough to be usable as a test project in a distributed rendering system, and an entirely different thing to create a 16500-frame animation that looks nice and can still be finished within a month. Postprocessing the 445500 zip archives and image slices produced is time-consuming, to say the least. A good, fast internet connection is a must, as are SSD devices. 128 gigabytes was barely enough space to unzip the lot, let alone encode it as video. Then I discovered that I really should have used the other antialiasing method...
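
The postprocessing pipeline itself isn't complicated, just enormous. A compressed sketch of its shape, with invented filenames and a guessed slice layout:

    # Unpack every result archive, stack each frame's slices vertically,
    # then encode the finished frames. Slice names must sort correctly
    # for -append to stack them in the right order.
    for z in results/*.zip; do unzip -q "$z" -d slices/; done
    for f in $(seq -w 1 16500); do
        convert slices/frame$f-slice*.png -append frames/frame$f.png
    done
    ffmpeg -framerate 25 -i frames/frame%05d.png -c:v libx264 eleven.mkv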

If I were to start this project today, I would think it worth the time and effort to learn a good graphical modeller for POV-Ray, and a competent video editing program. Most of this video was created while riding the bus to and from work, at least on those days when it wasn't too crowded and I wasn't too tired, using Emacs and Gimp for modelling. A modern, graphical editor would have sped things up. On the other hand, my low-end development environment did allow me to use a minuscule computer, an Asus Eee PC 901, and size does matter on the bus. The text overlay shown half a minute into the video took several rides to perfect, experimenting with ImageMagick and GraphicsMagick in combination to create subpixel composition, a blurred drop shadow, and variable transparency. It might not be obvious at first glance, but that overlay doesn't simply fade in and out; it's subtly animated, too.
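
As an illustration of the kind of incantation involved, though not the actual one, a drop-shadowed text overlay can be built with ImageMagick along these lines; the text, font size, offsets and opacities are all invented.

    # Render the text, clone it into a blurred black shadow, merge the two,
    # then composite the result onto a frame at partial opacity.
    convert -background none -fill white -pointsize 48 \
        label:"This one goes to eleven" \
        \( +clone -background black -shadow 60x3+4+4 \) \
        +swap -background none -layers merge +repage overlay.png
    composite -dissolve 40% -gravity south overlay.png frame.png out.png

Varying the dissolve percentage from frame to frame is one way to get the animated transparency.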

This project broke my HTTPov infrastructure in interesting ways, and pointed out scalability issues in areas that aren't a problem when you're not rendering 11 minutes of HD video distributed over several thousand cores.

Last, but certainly not least, I want to extend my sincere thanks to those who lent me those cores of raw raytracing power; I wouldn't have been able to do this without you!

Some statistics

First run, +AM1
Number of clients: 4147
CPU time: 3434:44 hours
Wall time: 19:30 hours

Final run, +AM2
Number of clients: 84
CPU time: 7127:22 hours
Wall time: 226:28 hours

The first run was mainly executed on a sizeable cluster optimized for speed, while the second run was executed on older hardware. Dividing CPU time by wall time gives roughly 176 CPU-hours of work per wall-clock hour in the first run, versus roughly 31 in the second. There's a noticeable performance difference between older and newer hardware, even when accounting for the difference in antialiasing settings.
