This one goes to eleven
Ever since I first laid eyes on DKBTrace, I've wanted to do something great in the field of raytracing, something larger and grander than teapots and glass reflections. Somewhere between 2002 and 2004, I heard a track from Studio Symbios' upcoming album, 'This one goes to eleven'. I thought it discreetly uneventful, and felt that I could create an equally discreetly uneventful video for it, as a gift to my colleague.
The video has been under development for the last ten years or so. Time for design, tweaking and test rendering seems to have waned, bringing the project to a discreetly uneventful halt. It has resulted in a spin-off product, HTTPov, a system for distributing POV-Ray rendering jobs across several computers via HTTP, but the video itself has so far failed to materialize.
It's time to let go, and admit defeat.
I have an unfortunate tendency to want to make things just a little bit prettier than they are, and in this case, it manifests itself, among other things, in bright reflections of the sun on the ground. That requires photons, and photons require massive amounts of time in POV-Ray. I'm not including them in the video, but they do look great in my test renderings. I'm letting the 'This one goes to eleven' video go in a heart-wrenchingly unfinished state.
'This one goes to eleven' was originally planned for release in a Studio Symbios sub-project called 'Ambient Aardvark', and as I was, at the time, already publishing a web comic starring an aardvark...
The central tower is there, the mysterious clock, and the mirror high-rises. The streets beyond the city center are not there. Neither are the various parts of the city containing more or less elaborate, but not reflective, buildings. The number 11 bus, destined for Eleven, is missing, as are the motion-blurred biker and the numerous clocks, all counting towards 11:11:11. The starring aardvark is nowhere to be seen, which is a pity, as it is kind of cute.
When I started working on the video, YouTube had a time limit of 10 minutes for uploaded videos. I planned to break it up into two five-and-a-half-minute videos, with a transition from bright and sunny to dark and scary at noon. That's not there, either. I really would have enjoyed seeing that distant building break up into a few thousand bats, though. (The perceptive reader might get a picture of what I mean by saying that I want to make things just a little bit prettier.) Finishing this project could easily have taken another ten years, considering it took me the better part of two months just to finish it off.
But all is not lost! I did manage to squeeze in an eleven-storey, eleven-sided tower, a few eleven-ish mirror houses, a carefully timed and executed intro and outro, a ditto sun path, and a bit of Blender experience. I learned quite a bit of Python, Bash and PHP, along with ImageMagick and GraphicsMagick, and I was able to help a few people with their need for distributed raytracing. That's not too shabby for a POV-Ray project!
POV-Ray 3.6 has two anti-aliasing methods, and when objects are a pixel wide or less, they produce different results. AM1, shown here, is less forgiving about vertical lines and sliced rendering than AM2, which still doesn't give a perfect result, but is more consistent between slices, though not between frames. I have a distinct feeling that I'm doing something wrong.
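For reference, the switches involved look something like this; the resolution, threshold and row numbers below are illustrative examples, not the settings used for the actual video:

```shell
# Method 1: non-recursive supersampling, triggered by colour
# differences between neighbouring pixels.
povray +Iscene.pov +W1280 +H720 +A0.3 +AM1 +R3

# Method 2: adaptive, recursive supersampling. Disabling jitter (-J)
# helps keep results consistent from frame to frame.
povray +Iscene.pov +W1280 +H720 +A0.3 +AM2 +R3 -J

# A slice, as handed out by a distributed setup, renders only a band
# of rows, here rows 1 through 64:
povray +Iscene.pov +W1280 +H720 +A0.3 +AM2 +R3 -J +SR1 +ER64
```

With AM2, samples taken at slice boundaries depend on the rows actually rendered, which is one plausible source of the slice-to-slice differences.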
Finishing off a project of this size in itself generates enough material to warrant a blog post on the subject. It's one thing to create a 125-frame, sliced animation that's demanding enough to be usable as a test project in a distributed rendering system, and an entirely different thing to create a 16500-frame animation that looks nice and can still be finished within a month. Postprocessing the 445500 zip archives and image slices produced is time-consuming, to say the least. A good, fast internet connection is a must, as are SSD devices. 128 gigabytes was barely enough space to unzip the lot, let alone encode it as video. Then I discovered that I really should have used the other antialiasing method...
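The postprocessing boils down to three steps: unpack, stitch, encode. A minimal sketch, assuming one PNG slice per archive and hypothetical file names (the real HTTPov output naming may well differ):

```shell
#!/bin/sh
# Hypothetical layout: each of the 445500 zip archives holds one image
# slice, named like frame_00042_slice_07.png. These names are
# assumptions for illustration, not the actual HTTPov format.

frame_name() {
    # Zero-padded frame filename, e.g. frame_00042.png
    printf 'frame_%05d.png' "$1"
}

if [ -d slices ]; then
    # 1. Unpack everything (one PNG slice per archive).
    for z in slices/*.zip; do
        unzip -q -o "$z" -d unpacked/
    done

    # 2. Stitch each frame's slices top-to-bottom with ImageMagick;
    #    the glob sorts the zero-padded slice numbers correctly.
    mkdir -p frames
    for n in $(seq 0 16499); do
        convert "unpacked/frame_$(printf '%05d' "$n")"_slice_*.png \
                -append "frames/$(frame_name "$n")"
    done

    # 3. Encode the assembled frames as video.
    ffmpeg -framerate 25 -i frames/frame_%05d.png -c:v libx264 eleven.mp4
fi
```

Unzipping everything before stitching is what eats the disk space; interleaving steps 1 and 2, deleting slices as each frame is assembled, would keep the footprint closer to one frame's worth of slices.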
If I were to start this project today, I would think it worth the time and effort to learn a good graphical modeller for POV-Ray, and a competent video editing program. Most of this video was created while riding the bus to and from work, at least on the days it wasn't too crowded and I wasn't too tired, using Emacs and Gimp for modeling. A modern, graphical editor would have sped things up. On the other hand, my low-end development environment did allow me to use a minuscule computer, an Asus Eee PC 901, and size does matter on the bus. The text overlay shown half a minute into the video took several rides to perfect, experimenting with ImageMagick and GraphicsMagick in combination to create subpixel composition, a blurred drop shadow and variable transparency. It might not be obvious at first glance, but that overlay doesn't simply fade in and out; it's subtly animated, too.
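An overlay like that can be approximated in stock ImageMagick; this is a rough reconstruction under stated assumptions (the text, geometry, fade timings and file names are all guesses, not the commands actually used):

```shell
#!/bin/sh
# Sketch of a text overlay with blurred drop shadow and animated
# transparency. All numbers here are illustrative assumptions.

fade_opacity() {
    # Opacity (0-100) for frame $1 of a hypothetical 200-frame overlay:
    # ramp in over 25 frames, hold at full, ramp out over the last 25.
    awk -v f="$1" 'BEGIN {
        if (f < 25)        printf "%d", f * 4
        else if (f < 175)  printf "%d", 100
        else if (f < 200)  printf "%d", (200 - f) * 4
        else               printf "%d", 0
    }'
}

if command -v convert >/dev/null && [ -f frame.png ]; then
    # Render the text on a transparent background...
    convert -background none -fill white -pointsize 48 \
            label:'This one goes to eleven' text.png
    # ...put a blurred drop shadow behind it...
    convert text.png \( +clone -background black -shadow 60x4+3+3 \) \
            +swap -background none -layers merge +repage overlay.png
    # ...and dissolve it onto a frame at that frame's opacity.
    composite -dissolve "$(fade_opacity 10)" -gravity south \
              overlay.png frame.png out.png
fi
```

Running the composite step once per frame, with the opacity driven by the frame number, gives the fade; nudging the overlay's position or geometry per frame as well would give the subtle animation described above.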
This project broke my HTTPov infrastructure in interesting ways, and pointed out scalability issues in areas that aren't a problem until you're rendering 11 minutes of HD video, distributed over several thousand cores.
Last, but certainly not least, I want to extend my sincere thanks to those who lent me those cores of raw raytracing power; I wouldn't have been able to do this without you!
First run, +AM1
  Number of clients: 4147
  CPU time: 3434:44 hours
  Wall time: 19:30 hours

Final run, +AM2
  Number of clients: 84
  CPU time: 7127:22 hours
  Wall time: 226:28 hours
The first run was mainly executed on a sizeable cluster, optimized for speed, while the second run was executed on older hardware. There's a noticeable performance difference between older and newer hardware, even when accounting for the difference in antialiasing settings.
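To put a rough number on that, the figures above work out to a per-frame CPU cost, assuming all 16500 frames cost about the same:

```shell
#!/bin/sh
# Back-of-envelope: CPU-minutes per frame for each run, computed from
# the CPU times listed above (hours:minutes) over 16500 frames.
per_frame() {
    # $1 = CPU hours, $2 = remaining minutes
    awk -v h="$1" -v m="$2" 'BEGIN { printf "%.1f", (h * 60 + m) / 16500 }'
}

printf 'First run, +AM1: %s CPU-minutes per frame\n' "$(per_frame 3434 44)"
printf 'Final run, +AM2: %s CPU-minutes per frame\n' "$(per_frame 7127 22)"
```

That comes out to roughly twice the CPU time per frame for the final run; since both the hardware and the antialiasing method differ between the runs, that factor mixes the two effects.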