In the year 2006, something I wanted to fix

While developing and testing the avs6eyes Linux kernel driver, I needed to record long and short video clips for stability testing. One of those was captured by a camera aimed out of my window, recording the everyday activity outside my home. The result was pretty boring, and I thought it could be a fun exercise to spruce it up with the approximate equivalent of multiple virtual cameras doing overlapping time-lapse and long exposure photography at the same time.

With quite a few tricks, real nice pics

Image of a bus zooming by

Simulated shutter time of about 1.2 seconds.

The long exposure bit was done by averaging several consecutive frames, simulating keeping the shutter open for an extended amount of time, and time-lapse was simulated by skipping frames in the stream. It worked out pretty well; the fast-paced, ghost-like appearance I was aiming for was, well, fast and ghostly.
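
For the curious, the general idea can be sketched in a few lines of Python. This is not how the 2006 version was made, and the file names, frame counts and codec below are made up for the example.

    # Sketch: simulated long exposure (frame averaging) plus time-lapse (frame skipping).
    import cv2
    import numpy as np

    AVERAGE = 30   # frames blended together, simulating a longer shutter time
    SKIP = 5       # keep every fifth blended frame, simulating time-lapse

    reader = cv2.VideoCapture("window.avi")   # hypothetical source clip
    writer = None
    window, produced = [], 0

    while True:
        ok, frame = reader.read()
        if not ok:
            break
        window.append(frame.astype(np.float32))
        if len(window) < AVERAGE:
            continue
        # Average the last AVERAGE frames: anything moving smears into a ghost.
        blended = (sum(window) / AVERAGE).astype(np.uint8)
        window.pop(0)
        produced += 1
        if produced % SKIP:                    # drop frames for the time-lapse effect
            continue
        if writer is None:
            h, w = blended.shape[:2]
            writer = cv2.VideoWriter("ghosts.avi",
                                     cv2.VideoWriter_fourcc(*"MJPG"), 25, (w, h))
        writer.write(blended)

    reader.release()
    if writer is not None:
        writer.release()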

But by now, you know me! There had to be more before I considered it done!

Inspired by something, quite probably related to the works of Michel Gondry, I wanted to insert multiple versions of myself in the video. Just having a bunch of me would be boring, so I thought up a pretty advanced choreography for myself, taking out the trash and dodging the imaginary vehicles and persons in the video. It amused me to imagine what my neighbors would think I was doing out in the street, walking and jumping from point to point, tossing and handling a trash bag, but they never had to experience that. The weather conditions changed, work got in the way, I moved, and, basically, the taking-out-the-trash part never did come to fruition.

The list of times and positions has been lost, but the initial video experiment has not.

I rarely ask people to contribute to my experiments. It's not that I don't think others could be of help, but rather that I don't want to waste other people's time on projects that just might become something, but most likely are destined to become eternal darlings. This time, I asked my friend Martin Persson, good with sounds and related things, if he could make a soundtrack for me. He returned the video with an incomplete, but really nice, example of what he could make of it. I like what that annoyingly squeaky background loop does to the video.

Years go by, stack like bricks

A large portion of the time required for publishing this particular darling went into digging through all the places where my abandoned projects have been archived, but after extensive searching, I found the version with his soundtrack.

I've enhanced the contrast for increased viewing pleasure, and recompressed it from the source material with more modern techniques, but apart from that, the whole thing looks pretty much like it looked in 2006.

Bonus tip: An entire exhibition of neglected darlings: Tappade sugar (Swedish, roughly "lost the urge")

Ever since I first laid eyes on DKBTrace, I've wanted to do something great in the field of raytracing, something larger and grander than teapots and glass reflections. Somewhere between 2002 and 2004, I heard a track from the upcoming Studio Symbios album, 'This one goes to eleven'. I thought it discreetly uneventful, and felt that I could create an equally discreetly uneventful video for it, as a gift to my colleague.

The video has been under development for the last ten years or so. Time for design, tweaking and test rendering seems to have waned, bringing the project to a discreetly uneventful halt. It has resulted in a spin-off product, HTTPov, a system for distributing POV-Ray rendering jobs across several computers via HTTP, but the video itself has so far failed to materialize.

It's time to let go, and admit defeat.

I have an unfortunate tendency to want to make things just a little bit prettier than they are, and in this case it manifests itself, among other things, in bright reflections of the sun on the ground. That requires photons, and photons require massive amounts of time in POV-Ray. I'm not including them in the video, but they do look great in my test renderings. I'm letting the 'This one goes to eleven' video go in a heart-wrenchingly unfinished state.

Image of the Unauthorized Aardvark

'This one goes to eleven' was originally planned to be released in a Studio Symbios sub-project called 'Ambient Aardvark', and as I, at the time, already was publishing a web comic starring an aardvark...

The central tower is there, the mysterious clock, and the mirror high-rises. The streets beyond the city center are not there. Neither are the various parts of the city containing more or less elaborate, but not reflective, buildings. The number 11 bus, destined for Eleven, is missing, as is the motion-blurred biker and the numerous clocks, all counting towards 11:11:11. The starring aardvark is nowhere to be seen, which is a pity, as it is kind of cute.

When I started working on the video, YouTube had a time limit of 10 minutes for uploaded videos. I planned to break it up into two five-and-a-half-minute videos, with a transition from bright and sunny to dark and scary at noon. That's not there, either. I really would have enjoyed seeing that distant building break up into a few thousand bats, though. (The perceptive reader might get a picture of what I mean by saying that I want to make things just a little bit prettier.) Finishing this project could easily have taken another ten years, considering it took me the better part of two months just to wrap it up.

But all is not lost! I did manage to squeeze in an eleven-storey, eleven-sided tower, a few eleven-ish mirror houses, a carefully timed and executed intro and outro, ditto sun path, and a bit of Blender experience. I also learned quite a bit of Python, Bash and PHP, along with ImageMagick and GraphicsMagick, and was able to help a few people who needed a solution for distributed raytracing. That's not too shabby for a POV-Ray project!

crop-6600.png

POV-Ray 3.6 has two ways of doing anti-aliasing. When objects are one pixel wide or less, the two methods produce different results. AM1, shown here, is less forgiving about vertical lines and sliced rendering than AM2, which still doesn't give a perfect result, but is more consistent between slices, though not between frames. I have a distinct feeling that I'm doing something wrong.

Finishing off a project of this size does in itself generate enough material to warrant a blog post on the subject. It's one thing to create a 125-frame, sliced animation that's demanding enough to be usable as a test project in a distributed rendering system, and an entirely different thing to create a 16500-frame animation that looks nice and can still be finished within a month. Postprocessing the 445500 zip archives and image slices produced is time consuming, to say the least. A good, fast internet connection is a must, as are SSD devices. 128 gigabytes was barely enough space to unzip the lot, let alone encode it as video. Then I discovered that I really should have used the other antialiasing method...
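
To give a feel for that postprocessing step, here is a rough sketch of the stitching in Python. The directory layout, file names, slice count and frame size are assumptions made for the example, not HTTPov's actual output format.

    # Sketch: unpack each frame's zipped image slices and paste them back into one frame.
    import zipfile
    from pathlib import Path
    from PIL import Image

    SLICES = 27                 # slices per frame (assumed; 16500 * 27 = 445500 archives)
    WIDTH, HEIGHT = 1280, 720   # assumed frame size

    for frame in range(16500):
        canvas = Image.new("RGB", (WIDTH, HEIGHT))
        y = 0
        for s in range(SLICES):
            archive = Path(f"slices/frame{frame:05d}_slice{s:02d}.zip")
            with zipfile.ZipFile(archive) as z:
                # Assume each archive contains exactly one PNG slice.
                name = z.namelist()[0]
                with z.open(name) as f:
                    slice_img = Image.open(f).convert("RGB")
            canvas.paste(slice_img, (0, y))
            y += slice_img.height
        canvas.save(f"frames/frame{frame:05d}.png")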

If I were to start this project today, I would think it worth the time and effort to learn a good graphical modeller for POV-Ray, and a competent video editing program. Most of this video was created while riding the bus to and from work, at least on the days when it wasn't too crowded and I wasn't too tired, using Emacs and Gimp for modeling. A modern, graphical editor would have sped things up. On the other hand, my low-end development environment did allow me to use a minuscule computer, an Asus Eee PC 901, and size does matter on the bus. The text overlay shown half a minute into the video took several rides to perfect, experimenting with ImageMagick and GraphicsMagick in combination to create subpixel composition, a blurred drop shadow and variable transparency. It might not be obvious at first glance, but that overlay doesn't simply fade in and out; it's subtly animated, too.
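
The original overlay was built by combining ImageMagick and GraphicsMagick on the command line; the Pillow sketch below only illustrates the general recipe (text layer, blurred copy as drop shadow, variable opacity), and the positions, blur radius and opacity are made up.

    # Sketch: text overlay with a blurred drop shadow, composited at variable opacity.
    from PIL import Image, ImageDraw, ImageFilter, ImageFont

    def make_overlay(size, text, alpha):
        """Build a text overlay with a blurred drop shadow at the given opacity (0-255)."""
        font = ImageFont.load_default()          # stand-in for the real typeface
        text_layer = Image.new("RGBA", size, (0, 0, 0, 0))
        ImageDraw.Draw(text_layer).text((40, 40), text, font=font, fill=(255, 255, 255, 255))

        shadow = Image.new("RGBA", size, (0, 0, 0, 0))
        ImageDraw.Draw(shadow).text((44, 44), text, font=font, fill=(0, 0, 0, 255))
        shadow = shadow.filter(ImageFilter.GaussianBlur(3))

        overlay = Image.alpha_composite(shadow, text_layer)
        # Scale the whole overlay's alpha; animating this value gives the fade.
        overlay.putalpha(overlay.getchannel("A").point(lambda a: a * alpha // 255))
        return overlay

    frame = Image.open("frame-0900.png").convert("RGBA")    # hypothetical frame
    frame = Image.alpha_composite(frame, make_overlay(frame.size, "This one goes to eleven", 180))
    frame.convert("RGB").save("frame-0900-overlay.png")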

This project broke my HTTPov infrastructure in interesting ways, and pointed out scalability issues in areas that are no problem at all until you're rendering 11 minutes of HD video distributed over several thousand cores.

Last, but certainly not least, I want to extend my sincere thanks to those who lent me those cores of raw raytracing power; I wouldn't have been able to do this without you!

Some statistics

First run, +AM1
Number of clients: 4147
CPU time: 3434:44 hours
Wall time: 19:30 hours

Final run, +AM2
Number of clients: 84
CPU time: 7127:22 hours
Wall time: 226:28 hours

The first run was mainly executed on a sizeable cluster, optimized for speed, while the second run was executed on older hardware. There's a noticeable performance difference between older and newer hardware, even when accounting for the difference in antialiasing settings.

Nature is vast, and presents a diverse array of interesting phenomena. Lightning is one of them, and in today's post, I'm going to discuss a few ways a lightning locator network could be implemented, and how it should not be.

Somewhere before 2006, when I began researching how to detect and position lightning, I stumbled upon blitzortung.org, which, to my understanding at the time, collected its data mostly from primitive lightning detectors that guessed and approximated a great deal.

[Reality check: I don’t seem to have a clue about what I’m saying here. The oldest Blitzortung pages I can find are from 2007, and they say directional detectors were used. Somewhere between 2007 and 2009, a switch to time of arrival detectors was made. I don’t know what was used before 2007, but the network itself has existed since at least 2003.

One mail to Blitzortung later, I know:
Between 2005 and 2008, the network consisted of up to 100 directional Boltek detectors. The switch to time of arrival detectors was made in 2008.

Boy, this is one severely neglected darling!

The Blitzortung people seem to be a nice bunch, by the way. If you have questions, they are answered within a short time. The answers are not always to the actual questions you asked, but hey, such things happen! And I did get some answers to questions that I didn’t know I wanted to ask, that’s good service!]

Portable detectors



Figure A: Personal, portable detector

When the noise reaches the detector, an approximate distance is calculated.

These detectors, which were not used in the network, are mostly sold as personal lightning warning devices. They detect nearby lightning strikes and show an approximate distance. The distance is calculated from the strength of the received noise, compared to a statistical average level. The received noise is stronger if it was produced by a large discharge, and weaker if the discharge was small. That's one level of uncertainty, and then you have to keep in mind that rain, trees, walls and buildings between you and the discharge will affect the strength of the signal. One could compare the approximated distances from several such detectors and kind of try to triangulate a strike location based on that, but the inherent inaccuracies would make the result pretty worthless. Personal, portable detectors are, at best, toys and party tricks.
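
As a toy illustration of why strength-based ranging is so imprecise, assume that the received field strength falls off roughly with distance and that every strike has a "typical" source amplitude; neither assumption holds in practice.

    # Toy model: distance guessed from signal strength alone, the way a personal
    # detector would. The reference amplitude is an arbitrary assumption.
    TYPICAL_SOURCE = 1.0   # assumed strength of a "typical" strike at 1 km

    def guessed_distance_km(received_strength):
        """Estimate distance assuming strength falls off as 1/distance."""
        return TYPICAL_SOURCE / received_strength

    # The same reading can come from a weak strike nearby or a strong strike far away:
    print(guessed_distance_km(0.1))   # ~10 km, but only if the strike really was "typical"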

"How could the precision be improved?", I thought to myself.

Two ways:

Either triangulation based on time of arrival, or triangulation based on direction.

The first method requires exact position and extremely exact timing, and was, at the time, expensive.

The second method requires exact position and bearing, but much less exact timing. One major drawback is that resolution decreases considerably with distance.

Lightning is fast. Like, really fast, but not nearly as fast as light. Light is really, really fast, and the electromagnetic noise created by lightning travels at the speed of light.

Light travels at a speed of 299,792,458 meters per second, give or take a few decimals. That's almost three hundred million meters per second, or just over a billion km/h, which translates to about 671 million mph. Not even the ThrustSSC can keep up with that.

Going to the other end of the scale, the speed of light is 299,792.458 meters per millisecond, or 299.792458 meters per microsecond. If you sample the lightning detector one million times per second, you get a resolution of about 300 meters, and it would probably make little sense to increase the resolution beyond that, bearing the sheer size of lightning bolts in mind.
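
The arithmetic behind those figures fits in a few lines:

    # Distance light travels between two consecutive samples at a given sampling rate.
    C = 299_792_458            # speed of light, meters per second

    def resolution_m(samples_per_second):
        return C / samples_per_second

    print(resolution_m(1_000_000))   # ~300 m at 1 MHz
    print(resolution_m(48_000))      # ~6246 m at a 48 kHz sound card rate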

Synchronizing clocks to a precision of less than one microsecond is not the easiest thing to do. Using the DCF77 signal with low cost parts, you get a precision of about two tenths of a second. That’s the time it takes light to travel 60,000 kilometers. Better hardware could get a precision of 4 to 44 microseconds. That’s better, but a resolution of 1.2 to just above 13 kilometers is not practical.

NTP has a practical precision of "tenths of milliseconds", which in this case means it's not nearly enough.

GPS would provide a usable time resolution, but the price of a GPS module in 2006, let alone at least three of them, to form a usable network, was not within my experiment budget.

Hence, I decided not to try a time of arrival approach, but rather directional triangulation.

Direction finding



Figure B: Direction finding detector

When the noise reaches each detector, an approximate bearing is calculated. The location of the discharge is where the lines cross each other.

This is where I'm going to become somewhat theoretical. I did assemble some experimental hardware to evaluate my theories, but that's as far as I got. Direction finding has a few drawbacks, but also a few advantages. The resolution, that is, how accurately the direction to the lightning strike can be calculated, decreases with the distance to the strike. My initial guess was that I might be able to get eight or nine bits of direction out of a setup that involved two loop antennas at right angles, either the wound-wire type or ferrite rod antennas, suitable for about 300 kHz.

When lightning strikes, electromagnetic noise is produced. This noise seems to be strongest around 300 kHz for cloud-to-ground strikes, while more modern detectors monitor a larger span of frequencies in order to increase accuracy and detect cloud-to-cloud strikes.

With two directional antennas at right angles, both antennas receive this noise, but in different amounts; how much ends up in each antenna depends on the direction the noise arrived from. I was going to connect these antennas to simple tuners and amplifiers, and then into the line input of a computer, sampling a stereo signal at 48 kHz. When the received static peaked past a certain threshold, the computer would compare the amplitude of the stereo channels and compute a direction from that. It is my understanding that this setup should be able to calculate a line through the detector, giving two possible lightning directions, 180 degrees from each other. This is why you need a network of detectors; one detector cannot distinguish between the two directions on its own. It could guess, based on polarity and probability, but never be certain. It takes at least three detectors to be able to calculate every possible strike location.
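
Under the assumptions above, with the two antennas feeding the left and right channels of a sound card, the direction estimate itself is only a few lines; the threshold here is an arbitrary placeholder.

    # Sketch: bearing from the relative amplitude of two orthogonal loop antennas,
    # sampled as the left and right channels of a stereo signal.
    import numpy as np

    def bearing_from_stereo(left, right, threshold=0.2):
        """Return a bearing in degrees in [0, 180), ambiguous by 180, or None."""
        left, right = np.asarray(left, float), np.asarray(right, float)
        envelope = np.abs(left) + np.abs(right)
        peak = int(np.argmax(envelope))
        if envelope[peak] < threshold:
            return None                        # no static loud enough to be a strike
        # The relative amplitude (and sign) of the two channels encodes the angle of
        # arrival; the unknown polarity of the strike leaves a 180 degree ambiguity
        # that a single detector cannot resolve on its own.
        angle = np.degrees(np.arctan2(right[peak], left[peak]))
        return angle % 180.0

    # Usage: bearing_from_stereo(left_samples, right_samples) on a block of samples.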

These antennas have to be precisely aligned. My theory, though, was that if you have three stations with exact alignment, new detectors could be added in a setup mode that did not use the new detector's data for strike location, but rather for figuring out which way the detector is pointing, by comparing the known detectors' data with the new one's.

Given the low sampling rate of the detectors, and the blinding speed of light (it would travel a tad more than six thousand meters between each sample), I thought NTP would give enough temporal accuracy. One problem that Blitzortung had with the earlier, direction-finding network was that the stations used only had one-second resolution, but lightning can strike several times per second. My take on that was to use time-stamped data and work out individual strikes from that. With knowledge about where the detectors were located, and that the signal would be displaced one sample every 6245 meters, it should be possible to work some pretty impressive miracles on the data.

One thing I didn't foresee, but I've been told would be a problem with directional detection, is reflections. Electromagnetic pulses reflect off things, be it mountains, buildings or different layers in the atmosphere. The principle is the same as when one shines a flashlight onto a wall; what arrives at one's eyes are photons emitted by the lamp, but reflected off a distant object, making it seem as if they were emitted by that object. This would be a problem regardless of detection method, and would have to be handled accordingly.

All of this is long since obsolete.

Time of arrival



Figure C: Time of arrival triangulation

The point in time at which the noise is received by each detector is converted to a distance, used as the radius for a circle around the detector. The point where three or more circles cross each other is the location of the discharge.
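
For completeness, here is a small sketch of how such a calculation could be done on a flat plane: solve for the strike position and the unknown emission time with least squares. This is only an illustration of the principle, not Blitzortung's actual implementation.

    # Sketch: time-of-arrival location from three or more detectors on a flat plane.
    import numpy as np
    from scipy.optimize import least_squares

    C = 299_792_458.0                          # speed of light, m/s

    def locate(detectors, arrival_times):
        detectors = np.asarray(detectors, float)
        arrival_times = np.asarray(arrival_times, float)

        def residuals(p):
            x, y, t0 = p
            dist = np.hypot(detectors[:, 0] - x, detectors[:, 1] - y)
            return dist - C * (arrival_times - t0)   # zero when all circles agree

        guess = [*detectors.mean(axis=0), arrival_times.min() - 1e-4]
        return least_squares(residuals, guess).x     # x, y and emission time

    # Example: a strike at (20 km, 10 km) seen by three detectors (times in seconds).
    strike = np.array([20_000.0, 10_000.0])
    dets = [(0.0, 0.0), (50_000.0, 0.0), (0.0, 60_000.0)]
    times = [np.hypot(*(strike - d)) / C for d in dets]
    print(locate(dets, times))                 # roughly [20000, 10000, 0]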

Time caught up with this darling while I wasn’t looking. The price of GPS modules has dropped to a point where it more or less would be a crime not to go for the time of arrival approach, like Blitzortung did. The resolution is better, and the distance to the strike matters much less. Plus, it’s really cool to have a network of satellites being an integral part of your lightning locating network!

Finishing off

It’s easy to believe that finishing off one’s outstanding experiments prematurely would not take very much time. Just take what there is, and dump it on the public scrap heap. There, done, next!

I could have done that with this experiment, but honestly, who'd want to read 'I once planned to build a lightning detection network, mostly to see what level of accuracy I could achieve. I never did, and the existing networks have shaped up and become much better now, anyway.'? No, I want to explain some of my reasoning and the history of the experiment. That puts my ambition level somewhere between 'just dump it' and my friend Christer Weinigel, who uses some of his spare time to actually complete what he's been working on.

In some cases, explaining concepts is easier if you can illustrate your thoughts, but finding tools for creating a quick and dirty lightning detection animation wasn't all that trivial. My first thought was PIL, but it's non-trivial to draw circles with outlines thicker than one pixel there. I wanted to antialias my circles, and antialiasing is another thing PIL isn't all that good at.

ImageMagick probably would have done the trick, but its command lines tend to become mind-numbingly long.

GD was next on my list of possibilities, but I didn’t want to install PHP on my somewhat cramped computer.

In the end, I went for SVG files, created by a Python script and converted to PNG by rsvg-convert. Pretty easy and straightforward. When I decide to learn to animate SVG, I could re-arrange that script, and get a proper animation out of it, but I have to save some learning for the future.
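
A stripped-down version of that approach might look like this; sizes, colours and file names are made up for the example.

    # Sketch: write an SVG with thick circle outlines, then rasterize it with rsvg-convert.
    import subprocess

    detectors = [(120, 300, 180), (400, 120, 260), (520, 380, 140)]  # x, y, radius

    circles = "\n".join(
        f'  <circle cx="{x}" cy="{y}" r="{r}" fill="none" '
        f'stroke="#3366cc" stroke-width="6"/>'
        for x, y, r in detectors
    )
    svg = f'''<svg xmlns="http://www.w3.org/2000/svg" width="640" height="480">
      <rect width="640" height="480" fill="white"/>
    {circles}
    </svg>'''

    with open("toa.svg", "w") as f:
        f.write(svg)

    # rsvg-convert does the rasterizing, and takes care of the antialiasing.
    subprocess.run(["rsvg-convert", "-o", "toa.png", "toa.svg"], check=True)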

My first darling to go is a simple extension of something that already exists: the lava radiator.

Lava lamp

Traditional lava lamp.
Photo by Wollschaf (CC BY-SA).

Remember the Lava Lamp? Of course you do; versions of it are available just about everywhere. But they are all built following the original design principle: a bottle of some kind (variants include rockets and liquor bottles), and an incandescent lamp under it, acting as both illumination and heater for the wax.

Ordinary radiator.

Photo by Bios (CC BY-SA).

That's all fine and dandy, but would a lava lamp be able to scale up to a radiator? Electrical heaters, as well as water based ones, work by being warm and emitting that heat into the space they're mounted in. But they look pretty boring.

Think of a rectangular box, perhaps 10 centimeters deep, one and a half meters wide and about a meter high, for a modestly sized radiator. Then fill it with almost 150 liters of lava lamp innards. Mount LEDs and heating resistors along the bottom.

There it is, in its monumental simplicity. A grotesquely large lava lamp that, so my thinking goes, could do its magic in completely new ways, due to its unusual appearance and construction.

Uses and variants

lavaradiator.png

Sketch of the Lava Radiator. Imagine this in color, raytraced using POV-Ray and mounted along a smooth, white wall, underneath a window. That's what I imagined. Instead, you get a crackpotogram, drawn in Gimp while riding the bus.

The lava radiator could be a nice touch to a public space. A 1000-or-more liter version could easily be an attraction in an art exhibition. Using heating resistors and LEDs instead of incandescent lamps could improve the radiator's aesthetic value, as it could be used with or without lighting, and given individual control of the resistors, the internal flow of the fluids could probably be controlled in ways not possible with the original lava lamp.

If electricity is not the first choice when planning heating (and it shouldn't be, really), it could be possible to build the radiator using water heating, mounting small radiators in the bottom of the lava radiator. With a few small radiators, it should be possible to control the flow pattern in much the same way heating resistors should be able to.

The wedge shape of the radiator depicted above is a direct inheritance from the lamp it's based on. My understanding is that it reduces the ratio of fluid volume to cooling surface, giving a more pronounced temperature gradient than is possible with a straight tube. The same effect could be accomplished with a container of even thickness that is increasingly creased towards the top, or simply a taller container.

This is one of the experiments I haven't considered doing. The sheer scale of it is prohibitive; just getting 150 liters of lava stuff would ruin a small country. Then you'd have to build the radiator itself...

Hi there. Welcome to the first post on Outstanding Experiments.

I'm interested in many things, and would like to experiment with all of them, be it physics, programming, social experiments, or any number of other kinds. However, I frequently find myself having more ideas and interests than time to actually do something about them. It's time to make something happen!

I'm not the kind of person who announces his grand plans and lofty visions to the world, promising to deliver, and then goes on to the next project without the current one materializing.

I'm more the kind who envisions grand plans in his mind, works on them in secret, and hopes to present the completed project to the world. Then time or other resources run out, and the project is placed on the back burner in favor of a new, exciting endeavour.

Both of these personality types have obvious drawbacks. Not wanting to be either one of them, I've decided to combine the best parts of both, and become the kind of person who bestows his genius upon the world, without having to do any of the hard work himself.

Which, in itself, sounds like a really annoying kind of person.
