This one turned out even more nonspectacular than I feared.
I decided to capture the March 20, 2015 solar eclipse in a simulated scanner photo. Without a good view to record, I hurriedly set up my computer, a Lenovo X220 paid for with Shuttleworth Foundation money, and the camera in a window at work, overlooking the street outside. At 09:55, Swedish time, a two hour recording session began. I'd neglected one crucial thing, though: The Logitech C920 I used has automatic exposure compensation, which I didn't turn off. That's called a fail.
The camera compensated the exposure of my two hour video pretty decently, and the resulting photo showed signs of the eclipse, but in a way that, intensity wise, mostly reminds me of a Fresnel lens.
Not very dramatic. The real life experience was that of an overcast spring day rapidly turning into a soon-to-be thunderstormy spring day, but this was far from reflected in the image. I was going to leave the experiment in this state, and move on. But then, in what could be described as a kind of Tron moment (During the production of the 1982 movie Tron, there was a problem; If you realize that you've been using the stacks of boxes of photographic film sheets in the wrong order, and accidentally introduced quite notable jumps in the brightness in the final product, what are you going to do? Scrap the whole run, and order new stacks of boxes of photographic film sheets? That's expensive. Release the thing as it is? It doesn't look very good… And there's the Tron moment: The scene takes place inside a computer. Why not add some electrical sparks, flashes and anomalies that seem to cause the changes in brightness? Perfect!) I realized something:
In the camera's field of view, there's a really bright lamp, belonging to a building site, and pointing into the camera. That lamp and its flare are overexposed in bright sunlight, and, I reasoned, should become even more overexposed when the camera compensates for the lack of daylight during the eclipse.
After another two hours of video processing, I had a curve describing the number of overexposed pixels in that area. It looked like it could be used, and after some research on how much of the sun was blocked (up to 82%), I fed the numbers and the image through some math. The final result still wasn't all that dramatic, but now the center of it showed a definite drop in brightness. Definitely not perfect; had I remembered to lock the exposure, the overcast sky at the end of the street wouldn't be banded, and if I had taken a little bit more time to position the camera, I could have had the dark band right through it instead of through the wall.
This experiment, although mostly a failure, gave me more insight into OpenCV, forced me to consider how data is structured in NumPy, and made me think outside the box. Not too bad for a largely failed experiment.
Nature is vast, and presents a diverse array of interesting phenomena. Lightning is one of them, and in today's post, I'm going to discuss a few ways a lightning locator network could be implemented, and how it should not be.
Somewhere before 2006, when I began researching how to detect and position lightning, I stumbled upon blitzortung.org, which, to my understanding, at the time collected its data mostly from primitive lightning detectors that guessed and approximated a great deal.
[Reality check: I don’t seem to have a clue about what I’m saying here. The oldest Blitzortung pages I can find are from 2007, and they say directional detectors were used. Somewhere between 2007 and 2009, a switch to time of arrival detectors was made. I don’t know what was used before 2007, but the network itself has existed since at least 2003.
One mail to Blitzortung later, I know: Between 2005 and 2008, the network consisted of up to 100 directional Boltek detectors. The switch to time of arrival detectors was made in 2008.
Boy, this is one severely neglected darling!
The Blitzortung people seem to be a nice bunch, by the way. If you have questions, they are answered within a short time. The answers are not always to the actual questions you asked, but hey, such things happen! And I did get some answers to questions that I didn’t know I wanted to ask, that’s good service!]
Figure A: Personal, portable detector
When the noise reaches the detector, an approximate distance is calculated
These detectors, which were not used in the network, are mostly sold as personal lightning warning devices. They detect nearby lightning strikes, and show an approximate distance. The distance is calculated from the strength of the received noise, compared to a statistically average level. The received noise is stronger if it was produced by a large discharge, and weaker if the discharge was small. That’s one level of uncertainty, and then you have to keep in mind that rain, trees, walls and buildings between you and the detector will affect the strength of the signal. One could compare the approximated distances of several such detectors, and kind of try to triangulate a strike location based on that, but the inherent inaccuracies would make the result pretty worthless. Personal, portable detectors are, at best, toys and party tricks.
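The strength-based ranging described above can be sketched in a few lines. Everything here is illustrative: the 1/distance falloff and the reference amplitude are assumptions I'm making for the sake of the example, and they are exactly the shaky assumptions that make these devices unreliable.

```python
# Hypothetical sketch of strength-based ranging. Assumes the field
# strength falls off as 1/distance and that the "statistically average"
# strike has a known reference amplitude -- both assumptions are shaky.
def estimate_distance_km(received_amplitude, reference_amplitude=1.0,
                         reference_distance_km=10.0):
    """Distance guess from signal strength, assuming a 1/d falloff."""
    return reference_distance_km * reference_amplitude / received_amplitude

# A strike twice as strong as average looks twice as close:
print(estimate_distance_km(2.0))  # 5.0 km -- but it may really be 10 km away
```

A big strike far away and a small strike nearby produce the same reading, which is why comparing several such readings gets you nowhere.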
"How could the precision be improved?", I thought to myself.
Either triangulation based on time of arrival, or triangulation based on direction.
The first method requires exact positions and extremely exact timing, and was expensive at the time.
The second method requires exact positions and bearings, but much less exact timing. One major drawback is that resolution decreases considerably with distance.
Lightning is fast. Like really fast, but not nearly as fast as light. Light is really, really fast, and the electromagnetic noise created by lightning travels at the speed of light.
Light travels at a speed of 299,792,458 meters per second, give or take a few decimals. That’s almost three hundred million meters per second, or just over a billion km/h, which translates to about 671 million mph. Not even the ThrustSSC can keep up with that.
Going to the other end of the scale, the speed of light is 299,792.458 meters per millisecond, or 299.792458 meters per microsecond. If you sample the lightning detector one million times per second, you get a resolution of about 300 meters, and it would probably make little sense to increase the resolution more than that, bearing the sheer size of lightning bolts in mind.
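The resolution argument is just the speed of light divided by the sample rate; a tiny sketch makes the numbers from the text concrete:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def resolution_m(sample_rate_hz):
    """Distance light travels between two consecutive samples."""
    return C / sample_rate_hz

print(resolution_m(1_000_000))  # ~299.8 m at 1 MHz sampling
print(resolution_m(48_000))     # ~6245.7 m at sound card rates
```

The second figure is why a sound card alone can never give fine-grained time of arrival positioning, but, as discussed further down, it can still be good enough for direction finding.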
Synchronizing clocks to a precision of less than one microsecond is not the easiest thing to do. Using the DCF77 signal with low cost parts, you get a precision of about two tenths of a second. That’s the time it takes light to travel 60,000 kilometers. Better hardware could get a precision of 4 to 44 microseconds. That’s better, but a resolution of 1.2 to just above 13 kilometers is not practical.
NTP has a practical precision of "tenths of milliseconds", which in this case means it’s not nearly enough.
GPS would provide a usable time resolution, but the price of a GPS module in 2006, let alone at least three of them, to form a usable network, was not within my experiment budget.
Hence, I decided not to try a time of arrival approach, but rather directional triangulation.
Figure B: Direction finding detector
When the noise reaches each detector, an approximate bearing is calculated. The location of the discharge is where the lines cross each other.
This is where I’m going to become somewhat theoretical. I did assemble some experimental hardware to evaluate my theories, but that’s as far as I got. Direction finding has a few drawbacks, but also a few advantages. The resolution, how accurately the direction to the lightning strike can be calculated, decreases with the distance to the strike. My initial guess was that I might be able to get eight or nine bits of direction out of a setup that involved two loop antennas at right angles, either the wound wire type or ferrite rod antennas, suitable for about 300 kHz.
When lightning strikes, electromagnetic noise is produced. This noise seems to be strongest around 300 kHz for cloud-to-ground strikes, while more modern detectors monitor a larger span of frequencies in order to increase accuracy and detect cloud-to-cloud strikes.
With two directional antennas at right angles, both antennas receive this noise, but different amounts of it, one receiving the strongest signal while the other receives the weakest, and vice versa. I was going to connect these antennas to simple tuners and amplifiers, and then into the line input of a computer, sampling a stereo signal at 48 kHz. When the static received peaked past a certain threshold, the computer would compare the amplitude of the stereo channels, and compute a direction from that. It is my understanding that this setup should be able to calculate a line through the detector, and give two possible lightning directions, 180 degrees from each other. This is why you need a network of detectors; one detector can not distinguish between the two directions on its own. It could guess, based on polarity and probability, but never be certain. It takes at least three detectors to be able to calculate every possible strike location.
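The amplitude comparison step can be sketched like this. It's a simplified model of what the computer would do at a noise peak, assuming the two channels carry the signed sample values from the north-south and east-west loops; real antennas and tuners would add plenty of complications.

```python
import math

# Sketch of the amplitude-comparison idea: two loop antennas at right
# angles feed the two channels of a stereo line input. At the noise
# peak, the ratio of the channel amplitudes gives the tangent of the
# bearing -- with a 180-degree ambiguity, since a strike "behind" the
# detector produces the same ratio.
def bearing_degrees(ns_sample, ew_sample):
    """Bearing of the strike line through the detector, 0-180 degrees."""
    angle = math.degrees(math.atan2(ew_sample, ns_sample))
    return angle % 180.0  # fold the two ambiguous directions together

# A strike to the north-east excites both loops equally:
print(bearing_degrees(1.0, 1.0))   # 45.0
# ...and so does one to the south-west, hence the ambiguity:
print(bearing_degrees(-1.0, -1.0))  # 45.0
```

The folding on the last line is exactly why one detector alone can only produce a line, not a point, and why the network needs at least three of them.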
These antennas have to be precisely aligned. My theory, though, was that if you have three stations with exact alignment, new detectors could be added in a setup mode that did not use the new detector’s data for strike location, but rather for figuring out which way the detector is pointing, using the known detectors’ data compared to the new one’s.
Given the low sampling rate of the detectors, and the blinding speed of light (it would travel a tad longer than six thousand meters between each sample), I thought NTP would give enough temporal accuracy. One problem that Blitzortung had with the earlier, direction finding network was that the stations used only had second resolution, but lightning could strike several times per second. My take on that was to use time stamped data, and work out individual strikes from that. With knowledge about where the detectors were located, and that the signal would be displaced one sample every 6245 meters, it should be possible to work some pretty impressive miracles on the data.
One thing I didn’t foresee, but I’ve been told would be a problem with directional detection, is reflections. Electromagnetic pulses reflect off things, be it mountains, buildings or different layers in the atmosphere. The principle is the same as when one shines a flashlight onto a wall; what arrives at one’s eyes is the photons emitted by the lamp, but reflected off a distant object, making it seem that they were emitted by that object. This would be a problem regardless of detection method used, and would have to be handled accordingly.
All of this is long since obsolete.
Time of arrival
Figure C: Time of arrival triangulation
The point in time at which the noise is received by each detector is converted to a distance, used as the radius for a circle around the detector. The point where three or more circles cross each other is the location of the discharge.
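The principle in the figure can be sketched as a brute-force search. This is my own illustrative toy, not how Blitzortung or any real network does it: it grid-searches for the point whose implied emission times agree best across all stations, which sidesteps the fact that the actual moment of the discharge is unknown.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def locate(stations, arrival_times, search=10_000, step=100):
    """Toy time-of-arrival solver: grid-search (metres) for the point
    where all stations imply the same emission time."""
    best, best_spread = None, float("inf")
    for gx in range(-search, search + 1, step):
        for gy in range(-search, search + 1, step):
            # emission time implied by each station for this candidate
            t0 = [t - math.hypot(gx - sx, gy - sy) / C
                  for (sx, sy), t in zip(stations, arrival_times)]
            spread = max(t0) - min(t0)
            if spread < best_spread:
                best, best_spread = (gx, gy), spread
    return best

# Synthetic strike at (3000, -2000); arrival time = distance / c
strike = (3000, -2000)
stations = [(0, 0), (8000, 0), (0, 8000)]
times = [math.hypot(strike[0] - sx, strike[1] - sy) / C
         for sx, sy in stations]
print(locate(stations, times))  # (3000, -2000)
```

Note the time scales involved: the arrival times above differ by mere microseconds, which is why this approach stands and falls with clock synchronization.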
Time caught up with this darling while I wasn’t looking. The price of GPS modules has dropped to a point where it more or less would be a crime not to go for the time of arrival approach, like Blitzortung did. The resolution is better, and the distance to the strike matters much less. Plus, it’s really cool to have a network of satellites being an integral part of your lightning locating network!
It’s easy to believe that finishing off one’s outstanding experiments prematurely would not take very much time. Just take what there is, and dump it on the public scrap heap. There, done, next!
I could have done that with this experiment, but honestly, who’d want to read 'I once planned to build a lightning detection network, mostly to see which level of accuracy I could achieve. I never did, and the existing networks have shaped up and become much better now, anyway.'? No, I want to explain some of my reasoning and the history of the experiment. That puts my ambition level somewhere between ‘just dump it’ and my friend Christer Weinigel, who uses some of his spare time to actually complete what he’s been working on.
In some cases, explaining concepts is easier if you can illustrate your thoughts, but finding tools for creating a quick and dirty lightning detection animation wasn’t all that trivial. My first thought was PIL, but it’s non-trivial to make circles with walls thicker than one pixel there. I wanted to antialias my circles, and antialiasing is another thing PIL isn’t all that good at.
ImageMagick probably would have done the trick, but its command lines tend to become mind-numbingly long.
GD was next on my list of possibilities, but I didn’t want to install PHP on my somewhat cramped computer.
In the end, I went for SVG files, created by a Python script and converted to PNG by rsvg-convert. Pretty easy and straightforward. When I decide to learn to animate SVG, I could re-arrange that script, and get a proper animation out of it, but I have to save some learning for the future.
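The approach boils down to emitting an SVG document from Python and rasterising it externally. A minimal sketch of that idea, with illustrative filenames and parameters of my own choosing:

```python
# Minimal sketch of the SVG-then-rasterise approach: build circles as
# an SVG document in Python, then hand the file to rsvg-convert for
# antialiased PNG output. Filenames and sizes are illustrative.
def svg_circles(width, height, circles):
    """circles: iterable of (cx, cy, r, stroke_width) tuples."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    for cx, cy, r, sw in circles:
        parts.append(f'<circle cx="{cx}" cy="{cy}" r="{r}" '
                     f'fill="none" stroke="black" stroke-width="{sw}"/>')
    parts.append('</svg>')
    return "\n".join(parts)

doc = svg_circles(400, 300, [(200, 150, 80, 4), (120, 100, 40, 4)])
with open("frame.svg", "w") as f:
    f.write(doc)
# Then rasterise: rsvg-convert -o frame.png frame.svg
```

Thick, antialiased circle outlines, the very thing PIL made difficult, come for free from the `stroke-width` attribute and the renderer.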