Voyager Golden Record images - processing considerations

As you may have seen in my previous post, decoding the images on the "Golden Record" is not very difficult. The real problems are timing and amplitude.

I've talked about timing in my previous post: the "sample rate" varies throughout the record (the playback speed slowly increases) and it also "flutters" within each frame.
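To make the slow speed-drift idea concrete, here is a minimal resampling sketch in Python. This is my own illustration, not the pipeline from the post; the linear drift model and the total_drift value are assumptions, and the per-frame flutter would need a separate, finer correction.

    import numpy as np

    def correct_speed_drift(signal, total_drift=0.01):
        # Resample a signal whose effective speed increases linearly over
        # the recording. 'total_drift' (fractional speed gain from start
        # to end) is a made-up placeholder, not a measured value.
        n = len(signal)
        t = np.arange(n, dtype=float)
        # Integrate the assumed speed ramp 1 + total_drift * t / n to get
        # the (warped) positions the original samples really occupy.
        warped = t * (1.0 + total_drift * t / (2.0 * n))
        uniform = np.linspace(warped[0], warped[-1], n)
        # Interpolate back onto a uniform time grid.
        return np.interp(uniform, warped, signal)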

The amplitude issues are caused by feeding a digital signal into an unconditioned/untuned analog system (probably a recording needle).

Let's analyze the circle image:


I will use the [vertical] scanline marked by the red dot as an illustration of how a uniform background (constant color) should look.


The blue trace is the actual waveform. My red line highlights the non-linearity of the recording; the yellow line marks my expected value. So we have an inverse exponential gain that needs to be applied with respect to time, as well as a linear gain. These could be combined into a single formula.
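As a rough sketch of what such a combined correction could look like in Python (the functional form and the coefficients a and c are placeholders to be fitted against scanlines like this one, not values derived from the record):

    import numpy as np

    def correct_gain(scanline, a=0.7, c=0.1):
        # Combined correction: an exponential term to counteract the
        # decaying gain, plus a linear term. Both a and c are assumed.
        t = np.linspace(0.0, 1.0, len(scanline))  # normalized time
        return scanline * (a ** (-t) + c * t)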

For reference, the actual image on the record should be this one:


Remember that the signals are inverted, so white would actually be a very low signal, close to the minimum. Hence my interpretation of where the yellow line should sit.
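For reference, a trivial sketch of that inversion when mapping samples to pixels (the 0-255 grayscale normalization is my assumption):

    import numpy as np

    def to_pixels(scanline):
        # White is near the signal minimum, so flip the signal before
        # normalizing it to 0..255 grayscale.
        s = scanline.max() - scanline
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)
        return (255 * s).astype(np.uint8)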

Back to the first image, analyzing the scanline next to the blue circle:


Highlighted in red are the level differences: after a peak, the signal recovers to a level lower than the background one. Lower level = lighter pixels.

The opposite happens after a white portion (low signal level), exemplified in the following image:


The portion below ("after", in time) the white text is darkened. Picking out a random scanline from the waveform:


It's pretty hard to determine which differences are caused by the non-linear, time-dependent gain distortion and which by the peak/dip recovery.


Another non-linear distortion, perhaps related to all of the above, can be seen in the lead-out (postamble) of the frame:


After a wide peak (odd lines), the dip is more pronounced: a longer recovery time with overshoot. After a narrow peak (even lines), the signal is more linear, with little overshoot.
The analog recording system can be characterized using this data, and a "reverse" system can be designed that corrects for at least some of these errors.
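As one guess at what such a reverse system might look like: model the recovery after peaks as a single AC-coupling (one-pole high-pass) stage and run it backwards. Both the model and the pole value r are assumptions; r would have to be estimated from postamble data like the above.

    import numpy as np

    def undo_ac_coupling(y, r=0.995):
        # Assumed forward model: y[n] = x[n] - x[n-1] + r * y[n-1]
        # (a one-pole high-pass, which produces baseline droop and
        # overshoot after peaks). Solving for x inverts it exactly:
        x = np.empty_like(y)
        x[0] = y[0]
        for n in range(1, len(y)):
            x[n] = y[n] + x[n - 1] - r * y[n - 1]
        return x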


I can only speculate about the causes of those errors: output vs. input impedance (capacitance), needle bounce, slow and non-linear amplifiers (remember, this was the mid-70s). Whatever the cause may be, I am confident the engineers at the time did their best to create as clean a signal as possible. We also have to take into account that the digitization process might not have been perfect; it never is.


That's as far as I will go with this today. As a "cookbook", in order to decode better images one needs to:

  • apply a multiplier to the grayscale signal, like: signal *= a^time + time/b, where a is between 0.5 and 0.999 and b < 0 (see the sketch after this list)
  • apply some sort of reverse impulse recovery: design a FIR filter that, based on the slope of a positive/negative impulse, applies proportional feedback. That is, a high-slope positive signal should produce positive feedback
  • take into consideration whether decoding an odd or even scanline
Unfortunately, my digital signal processing skills stop well short of a proper implementation.
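Still, the cookbook can at least be written down as a sketch. Everything below is illustrative: the coefficients a, b and k are placeholders to be fitted, and the slope feedback is only a crude stand-in for a properly designed FIR filter.

    import numpy as np

    def decode_scanline(scanline, odd, a=0.7, b=-10.0, k=0.2):
        t = np.linspace(0.0, 1.0, len(scanline))  # normalized time
        # Step 1: time-dependent gain, following the a^time + time/b form.
        s = scanline * (a ** t + t / b)
        # Step 2: crude "reverse impulse recovery" - feed the local slope
        # back in, so a steep positive edge gets a positive boost.
        slope = np.gradient(s)
        # Step 3: odd and even scanlines recover differently, so use a
        # different feedback coefficient for each.
        k_line = k if odd else 0.5 * k
        return s + k_line * slope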


On an unrelated note, it would be interesting to find out if the encoded images carry some metadata: whether they are part of a color image, which channel they represent, whether they are in portrait or landscape orientation, whether they have an ID.
