We have all watched videos of concerts and events dating back to the 1950s, but probably never really wondered how this was done. After all, recording moving images on film had been done since the late 19th century. Surely that is how it continued to be done until CCD image sensors arrived in the 1980s? Nope.

Although film was still commonly used into the 1980s, with movies and even entire television series such as Star Trek: The Next Generation being recorded on film, the main weakness of film is the need to move the physical film around. Imagine the live video feed from the Moon in 1969 if only film-based video recorders had been a thing.

Let’s look at the video camera tube: the almost forgotten technology that enabled the broadcasting industry.

It All Starts With Photons

The principle behind recording moving images on film isn’t that much different from that of still photography. The light intensity is recorded in one or more layers, depending on the type of film. Chromogenic (color) film for photography generally has three layers, one each for red, green, and blue. The more intense the light in that part of the spectrum, the more strongly it affects the corresponding layer, which shows up when the film is developed. A very familiar type of film which uses this principle is Kodachrome.

While film was excellent for still photography and movie theaters, it did not fit with the concept of television. Simply put, film doesn’t broadcast. Live broadcasts were very popular on radio, and television would need to be able to distribute its moving images faster than spools of film could be shipped around the country, or the world.

An image dissector tube

Considering the state of the art of electronics in the early decades of the 20th century, some form of cathode-ray tube was the obvious solution for converting photons into an electric current that could be interpreted, broadcast, and conceivably stored. This idea for a so-called video camera tube became the focus of much research during those decades, leading to the invention of the image dissector in the 1920s.

The image dissector used a lens to focus an image onto a layer of photosensitive material (e.g. caesium oxide), which emits photoelectrons in proportion to the intensity of the light striking it. The photoelectrons from one small area of the image at a time are then steered into an electron multiplier to obtain a reading for that section of the image.

Cranking Up the Brightness

Iconoscope diagram, from Vladimir Zworykin’s 1931 US patent.

Although image dissectors basically worked as intended, the low light sensitivity of the device resulted in poor images. Only with extreme illumination could one make out the scene, rendering the device unusable for most purposes. This issue would not be fixed until the invention of the iconoscope, which used the concept of a charge storage plate.
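To get a feel for why the dissector was so insensitive, here is a minimal back-of-the-envelope sketch in Python. The grid size and photoemission rate are made-up numbers purely for illustration; the point is that each image point only contributes to the output during the tiny slice of the frame time in which its photoelectrons are steered through the aperture, and everything emitted the rest of the time is wasted.

```python
# Rough arithmetic behind the image dissector's sensitivity problem.
# All numbers are assumed for illustration only.
H, W = 200, 300                    # hypothetical number of scanned points
frame_time = 1 / 25                # seconds per frame (25 fps assumed)
dwell_time = frame_time / (H * W)  # time each point spends at the aperture

emission_rate = 1e6                # assumed photoelectrons/s from one point
collected = emission_rate * dwell_time

print(f"dwell time per point: {dwell_time * 1e6:.2f} µs")
print(f"photoelectrons collected per point per frame: {collected:.2f}")
# Only a fraction of an electron per reading at this (made-up) light level,
# which is why the dissector needed extreme illumination to produce an image.
```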

The iconoscope added a silver-based capacitor to the photosensitive layer, using mica as the insulating layer between small globules of silver covered with the photosensitive material and a layer of silver on the back of the mica plate. As a result, the silver globules would build up a charge proportional to the light falling on them, after which each of these globule ‘pixels’ could be individually scanned by the cathode-ray beam. By scanning these charged elements, the resulting output signal was much improved compared to the image dissector, making the iconoscope the first practical video camera tube upon its introduction in the early 1930s.
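As a rough illustration of why charge storage helps, the toy sketch below (arbitrary numbers, no attempt at physical accuracy) lets every storage element integrate light for the entire frame period, after which a raster scan reads out and resets one element at a time; unlike the dissector, nothing that arrives between scans is thrown away.

```python
import numpy as np

def expose(plate: np.ndarray, light: np.ndarray, dt: float,
           sensitivity: float = 0.8) -> None:
    """Accumulate charge on the storage plate during one frame time."""
    plate += sensitivity * light * dt   # in-place: mutates the passed array

def raster_scan(plate: np.ndarray) -> np.ndarray:
    """Read out and neutralize each element, row by row, like the beam."""
    signal = np.zeros_like(plate)
    rows, cols = plate.shape
    for y in range(rows):
        for x in range(cols):
            signal[y, x] = plate[y, x]   # output signal for this element
            plate[y, x] = 0.0            # scanning beam resets the charge
    return signal

plate = np.zeros((4, 5))                        # 4x5 grid of storage 'pixels'
expose(plate, np.random.rand(4, 5), dt=1 / 25)  # one frame's worth of light
print(raster_scan(plate))
```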

It still had a rather noisy output, however, with analysis by EMI showing that it had an efficiency of only around 5% because secondary electrons disrupted and neutralized the stored charges on the storage plate during scanning. The solution was to separate the charge storage from the photo-emission function, creating what is essentially a combination of an image dissector and iconoscope.

 

In this ‘image iconoscope’, or super-Emitron as the British version was called, a photocathode captures the photons from the image, and the resulting photoelectrons are directed at a target that generates secondary electrons, amplifying the signal. The target plate in the UK’s super-Emitron is similar in construction to the charge storage plate of the iconoscope. The super-Emitron was first used by the BBC in 1937, for an outside broadcast of the King laying a wreath at the Cenotaph on Armistice Day.

The image iconoscope’s target plate omits the granules of the super-Emitron, but is otherwise identical. It made its big debut during the 1936 Berlin Olympic Games and was subsequently commercialized by the German company Heimann, whose image iconoscope (‘Super-Ikonoskop’ in German) remained the broadcast standard until the early 1960s. One challenge with commercializing the Super-Ikonoskop was tube life: during the 1936 Berlin Olympics, each tube lasted only about a day before its cathode wore out.

Commercialization

Schematic diagram of an orthicon video camera tube.

American broadcasters would soon switch from the iconoscope to the image orthicon, which shared many properties with the image iconoscope and super-Emitron and was used in American broadcasting from 1946 to 1968. It employed a low-velocity scanning beam to prevent the generation of secondary electrons, a technique previously used in the orthicon and in an intermediate version of the Emitron (itself akin to the iconoscope) called the Cathode Potential Stabilized (CPS) Emitron.

Between the image iconoscope, super-Emitron, and image orthicon, television broadcasting had reached a level of quality and reliability that enabled its skyrocketing popularity during the 1950s, as more and more households bought a television set, accompanied by an ever-increasing amount of content, ranging from news to various types of entertainment. This, along with new uses in science and research, would drive the development of a new type of video camera tube: the vidicon.

The vidicon was developed during the 1950s as an improvement on the image orthicon. It used a photoconductor as the target, typically selenium, though Philips would use lead(II) oxide in its Plumbicon range of vidicon tubes. In this type of device, the charge induced by the incoming photons in the semiconductor material transfers to the other side of the layer, where it is read out by a low-velocity scanning beam, not unlike in an image orthicon.

Although cheaper to manufacture and more robust in use than earlier video camera tubes, vidicons do suffer from latency, due to the time required for the charge to make its way through the photoconductive layer. They make up for this with generally better image quality and the absence of the halo effect caused by the ‘splashing’ of secondary electrons from points of extreme brightness in a scene.

The video cameras that made it to the Moon during the US Apollo program were RCA-developed, vidicon-based units that used a custom encoding; eventually a color video camera joined them. Though many American households still had black-and-white television sets at the time, Mission Control got a live color view of what the astronauts were doing on the Moon. In time, color cameras and color televisions would become commonplace back on Earth as well.

To Add Color

Video transmission from the Apollo 10 spacecraft on 18 May 1969.

Bringing color to both film and video cameras was an interesting challenge. After all, to record a black-and-white image, one only has to record the overall intensity of the incoming light. To record the color information in a scene, one has to record the intensity of photons within particular ranges of wavelengths.

In Kodachrome film, this was solved by having three layers, one for each color. In terrestrial video cameras, a dichroic prism split the incoming light into these three ranges, and each was recorded separately by its own tube. For the Apollo missions, the color cameras used a mechanical field-sequential color system: a spinning color wheel in front of a single tube, capturing each color in turn whenever its filter was in place.
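As a rough sketch of how a field-sequential signal becomes a color picture again, the snippet below (hypothetical code, not the actual Apollo ground-station processing, and with an assumed red-green-blue wheel order) treats every three consecutive monochrome fields as the red, green, and blue components of one frame and stacks them into an RGB image. Because the three fields are exposed at slightly different moments, fast motion shows up as color fringing, one of the known trade-offs of the approach.

```python
import numpy as np

def assemble_frames(fields: list[np.ndarray]) -> list[np.ndarray]:
    """Stack consecutive R, G, B fields into full-color (H, W, 3) frames."""
    frames = []
    for i in range(0, len(fields) - 2, 3):
        r, g, b = fields[i], fields[i + 1], fields[i + 2]
        frames.append(np.stack([r, g, b], axis=-1))
    return frames

# Example: six fake 4x4 monochrome fields become two color frames.
fields = [np.random.rand(4, 4) for _ in range(6)]
print(len(assemble_frames(fields)))  # -> 2
```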

So Long and Thanks for All the Photons

Eventually a better technology comes along. In the case of the vidicon, this was the invention of first the charge-coupled device (CCD) sensor, and later the CMOS image sensor. These eliminated the need for the cathode ray tube, using silicon for the photosensitive layer.

But the CCD didn’t take over instantly. The mass-produced CCD sensors of the early 1980s weren’t considered good enough to replace the tubes in TV studio cameras, and were relegated to camcorders, where compact size and lower cost mattered more. During the 1980s CCDs would improve massively, and with the advent of CMOS sensors in the 1990s the era of the video camera tube quickly drew to a close, with just one company still manufacturing Plumbicon vidicon tubes.

Though now largely forgotten, there is no denying that video camera tubes have left a lasting impression on today’s society and culture, enabling much of what we consider commonplace today.


