Lenses: From Fire Starters to Smartphones and Virtual Reality

In antiquity, we find examples of crystals ground into a biconvex, magnifying shape as early as the 7th century BC. It is unclear whether people of this period used them to start fires or to aid vision. Yet it is said that Emperor Nero of Rome watched gladiatorial games through an emerald.

Needless to say, the views we get through modern lenses are much more realistic. So how did we go from simple magnification systems to the complex lens systems we see today? We start with a quick trip through camera and lens history, and end with the cutting edge of lens design for smartphone cameras and VR headsets.

Theory and practice

Philosophers and scientists of most cultures and times have thought about light. Our modern theories of light date back to the 1600s and the work of scientists like Johannes Kepler, Willebrord Snellius, Isaac Newton and Christiaan Huygens. Of course, it was not without controversy. Newton and many others advanced the idea that light was a particle that moved in a straight line like a ray, while Huygens and others proposed that light behaves more like a wave. For a time, Newton’s side prevailed.

This changed in the 1800s, when Thomas Young’s interference experiments produced data that no particle theory could explain. In 1821, Fresnel managed to describe light not as a longitudinal wave but as a transverse one. This became the de facto theory of light, known as the Huygens–Fresnel principle, until Maxwell’s electromagnetic theory came along and ended the era of classical optics.

Meanwhile, practical eyeglasses were probably invented in central Italy around 1290. Eyeglasses spread throughout the world, and eyeglass makers also began making telescopes. The first patent for a telescope was filed in 1608 in the Netherlands, but it was not granted because telescopes were already quite common by then. These refracting telescopes were very popular and were often simple two-element systems. Reflecting telescopes, such as the one Newton built in 1668, were constructed in part to prove his theories about chromatic aberration. Ultimately, he showed that while a lens refracts light to a focal point, different wavelengths refract by different amounts. This means the colors come to focus at different points, which distorts the image.
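To make that focal shift concrete, here is a minimal sketch using the thin-lens lensmaker’s equation, 1/f = (n − 1)(1/R1 − 1/R2). The radii and the crown-glass-like refractive indices for blue and red light are illustrative assumptions, not values from any historical lens.

```python
# Thin-lens lensmaker's equation: 1/f = (n - 1) * (1/R1 - 1/R2)
# Illustrative biconvex lens: R1 = +100 mm, R2 = -100 mm.
# Approximate crown-glass indices: blue (~486 nm) and red (~656 nm).

def focal_length_mm(n, r1_mm, r2_mm):
    """Return the focal length of a thin lens in millimetres."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

f_blue = focal_length_mm(n=1.522, r1_mm=100.0, r2_mm=-100.0)
f_red = focal_length_mm(n=1.514, r1_mm=100.0, r2_mm=-100.0)

print(f"Blue focus: {f_blue:.1f} mm, red focus: {f_red:.1f} mm")
print(f"Axial colour shift: {f_red - f_blue:.1f} mm")  # roughly 1.5 mm here
```

Even a millimetre or two of separation between the blue and red focal points is enough to smear colour fringes across an image, which is exactly the problem early lens designers were fighting.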

When photography arrived on the scene, it was discovered that cameras also suffered from spherical aberration – the lens could not focus the image onto a wide, flat plane. Charles Chevalier created an achromatic lens capable of controlling both chromatic and spherical aberrations. However, this meant the front aperture was quite small (f/16), pushing exposure times to twenty or thirty minutes.

Although not useful for cameras, the Fresnel lens appeared around this time, in 1818, and saved hundreds, if not thousands, of ships. The French Lighthouse Commission had commissioned Fresnel to design it, and it worked out quite well. Perhaps because of this success, in 1840 the French government offered a prize to whoever could come up with a lens that would reduce camera exposure times.

Diagram of the 1841 Petzval portrait lens – pink shaded crown glass, blue shaded flint glass

Joseph Petzval was a mathematics professor who rose to the challenge. An Archduke lent eight human computers from the artillery to his project for six months – it was a state-of-the-art design effort. In the end, he did not receive the prize because he was not French, but his lens was the best performer among those submitted that year.

Petzval’s lens was one of the first four-element lens systems and one of the first lenses designed specifically for the camera, rather than being a camera obscura or a repurposed telescope part. As a result, it remained a popular lens design for the next century. Later tweaks were common, but they were mostly made by trial and error rather than by returning to the mathematical foundations that produced the lens in the first place.

The next leap forward came in 1890 with the Zeiss Protar, which used new types of glass with different refractive indices and other optical properties. The combination of different glasses resulted in a lens that corrected almost all aberrations. This type of lens is known as an anastigmat, and the Protar was the first.

There’s a lot more history here around the rise of Japanese lens makers and the fall of German manufacturers. But we will go directly to the smartphone.

The modern smartphone

Three-element modern smartphone lens system, from US patent US8558939B2

We discussed this briefly in our longer article about what makes up a smartphone. Modern smartphone lenses are complex because they have to capture adequate light while staying small. A great resource is the blog we linked to in that article.

Many smartphones today still use a three-element lens system, heavily inspired by the Cooke triplet.

It has the advantage of being fairly easy to explain and relatively simple to manufacture. The first lens has high optical power but a low refractive index and low dispersion, since aberrations introduced here are difficult to correct later. The second lens, made from a different material, compensates for aberrations introduced by the first and helps reduce the spherical aberration it produces. The third lens corrects the distortion of the first two and flattens the field at the image plane.
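To see how three separate elements combine into one system, here is a small paraxial ray-transfer (ABCD) matrix sketch. The element focal lengths and air gaps below are invented, illustrative numbers, not an actual Cooke triplet or smartphone prescription.

```python
# Paraxial ray-transfer (ABCD) matrices for a stack of thin lenses.
# For a system matrix [[A, B], [C, D]], the effective focal length is -1/C.
# The element powers and air gaps below are illustrative only.

def thin_lens(f_mm):
    return [[1.0, 0.0], [-1.0 / f_mm, 1.0]]

def gap(d_mm):
    return [[1.0, d_mm], [0.0, 1.0]]

def matmul(m2, m1):
    return [
        [m2[0][0] * m1[0][0] + m2[0][1] * m1[1][0],
         m2[0][0] * m1[0][1] + m2[0][1] * m1[1][1]],
        [m2[1][0] * m1[0][0] + m2[1][1] * m1[1][0],
         m2[1][0] * m1[0][1] + m2[1][1] * m1[1][1]],
    ]

# Positive front element, negative middle element, positive rear element.
elements = [thin_lens(9.0), gap(1.5), thin_lens(-6.0), gap(1.5), thin_lens(9.0)]

system = [[1.0, 0.0], [0.0, 1.0]]
for m in elements:  # apply the elements in the order light meets them
    system = matmul(m, system)

efl = -1.0 / system[1][0]
print(f"Effective focal length of the stack: {efl:.2f} mm")
```

Changing any one element’s power or spacing changes the whole system’s focal length, which is why each element’s job in the triplet is so tightly coupled to the others.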

From US patent US20170299845A1

Then we jump abruptly to something like this. Look at the lenses: none of them are neat spherical shapes. Instead, they’re weird and mysterious.

This is roughly the lens stack of an iPhone 7 – it’s unclear exactly which patent was used in which phone. The front lens has high optical power, and the second lens tries to correct for it. The last four lenses are all oddly shaped elements that correct for distortion and spherical aberration.

Unlike larger cameras, most cell phone lens elements are made of the same material. Why? The simple answer is that they have to be. Smartphone lenses are mostly plastic rather than ground glass. Contrary to what one might think, making them in plastic is more complex than in glass: anyone who has worked with resin can tell you that getting clear, flawless plastic is no small feat. The plastics we can use for lenses come in only two main varieties, so there are only two refractive indices to choose from. Glass is available across a whole spectrum, doped with various materials to achieve exotic refractive indices and Abbe numbers. In fact, some of the more exotic camera lenses contain radioactive materials such as thorium. However, plastics can be formed into unusual shapes far more easily than glass: lapping glass into anything other than a sphere is difficult to scale and manufacture consistently, while molded plastic can take almost any shape you want.
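The Abbe number is the standard way to compare how dispersive these materials are: V_d = (n_d − 1)/(n_F − n_C), using the refractive indices at the yellow d, blue F and red C spectral lines. Here is a quick sketch using approximate, typical catalogue values for one common optical glass and two common optical plastics; the exact indices vary by grade and supplier.

```python
# Abbe number V_d = (n_d - 1) / (n_F - n_C), where n_d, n_F, n_C are the
# refractive indices at roughly 587.6 nm, 486.1 nm and 656.3 nm.
# Higher V_d means lower dispersion (less colour spread).
# The index values below are approximate catalogue figures.

def abbe_number(n_d, n_F, n_C):
    return (n_d - 1.0) / (n_F - n_C)

materials = {
    "BK7 crown glass": (1.5168, 1.5224, 1.5143),
    "PMMA (acrylic)":  (1.4918, 1.4978, 1.4892),
    "Polycarbonate":   (1.5855, 1.5994, 1.5800),
}

for name, (n_d, n_F, n_C) in materials.items():
    print(f"{name}: n_d = {n_d:.4f}, V_d = {abbe_number(n_d, n_F, n_C):.1f}")
```

With glass you can pick almost any point on the index-versus-dispersion map; with lens plastics you are largely stuck with a handful of points like these, which is part of why the element shapes have to do so much of the correction work.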

Additionally, smartphones offer many other features, such as optical image stabilization, which uses MEMS actuators to shift the lens in response to movement. Of course, this requires moving one or more lenses or even the whole camera module, which introduces a host of problems, since each lens has a specific role in handling aberrations. In the iPhone 12, the CMOS image sensor moves rather than the lenses, which lets the lens stack keep its carefully balanced corrections while still compensating for motion.
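As a rough illustration of what the stabilization hardware has to do: for small rotations, an angular shake θ displaces the image on the sensor by roughly f·tan(θ), and the lens group or sensor has to be shifted by the opposite amount. The focal length and shake angle below are invented numbers, and real OIS controllers are far more sophisticated than this one-line model.

```python
# Rough model of optical image stabilization: a small camera rotation theta
# shifts the image on the sensor by about f * tan(theta); the actuator
# (lens-shift or sensor-shift) moves by the opposite amount.
# Numbers below are invented for illustration only.

import math

focal_length_mm = 5.0   # plausible order of magnitude for a phone camera
shake_deg = 0.3         # small hand-shake rotation during the exposure

shake_rad = math.radians(shake_deg)
image_shift_mm = focal_length_mm * math.tan(shake_rad)
compensation_mm = -image_shift_mm  # move lens group or sensor the other way

print(f"Image shift on sensor: {image_shift_mm * 1000:.1f} µm")
print(f"Actuator compensation: {compensation_mm * 1000:.1f} µm")
```

A few tens of micrometres of shift may sound tiny, but on a sensor with micron-scale pixels it is enough to blur a long exposure, which is why the actuator has to track and cancel it continuously.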

VR headsets

If photography drove lens innovation in the 1800s, it was probably the cell phone that drove it in the 2000s. But there’s another niche application that could shake things up in the near future: VR. Currently, VR headsets are big and bulky. They feel that way in part because so much of their weight sits far from your face, so it pulls harder. If headsets could be thinner, the experience would be more comfortable.

Right now, a lot of that bulk comes from the lenses and distances needed to focus the image so it looks right when the headset is on. Recently, Facebook/Oculus/Meta showed off some of their prototype headsets, and a few tried to fix this issue. Depending on where the user is looking, the headset does things like vary the focal plane and correct lens distortion in software on the fly.

The future of lenses

Some say we can get rid of lenses altogether. Several companies, such as Metalenz, build flat metasurface optics from silicon nanostructures. The advantage is that they can be integrated directly above the CMOS image sensor without complex housings. Since systems that used dozens of lenses to achieve the necessary precision and low distortion can be compressed into a single layer, this would allow ordinary cameras and spectrometers to shrink.

Additionally, this is of great interest for VR headsets, since such flat optical structures could be integrated into the displays, allowing a wider field of view with less weight and bulk. The future certainly holds many exciting developments in lens design. Even as we move towards distortion-free lenses with ever more control, some photographers are moving back to older lenses – sometimes out of nostalgia, and sometimes simply because they like the look. Perhaps if Emperor Nero were squinting through our various lenses, cameras and VR headsets today, he might still prefer his emerald, optical distortions and all.
