Chapter 2 Light rays
Taking a selfie requires that the phone camera be pointed towards you. It's obvious that this is necessary if you are to be in the picture. But this simple fact indicates something about the nature of light: to see an image of an object, there has to be a straight line between the object (in this case you) and the camera lens. This is usually called the 'line of sight'. Thus light is something that propagates in a straight line from the object to the viewer.

Indeed, this is what we might expect from our knowledge of certain types of light source. Eye-catching visual effects at concerts are generated using lasers, illuminating the stage and the performers with coloured light beams. Laser pointers are commonly used at talks or lectures to emphasize images or words on a screen. The beam produced by these coherent light sources is highly focused, hardly diverging at all even across a big hall. It goes in a straight line: you point the device in the direction where you want the light to go, and it does so.
Because sunlight does not obviously exhibit this characteristic, it required some thinking to determine that the propagation of light in a straight line was exactly what was needed to understand why distant objects appeared to be smaller than nearer ones, even when it was known that they were, in fact, exactly the same size physically.
5. Euclid’s construction of rays showing why objects of the same size look smaller when they are further away.
The insight that the concept of straight-line propagation could explain this effect is attributed to Euclid, working in Greece in around 300 BCE. His idea, from one of the earliest books on optics, is illustrated in Figure 5. Imagine two lines, let's call them rays: one connecting the top of the object (a pillar in this case) to the observer's eye, and one connecting the bottom of the object to the eye. The angle between these is related to the apparent size of the image of this object that we perceive. A more distant pillar, the same physical size as the first, produces two rays whose angle of intersection at the observer is smaller; hence the pillar appears smaller. This is what we call perspective in the image.
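Euclid's construction comes down to simple trigonometry: the angle subtended at the eye shrinks as the pillar recedes. Here is a minimal sketch in Python (the pillar height and distances are illustrative, not from the text):

```python
import math

def subtended_angle_deg(height: float, distance: float) -> float:
    """Angle between rays from the top and bottom of an object,
    as seen by an eye level with its base (in degrees)."""
    return math.degrees(math.atan2(height, distance))

# Two pillars of identical 5 m height, one twice as far away:
print(f"near: {subtended_angle_deg(5.0, 10.0):.1f} deg")  # ~26.6 deg
print(f"far:  {subtended_angle_deg(5.0, 20.0):.1f} deg")  # ~14.0 deg
```

The more distant pillar subtends a smaller angle at the eye, so it appears smaller, exactly as Figure 5 shows.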
What is the stuff that traverses the rays? Euclid (building on earlier ideas) thought that it was particles sent out from the eye itself (from an imaginary internal fire) that illuminate the object and are reflected back to the observer. But this would imply that we could see things whether or not it was dark outside. Nonetheless, the idea of stuff moving along a trajectory between object and observer remained a powerful concept.
It was modified by Alhazen in the 11th century to a form that we now use routinely: objects are illuminated by rays from the Sun (the external fire) and these are scattered towards the observer. There are several stories about how he came to this idea, including one in which he looked directly at the Sun and determined that the painful sensation he experienced would be present all the time if the 'internal fire' were burning all the time. Thus, he argued, the source of the light necessary to generate the image was external.
Let us, for the sake of this argument, assume that what moves along these rays are particles of light: call them photons. The brightness of the beam is related to the number of photons traversing the ray in one second. In order to understand how an image of an object is formed, we'll need to consider what happens when one of these photons is reflected off a mirrored surface, as well as what happens at a lens. This will lead to the 'laws of optics' that are used for designing very complex optical instruments such as surgical microscopes and catheters for 'keyhole' surgery, as well as massive optical telescopes placed in orbit above the Earth for observing distant galaxies. The impact of these instruments on our life and our understanding of the world is immense.
What sort of properties do these particles of light possess? The usual sorts of attributes assigned to a particle are its position, its direction of travel, and its speed. For now, assume that it moves at the 'speed of light', without going into detail about what that actually is. The position of a photon might then specify the starting position of the 'ray', and the direction of the photon's motion would be the direction of the ray. The photon heads off from its starting point in this direction at the speed of light until it encounters the surface of the object.
Reflection
When it hits the object, the light is reflected. What happens is that the photon 'bounces' off the surface, changing its direction, but not its position on the surface. The situation is shown in Figure 6. The manner in which the direction is changed is specified by the 'law of reflection', discovered by Hero of Alexandria in the 1st century. It states that the angle of incidence (that is, the angle between the incoming ray direction and the direction perpendicular to the surface at the point of incidence) is equal to the angle of reflection (that is, the angle between the outgoing ray direction and the direction perpendicular to the surface at the point of incidence). There are some surprising consequences of this conceptually simple and yet extremely powerful law.
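In vector form, the law has a compact statement: the reflected direction is the incoming direction with its component along the surface normal reversed, r = d - 2(d·n)n. A minimal sketch of this standard restatement (the vector formula is not spelled out in the text):

```python
import numpy as np

def reflect(d: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Reflect ray direction d off a surface with normal n,
    using r = d - 2(d.n)n; the angle of incidence equals the
    angle of reflection, both measured from the normal."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A ray at 45 degrees to a horizontal mirror (normal pointing up):
incoming = np.array([1.0, -1.0]) / np.sqrt(2.0)
print(reflect(incoming, np.array([0.0, 1.0])))  # [0.707 0.707]: out at 45 degrees
```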
6. Rays of light reflecting off a. a flat mirror and b. and c. a curved mirror.
A familiar example is the mirror image of yourself: when you raise your right hand, your reflection raises its left. This change of handedness can be explained entirely by Hero's law of reflection. Figure 7 shows how a mirror generates a reflection with opposite handedness. The clockwise-pointing arrow is the object. Rays from each point on the arrow reflect off the mirror and are rearranged so that the arrow seen in the mirror points anticlockwise. You can use the same construction to show that a left-pointing arrow is seen in the mirror as a right-pointing arrow and vice versa, but an up-pointing arrow remains pointing up, and a down-pointing arrow points down in the reflection.
Imaging using reflection
If, instead of looking at your opposite-handed self in a flat bathroom mirror, you see your reflection in a polished spoon, then you see a distorted form of your image: magnified features on curved backgrounds. The concave front surface of the spoon magnifies the object and the convex back surface demagnifies it.
7. The change of handedness of objects reflected in a mirror.
Why is this? Since the formation of images is perhaps the most important application of optical instruments, ranging from contact lenses for vision correction to space telescopes for scientific discovery, it's worth understanding exactly how this happens.
Up to now, I've considered only a single ray of light from one point on the object. In fact, rays usually scatter in all directions from each object point. Consider a 'bundle' of such rays all coming from a single point on the object, forming a cone around the original ray. This cone of rays diverges as it moves away from the object, as shown in Figure 8. The rays hit the curved mirror surface at different points and therefore also at different angles of incidence. Thus they reflect in different directions, each one still satisfying Hero's law of reflection.
8. Image formation by a ray bundle using a curved mirror. The rays from one point on the object meet at one point in the image.
In fact they now form a converging cone, and eventually all meet at a single point. This is the 『image』 point of the original object point.
An image as we normally consider it is made up of multiple such image points arising from different object points. The size of the image is determined by the distance of the object from the mirror and the focusing power of the mirror, specified by the radius of curvature of the surface (a more strongly curved mirror has a smaller radius of curvature). The image can be bigger than the object when the object is closer to the mirror than the image is. The ratio of image size to object size is called the magnification.
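The text does not state the quantitative relation, but the standard mirror equation, 1/s_o + 1/s_i = 1/f with f = R/2 for a concave mirror, makes the point concrete. A sketch under that assumption (all numbers illustrative):

```python
def concave_mirror_image(s_o: float, R: float) -> tuple[float, float]:
    """Image distance and magnification for a concave mirror of
    radius of curvature R, using 1/s_o + 1/s_i = 1/f with f = R/2,
    and magnification m = -s_i / s_o (negative: image inverted)."""
    f = R / 2.0
    s_i = 1.0 / (1.0 / f - 1.0 / s_o)
    return s_i, -s_i / s_o

# Object 15 cm from a mirror with 20 cm radius of curvature (f = 10 cm):
s_i, m = concave_mirror_image(15.0, 20.0)
print(f"image at {s_i:.0f} cm, magnification {m:.1f}")  # 30 cm, -2.0
```

Here the object sits closer to the mirror than the image does, and the image comes out twice the size of the object, matching the rule stated above.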
The image-magnifying feature of a curved mirror was used by Newton to design a telescope, shown in Figure 9. His design has a remarkable property: it forms images of distant objects that are the same size for every colour (said to be free from 'chromatic aberration'). Newton cleverly used the property that the angle of a reflected ray for a fixed angle of the incident ray is the same no matter what the colour of the light. The image of each colour is therefore formed in the same place: all the colours register perfectly, guaranteed by physics.
9. Newton’s reflecting telescope enabled images without chromatic aberration.
Refraction
Newton invented this instrument because the telescopes used by contemporary pioneers such as Galileo Galilei and Johannes Kepler suffered seriously from chromatic aberration. The images formed by their telescopes always had a blurred coloured halo around the edge of the object. The reason for this is that they were designed using a different property of light rays: refraction, the phenomenon that light rays bend when they go from one transparent medium to another.
It is the refraction of light that produces the 'kink' observed in a pencil partially immersed in a bowl of water. This is described by the law of refraction, commonly known as Snell's law, after the Dutchman Willebrord Snell, an early 17th-century proponent. This law says that the angle the exiting ray makes with a line perpendicular to the surface is related to the angle of the incoming ray by the properties of the two media that form the interface, in our example the surface of the water. This is illustrated in Figure 10.

The particular property of the media that is relevant here is the 'refractive index'. The refractive index can be thought of as a measure of the optical 'stiffness' of the medium as experienced by a light ray. Light travels more slowly in a medium with a larger refractive index because the molecules of the medium are slightly more resistant to having their atoms and electrons moved by the light. It is like running in a pool of water. If the depth is very small, your legs can move easily and you can run fast. If the water is up to your knees, it is harder, because you have to work against the resistance of the water.

In fact, the law of refraction can be derived from this picture. Pierre de Fermat showed that when light goes from a point in one medium to a point in another medium, it takes the path of least overall time: it spends less time in the medium with the higher refractive index, where it travels more slowly, and more in the medium with the lower refractive index. This requires the light ray to bend at the interface between the two media, and Fermat's principle turns out to be entirely equivalent to Snell's law.
10. Refraction of a ray at the interface between air and water.
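Quantitatively, Snell's law reads n1 sin θ1 = n2 sin θ2, with both angles measured from the normal to the interface. A minimal sketch (the indices for air and water are standard values; the 45-degree ray is illustrative):

```python
import math

def snell_angle_deg(theta_in_deg: float, n1: float, n2: float) -> float:
    """Refracted angle from Snell's law: n1*sin(theta1) = n2*sin(theta2)."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# A ray in air (n = 1.00) entering water (n = 1.33) at 45 degrees:
print(f"{snell_angle_deg(45.0, 1.00, 1.33):.1f} deg")  # ~32.1 deg, bent towards the normal
```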
Imaging using lenses
Now, just as a curved reflecting surface can form an image of an object, so can a curved transparent surface. How this happens is shown in Figure 11. A bundle of rays from a point on the object is brought to a focus at the image. Notice the shape that does this: it has the same cross section as a lentil. This is the origin of the word 'lens'.
Lenses are ubiquitous in image-forming devices, from your eyes to mobile phone cameras to surgical microscopes. Imaging instruments have two components: the lens itself, and a light detector, which converts the light into, typically, an electrical signal. In the case of your eyes, this is the retina, whereas for the mobile phone it is an array of minute pieces of silicon that form solid-state light sensors.
11. Image formation by a ray bundle using a lens.
The lenses in each of these devices are different, of course, but the basic principle is the same for each. In every case the location of the lens with respect to the detector is a key design parameter, as is the focal length of the lens, which quantifies its 'ray-bending' power. The focal length is set by the curvature of the surfaces of the lens and its thickness. More strongly curved surfaces and thicker materials are used to make lenses with short focal lengths, and these are usually used in instruments where a high magnification is needed, such as a microscope.
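For a thin lens, the dependence of focal length on surface curvature is captured by the lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2). The text only states the qualitative trend, so treat this as a standard supplement (numbers illustrative):

```python
def thin_lens_focal_length(n: float, R1: float, R2: float) -> float:
    """Lensmaker's equation for a thin lens: 1/f = (n - 1)(1/R1 - 1/R2).
    Sign convention: R > 0 when the surface bulges towards the incoming light."""
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

# Symmetric biconvex glass lens (n = 1.5) with 10 cm radii on both faces:
print(f"f = {thin_lens_focal_length(1.5, 10.0, -10.0):.1f} cm")  # 10.0 cm
# Halving the radii (more strongly curved surfaces) halves the focal length:
print(f"f = {thin_lens_focal_length(1.5, 5.0, -5.0):.1f} cm")    # 5.0 cm
```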
Because the refractive index of the lens material usually depends on the colour of light, rays of different colours are bent by different amounts at the surface, leading to a focus for each colour occurring in a different position. This gives an image that has 'haloes' of different colours around it. For example, only one colour may be properly in focus at a particular detector plane; the others will be out of focus, and form the halo. Whether this chromatic aberration is an important effect or not depends on the particular application.
One of the most familiar, and indeed most important, image-forming instruments of this type is the eye. It consists of a front refracting surface (the cornea) and an adjustable lens, which changes shape as you focus on things at different distances. These elements form images on the retina at the back of the eye.
Historically, the formation of images by the eye was of great interest, since an experiment by Descartes (see Figure 12) showed that the image of an upright object was upside down. Of course, we don't perceive the object in this way, so it was clear that the brain undertakes some remarkable processing between the raw retinal signals and the perception of the external world.
Optical instruments
As many of us are all too aware, the ability of the eye to form high-quality images (sharp, undistorted, and in colour) can degrade as we age. One of the earliest applications of optical instruments was developed as an aid to sight under such circumstances. Eyeglasses were perhaps the first optical technology, purportedly invented by Roger Bacon, the 'mad friar' of Oxford, in the 13th century.
The corrective elements are often simple lenses, placed either in a frame at some distance (a few millimetres) from the cornea (the front surface of the eyeball), or 'contact lenses' placed, as the name suggests, in contact with the cornea. In both cases the imaging system is compound; that is, it consists of several elements: the external lens, the cornea, and the internal ocular lens. This form provides the degrees of freedom needed to correct most kinds of vision, by enabling the external lens to compensate for the imperfections of the internal lens. Correction can also be achieved by directly altering the shape of the front surface of the eye by laser surgery. One approach, laser-assisted in situ keratomileusis (LASIK), uses the laser to ablate part of the surface of the cornea. This changes its curvature, thereby altering its focusing power, and thus the image-forming capabilities of the eye.
Many other image-forming instruments work on very similar principles to the eye. The camera of a mobile phone, for instance, has a lens near the surface of the phone and a silicon-based photodetector array inside the device. The mobile phone lens is often very small, and yet must provide images of sufficient quality that they are intelligible, and thus make sense when they are posted on Facebook. This requires that both the detector array and the imaging system are adequate to the task of producing a high-quality image. The quality of the image depends on two things: the size and number of detectors in the array, and the ability of the optical system to create a sharp, undistorted image with all colours properly registered; that is, an image free from 'aberrations'.
12. Descartes' experiment to show the image formed by an eye is upside down.
The specification of the number of 'pixels' of the detector array is often used as a proxy for the quality of the image. A 24 megapixel camera (one in which the detector array contains 24 million sensors) is often considered better than an 8 megapixel one. A pixel can be thought of as the size of the image of a point object. If it is possible to resolve only a small number of points, because there are only a few elements in the detector, then it is hard to tell much about the object. The more pixels, then, the better. But only if the imaging system can produce a point image that is smaller than one detector element.
Limits to imaging
In the 19th century, the German scientist Ernst Abbe devised a simple rule for the minimum size of any image that was applicable to all imaging systems then known. Abbe’s criterion says that the size (S) of the image of a point object is proportional to the wavelength (λ) of the light illuminating the object multiplied by the focal length (f) of the lens and divided by the lens diameter (D):
S = 1.22 × λf/D.
Thus, lenses with a big diameter and a short focal length will produce the tiniest images of point-like objects. It's easy to see that about the best you can do in any lens system you could actually make is an image size of approximately one wavelength. This is the fundamental limit to the pixel size for lenses used in most optical instruments, such as cameras and binoculars.
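Plugging numbers into Abbe's criterion shows why 'approximately one wavelength' is the practical floor. A quick check (lens dimensions illustrative):

```python
def abbe_spot_size_nm(wavelength_nm: float, f: float, D: float) -> float:
    """Abbe's criterion from the text: S = 1.22 * wavelength * f / D."""
    return 1.22 * wavelength_nm * f / D

# Green light (550 nm) through a very 'fast' lens with f equal to D:
print(f"{abbe_spot_size_nm(550.0, 25.0, 25.0):.0f} nm")  # ~671 nm, about one wavelength
# A more typical f/2 lens (focal length twice the diameter):
print(f"{abbe_spot_size_nm(550.0, 50.0, 25.0):.0f} nm")  # ~1342 nm
```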
Designing and building imaging systems that can deliver high-quality images has been at the heart of many important applications of optics. Microscopes, for example, are used in applications from biological research to surgery. The earliest microscopes used very simple lenses—small, nearly spherical polished glass shapes that provided early experimenters like Robert Hooke in the 17th century with the means to explore new features of the natural world, too small to be seen by the unaided eye. His drawing of a flea, shown in Figure 13, was a revelation of the power of technology to enable new discovery.
Modern research microscopes are much more sophisticated devices. They consist of a multi-element compound lens that can form images with pixels very close to the wavelength of the illumination light, just at the Abbe limit. Figure 13 also shows an example of what is possible using a modern imaging microscope. It is a composite image of the nervous system of a fruit-fly larva, about to hatch, made by viewing light-emitting proteins located in the cells.
Abbe's criterion applies to all optical systems in which the image brightness is proportional to the brightness of the object. These are called linear systems. But it's possible to go beyond this limit by means of nonlinear systems, in which the image brightness is proportional to the square of, or an even more complicated function of, the object brightness. A fuller explanation of these effects requires knowing a bit more about the wave model of light, which is the subject of Chapter 3.
Optical imaging systems of similar properties and complexity are used in another imaging application—the making of computer chips. Individual electronic circuit elements are extremely tiny.
13. Hooke's diagram of a flea observed by means of an early microscope (top), and the nervous system of a fruit-fly larva (bottom) taken with a modern fluorescence microscope.
A wire connecting two transistors on a chip may have a diameter of only 250 nanometres (nm). (A nanometre is one billionth of a metre, or 10^-9 m. For comparison, a human hair is approximately 10,000 nm in diameter.) The complex array of devices and connections is laid out on a silicon wafer by means of a process called lithography. Essentially, the chip layout is drawn at a large enough scale to be visible to the human designers, then a demagnified image is projected on to the chip. The image is etched into a surface coating on the wafer, and a series of chemical processes then maps the image into real devices. The imaging system must be able to provide extraordinary resolution in the image, with pixel sizes of the order of the line size of the device. Maintaining this resolution over an entire wafer is a real challenge, requiring many lens elements properly designed to reduce all aberrations to an absolute minimum. An example of such a lens is shown in cross section in Figure 14, showing the multiplicity of lens elements and ray paths.
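The chapter's own Abbe criterion shows how demanding this is: to print 250 nm lines, the ratio f/D must be close to one even with deep-ultraviolet illumination. A quick estimate (the 193 nm wavelength is a common lithography choice, not a figure from the text):

```python
# Required f/D for a target feature size, from S = 1.22 * wavelength * f / D:
wavelength_nm = 193.0   # deep-UV source (illustrative assumption)
feature_nm = 250.0      # wire width quoted in the text
f_over_D = feature_nm / (1.22 * wavelength_nm)
print(f"required f/D: {f_over_D:.2f}")  # ~1.06: an exceptionally 'fast' lens
```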
At the other extreme, both ground- and space-based telescopes for astronomy are very large instruments with relatively simple optical imaging components, often consisting of just one curved reflecting surface and a simple 'eyepiece' to adjust the rays so as to make best use of the available detectors. The distinctive feature of these imaging systems is their size. The most distant stars are very, very faint. Hardly any of their light makes it to the Earth. It is therefore very important to collect as much of it as possible. This requires a very big lens or mirror, several tens of metres or more in diameter. It is not practical to build lenses of this size, hence the ubiquity of mirrors in large telescopes. It is also necessary to look at distant stars for a long time in order to gather enough light to form an image. And this leads to another problem for ground-based telescopes: the atmosphere is not static. It changes in density with wind, temperature, and moisture. These fluctuations tend to make rays deviate from their course from star to telescope, causing the star to 'twinkle' as its light is deflected randomly on to and off the detector by atmospheric turbulence.
One way to deal with this problem is to put the telescope outside the atmosphere, in space. The Hubble Space Telescope is an example. It produced spectacular images of distant stars, galaxies, and nebulae, showing extraordinary structures and movement in the far reaches of space. However, optical engineers have in the past two decades devised a clever way to deal with this problem for ground-based telescopes operating with visible light. What they do is to make the telescope mirrors in segments, the tilt of each segment being adjustable. Thus it is possible to 'steer' the rays hitting different parts of the telescope mirror so that they all hit the detector. If you can measure the deviation that a ray experiences as it traverses the atmosphere, then you can configure the mirror to compensate for that deviation. And that's what the engineers do. They measure how light from a guide star, an artificial light source in the upper atmosphere, is distorted, and use that information to adapt the tilt of the mirror's segments. In this way images right at the Abbe limit can be produced. Space telescopes are still needed, though, to probe the wavelength ranges, such as X-rays and UV, that are absorbed by the atmosphere, and several missions for new ones are planned by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA).
14. Section of a lens used for photolithography of computer chips. It consists of more than twenty different lens elements, and produces images of 500 nm in size using light of less than half that wavelength.
Metamaterials and super lenses
For many years, optical scientists have been fascinated by the question of what makes a good optical system. Is there a lens that can form a perfect image of an object? This question has intrigued many great physicists, from James Clerk Maxwell in England in the 19th century to Victor Veselago in the USSR in the 20th century. Veselago thought about materials that behave in strange ways, in which light bends in a way opposite to that predicted by Snell's law. Snell's law is based on the positive refractive indices that are found in common 'normal' materials. Veselago considered materials with a 'negative' refractive index. Such materials can be made up of tiny structures that are each less than a wavelength of light in size. This kind of special construction gives 'metamaterials' their unusual optical properties. In particular, refraction takes place at the interface between normal materials and metamaterials such that the light rays bend in the opposite direction with respect to the interface than they would between two normal materials.
The strange refractive indices of these materials can be engineered to bend light rays in all directions, allowing the incoming rays which would normally be scattered by the object to instead be guided around it. Indeed, British physicist Sir John Pendry showed that it is possible to build an invisibility cloak using these designer materials.
Another of the unusual properties metamaterials possess is the ability to make perfect images of objects that are very close to a slab of the metamaterial. Even a flat surface is sufficient to make a lens, which makes them suited to viewing very tiny objects, so-called nanostructures, because they have sizes of the order of tens of nanometres. This is the 21st-century version of Hooke's technology, and will perhaps unleash a similarly fruitful era of discovery.
All of the imaging systems described in this chapter make two-dimensional renderings of objects. That’s normally the way we experience and think about images—as flat pictures. But what if it were possible to conceive of a system that could make three-dimensional images? Remarkably it is, but that requires a deeper view of light itself, which we will consider in Chapter 3.