Linear optics of the eye and optical systems: a review of methods and applications
  1. Tanya Evans,
  2. Alan Rubin
  1. Department of Optometry, University of Johannesburg, Doornfontein, South Africa
  1. Correspondence to Dr Tanya Evans; tevans{at}


The purpose of this paper is to review the basic principles of linear optics. A paraxial optical system is represented by a symplectic matrix called the transference, with entries that represent the fundamental properties of a paraxial optical system. Such an optical system may have elements that are astigmatic and decentred or tilted. Nearly all the familiar optical properties of an optical system can be derived from the transference. The transference is readily obtainable, as shown, for Gaussian and astigmatic optical systems, including systems with elements that are decentred or tilted. Four special systems are described and used to obtain the commonly used optical properties including power, refractive compensation, vertex powers, neutralising powers, the generalised Prentice equation and change in vergence across an optical system. The use of linear optics in quantitative analysis and the consequences of symplecticity are discussed.

A systematic review produced 84 relevant papers for inclusion in this review on optical properties of linear systems. Topics reviewed include various magnifications (transverse, angular, spectacle, instrument, aniseikonia, retinal blur), cardinal points and axes of the eye, chromatic aberrations, positioning and design of intraocular lenses, flipped, reversed and catadioptric systems and gradient indices. The optical properties are discussed briefly, with emphasis placed on results and their implications. Many of these optical properties have applications for vision science and eye surgery and some examples of using linear optics for quantitative analyses are mentioned.

  • optics and refraction
  • vision

Data availability statement

Data sharing not applicable as no datasets generated and/or analysed for this study.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial.



This review introduces the reader to linear optics, which, at its heart, is the study of paraxial optical systems whose elements may be astigmatic, tilted and decentred, such as the eye. Linear optics yields explicit formulae for optical properties and gives insight into the relationships and dependencies among those properties. It helps in understanding clinical phenomena in astigmatic heterocentric optical systems, such as the eye, and serves as a first step in the design process. Linear optics is based on first-order optics and gives information about the position of the image and the geometry of the blur patch,1 whereas higher-order aberrations and exact ray tracing techniques give information about the quality of an image. Together, defocus and astigmatism make the greatest contribution to image quality and are the optical components typically measured and corrected by conventional lenses.

Linear optics makes use of two concepts, the ray state or ray state vector and the transference of an optical system, also known as the system matrix,2–4 7 ray transfer matrix,5 ray transference6 or ABCD matrix.5 8 9 The transference is a symplectic matrix that represents all the fundamental first-order optical properties of an optical system.6 10 For a Gaussian optical system that has elements that are centred and rotationally symmetric, the order of the transference is 2 × 2 11–14; where the elements are astigmatic, the transference is 4 × 4,6 15 and where, in addition, the elements are decentred or tilted, such as in the eye, a 5 × 5 augmented transference can be used.2 16

The ray vector6 17 or ray state18 is a matrix that defines the ray in terms of its transverse position and reduced inclination at a transverse plane. The transference operates on the incident ray state to provide the emergent ray state from a system. The arrangement of the entries of the transference or system matrix and corresponding ray state vector may differ2–4 6 7 16 19–21 from the arrangement in this review.

The Results section explains how to define an optical system, how to obtain the transference and how the transference operates on the incident ray state to obtain the emergent ray state. Four special systems are presented, leading to a discussion on how to obtain the most commonly used derived properties. In linear optics, familiar optical properties such as power, front-vertex and back-vertex powers, refractive compensation, thick lens power, transverse and angular magnification, neutralising powers and prismatic effect are derived from the transference and are generalised to three-dimensional and four-dimensional concepts for thick systems. The transference is a matrix that belongs to the real symplectic Lie group,22–24 which poses restrictions for quantitative analysis of linear optical systems because transferences do not constitute a linear (or vector) space; for example, they cannot be added. Options that are currently available for quantitative analysis of optical systems using transformed transferences are mentioned.

The Discussion reviews a range of topics where linear optics has been applied, including magnification, referred apertures, cardinal points, axes of the eye, chromatic aberrations, flipped, reversed and catadioptric systems, Gradient Indices (GRINs) and intraocular lenses (IOLs). Studies using linear optics to analyse their data are mentioned. Most of the Discussion is summarised and given without proof; however, the proofs are available in the relevant referenced papers.

Methods

Relevant papers were sought by searching in Scopus, MEDLINE, PubMed, Web of Science and AOSIS (for studies from South Africa). Key words used included linear optics, transference, eye, astigmatism and system matrix in varying combinations. Papers were included if they made use of linear optics and the transference or system matrix, with application to the human eye, and were published in English. A total of 84 papers were obtained as a result of this systematic search. An additional 34 references were included that were not identified by the systematic search but were considered relevant.

Results: linear optics

An optical system S is bound by two transverse planes, an entrance plane T_0 and an exit plane T, and has a longitudinal axis Z, shown in figure 1. Such an optical system may consist of a series of elementary systems, namely homogeneous gaps and refracting or reflecting surfaces that may be astigmatic, tilted and decentred. Together, T_0, T and Z define the optical system.6 15 25 The ordered set {Y_1, Y_2, Z} defines a left-handed coordinate axis system, as shown in figure 1.

Figure 1

A system S is bound by an entrance plane T_0 and an exit plane T. The left-handed coordinate axis system is indicated with longitudinal axis Z and horizontal and vertical axes Y_01 and Y_02 in T_0 and Y_1 and Y_2 in T. A ray, with incident ray segment R_0, is incident on S at T_0 with transverse position y_0 and inclination a_0. The ray, with ray segment R, emerges from S at T with transverse position y and inclination a. The refractive index is n_0 upstream and n downstream of the system. Refracting elements of the system are not shown.

A linear optical system with centred elements that may be astigmatic is represented by a 4 × 4 transference S as

S = \begin{pmatrix} A & B \\ C & D \end{pmatrix}   (1)

where Harris6 15 26 27 defines the fundamental first-order properties of the system as A the dilation, B the disjugacy, C the divergence and D the divarication. Each of A, B, C and D is a 2 × 2 submatrix. A and D are unitless, while B is in units of length and C is in units of inverse length, for example, dioptres. The fundamental properties are strictly properties of the system itself and not properties of light, such as vergence.6 15

There are two types of elementary system, the thin system and the homogeneous gap. A thin system may be a refracting surface or a thin lens. The transference of an astigmatic refracting surface or thin lens with power F is19 28

S = \begin{pmatrix} I & O \\ -F & I \end{pmatrix}   (2)

sometimes referred to as the refraction matrix.7 29 30 F is a 2 × 2 symmetric matrix, defining the astigmatic dioptric power.3 4 31–37 O and I are 2 × 2 null and identity matrices, respectively.

The transference of a homogeneous gap of width z and refractive index n is

S = \begin{pmatrix} I & ζI \\ O & I \end{pmatrix}   (3)

where ζ = z/n is the reduced width. The transference of this elementary system is sometimes referred to as the translation matrix.7 29 30

A compound system is made up of optical system S_1 with transference S_1, followed by systems S_2, S_3, etc. The transference of the compound system S with m juxtaposed optical systems16 38 is given by

S = S_m ⋯ S_2 S_1   (4)

Note that multiplication is in reverse order. Early versions used the inverse of the transference, avoiding the necessity of multiplying in reverse and referred to a matrix of the second order for Gaussian or centred systems and a matrix of the fourth order for astigmatic systems.39–41
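As a minimal numeric sketch of equation 4, the transference of a Gaussian thick lens can be assembled by multiplying the elementary transferences in reverse order. The surface powers and thickness below are illustrative assumptions, not values from any of the reviewed papers.

```python
import numpy as np

def refraction(F):
    """2x2 transference of a thin system of power F (equation 2)."""
    return np.array([[1.0, 0.0], [-F, 1.0]])

def gap(zeta):
    """2x2 transference of a homogeneous gap of reduced width zeta (equation 3)."""
    return np.array([[1.0, zeta], [0.0, 1.0]])

# A thick lens: front surface, gap, back surface. Equation 4: multiply in
# reverse order -- the last element traversed appears on the left.
F1, F2, tau = 8.0, -3.0, 0.005 / 1.5   # assumed powers (D) and reduced thickness (m)
S = refraction(F2) @ gap(tau) @ refraction(F1)

# Symplecticity: a 2x2 symplectic matrix has unit determinant.
print(np.linalg.det(S))   # ~1.0
```

The divergence entry S[1, 0] of the product is −(F1 + F2 − F2·τ·F1), anticipating the Gullstrand relation discussed later in the review.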

A paraxial ray is defined by its transverse position y and reduced inclination α at transverse plane T as

ρ = \begin{pmatrix} y \\ α \end{pmatrix}   (5)

y is a 2 × 1 matrix with Cartesian coordinates y = (y_1 y_2)^T representing the position of the ray in the transverse plane T relative to Z. α is the reduced inclination; α = na, where a is a 2 × 1 matrix representing the inclination of the ray at T, relative to Z, and n is the refractive index of the medium. Superscript T is the matrix transpose. Figure 1 shows the ray state ρ_0 at incidence and ρ at emergence.

For a ray traversing system S, the ray states at incidence ρ_0 and emergence ρ are related through the basic equation of linear optics3 4 6 15 16

ρ = Sρ_0   (6)

Substituting equations 1 and 5 into equation 6 and multiplying we obtain the pair of matrix equations6 28

y = Ay_0 + Bα_0   (7)


α = Cy_0 + Dα_0   (8)
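A single ray can be traced with the basic equation of linear optics; the Gaussian thin-lens power and incident ray state below are illustrative assumptions for the sketch.

```python
import numpy as np

# Trace one ray through a thin lens using rho = S rho0 (equation 6).
F = 5.0                        # assumed thin-lens power (D)
S = np.array([[1.0, 0.0], [-F, 1.0]])

y0, alpha0 = 0.01, 0.0         # incident: 10 mm above the axis, travelling parallel
rho0 = np.array([y0, alpha0])
rho = S @ rho0                 # emergent ray state

# Equations 7 and 8: y = A y0 + B alpha0 and alpha = C y0 + D alpha0.
print(rho)   # y = 0.01 m, alpha = -0.05: the ray now converges toward the axis
```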

The Greek word στίγμα (stigma) means a spot or point. Harris42 defines a stigmatic system as one where, ‘through the system, every object point maps to an image point’. An astigmatic system is one that is not stigmatic. While a Gaussian system is stigmatic, not all stigmatic systems are Gaussian.42 43 That is, a stigmatic system may be made up of astigmatic elements.

A Gaussian system is defined as one where all surfaces are rotationally symmetric about an optical axis.15 44 In a Gaussian system all the fundamental properties are scalar. Le Grand39 referred to the fundamental properties as Gauss’s coefficients.41 Consequently, the transference simplifies to a 2 × 2 matrix with scalar entries A, B, C and D, the ray state simplifies to the 2 × 1 matrix ρ = (y α)^T and equations 2 and 3 simplify to \begin{pmatrix} 1 & 0 \\ -F & 1 \end{pmatrix} and \begin{pmatrix} 1 & ζ \\ 0 & 1 \end{pmatrix}, respectively.44 45 Applying the basic equation of linear optics (equation 6), we obtain the pair of scalar equations

y = Ay_0 + Bα_0   (9)


α = Cy_0 + Dα_0   (10)

The 2 × 2 and 4 × 4 transferences belong to the real symplectic Lie group and are closed under multiplication.22 Symplectic matrices have unit determinant and are therefore invertible.10 28 38 45–47 Harris10 provides a detailed summary of symplecticity and its implications for optical systems. Because of symplecticity, the four fundamental properties are related and not independent.6 10

Symplectic matrices are not closed under addition or multiplication by a scalar. The implication is that symplectic matrices do not span a vector space, so there is no such thing as a transference space.6 Symplectic matrices do not lend themselves to quantitative analysis; for example, averaging a set of transferences does not give the average of a set of optical systems, or eyes.48 However, quantitative analysis of transferences is possible by applying any of a number of available transforms.48–53
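The non-closure under addition is easy to demonstrate numerically; the two Gaussian systems below are illustrative assumptions for the sketch.

```python
import numpy as np

# Two illustrative Gaussian transferences: a 20 mm gap followed by a thin 2 D
# lens, and a thin 6 D lens. Each is symplectic (unit determinant); their
# entry-wise arithmetic average is not.
S1 = np.array([[1.0, 0.0], [-2.0, 1.0]]) @ np.array([[1.0, 0.02], [0.0, 1.0]])
S2 = np.array([[1.0, 0.0], [-6.0, 1.0]])

print(np.linalg.det(S1), np.linalg.det(S2))  # both 1.0
M = 0.5 * (S1 + S2)                          # naive arithmetic average
print(np.linalg.det(M))                      # 1.02, not 1: M is not a transference
```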

Thus far we have looked at the 2 × 2 transference of a Gaussian system and the 4 × 4 transference of an optical system with astigmatic elements. When a system has elements that are tilted or decentred, Harris2 16 defines a 4 × 1 partitioned matrix or system vector

δ = \begin{pmatrix} e \\ π \end{pmatrix}   (11)

which accounts for all the effects of prism, tilt and decentration. The 2 × 1 submatrices are e, the transverse translation, and π, the deflectance; along with A, B, C and D they are also fundamental first-order properties of the system.2 25 45 e has units of length and π is unitless or can be thought of as being in radians.

Harris16 showed that it is possible to combine all six fundamental properties into one 5 × 5 augmented transference as

T = \begin{pmatrix} A & B & e \\ C & D & π \\ o^T & o^T & 1 \end{pmatrix}   (12)

where o is a null matrix (2 × 1, or 4 × 1 when T is partitioned as S and δ) and the fifth row is trivial.10 16 54 The optical nature of a first-order system can be completely characterised by the six fundamental properties. The converse is also true; that is, given a matrix of the type in equation 12 in which submatrix S is symplectic, an optical system can be constructed, in principle, with linear optical character represented by the given matrix.54 The 5 × 1 augmented ray state is given as

γ = \begin{pmatrix} y \\ α \\ 1 \end{pmatrix}   (13)

and similar to equation 6 the basic equation generalises to

γ = Tγ_0   (14)

To obtain the transference of a compound system, equation 4 generalises to2 16

T = T_m ⋯ T_2 T_1   (15)

where the elementary systems have transferences

T = \begin{pmatrix} I & O & o \\ -F & I & π \\ o^T & o^T & 1 \end{pmatrix}   (16)

for a refracting surface with astigmatic power F and prismatic power π, and T = \begin{pmatrix} I & ζI & o \\ O & I & o \\ o^T & o^T & 1 \end{pmatrix} for a homogeneous gap of reduced width ζ.2 55 Substituting the transference T and the ray states γ_0 and γ into equation 14, we obtain the two matrix equations

y = Ay_0 + Bα_0 + e   (17)


α = Cy_0 + Dα_0 + π   (18)

These two equations form the basis for many of the derivations and relationships in linear optics. By definition, symplectic matrices are of order 2n × 2n; the augmented transference is instead referred to as augmented symplectic, which has implications for quantitative analyses.10 54
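An augmented trace can be sketched numerically. The example assumes a thin lens of power F decentred by c, for which the deflectance works out to π = Fc (the effect Prentice's equation describes); the powers and decentration are illustrative, not from the reviewed papers.

```python
import numpy as np

# 5x5 augmented transference (equation 16) of a thin astigmatic lens
# decentred by c. For such a lens alpha_out = alpha_in - F(y - c), so the
# deflectance submatrix is pi = F c (an assumption derived here, not quoted).
F = np.diag([5.0, 3.0])          # assumed principal powers 5 D and 3 D
c = np.array([0.003, 0.0])       # decentred 3 mm horizontally
I2, O2 = np.eye(2), np.zeros((2, 2))

T = np.block([
    [I2, O2, np.zeros((2, 1))],
    [-F, I2, (F @ c).reshape(2, 1)],
    [np.zeros((1, 4)), np.ones((1, 1))],
])

# Trace an axial ray (y0 = o, alpha0 = o) with the augmented state (equation 13).
gamma0 = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
gamma = T @ gamma0
print(gamma[2:4])   # emergent inclination = pi = F c = [0.015, 0.0]
```

Equation 18 confirms the result: with y_0 and α_0 null, the emergent inclination is simply the deflectance π.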

Four special systems

The four 2 × 2 fundamental properties are related through symplecticity and each is a modification of a familiar optical property.6 10 Consequently, four special systems are defined, occurring when each of the four fundamental properties in turn is null.15–17 These four special systems are shown in figure 2.

Figure 2

The four special systems for optical systems. (A) Exit-plane focal, (B) conjugate, (C) afocal and (D) entrance-plane focal.106

Setting A = O reduces equation 7 to y = Bα_0, which implies that a pencil of rays of reduced inclination α_0 entering an optical system at T_0 will focus to a point in the exit plane T. Such a system is referred to as exit-plane focal. Where the system is an eye, A = O represents emmetropia.40 Harris6 refers to A = O as the ‘condition for emmetropia’. Consequently, Harris6 regards A as the ametropia of the eye, such that, when A ≠ O, the eye is ametropic. Where A is a scalar matrix, that is A = aI, a > 0 represents a hyperopic eye and a < 0 a myopic eye. Where A is not a scalar matrix the eye is astigmatic.

When B = O, equation 7 simplifies to y = Ay_0 and rays from a point with position y_0 at T_0 are conjugate with a point with position y at T. For a conjugate system, A represents the transverse magnification.6 15 For a Gaussian system y = Ay_0, where A is the familiar transverse magnification in a conjugate system. The transference for a conjugate system is also referred to as the object–image matrix.4 56 An example is the system from a reading plane to the retina of an eye accommodated at the reading plane.

For C = O, equation 8 becomes α = Dα_0. A pencil of parallel rays entering an afocal system will exit parallel. D represents an angular magnification.6 For a Gaussian system D is the familiar angular magnification α/α_0.15

Finally, if D = O, equation 8 becomes α = Cy_0 and rays entering the system from a point with position y_0 on T_0 will emerge from the system all with the same inclination α. Such a system is entrance-plane focal.6 This is the situation, for example, in ophthalmoscopy: the entrance plane is the retina and the exit plane is the cornea of an emmetropic eye, light traverses the eye in the reverse sense and the optical system is a reversed eye.

Some of these special systems are used to obtain certain familiar derived properties. Each of A, B, C and D, when applied in this manner, is a generalised magnification.57

Derived properties

There are a number of familiar optical properties of paraxial systems that are used regularly, such as power, refractive compensation, front- and back-vertex powers, neutralising powers, cardinal points, axes of the eye, etc. These properties can be derived from the transference and defined in terms of the fundamental properties. Except for the optical axis, which needs the length of the system in addition to the fundamental properties, all of the derived properties can be derived directly from the transference.58

The derived properties are obtained by making use of the four special systems or the two matrix equations 7 and 8. We look at some of the familiar derived properties that are used in optics.

The dioptric power of optical systems in general was first defined by Harris17 28 as

F = −C   (19)

implying that power is a first-order optical property derived from the transference. For Gaussian systems, this simplifies to the scalar F = −C. For refracting surfaces and thin lenses, F is symmetric; however, for thick astigmatic systems C and, hence, F may be asymmetric,59 a consequence of matrix multiplication, as can be observed from equations 4 and 15.

A thin lens of power F_0 may be placed immediately in front of the cornea such that it fully compensates the eye for distance vision. Together, the eye and refractive compensation create an exit-plane focal system, the condition for compensation,6 as shown in figure 2A. As an example, we derive the corneal-plane refractive compensation. If the transference of the eye is S, then the transference of the compensating thin lens followed by the eye has dilation A − BF_0. We set A − BF_0 = O and solve for F_0 to obtain6

F_0 = B^{-1}A   (20)
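As a scalar sketch of this result, the corneal-plane refractive compensation of a Gaussian reduced eye can be computed from its transference; the reduced-eye biometry below is an illustrative assumption, not data from the reviewed papers.

```python
import numpy as np

# Refractive compensation F0 = B^{-1} A (scalar form F0 = A/B) for a reduced
# eye: a single 60 D refracting surface followed by a homogeneous gap.
F_eye = 60.0                 # assumed refracting power of the reduced surface (D)
z = 0.02322                  # assumed axial length (m), slightly elongated
n = 4.0 / 3.0
zeta = z / n                 # reduced width of the gap

# Transference of the eye: refraction at the cornea, then the gap to the retina.
S = np.array([[1.0, zeta], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [-F_eye, 1.0]])
A, B = S[0, 0], S[0, 1]

F0 = A / B
print(round(F0, 2))          # about -2.58 D: the elongated eye is myopic
```

Note that A < 0 here, consistent with the sign interpretation of the dilation for a myopic eye given above.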

The front-neutralising power of an optical system is obtained similarly. The transference of the system with a neutralising thin lens of power F_nF in front has divergence C − DF_nF. Neutralised systems are afocal, so we set C − DF_nF = O and solve to obtain the front-neutralising power F_nF = D^{-1}C. The front-vertex power is the negative of the front-neutralising power and is therefore given as60

F_F = −D^{-1}C   (21)

Similarly, the transference of the system with a back-neutralising lens of power F_nB has divergence C − F_nB A. This is also an afocal system, and so we set C − F_nB A = O and solve for the back-neutralising power F_nB = CA^{-1}. The back-vertex power is the negative of the back-neutralising power. The back-vertex power of an optical system with transference S is therefore given as3 4 19 21 60

F_B = −CA^{-1}   (22)
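A scalar sketch: for a Gaussian thick lens the vertex powers computed from the transference entries as −C/D and −C/A agree with the classic front- and back-vertex power formulas. The surface powers and thickness are illustrative assumptions.

```python
import numpy as np

# Vertex powers from the transference of a Gaussian thick lens.
F1, F2 = 8.0, -3.0            # assumed surface powers (D)
tau = 0.006 / 1.5             # assumed reduced thickness (m)

S = (np.array([[1.0, 0.0], [-F2, 1.0]])
     @ np.array([[1.0, tau], [0.0, 1.0]])
     @ np.array([[1.0, 0.0], [-F1, 1.0]]))
A, B, C, D = S.ravel()

F_back = -C / A               # scalar form of -C A^{-1}
F_front = -C / D              # scalar form of -D^{-1} C

# Classic textbook checks for a thick lens:
print(F_back, F1 / (1 - tau * F1) + F2)
print(F_front, F1 + F2 / (1 - tau * F2))
```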

Vertex powers are measures of vergence (discussed below) and the matrices are symmetric as is the case for neutralising powers.3 4 17

A thin lens has a transference of the form given by equation 2. Substituting into equation 6 and multiplying, we obtain the two equations y = y_0 and α = −Fy_0 + α_0, which can be rearranged to

p = Fy   (23)

where prismatic effect is defined as p = α_0 − α. This is the generalised form of Prentice’s equation.2 15 17 28 61 62

Two juxtaposed thin astigmatic lenses have transference given by equations 2 and 4, S = \begin{pmatrix} I & O \\ -(F_1 + F_2) & I \end{pmatrix}, and from equation 19 we obtain the familiar relationship F = F_1 + F_2.28

A thick lens is made up of an astigmatic refracting surface of power F_1, followed by a gap of reduced thickness τ and then a second astigmatic refracting surface of power F_2. The transference of the system is (equations 2 to 4) S = \begin{pmatrix} I & O \\ -F_2 & I \end{pmatrix} \begin{pmatrix} I & τI \\ O & I \end{pmatrix} \begin{pmatrix} I & O \\ -F_1 & I \end{pmatrix}; equation 19 gives the generalised Gullstrand equation for the power of a bitoric thick lens,17 19 59

F = F_1 + F_2 − F_2 τ F_1   (24)

The advantage of using linear optics lies in the simplicity of the ray tracing, using the ray state vector, from incidence to emergence through an optical system. Occasionally, however, it may be necessary to trace the vergence across the optical system. The vergence of light from an object point at incidence onto an optical system is L_0 = n_0 Z_0^{-1}, a 2 × 2 matrix, where Z_0 is the radius of curvature of the wavefront, measured from the wavefront to the object point. The vergence matrix L can be traced through the system using an augmented step-along method or linear ray optics approach.63 Alternatively, it can be traced through the system in terms of the transference. On emergence from the system, Harris28 64 gives the vergence as

L = (DL_0 − C)(A − BL_0)^{-1}   (25)

For a distant object the incident vergence is L_0 = O and equation 25 simplifies to64 L = −CA^{-1}, which is the back-vertex power, equation 22. When an object point is on the entrance plane T_0, L_0^{-1} = O and equation 25 simplifies to L = −DB^{-1}.
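A scalar sketch of tracing vergence through a compound system in one step. The relation used below, L = (DL0 − C)/(A − BL0), is written with signs chosen so that it reproduces the familiar step-along rules L → L + F at a thin system and L → L/(1 − ζL) across a gap; the powers and distances are illustrative.

```python
import numpy as np

def emergent_vergence(S, L0):
    """Emergent vergence from a 2x2 transference; scalar (Gaussian) sketch."""
    A, B, C, D = S.ravel()
    return (D * L0 - C) / (A - B * L0)

F, zeta = 5.0, 0.1
lens = np.array([[1.0, 0.0], [-F, 1.0]])
gap = np.array([[1.0, zeta], [0.0, 1.0]])

L0 = -2.0                                    # diverging light from 0.5 m upstream
# Step-along: refract at the lens, then transfer across the gap.
L_step = (L0 + F) / (1 - zeta * (L0 + F))
# One shot through the compound transference (gap after lens: gap @ lens).
L_sys = emergent_vergence(gap @ lens, L0)
print(L_step, L_sys)   # the two routes agree
```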

The principal meridians of an astigmatic wavefront are orthogonal and the vergence matrix is always symmetric,3 4 64 a consequence of symplecticity.64 For astigmatic vergence, the positions of the two line foci are given by a position matrix Z = nL^{-1}, in units of length. For Z a singular matrix there is a point or line focus. The eigenstructure37 of Z gives the positions z_1 and z_2 of the two line foci. z may be positive, negative, zero (a point or line focus) or undefined, usually interpreted as infinite. The eigenmeridians indicate the orientation of the two principal meridians and the line foci lie orthogonal to their corresponding eigenmeridians.65
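The eigenstructure calculation can be sketched for an assumed astigmatic vergence with principal powers 4 D and 2.5 D along meridians at 30° and 120° (illustrative values):

```python
import numpy as np

# Line-focus positions from the eigenstructure of the position matrix Z = n L^{-1}.
n = 1.0
a = np.radians(30.0)
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
L = R @ np.diag([4.0, 2.5]) @ R.T      # symmetric 2x2 vergence matrix (D)

Z = n * np.linalg.inv(L)               # position matrix (m)
z, V = np.linalg.eigh(Z)               # eigenvalues: line-focus positions
print(sorted(z))                       # 0.25 m and 0.4 m downstream
```

The columns of V are the eigenmeridians; each line focus lies orthogonal to its corresponding eigenmeridian, as stated above.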

Wavefront aberrations, including the Zernike polynomials for defocus, astigmatism, coma and spherical aberration, have been included in 2 × 2 radial and tangential ray transfer matrices.5 The parameters and shape of the wavefronts in terms of Zernike coefficients can be obtained in the same way as utilised in Shack-Hartmann wavefront sensor measurements.66 The aberrated wavefront propagates both forwards and backwards along its normal. A ray transfer matrix represents wavefronts characterised by Zernike coefficients and normalised for the given pupil radius.67

Quantitative analysis

Quantitative analysis is necessary for statistical and other purposes. For example, the question arises, ‘What is an average eye?’ Harris48 states, ‘by average, we mean having an optical character as a whole that is representative or central to the optical characters of eyes in a set of eyes’. Eyes are not merely the sum of their parts. Consider refractive error: two eyes with the same refractive error may have different ocular biometry and, consequently, different retinal image sizes. There is, therefore, more to the optics of an eye than its refractive error. The average of the refractive errors of two or more eyes is exactly that, the average of their refractive errors; similarly for the average of corneal powers and other components of the eye. The averages of the components of eyes do not necessarily represent the optical character of the eyes taken as a whole.48 50

Because of symplecticity, transferences do not constitute a linear space. This means that they cannot be added nor multiplied by a scalar. The transference, therefore, does not lend itself to conventional methods of quantitative analysis, including the calculation of an arithmetic average, and cannot be applied to calculate, for example, an average eye. Several transformations to vector spaces have been proposed. Initial attempts made use of four characteristic matrices,50 68 69 in particular the point characteristic P and the angle characteristic Q. This method was followed by the exponential-mean-log transference48 49 52 53 and the Cayley transform.13 23 51 70

There are four characteristic transforms. The point characteristic P and the angle characteristic Q are obtained by rearranging the entries of the 5 × 5 transference T (equation 12).50 68 69

The characteristic matrices are 5 × 5 matrices, each with a trivial bottom row which is omitted for brevity. Sometimes it is preferable to work with the 4 × 4 submatrix of P, ignoring the last row and column, and similarly for Q. This ignores the effects of tilt and decentration of the elements in the optical system. The entries of the 4 × 4 submatrix of P are in units of inverse length; however, the last row and column of the 5 × 5 P are unitless, while Q is in units of length for all entries. Each of the 4 × 4 characteristic matrices is symmetric50 and therefore has 10 independent entries. Similarly, the 5 × 5 characteristics have 14 independent entries, the bottom row always being trivial.50 The first M and second N mixed characteristics have mixed units and will not be discussed further. P and Q are inverses of each other.69 P and Q do not always exist; the characteristic matrices break down when the inverse of B or C does not exist, respectively, for P and Q.50 69 This is obviously not a problem for eyes. The implication is that P cannot be applied to conjugate systems and Q cannot be applied to afocal systems.50

The point characteristic defines a vector space, as does each of the other three characteristics. Thus, each of the characteristics lends itself to quantitative analysis.69 Obtaining the average of N optical systems requires each transference to be transformed to the desired characteristic matrix, averaged, and then transformed back to a transference.69 The first entry of P is the corneal-plane refractive compensation of an eye, given by equation 20, making P the characteristic of particular interest to ophthalmology and optometry.50 68

Another option for calculating the average of a set of eyes, or any optical systems, is the exponential-mean-log transference, given by \bar{S} = \exp\left(\tfrac{1}{N}\sum \log S_i\right), where N is the number of systems in the set.23 48 49 52 53 71 The calculation is easily performed using software such as MATLAB and the functions logm and expm. The principal logarithm of a symplectic matrix with no negative eigenvalues23 72 (unlikely for eyes73) is a Hamiltonian matrix, and the exponential of a Hamiltonian matrix is symplectic.23 53 An augmented symplectic matrix can be transformed into an augmented Hamiltonian matrix of the form \begin{pmatrix} A & B & e \\ C & -A^T & π \end{pmatrix}, where B and C are symmetric and the omitted bottom row is null. Of the 25 entries, 14 are independent. The units of a Hamiltonian matrix or transformed transference are the same as those of the transference. Hamiltonian matrices fulfil the requirements for a vector space47 and therefore the arithmetic mean of Hamiltonian matrices is Hamiltonian.23
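The MATLAB logm/expm recipe mentioned above can be sketched in Python with scipy.linalg; the two eye-like Gaussian transferences are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, logm

# Exponential-mean-log transference: Sbar = expm(mean of logm(S_i)).
def eye_transference(F, zeta):
    """Reduced-eye transference: refraction then a gap (illustrative)."""
    return np.array([[1.0, zeta], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [-F, 1.0]])

systems = [eye_transference(60.0, 1 / 60.0), eye_transference(62.0, 1 / 58.0)]
logs = [logm(S) for S in systems]
Sbar = expm(sum(logs) / len(logs))

print(np.linalg.det(Sbar))   # ~1: the exponential-mean-log is again symplectic
```

In contrast to the naive arithmetic average, the result here has unit determinant, so it can itself be interpreted as the transference of an (average) optical system.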

Yet another option for quantitative analysis also transforms the transference T to a Hamiltonian matrix, but this time making use of the Cayley transform.12 There are several Cayley transforms defined in the literature,23 46 51 however the one given as C(T) = (I − T)(I + T)^{-1} is its own functional inverse.13 24 46 70 The Cayley mean of N transferences is then C\left(\tfrac{1}{N}\sum C(T_i)\right).51 70
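A sketch of the self-inverse Cayley transform, checked numerically on an illustrative 2 × 2 symplectic transference: applying it twice recovers the original matrix, and the image is Hamiltonian (JH symmetric).

```python
import numpy as np

def cayley(S):
    """Cayley transform C(S) = (I - S)(I + S)^{-1}; its own functional inverse."""
    I = np.eye(S.shape[0])
    return (I - S) @ np.linalg.inv(I + S)

# Illustrative symplectic transference: a 20 mm gap then a thin 5 D lens.
S = np.array([[1.0, 0.02], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [-5.0, 1.0]])
H = cayley(S)

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(cayley(H), S))       # True: the transform is an involution
print(np.allclose(J @ H, (J @ H).T))   # True: H is Hamiltonian
```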

Change in the optical character of an eye is of obvious interest in eye surgery. This can be quantified as a difference in one of the vector spaces. It is possible to define 10-dimensional spaces74 for the 4 × 4 characteristic and Hamiltonian matrices and 14-dimensional spaces75 for their 5 × 5 augmented counterparts. A coordinate vector is obtained for each of these spaces that is 10 × 1 or 14 × 1, respectively. An inner-product space needs to be dimensionally homogeneous. The coordinate vectors of the angle characteristics have physical dimension length (L) and that of the 4 × 4 point characteristic has dimension inverse length (L^{-1}), and therefore these three coordinate vectors define inner-product spaces.74 75 The remaining coordinate vectors have mixed units and cannot define an inner-product space.75 These inner-product spaces allow one to calculate distances (magnitudes) and angles in the space between two optical systems or eyes.74 75

A sample variance–covariance matrix can be obtained from the coordinate vectors of Hamiltonian matrices, because they define a linear or vector space, and may be 10 × 10 or 14 × 14, accordingly.71 76 77 The exponential-mean-log transference as well as the variance–covariance matrix were used to study multiple measurements of the cornea of a single participant,71 illustrating the application of quantitative analyses to optical systems in general.

While it is not possible to obtain the average of optical systems using their transferences directly, by using one of the transforms into linear space it becomes possible to perform quantitative analyses of optical systems. For averaging of optical systems, Hamiltonian space is best suited. The exponential-mean-log-transference is numerically optimal for averaging optical systems.22 For changes between two or more optical systems, the inner-product space of the angle characteristic is well suited to this task.75

An issue that hinders quantitative analysis is the mixed units of the entries of the matrices. Hamiltonian matrices have the same mixed units as the transference and, similarly, the characteristic matrices also have mixed units, with the exception of the dimensionally homogeneous characteristics noted above. A dimensionless transference exists which makes use of the wavelength of light, allowing the magnitude of the transference to be represented as a scalar, for example, using the Frobenius norm.78

Discussion: applications

Studies have used linear optics to define the familiar optical properties such as magnifications, cardinal points and structures, chromatic aberrations, systems that are flipped, reversed and reflected and systems that include GRINs. These theoretical studies, along with research studies that have based their analyses on linear optics, including quantitative analyses, are discussed. The discussion is intended to be complete with respect to the range of topics; however, each topic is treated only briefly, with the references providing details.

Magnification, including spectacle and instrument magnifications

The concept of spectacle magnification is familiar and is expressed as the product of power and shape factors. The power factor depends on the position of the entrance pupil, which is an image and becomes fuzzy in an astigmatic eye; therefore, the power factor and, consequently, the spectacle magnification do not readily generalise for astigmatic eyes.20 21 79 The approximate generalised spectacle magnification is given as M = M_P M_S, a 2 × 2 magnification matrix,20 21 with this order of multiplication. The shape factor M_S = (I − τF_1)^{-1} and approximate power factor M_P = (I − dF_B)^{-1} are both obtained from the entries of the transference. τ is the reduced thickness of the lens, F_1 is the front-surface power represented by a 2 × 2 power matrix, d is the distance from the back vertex of the spectacle lens to the entrance pupil of the eye and F_B is the back-vertex power of the thick lens, obtained from equation 22. Provided the astigmatism is minimal the approximation is good.20
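The scalar (Gaussian) version of the power-times-shape product can be sketched with illustrative lens parameters; the generalisation above simply replaces each scalar factor by a 2 × 2 matrix.

```python
# Scalar spectacle magnification as power factor x shape factor.
# All values below are illustrative assumptions, not from the reviewed papers.
F1 = 6.0                  # front-surface power (D)
Fv = 4.0                  # back-vertex power (D)
t_over_n = 0.004 / 1.5    # reduced thickness of the lens (m)
d = 0.015                 # back vertex to entrance pupil of the eye (m)

shape = 1.0 / (1.0 - t_over_n * F1)   # shape factor
power = 1.0 / (1.0 - d * Fv)          # power factor
M = power * shape

print(round(100 * (M - 1), 2))        # percentage magnification, -> 8.11
```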

Garcia et al 56 80 obtained the spectacle magnification from the Embedded Image transference based on what they refer to as the pupil matrix; a matrix that is based on the positions of the entrance and exit pupils of the uncorrected and corrected eye. Their work was, therefore, limited to stigmatic powers and concentrated on the correction of myopic eyes using spectacle,80 81 contact lens81 and IOL corrections.56 81 Rotation of the eye is needed to fixate on a particular object.82 83 Flores83 generalised the ocular rotation for eyes in the case where thick astigmatic spectacle lenses are placed in front of the eye, taking spectacle magnification and prismatic effect into account.

Harris79 proposed using the actual pupil of the eye instead of the entrance pupil and obtained a set of equations to define the instrument size magnification for a general instrument in front of an eye. Such an instrument may be a contact lens, spectacle lens or other optical instrument and may or may not compensate for any ametropia in the eye, provided that the eye’s pupil remains the limiting aperture. The equations depend on the optical character for the eye, separated at the iris into anterior and posterior subsystems (figure 3), as well as the instrument. Becken et al 84 used ray tracing and wave tracing to include instrument magnification of objects at finite distances in front of an astigmatic eye.

Figure 3

The eye with corneal plane Embedded Image , retinal plane Embedded Image and plane with restricting aperture Embedded Image which separates the eye Embedded Image into anterior Embedded Image and posterior Embedded Image subsystems. The length of the eye from Embedded Image to Embedded Image is z. There is refractive index Embedded Image upstream and n downstream of the system. A ray is incident on Embedded Image with incident inclination Embedded Image , traverses the aperture with position Embedded Image and arrives at the retina with position Embedded Image and inclination Embedded Image . The elliptical aperture is referred upstream to Embedded Image , the effective corneal patch indicated by the dashed ellipse, or downstream to Embedded Image , the blur patch indicated by the dashed ellipse.

An afocal optical instrument, such as a telescope, has a generalised magnification Embedded Image . Such an afocal instrument may include astigmatic refracting elements. In this case, the instrument and eye interact such that each eye, viewing distant objects, sees differently through the afocal instrument.85 The difference in observations depends on the eye’s disjugacy Embedded Image .85

A number of magnifications M are obtainable and define both the image size magnification and the image blur magnification. The equations define the magnification M of the object to the image at the retina.18 26 These image size and image blur magnifications are defined for the eye alone, or an instrument and eye.18 26

Spectacle and instrument magnification M is a Embedded Image matrix representing magnification as a product M or as a percentage as Embedded Image .20 21 The spectacle magnification relates the retinal image position Embedded Image to the object position Embedded Image through Embedded Image for both the right and left eyes. Discrepant retinal image sizes can be represented by the aniseikonic magnification matrix as Embedded Image , where subscripts r and l represent right and left spectacle magnification, respectively.20 21 Unit magnification I represents retinal images of the same size. The diagonal entries of M relate magnification in the horizontal and vertical meridians, while the off-diagonal entries relate to rotation of the images relative to each other27 and, in particular, the vertical declination error in binocular vision which will give the image the appearance of tilting towards or away from the observer.21

Interpreting the magnification M represented by a Embedded Image matrix becomes somewhat more involved than for scalar magnification, especially in cases where M is asymmetric. The magnification matrix M results in magnifications along two principal meridians, resulting in rotation and an anamorphic distortion of the image.84 86 The eigenstructure37 is important for interpreting the magnification.27 63 84 87 For a symmetric matrix M, the eigenvalues will be real and the eigenvectors orthogonal. The magnification can be interpreted in the same way as for the dioptric power matrix, that is, as two scalar magnifications (the eigenvalues) along the two eigenmeridians, and magnification crosses can be drawn or thought of in a similar way as the familiar power cross.27 87 There may be combinations of magnification Embedded Image and/or minification Embedded Image along the two meridians and these magnifications may be positive or negative. Where both meridians are positive, the image is upright. Where one meridian is positive and the other negative, the image is reflected in, or flipped about, the positive meridian, and where both meridians are negative, the image is reflected in both meridians, resulting in an image that is rotated through Embedded Image . Where a meridian has zero magnification, the image reduces to a line. When M is asymmetric, it is possible to obtain complex eigenvalues and eigenmeridians. In this situation, Harris27 interprets the magnification as a symmetric magnification followed by a rotation or a reflection in an axis that is not an eigenmeridian.27 43 In contrast, Espinós and Micós86 propose that the degree of asymmetry of M is derived from the asymmetry of the power matrix and derive an additive relationship of symmetric and antisymmetric components; however, their study is limited to lateral magnification produced by thin lenses and thick optical systems.
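For a symmetric M, the interpretation above can be sketched with a few lines of linear algebra; the entries of M here are hypothetical.

```python
import numpy as np

# Hypothetical symmetric aniseikonic magnification matrix.
M = np.array([[1.08, 0.03],
              [0.03, 0.98]])

# Eigenvalues are the two scalar magnifications; eigenvectors give the
# orthogonal eigenmeridians along which they act.
vals, vecs = np.linalg.eigh(M)
meridians = np.degrees(np.arctan2(vecs[1], vecs[0])) % 180.0
```

Here both eigenvalues are positive, so the image is upright, slightly magnified along one eigenmeridian and slightly minified along the other, in the manner of a magnification cross.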

Aperture referral including blur patches and the effective corneal patch

A pencil of light, from an object, is restricted by an aperture within the system, such as the pupil. The restricted pencil of rays will form a blur patch at the retina and, similarly, of all the rays incident on the cornea, those that reach the retina are also restricted by the aperture, but referred upstream to the cornea (figure 3).1 88 89 This effective corneal patch has implications for corneal surgery, such as refractive surgery, corneal inlays and optic zones for contact lenses. In addition, pinhole apertures are available for surgical insertion as a corneal inlay or as a pinhole IOL at the iridial plane. In the presence of two apertures, that is, pinhole and pupil, one takes the role of the restricting aperture.

The retinal blur patch, effective corneal patch, projective field and field of view are all related through a common aperture, usually the pupil.88 An aperture at some longitudinal position within the optical system may have the effect of restricting the pencil of rays. Elliptical apertures are considered in this study, a circle being a special case of an ellipse. This restricted pencil can be traced upstream or downstream and its shape, size, orientation and transverse position at a referred plane will be determined by both the geometry of the aperture and the refractive elements upstream or downstream of the aperture. The position, both longitudinally and transversely, of an object referred downstream or an image referred upstream will also have an influence on the geometry of the referred aperture.1

The general equation for aperture referral is Embedded Image , where Embedded Image is the generalised radius of the elliptical aperture and Embedded Image is the generalised radius of the referred aperture. R defines the geometry of an ellipse described by its major a and minor b semidiameters.1 88 89 A circular aperture is a special case of the elliptical aperture where Embedded Image . M is a generalised linear magnification matrix that may be asymmetric and that depends on the system, longitudinal position of the aperture, object or image position and direction of referral, that is, upstream or downstream.1 18 26 27 88 90 Embedded Image is defined for all finite ellipses, including degenerate ellipses, that is, point and line segments (or foci). The referred aperture may be rotated and/or reflected and magnified by different amounts along the diameters.1
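The referral can be sketched by letting a generalised magnification act on points of the aperture boundary; the matrix M and the semidiameters below are hypothetical, and an ellipse maps to an ellipse under any nonsingular linear map.

```python
import numpy as np

# Boundary of an elliptical aperture with semidiameters a and b (m).
a, b = 2.0e-3, 1.5e-3
t = np.linspace(0.0, 2.0 * np.pi, 361)
boundary = np.vstack([a * np.cos(t), b * np.sin(t)])   # 2 x N points

# Hypothetical asymmetric referral magnification: the referred patch is
# scaled differently along two meridians and rotated relative to the aperture.
M = np.array([[0.9, 0.25],
              [-0.1, 0.7]])

referred = M @ boundary             # boundary points referred through the system
area_ratio = abs(np.linalg.det(M))  # an ellipse maps to an ellipse; area scales by |det M|
```

Plotting `referred` against `boundary` would show the rotation and anamorphic scaling of the referred patch described in the text.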

Intraocular lenses

A number of studies have applied linear optics to IOL calculations. Expressions are available for predicting the refraction given the IOL power, the necessary IOL power given the desired target refraction, the refractive power for a phakic IOL, and for treating the IOL as either a thin or a thick lens.7 30 91–95 Haigis29 used linear optics to compare the available theoretical IOL formulae. The effects of meridional magnification and aniseikonia were considered91 94 as well as the impact of decentration of a toric IOL on the residual refraction.7 92 95

The corneal-plane refractive compensation is sensitive to changes in the power and axial position of an IOL.96 The residual refraction can be used, first, to estimate the position and orientation of a toric IOL to best compensate for the eye’s refraction and, second, to estimate the amount of axial translation and rotation of a toric IOL that may have occurred postoperatively.92
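As a point of orientation, the kind of vergence calculation that underlies the theoretical (thin-lens, stigmatic) IOL formulae can be sketched in a few lines. This is a Gaussian sketch with hypothetical biometry, not the matrix formulation of the cited studies.

```python
def iol_power(K, AL, ELP, n=1.336, target=0.0):
    """Thin-lens vergence sketch of a theoretical IOL power calculation:
    corneal power K (D), axial length AL (m), estimated lens position
    ELP (m), target refraction at the corneal plane (D)."""
    V_cornea = K + target              # vergence just after the cornea (distant object)
    V_iol = n / (n / V_cornea - ELP)   # vergence on arrival at the IOL plane
    V_needed = n / (AL - ELP)          # vergence required to focus on the retina
    return V_needed - V_iol

# Hypothetical biometry: 43 D cornea, 23.6 mm eye, IOL 5.2 mm behind the cornea.
P = iol_power(K=43.0, AL=0.0236, ELP=0.0052)
```

The sensitivity of the corneal-plane refraction to IOL power and axial position noted above can be explored by perturbing `ELP` or `AL` in this sketch.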

Cardinal points and axes of the eye

Interest in the various lines and axes of the eye has increased in recent years with the need for accurate placement of IOLs and corneal surgery including refractive surgery and inlays such as the KAMRA pinhole inlay. Many of these axes are defined with respect to certain cardinal points. It turns out that in an astigmatic system, just as the image or focal point breaks up into two orthogonal line foci separated by an interval of Sturm, so too do the cardinal points. In other words, in an astigmatic system such as the eye, cardinal points are usually not points; they are, instead, a ‘fuzzy’ zone called a node.25 However, the lines of a nodal structure or node are not necessarily orthogonal nor is there always a pair of lines and similarly for principal structures.25 97 For an optical system that has elements that are decentred or tilted such that Embedded Image , the node may also be decentred.25 These two effects have obvious implications for concepts such as the visual axis which is defined in terms of nodal points.25 98

Special points include the six familiar cardinal points, the incident (subscript 0) and emergent focal (Embedded Image and F), principal (Embedded Image and P) and nodal points (Embedded Image and N) as well as the anti-principal (Embedded Image and Embedded Image ) and antinodal points (Embedded Image and Embedded Image ). The approach in linear optics is unified and shows that the special points are all related and are special cases of a large class of special structures.57 97 The positions of the incident points are a generalised distance

Embedded Image (26)

from Embedded Image and the emergent points a generalised distance

Embedded Image (27)

from T, provided C and X are non-singular and where subscript Q indicates the special point corresponding to the Embedded Image generalised magnification X for each special point. X relates all special rays through the system via

Embedded Image (28)

By substituting the corresponding X for each of the special points, it is possible to obtain the position Embedded Image with respect to Embedded Image for incident points Embedded Image and Embedded Image with respect to T for emergent points Q. The generalised magnifications X are given as follows: for nodal points Embedded Image , for principal points Embedded Image , for the incident focal point Embedded Image , for the emergent focal point Embedded Image , for the antinodal points Embedded Image and for the anti-principal points Embedded Image .25 97 Generalised distance Z is a Embedded Image matrix and its eigenstructure aids in its interpretation. The eigenvalues give the longitudinal distances Embedded Image and Embedded Image measured from Embedded Image or Embedded Image and Embedded Image measured from T and the eigenvectors or eigenmeridians give the orientations for the lines.25 62 The generalised distance Embedded Image may be asymmetric for Embedded Image , P, Embedded Image , N, Embedded Image , Embedded Image , Embedded Image and Embedded Image , implying that the lines of the structure or node are not necessarily orthogonal.25 97 The detailed interpretation of Z was discussed earlier.
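For the Gaussian (scalar) special case, the positions of the cardinal points can be computed directly from the entries of the 2×2 transference. The sketch below uses one consistent reading of the scalar relations (incident points measured from the incident plane, emergent points from the emergent plane, downstream positive, reduced-angle convention); it is illustrative only and not the authors' matrix formulation.

```python
import numpy as np

def cardinal_points(T, n0=1.0, n=1.0):
    """Axial positions (m) of the six Gaussian cardinal points from a 2x2
    transference T = [[A, B], [C, D]] with reduced angles; n0 and n are the
    upstream and downstream refractive indices."""
    A, B = T[0]
    C, D = T[1]
    return {
        "F0": n0 * D / C,        # incident focal point
        "F":  -n * A / C,        # emergent focal point
        "P0": n0 * (D - 1) / C,  # incident principal point
        "P":  n * (1 - A) / C,   # emergent principal point
        "N0": (n0 * D - n) / C,  # incident nodal point
        "N":  (n0 - n * A) / C,  # emergent nodal point
    }

# Emsley's reduced eye: a 60 D surface, then 22.22 mm of index 4/3.
zeta = 0.02222 / (4.0 / 3.0)
T = np.array([[1.0, zeta], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [-60.0, 1.0]])
pts = cardinal_points(T, n0=1.0, n=4.0 / 3.0)
```

For this eye the principal points fall at the refracting surface, the nodal points about 5.6 mm behind it (at the centre of curvature) and the incident focal point about 16.7 mm in front of it, as expected for the reduced eye.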

To further illustrate the idea that the special points are unified rather than distinct structures with no relationship among them, Harris11 57 shows how locator lines can be obtained for Gaussian systems and drawn using a graphical construction. All of the incident special points (Embedded Image , Embedded Image , Embedded Image , Embedded Image and Embedded Image ) lie on the incident locator line Embedded Image while all of the emergent special points (F, P, N, Embedded Image and Embedded Image ) lie on the emergent locator line L. The slopes of Embedded Image and L are related through the relationship Embedded Image .11 While this graphical construction is limited to Gaussian systems, changes to a schematic eye, such as due to ocular accommodation or refractive surgery, result in changes to the position and slope of the locator lines, clearly indicating shifts in position of each of the special points.

Pascal’s ring is a second schema available to illustrate the unified relationships among the points. Pascal’s ring is a hexagon that shows equalities among the points and, like the locator line graph, can be used to illustrate changes to the positions of the points when one or more elements in the system is changed.14 99

The optical axis does not depend on the cardinal points; however, the cardinal points are defined as lying on the optical axis. This clearly necessitates a definition of an optical axis that holds for all eyes.

Optical systems may have no optical axis, a unique optical axis or an infinity of optical axes.58 Traditionally, an optical axis may be defined as a ray that traverses an optical system in a straight line, without undergoing deflection or translation.45 58 However, defining an optical axis this way means that eyes, with the exception of schematic eyes, do not have an optical axis.82 100 Harris45 58 defines an optical axis, where it exists, as ‘a straight line along which a ray both enters and leaves the system’. The emphasis is on the alignment of the ray segments before and after the system and not on its path within the system. It turns out that by defining an optical axis in this way, every eye has an optical axis (the test for existence) and that optical axis is unique (the test for uniqueness). The optical axis is obtained from the transference as the displacement at Embedded Image with respect to the longitudinal axis Z.58

The visual axis of an eye is defined as the incident ray segment of the nodal ray that arrives at the centre of the fovea.100 In an eye that suffers from astigmatism, nodal points do not exist as points and so the definition fails. However, Harris98 points out that eyes with astigmatism do have nodal rays and he proposes a revised definition for visual axes. A nodal ray is any ray whose incident and emergent segments are parallel, that is, Embedded Image .45 98 100 Nodal rays may undergo translation. A visual axis is defined as a nodal ray with two segments, an internal V and an external Embedded Image segment,98 and is not one continuous ray or line through the eye. The internal segment reaches the fovea at Embedded Image with Embedded Image , while the external segment is in object-space and of greater clinical relevance.

The pupillary axis is defined as the line perpendicular to the cornea that passes through the centre of the entrance pupil of the eye.82 100 Clinically, it is the line that objectively determines a person’s direction of gaze. The entrance pupil is an image of the pupil seen through the cornea; in an eye with astigmatic anterior elements, this image breaks up into two line foci separated by the familiar interval of Sturm and becomes blurred, and consequently, the pupillary axis is not well defined. A modified definition that includes astigmatic eyes and decentred or tilted anterior elements is ‘the pupillary axis is the infinite straight line containing the incident segment of the ray that passes through the centre of the (actual) pupil and is perpendicular to the first surface of the eye’.101 Defining the pupillary axis this way allows the definition to extend to the compound system of an eye with optical compensation such as a contact lens or IOLs. The pupillary axis is obtained from the transference for the anterior subsystem of the eye Embedded Image and additionally needs the curvature Embedded Image and tilt Embedded Image of the cornea. Embedded Image is the Embedded Image symmetric matrix defining the minimum and maximum curvatures of the cornea and Embedded Image is a Embedded Image tilt vector.101

The line of sight, also known as the foveal chief ray, is the line joining the fixation point and the centre of the entrance pupil.100 The line of sight is taken to be ‘the infinite straight line defined by the portion of the foveal chief ray in object space, that is, the portion incident onto the eye’.102 The position of the fovea Embedded Image and the position of the chief ray through the pupil Embedded Image determine the line of sight. When the pupil is decentred, Embedded Image . The line of sight is not fixed and may vary as any of the properties of the eye change, such as due to ocular accommodation or refractive surgery. The equation102 is general and may be applied to eyes with optical correction or to pseudophakic eyes. The line of sight does not depend on the position of an object point or fixation target and is a property of the eye.102

Each of the four axes is obtained from the transference for the eye T and/or the transference for the anterior subsystem Embedded Image (figure 3). The axes are defined with respect to the position Embedded Image and inclination Embedded Image at incidence to the corneal plane Embedded Image . The pupillary axis and line of sight only make use of the top block row of Embedded Image and/or T. Sometimes additional information is needed to obtain an axis and this is summarised in table 1.

Table 1

The axes for the eye defined in object space in terms of the position Embedded Image and inclination Embedded Image at the cornea

Chromatic aberrations

Relationships are available for the wavelength-dependent refractive indices for the reduced eye103 and a four-surface eye,104 which allow one to obtain the reduced distance ζ and power of the refracting surface F, both of which depend on the refractive indices of the media and are therefore wavelength-dependent. One is thus able to obtain the wavelength-dependent or frequency-dependent transference for the eye from the elementary transferences (equations 2 and 3).12–14 65 105

The transference of a reduced eye12 13 or Le Grand’s four-surface eye13 can be obtained for any chosen wavelength or frequency. The fundamental properties of the Embedded Image transference for model eyes are nearly linear in frequency and hyperbolic in wavelength.12 For the reduced eye Embedded Image and is independent of frequency. By transforming the frequency-dependent transferences of Emsley’s reduced eye and Le Grand’s eye to Hamiltonian space, using the Cayley transform, fitting a straight line in Hamiltonian space, and transforming back to a symplectic matrix, a relationship is obtained for the dependence of the transference of a model eye on the frequency of light across the visible spectrum.13
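The Cayley-transform construction can be sketched for the 2×2 (Gaussian) case, where symplecticity reduces to det T = 1; the frequency-dependent powers below are hypothetical. Linear operations in Hamiltonian space (where the matrices are trace-free) map back to symplectic matrices, which a naive entry-wise fit would not guarantee.

```python
import numpy as np

I = np.eye(2)

def to_hamiltonian(T):
    """Cayley transform of a symplectic matrix (2x2: det T = 1); the result is trace-free."""
    return (T - I) @ np.linalg.inv(T + I)

def to_symplectic(H):
    """Inverse Cayley transform back to a symplectic matrix."""
    return (I + H) @ np.linalg.inv(I - H)

def eye_transference(F, zeta):
    """Reduced eye: refraction of power F followed by a reduced gap zeta."""
    return np.array([[1.0, zeta], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [-F, 1.0]])

# Hypothetical transferences of a model eye at a red and a blue frequency.
T_red = eye_transference(58.8, 0.0167)
T_blue = eye_transference(60.9, 0.0166)

# Interpolating linearly in Hamiltonian space and mapping back preserves
# symplecticity, which a naive entry-wise average does not.
H_mid = 0.5 * (to_hamiltonian(T_red) + to_hamiltonian(T_blue))
T_mid = to_symplectic(H_mid)
```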

When dichromatic light is traced from an object point through a Gaussian optical system, the light makes two point foci, for example, blue (b) and red (r), at different positions. Longitudinal chromatic aberration Embedded Image is the distance from the red to the blue image points, measured along the optical axis, while transverse chromatic aberration Embedded Image is the distance between the two foci, in the transverse direction.65 δ represents a chromatic difference. Chromatic aberration is defined13 14 65 from the red to the blue focus, that is, from low to high frequency or energy,12 in contrast to the classical definition of chromatic aberration which gives the chromatic aberrations as unsigned. The chromatic aberrations are distances between images and are therefore dependent on the position of the object point and the system itself. If the object position is changed, the chromatic aberration changes too.65
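A Gaussian sketch of signed longitudinal chromatic aberration for a reduced-eye-like system, using hypothetical red and blue refractive indices:

```python
# Gaussian sketch of longitudinal chromatic aberration for a reduced eye.
# Hypothetical refractive indices at a red and a blue wavelength.
r = 0.00555                  # radius of the single refracting surface (m)
n_red, n_blue = 1.3310, 1.3375

F_red = (n_red - 1.0) / r    # surface power at the red wavelength (D)
F_blue = (n_blue - 1.0) / r  # surface power at the blue wavelength (D)

z_red = n_red / F_red        # surface to the red focus (m), distant object
z_blue = n_blue / F_blue     # surface to the blue focus (m)

LCA = z_blue - z_red         # signed, red focus to blue focus: negative here,
                             # since blue focuses in front of red
```

The negative sign reflects the signed red-to-blue convention described above; in dioptric terms the chromatic difference in power is a little over 1 D for these indices, of the order reported for real eyes.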

The classical definition of chromatic aberration is limited to Gaussian systems, that is, stigmatic powers and centred optical systems. Chromatic aberration is the only first-order aberration and is therefore suited to methods in linear optics. In an optical system that has astigmatic elements the image breaks up into two line foci, separated by the familiar interval of Sturm. The image is no longer a point, but a fuzzy (noisy) region and its longitudinal position can be represented by a Embedded Image distance matrix Z, which is related to vergence through Embedded Image .65 In the case of chromatic aberration, each of the red and blue foci will break up into the familiar interval of Sturm and the longitudinal chromatic aberration generalises to Embedded Image . The eigenstructure of Embedded Image provides the positions and orientations of the orthogonal red line foci, and similarly for the blue orthogonal line foci. Interestingly, the orientations of the red line foci are not necessarily in alignment with the blue line foci.65 Similarly, the transverse chromatic aberration is given as a Embedded Image vector as Embedded Image , where each of Embedded Image and Embedded Image are the transverse position vectors of the astigmatic red and blue images.65

Chromatic properties of the eye are defined for astigmatic heterocentric eyes from the frequency-dependent transferences. Aperture-independent chromatic properties include chromatic difference in power, Embedded Image (F from equation 19) and chromatic difference in refractive compensation, Embedded Image (Embedded Image from equation 20).105

Aperture-dependent chromatic properties of the eye are all dependent on the longitudinal position of the limiting aperture as well as the distance of the object from the eye. Aperture-dependent chromatic properties include chromatic difference in position Embedded Image and inclination Embedded Image at the retina which additionally depend on the position Embedded Image of the chief ray through the aperture as well as the transverse object position Embedded Image or Embedded Image ; chromatic difference in image size Embedded Image or angular spread Embedded Image at the retina depend on the object size Embedded Image or Embedded Image and chromatic magnifications are the generalised ratio of the blue to the red image size Embedded Image and angular spread Embedded Image .105 Consistent with the symbols introduced earlier, y is a Embedded Image position vector, a is a Embedded Image unreduced inclination vector, M is a Embedded Image generalised magnification matrix, and subscripts O represent the object plane, K is at the corneal plane, P at the pupillary or irideal plane and R at a plane immediately in front of the retina (figure 3). δ indicates a chromatic difference and Δ a physical size difference.

The aperture-dependent chromatic properties are dependent on both longitudinal and transverse changes in position of the aperture. In an eye the aperture is usually the pupil, however, it may be a surgically implanted aperture, such as a corneal pinhole inlay which is sensitive to misalignment.105–107 The implication is that introducing an artificial aperture such as a corneal pinhole inlay may alter the aperture-dependent chromatic properties of the eye.70 105

An achromatic axis is one that traverses the eye with no dispersion. Specifically, the Le Grand-Ivanoff achromatic axis is the polychromatic ray that reaches the retina at the same point, without dispersion.100 108 109 It is not possible to find a polychromatic ray that can fulfil this definition, however, it is possible to define a dichromatic ray, that minimises the chromatic spread at the retina.108

The Thibos-Bradley achromatic axis103 is actually a chief nodal axis and is not strictly achromatic.110 In the presence of astigmatism the nodal points also break up into fuzzy regions similar to the interval of Sturm for an astigmatic focus.25 In addition, chromatic dispersion will cause the cardinal points to spread out for a polychromatic pencil.14 The Thibos-Bradley chief nodal axis is therefore the nodal ray (described by its incident and emergent segments) that intersects the pupil at its centre. For a polychromatic incident pencil, the incident and emergent axes are usually distinct, depend on frequency and are therefore not truly achromatic.110

Systems that are flipped, reversed or reflected and that account for anatomical symmetry

Optical systems are often flipped or reversed, for example, the Jackson cross-cylinder, telescopes in low vision and even the eye during ophthalmoscopy. The effect of flipping the system is that light travels through the system in reverse.73 111 Flipped systems may be flipped about any axis θ and each of these flips will affect the fundamental properties of the system differently, resulting in a flipped transference Embedded Image .73 111 The flipped system makes use of a flipped Embedded Image -axis to maintain the left-hand rule for the axes (figure 1).55 73 111

A catadioptric system has light travelling forwards through the system and then in reverse after reflection at a surface.55 112 The reversed system used for a catadioptric system makes use of a reversed Z-axis and resulting right-handed axis system and is slightly different to the flipped system. The transference of a reversed system is given as Embedded Image , a Embedded Image block matrix with trivial bottom row omitted for brevity.55
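For the 2×2 (Gaussian) special case, the reversed transference can be written as E T⁻¹ E with E = diag(1, −1), which reproduces the block pattern quoted above ([[D, B], [C, A]] for scalars). The sketch below, with hypothetical elements, checks that reversing a composite system equals composing the reversed subsystems in the opposite order.

```python
import numpy as np

E = np.diag([1.0, -1.0])

def reversed_transference(T):
    """Reversed system for the 2x2 (Gaussian) case: E T^-1 E, which equals
    [[D, B], [C, A]] when T = [[A, B], [C, D]] with det T = 1."""
    return E @ np.linalg.inv(T) @ E

# Two elementary subsystems: a 5.00 D refraction and a 10 mm gap of index 1.336.
R = np.array([[1.0, 0.0], [-5.0, 1.0]])
G = np.array([[1.0, 0.010 / 1.336], [0.0, 1.0]])

S = G @ R                          # forward system: refraction, then the gap
S_rev = reversed_transference(S)

# Reversing the composite equals composing the reversed subsystems in
# the opposite order.
check = reversed_transference(R) @ reversed_transference(G)
```

A homogeneous gap is its own reverse, as is a single refracting surface, which is consistent with light retracing a catadioptric subsystem after reflection.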

A system that accounts for anatomical symmetry of the optical character of eyes is slightly different to a flipped or reversed system.113 For example, under symmetry, an axis of 30° in the right eye is equivalent to an axis of 150° in the left eye. This has the effect of changing the sign of the off-diagonal elements of each of the Embedded Image fundamental properties as well as the first entry of each of the Embedded Image fundamental properties.113 This situation has relevance for quantitative analysis of astigmatic eyes, where both left and right eyes are included in the sample.

In the case of catadioptric systems, the system may be partitioned into subsystems. For example, Purkinje image PIII is reflected off the front surface of the crystalline lens. The system is made up of the dioptric system of cornea and anterior chamber and transference Embedded Image , the anterior catadioptric system with transference Embedded Image and the reversed subsystem of anterior chamber and cornea and transference Embedded Image . The transference of the catadioptric system is therefore Embedded Image , multiplied in reverse as usual (equations 4 and 15).55 112 114

There are two types of catadioptric subsystems, the anterior catadioptric subsystem Embedded Image which reflects forwards travelling light back upstream, for example Purkinje images PI to PIV and the posterior catadioptric subsystem Embedded Image that reflects backwards travelling light downstream, such as found with Purkinje images PV–PVII. The catadioptric transference has the form of equation 16, with Embedded Image and Embedded Image for Embedded Image and Embedded Image and Embedded Image for Embedded Image .55 112 K is the Embedded Image astigmatic curvature matrix (similar to the power matrix), m is the Embedded Image tilt matrix, Embedded Image is the refractive index immediately anterior to the reflecting surface and Embedded Image is the refractive index of the medium immediately downstream of the reflecting surface.

Langenbucher et al 114 described the stigmatic centred catadioptric system and used Le Grand’s eye to illustrate the positions of the seven Purkinje images on phakic and pseudophakic model eyes.

An eye has a unique optical axis.58 115 Harris obtained equations for the optical axis of catadioptric systems with both an odd and an even number of reflecting surfaces. For an eye with Purkinje systems it turns out that there are infinitely many optical axes.115

Gradient index

A system with a radial GRIN material has a refractive index that varies radially, that is, perpendicular to the optical axis.116 A decreasing radial-gradient has its maximum along the optical axis; rays traversing this system are attracted to the optical axis and oscillate or rotate about it in a periodic motion, like a corkscrew.116 An example is an optical fibre. An increasing radial-gradient has the index increasing away from the optical axis and rays are repelled from it. The transference of such a medium has no periodic structure.116 The transference is symplectic and the power, front- and back-vertex powers and the cardinal points may be obtained from the transference.116
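The periodic structure of a decreasing radial-GRIN medium is visible in its Gaussian transference, which for a parabolic profile (reduced-angle convention) is the standard duct matrix; the values of n₀ and g below are hypothetical.

```python
import numpy as np

def grin_transference(n0, g, z):
    """Transference of a decreasing radial-GRIN slab of length z, on-axis
    index n0 and parabolic profile n(y) ~ n0 * (1 - g**2 * y**2 / 2);
    rays oscillate about the axis with spatial period 2*pi/g."""
    return np.array([
        [np.cos(g * z),           np.sin(g * z) / (n0 * g)],
        [-n0 * g * np.sin(g * z), np.cos(g * z)],
    ])

# Hypothetical fibre-like medium.
n0, g = 1.5, 100.0                                    # g in rad/m
T_quarter = grin_transference(n0, g, np.pi / (2 * g)) # quarter of a period
T_full = grin_transference(n0, g, 2 * np.pi / g)      # one full period
```

After one full period the transference returns to the identity, which is the corkscrew-like periodicity described above, and the matrix is symplectic (det = 1) for any length.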

In the human eye, the GRIN crystalline lens is a combination of radial and linear GRIN systems. The lens is a decreasing GRIN lens, and the refractive index is a function of the transverse (y) and longitudinal (z) positions, describing a transverse parabolic distribution.8 117 The transference of a GRIN lens will depend on the maximum and minimum refractive indices, age, lens thickness, radii of curvature of the lens and nucleus front- and back-surfaces, number of shells and the power chosen to represent the distribution of the GRIN across a normalised distance.8 117

Quantitative studies

Transferences were used to analyse the cornea before and after corneal refractive surgery and showed that the cornea needs to be treated as a thick lens system rather than a thin lens.37 71 The change in lateral magnification, including astigmatism, at the retina preoperatively and postoperatively and between follow-ups for corneal surgery can be modelled using the transference and is estimated from keratometry, refraction, vertex distance and anterior chamber depth.87 This has relevance to aniseikonia and is especially relevant to surgery on corneas with high astigmatism.

Conclusion

Linear optics is a powerful tool that allows for surfaces that are astigmatic and decentred or tilted. Linear optics works with two concepts, the system and the ray. The fundamental properties, obtained from the transference T, are properties of the eye or system alone, while position y and reduced inclination α are properties of the ray at a transverse plane. The transference operates on the incident ray to determine the emergent ray.
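The system-and-ray picture can be sketched in a few lines for the Gaussian (scalar) case, using the elementary transferences for refraction and a homogeneous gap and Emsley's reduced eye as the system; this is an illustrative sketch, not the authors' code.

```python
import numpy as np

def refraction(F):
    """Elementary transference of a refracting element of power F (D)."""
    return np.array([[1.0, 0.0], [-F, 1.0]])

def translation(z, n=1.0):
    """Elementary transference of a homogeneous gap of width z (m), index n."""
    return np.array([[1.0, z / n], [0.0, 1.0]])

# Emsley's reduced eye: a 60 D surface, then 22.22 mm of index 4/3,
# with the elementary transferences multiplied in reverse order.
T = translation(0.02222, 4.0 / 3.0) @ refraction(60.0)

# Trace a parallel ray from a distant object, incident 1 mm above the axis.
y_in, alpha_in = 1.0e-3, 0.0
y_out, alpha_out = T @ np.array([y_in, alpha_in])
```

Here the fundamental property A = T[0, 0] is essentially zero, so every parallel incident ray arrives at the axial point of the retina (y_out ≈ 0): the model eye is emmetropic. The transference is symplectic, which for the 2×2 case reduces to det T = 1.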

Only linear optics allows one, in the case of systems with elements that may be astigmatic and decentred, to obtain explicit formulae for relationships and so gives immediate insight into which variables depend on each other, for example, various axes,45 58 98 101 102 108 110 115 vergence64 (equation 25), nodal structures14 25 57 97 (equations 26 to 28), blur patches1 18 26 27 88–90 and chromatic aberrations.12–14 65 70 105–107 As equation 20 shows, refractive compensation Embedded Image is not directly dependent on the power F of the eye; the dependence is indirect via symplecticity.

Linear optics is a three-dimensional generalisation of Gaussian optics. An advantage of linear optics over Gaussian optics is that it allows for systems that are astigmatic or decentred; neither such systems nor thick systems with prism can be handled completely by Gaussian optics. Linear optics is basically an application of linear algebra,46 47 a hugely developed and sophisticated mathematical field. The eye’s function is essentially linear and is well modelled with linear optics. Linear optics is the natural tool for the optical behaviour of the eye.

An advantage of linear optics is that, within the limits of first-order or paraxial optics, it gives insight into the optical system: how astigmatism and decentration or tilt of the elements affect the light traversing the system. For example, using linear optics, Harris25 97 showed that in the presence of astigmatism the nodal points and principal points break up into fuzzy nodes analogous to the focal lines and interval of Sturm; however, the pair of lines is not necessarily orthogonal, nor is there necessarily a pair of lines at all.25 97 In a system with decentred elements, the cardinal points or nodes may themselves be decentred.25 Another example is the geometry of the blur patch in the presence of astigmatism. Linear optics shows that in an astigmatic system the blur patch is not only elliptical; as the light travels and passes through the limiting aperture, the blur patch may rotate and reflect with respect to the aperture.1 106 Similarly, the aperture may be referred upstream to locate the effective corneal patch, which may also be elliptical and rotated with respect to the aperture.88 89

The lowest-order aberrations, defocus and astigmatism, account for most of the optical defect of the eye, and these are precisely what paraxial optics captures. Linear optics is therefore ideally suited for modelling paraxial optical phenomena, many of which have been derived from the transference. A disadvantage is that linear optics is not exact, nor does it treat higher-order aberrations; however, it provides a starting point for refinement using other models of optics, such as geometrical optics.

The transference is easily obtained for schematic eyes, including the reduced eye,12 Le Grand’s eye,13 14 an astigmatic schematic eye65 105 and an eye with a GRIN lens.117 One important advantage of linear optics is modelling optical properties to aid understanding of clinical phenomena. For example, linear optics was used to illustrate chromatic aberrations in an astigmatic eye.65 105 106 The effect of image magnification due to astigmatism is obtained as a 2 × 2 matrix, and aniseikonia in astigmatic eyes is illustrated and interpreted using linear optics.18 20 21 26 27 79–87 Another advantage is the use of linear optics as a first step in the design process. Linear optics has been used in the design and modelling of IOLs and to compare the available theoretical IOL formulae.7 29 30 91–96
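As a concrete illustration, the transference of Emsley's reduced eye can be written down in a few lines. This sketch is not taken from the paper; it assumes the usual reduced-eye parameters (a single refracting surface of power 60 D, uniform index 4/3, axial length 22.22 mm) and the Gaussian form of the corneal-plane refractive compensation, F0 = −A/B.

```python
import numpy as np

# Assumed parameters of the reduced eye.
F, n, z = 60.0, 4.0 / 3.0, 0.02222   # power (D), index, axial length (m)

R = np.array([[1.0, 0.0], [-F, 1.0]])      # refraction at the single surface
G = np.array([[1.0, z / n], [0.0, 1.0]])   # gap from surface to retina
T = G @ R                                  # transference, cornea to retina

A, B = T[0, 0], T[0, 1]
power = -T[1, 0]     # equivalent power of the eye is -C
F0 = -A / B          # corneal-plane refractive compensation (Gaussian case)
```

Here A is very nearly zero, so F0 is essentially zero dioptres: the reduced eye is emmetropic, while its equivalent power is 60 D, recovered directly from the C entry.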

Almost all the familiar optical properties of a system can be obtained from the transference alone. One exception is the optical axis, which requires, in addition to the transference, the axial length of the system. The fundamental properties have an affine dependence on the widths of gaps and the curvatures and tilts of surfaces within the system;118 their dependence on refractive index, and the dependence of most other properties, such as refractive compensation, cardinal points and changes to other internal structures, is more complicated.

An advantage of linear optics is its holistic approach to studying optical systems, such as the eye. The transference represents a chosen optical system in its entirety and predicts the behaviour of rays traversing the system. The special rays obey the relationship of equation 28, illustrating how the cardinal and special points and structures are unified rather than separate concepts. Linear optics makes it possible to perform quantitative analyses on a set of optical systems and to calculate changes to an optical system, for example over time or due to surgery, or to obtain the difference between two optical systems, rather than working with the individual components.

The aim of this paper has been to introduce the basic principles of linear optics, including the transference, the fundamental properties of paraxial optical systems and the optical properties derived from the transference. A number of optical properties have been presented here, including power, refractive compensation, vertex powers, various magnifications, cardinal points and axes of the eye, chromatic aberrations, GRINs and the effects of position and design of IOLs, flipped, reversed and catadioptric systems. Many of these optical properties have applications for ophthalmic and vision science and eye surgery.

Data availability statement

Data sharing not applicable as no datasets generated and/or analysed for this study.

Ethics statements

Patient consent for publication

Ethics approval

University of Johannesburg, Faculty of Health Sciences, Research Ethics Committee (NHREC Registration no. REC-241112-035) (South Africa).


We thank Professor WF Harris for reading and commenting on the paper.



  • Contributors Both authors contributed to the review. TE wrote the first draft of the manuscript. Both authors edited, contributed to and approved the final manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests TE and AR were authors on some of the references as included in this review.

  • Provenance and peer review Commissioned; externally peer reviewed.
