This article provides an overview of motion processing and of how viewing conditions can affect the safe performance of everyday tasks, such as driving.
When thinking about ‘vision,’ it is easy to fall into the trap of focusing on stationary letters and lights, when in reality vision is hardly ever a stationary percept. Imagine you are standing at a pedestrian crossing, sitting watching your favourite film, or even just sitting down reading this CET article; these are just a handful of real-life examples that require successful visual motion perception.1 To elaborate: to perceive moving cars we need to estimate the direction, acceleration and speed of each vehicle; to watch television we need to perceive ‘apparent’ motion from 25 stationary images per second;2 and to read we need to appropriately process and suppress information moving across our retina at speed.3 In everyday life the parameters required for successful motion perception largely go unnoticed (see Figure 1: Example of a moving scene), but at the core of this processing is a highly organised, cortical (neural) pathway that underpins this dynamic visual capability.4 This article will provide a summary of how we are able to perceive moving images, along with detail on how this processing occurs in the brain and how our perceptual ability can vary depending on the conditions.
At the retina, in general terms, each individual cell is responsible for processing differences in light across the receptive field (visual field coverage) of that cell.5 In a stationary scene this would equate to a simple process of determining the location of the light within the receptive field of that particular cell, but with moving scenes, this system requires constant updates on where the light is and which way it is moving (see Figure 2: Outlining the difference between localisation of stationary light (L) and direction of moving light (L) across the receptive field (RF) of a single retinal cell). In this way, it can be thought of as responding to changes in spatial qualities of light over time.5
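The idea of a cell responding to changes in the spatial qualities of light over time can be sketched computationally. The snippet below is a minimal, illustrative model in the spirit of the classic correlation-type (Reichardt-style) motion detector, not a description of any specific retinal circuit: two neighbouring receptors sample light, one signal is delayed and correlated with its neighbour, so the pair responds most strongly when a luminance change sweeps across them in the preferred direction.

```python
def reichardt_response(left_samples, right_samples, delay=1):
    """Correlate the delayed left receptor with the right receptor (and
    vice versa); a positive output signals left-to-right motion, a
    negative output signals right-to-left motion."""
    n = len(left_samples)
    lr = sum(left_samples[t - delay] * right_samples[t] for t in range(delay, n))
    rl = sum(right_samples[t - delay] * left_samples[t] for t in range(delay, n))
    return lr - rl

# A bright spot moving left-to-right: it stimulates the left receptor
# first, then the right receptor one time step later.
left  = [0, 1, 0, 0, 0, 1, 0, 0]
right = [0, 0, 1, 0, 0, 0, 1, 0]

print(reichardt_response(left, right))   # 2  (positive: rightward motion)
print(reichardt_response(right, left))   # -2 (negative: leftward motion)
```

A purely static pattern produces equal correlations in both directions and therefore no net response, which captures why such a detector needs "constant updates" on where the light is and which way it is moving.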
Due to the highly retinotopic organisation of these cells in the retina (that is to say, neighbouring objects in the visual field are processed by neighbouring cells in the retina and cortex), the signals derived at this stage are able to contribute towards processing the direction of a moving object.6 These dynamic motion signals are thought to be processed largely by a subsection of cells that contribute to the magnocellular pathway,7 one of two pathways thought to be related to the classification of retinal input. The second pathway (parvocellular) is largely responsible for processing the spatial properties of scenes.8 This is not to say that the parvocellular pathway has no role in the processing of motion, but it is thought to be more involved in processing very low velocities.9
The magnocellular pathway begins at the level of parasol retinal ganglion cells (RGCs) and transmits signals through to the cortex slightly differently from the parvocellular pathway. These differences centre on the observation that, in general, magnocellular signals are processed more quickly and are less colour-opponent. This means that magnocellular cells are not responsible for processing the colour of a scene, and similarly they do not contribute much towards the fine spatial detail that is crucial for recognising or identifying stimuli in the visual world.5,10 Instead, these cells process information quickly and relatively broadly. One theory explaining why this pathway might favour fast processing across a broad range of retinal locations lies in the evolutionary advantage that stationary objects are less likely than moving objects to be potential threats to our wellbeing.11 This distinction allows our attention to be drawn much more quickly to moving signals than to stationary ones, thereby potentially alerting us to dangerous stimuli in our environment without our having to consciously identify them. This also explains why cells serving the peripheral retina are excellent motion detectors while supporting relatively poor visual acuity. To test this at home, simply move an object at the very edge of your visual field. While it is in motion you will be able to detect the movement without being able to identify the object, but if you hold it still it will become seemingly invisible.
Once the visual motion areas in the brain have processed the signal, the information is passed on to higher areas in order to create a complete perception. To put this into context, the signal of a tennis ball moving towards you at speed will likely result in a transmission of signals to the motor cortex to allow an appropriate movement of your hand, and you would need to use separate object-selective cortices to identify the object as a tennis ball.
Types of motion
There are several ways to describe a moving stimulus, and these descriptions contribute to independent features that are typically processed separately within the cortex. For example, the speed of a moving object is different to its acceleration, and its direction is different again.12 Two slightly more clinically relevant ways of differentiating between moving stimuli describe whether the percept affects a large portion of the visual field or is restricted to a small area (global versus local),24 and whether the motion is defined by changes in luminance (first-order) or changes in texture (second-order).25 These descriptions of motion are considered quite low-level features, but they have a huge impact on our perception of the visual world. For example, if we view a field of expanding dots, each dot will be moving in a slightly different direction but overall they will appear to be moving away from a central point (global motion). However, if we focus on just one of these expanding dots, its individual direction will become pronounced, giving a perception of one individual direction (local motion), as shown in Figure 5 (difference between local and global motion shown in the same example: focusing on the small rectangle on the left produces a local motion signal of leftward motion, while focusing on the whole picture on the right produces a global motion signal of expanding motion). In this way, global motion can be thought of as the accumulation of several local motion signals.
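The distinction between pooled global motion and individual local signals can be made concrete with a little arithmetic. The sketch below builds a hypothetical expanding dot field (the dot positions, speed and count are illustrative assumptions, not stimulus values from the article): every dot has its own single local direction, yet averaged together the dots have no net translation, only a positive outward (radial) component, which is the signature of expansion.

```python
import math

def expanding_field(n_dots=12, radius=1.0, speed=2.0):
    """Dots on a circle, each moving radially away from the centre.
    Each (position, velocity) pair is one local motion signal."""
    dots = []
    for i in range(n_dots):
        angle = 2 * math.pi * i / n_dots
        pos = (radius * math.cos(angle), radius * math.sin(angle))
        vel = (speed * math.cos(angle), speed * math.sin(angle))
        dots.append((pos, vel))
    return dots

dots = expanding_field()

# Pooling the local vectors: net translation averages to ~zero,
# so no single global direction emerges...
mean_vx = sum(v[0] for _, v in dots) / len(dots)
mean_vy = sum(v[1] for _, v in dots) / len(dots)

# ...but the mean outward (radial) component is positive: expansion.
radial = sum((p[0] * v[0] + p[1] * v[1]) / math.hypot(*p)
             for p, v in dots) / len(dots)

print(round(radial, 6))  # → 2.0 (each dot's outward speed)
```

In this sense the "global motion signal of expanding motion" is exactly what survives when many differently-directed local signals are accumulated.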
In terms of first- and second-order motion, it is difficult to provide real-life examples as these are such low-level features of motion processing that they are usually only tested in a laboratory setting. However, one visual illusion that demonstrates first-order motion is known as ‘beta movement’. This phenomenon occurs when objects next to each other become illuminated one after another, creating a percept that the luminance is moving through the objects, much like when the colours in Christmas tree lights appear to move along the string when they are actually just lighting up one by one (see Figure 6: A real-life example of first-order motion (beta movement) shown in sequentially flashing Christmas lights). Second-order motion constitutes any type of motion not defined by luminance changes. It is thought that these qualities of motion may be separated early on in the visual pathway in order to be processed by different pathways in the brain (a common theme of vision), and then integrated at a later stage of processing.26
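The laboratory distinction between the two orders can be illustrated with two toy one-dimensional stimuli (the frame size, noise values and detector are illustrative assumptions, not the stimuli used in the cited studies): a first-order frame carries a sinusoidal luminance profile, while a second-order frame keeps mean luminance flat and puts the sinusoid into the contrast (amplitude) of random noise. A detector that only reads raw luminance sees the first but is essentially blind to the second.

```python
import math
import random

random.seed(0)
N = 64  # pixels per frame

def first_order_frame(phase=0.0):
    """Luminance grating: brightness itself varies sinusoidally."""
    return [0.5 + 0.5 * math.sin(2 * math.pi * x / N + phase) for x in range(N)]

def second_order_frame(phase=0.0):
    """Contrast-modulated noise: mean luminance is flat everywhere;
    only the amplitude of the noise carries the sinusoidal envelope."""
    env = [0.5 + 0.5 * math.sin(2 * math.pi * x / N + phase) for x in range(N)]
    return [0.5 + env[x] * random.choice([-0.25, 0.25]) for x in range(N)]

def luminance_amplitude(frame):
    """How strongly raw luminance follows the sinusoid (Fourier component)."""
    return abs(sum(frame[x] * math.sin(2 * math.pi * x / N)
                   for x in range(N))) * 2 / N

print(round(luminance_amplitude(first_order_frame()), 2))  # 0.5
print(luminance_amplitude(second_order_frame()))           # near zero: noise only
```

Drifting the `phase` over successive frames animates each stimulus; a luminance-based mechanism can track the first-order drift, which is one reason second-order motion is thought to require a separate processing route.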
Distinguishing between retinal and extra-retinal signals
A necessity for processing motion accurately is the ability to discriminate quickly and appropriately between image-retina (retinal) motion and eye-head (extra-retinal) motion.27 Retinal motion occurs when the eyes are stationary and an object moves across the visual field, thereby also moving across the retina. For example, if you focus on your stationary right hand while moving your left hand towards it, you will perceive a moving object (your left hand) because its image travels across the retina. Conversely, extra-retinal motion occurs when the eyes follow a moving object. In this instance, motion is still detected but the image is stationary on the retina. This implies the involvement of some kind of system in the brain for comparing the signals sent to the extraocular muscles with the signal from the retina. To maintain the current example, you would switch your focus from your stationary right hand to the movement of your left hand over to your right hand (this is called a pursuit eye movement). Now you will still perceive your hand as a moving object, despite the image itself never actually moving on the retina (see Figure 7: Retinal and extra-retinal signals. Both circumstances produce perception of a moving image on a stationary background). This is clearly a key aspect of successful perception, but an important question is: how is it possible? Several scientists have investigated this over the years, and the consensus seems to be that the brain resolves this issue via a cortical comparison between information received from the retina and information sent to the muscles regarding the movement of the eyes;28 this is called the outflow theory. In essence, the theory describes how reception of a motion signal (afferent) at the cortical (neural) level prompts the neurons to quickly compare the signal with any signals that have been sent from the brain (efferent) to move the muscles of the eye.
For example, the brain can deduce that if a moving retinal signal is detected and the eyes have not moved, then the object itself must have moved. This also permits the brain to accurately resolve a moving image that is stationary on the retina. One way to test this theory at home is to close one eye and lightly prod the sclera of your other eyeball through your lower lid.12 In this case the brain has not sent a signal to your eye muscles asking them to move, but the cortex is receiving a motion signal from the retina, so you inaccurately perceive the world as moving.
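The outflow comparison can be summarised in a single line of arithmetic. The sketch below is purely illustrative (a toy bookkeeping exercise, not a model of the cortical circuitry): world-referenced motion is recovered by adding the efference copy of the commanded eye velocity back onto the motion measured on the retina.

```python
def perceived_motion(retinal_velocity, efference_copy):
    """World-referenced motion = retinal slip + commanded eye velocity
    (velocities in degrees per second; signs give direction)."""
    return retinal_velocity + efference_copy

# Eyes still, image drifts across the retina at 5 deg/s: object is moving.
print(perceived_motion(retinal_velocity=5.0, efference_copy=0.0))   # 5.0

# Smooth pursuit at 5 deg/s keeps the image stationary on the retina,
# yet the efference copy restores the object's true motion.
print(perceived_motion(retinal_velocity=0.0, efference_copy=5.0))   # 5.0

# Prodding the eye: the retinal image shifts, but no motor command was
# issued (no efference copy), so the stationary world appears to move.
print(perceived_motion(retinal_velocity=-3.0, efference_copy=0.0))  # -3.0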
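placeholder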
Another important aspect of motion processing is the ability to ignore movement on the retina when the image itself is stationary. For example, in the case of reading, the words on the page remain stationary but your eyes make very quick (saccadic) movements across the page in order to read them. If the visual cortex perceived the words as moving then reading would be very difficult. In order to investigate this, one group of researchers studied a subset of motion-sensitive cells in the brains of monkeys that had been trained either to stare at a fixed dot on a moving background, or to switch their gaze between two stationary dots.29 In both instances the background would move across the retina of the monkey, but in the gaze-switching condition there would be no genuine motion. They found that in the presence of an extra-retinal signal (saccadic eye movements during the gaze-switching condition), the perception of motion was suppressed and the motion-sensitive cells did not respond. In more specific terms, it is thought that when there is an extra-retinal signal, the cortex may selectively inhibit the magnocellular information before, during, and after the movement of the eyes in order to make the percept appear stationary.30
Can accuracy of motion perception vary?
When considering motion perception across individuals, it is safe to assume that perceptual accuracy varies from person to person. This can be shown empirically by differing threshold levels on measures of psychophysical ability.31 However, it is also possible, given certain circumstances, to observe intra-individual variability; that is, one person’s own ability may vary from day to day.
The first example of varying perceptual accuracy can be observed in scotopic (rod-mediated) conditions. In these circumstances perception is altered in several ways, including loss of sensitivity to colour and loss of fine detail at the fovea.32 However, in terms of motion processing, research has found that rod-mediated vision produces a perceptual bias for moving images, such that they appear to move more slowly than they actually do.33
On average, objects are perceived to be moving approximately 25% more slowly than their actual speed under scotopic conditions; this is thought to be related to the attenuation of signals in detectors responsible for processing high velocities. This attenuation is thought to occur due to greater temporal averaging of rod signals relative to cone signals under scotopic conditions.34 This could have practical implications for driving at night: provided the street is well lit, the conditions will not truly be scotopic, but if the road is dark the driver may struggle to accurately and appropriately predict the speed of other drivers.
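The practical consequence of the ~25% bias is easy to work through numerically. The figures below (an oncoming car at 60 km/h, 50 m away) are hypothetical worked values, not data from the cited studies; only the 25% average bias comes from the text.

```python
SCOTOPIC_BIAS = 0.25  # perceived speed is ~75% of actual speed (article figure)

def perceived_speed(actual_kmh, bias=SCOTOPIC_BIAS):
    """Speed the driver judges under scotopic conditions."""
    return actual_kmh * (1 - bias)

def arrival_time(distance_m, speed_kmh):
    """Seconds for a car at speed_kmh to cover distance_m."""
    return distance_m / (speed_kmh / 3.6)

actual = 60.0    # km/h, true speed of an oncoming car (hypothetical)
distance = 50.0  # metres away (hypothetical)

true_gap = arrival_time(distance, actual)
judged_gap = arrival_time(distance, perceived_speed(actual))

print(round(perceived_speed(actual), 1))         # 45.0 km/h perceived
print(round(true_gap, 1), round(judged_gap, 1))  # 3.0 s actual vs 4.0 s judged
```

A driver who believes they have four seconds when they really have three may pull out into a gap that is not there, which is why the dark-road scenario matters for road safety.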
A similar bias is produced in low-contrast conditions,35 often researched due to the important implications associated with the presence of fog when driving. Original research using speed-matching tasks proposed that, in general, low-contrast simulations led to an increase in perceived speed,36 but more recent research utilised the knowledge that true fog tends to produce a contrast gradient: high contrast near to us and lower contrast as objects are positioned further away. This produces poor visibility in distant vision while maintaining relatively clear visibility in near vision (see Figure 8: Foggy conditions produce low contrast in distant vision and higher contrast in near vision). Using this principle, researchers found that when visibility is clearer in near vision than in distant vision, subjects in a driving simulator actually overestimated their speed, which led them to drive more slowly than the legal limit.37 The authors concluded that drivers should trust their instincts in fog and drive more slowly in order to stay safe on the road.
Overall, perception of a moving world is a highly complicated process involving multiple stages of processing, several processing pathways, and different cortical areas to analyse the information from the retina.
About the author
Dr Samantha Strong PhD, MBPsS, AFHEA is a post-doctoral researcher based in the School of Optometry and Vision Science at the University of Bradford. She holds a BSc in Psychology from the University of York, and a PhD in Vision Science from the University of Bradford. Her research involves using neuroimaging techniques such as fMRI and TMS to investigate perception of visual stimuli in the human brain.
- Stein J (2001) The magnocellular theory of developmental dyslexia. Dyslexia, 7: 12-36
- Kline K, Holcombe AO, Eagleman DM (2004) Illusory motion is caused by rivalry, not by perceptual snapshots of the visual field. Vision Research, 44: 2653-2658
- Stein J (2003) Visual motion sensitivity and reading. Neuropsychologia, 41(13): 1785-1793
- Blake R, Sekuler R, Grossman E (2003) Motion processing in human visual cortex. In J H Kaas and C E Collins (Editors), The Primate Visual System
- Schwartz SH (2009) Visual perception: a clinical orientation. (4th Ed) McGraw-Hill Medical: UK
- Chen EY, Marre O, Fisher C et al (2013) Alert Response to Motion Onset in the Retina. The Journal of Neuroscience, 33(1), 120–132
- Merigan WH, Byrne CE, Maunsell JH (1991) Does primate motion perception depend on the magnocellular pathway? The Journal of Neuroscience, 11(11): 3422-3429
- Goodale MA, Milner AD (1992) Separate pathways for perception and action. Trends in Neuroscience, 15: 20–25
- Maunsell JH, Nealey TA, DePriest DD (1990) Magnocellular and parvocellular contributions to responses in the middle temporal visual area (MT) of the macaque monkey. The Journal of Neuroscience, 10(10): 3323-3334
- Derrington AM, Lennie P (1984) Spatial and temporal contrast sensitivities of neurones in lateral geniculate nucleus of macaque. Journal of Neurophysiology, 357: 219-240
- Andrewes D (2015) Neuropsychology: From Theory to Practice. (2nd Ed) Psychology Press Ltd: UK
- Snowden RJ, Thompson P, Troscianko T (2012) Basic vision: an introduction to visual perception. (Revised Ed) Oxford University Press: UK
- Mishkin M, Ungerleider LG, Macko KA (1983) Object vision and spatial vision: two cortical pathways. Trends in Neuroscience, 6: 414-417
- Zeki S, Watson JD, Lueck CJ et al. (1991) A direct demonstration of functional specialization in human visual cortex. Journal of Neuroscience, 11: 641-649
- McKeefry DJ, Burton MP, Vakrou C et al. (2008) Induced deficits in speed perception by transcranial magnetic stimulation of human cortical areas V5/MT+ and V3A. Journal of Neuroscience, 28: 6848-6857
- Orban G (2005) V3B
- Huk AC, Dougherty RF, Heeger DJ (2002) Retinotopy and functional subdivision of human areas MT and MST. The Journal of Neuroscience, 22: 7195-7205
- Amano K, Wandell BA, Dumoulin SO (2009) Visual field maps, population receptive field sizes, and visual field coverage in the human MT+ complex. Journal of Neurophysiology, 102: 2704-2718.
- Pitzalis S, Sereno MI, Committeri G et al. (2010) Human V6: the medial motion area. Cerebral Cortex, 20: 411-424
- Braddick OJ, O’Brien JM, Wattam-Bell J et al (2000) Form and motion coherence activate independent, but not dorsal/ventral segregated, networks in the human brain. Current Biology, 10(12): 731-734
- Furlan M, Wann JP, Smith AT (2014) A representation of changing heading direction in human cortical areas pVIP and CSv. Cerebral Cortex, 24(11):2848-2858
- Johansson G (1973) Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14(2):201-211
- Zihl J, Von Cramon D, Mai N (1983) Selective disturbance of movement vision after bilateral brain damage. Brain, 106: 313-340
- Bartels A, Zeki S, Logothetis NK (2008) Natural vision reveals regional specialization to local motion and to contrast-invariant, global flow in the human brain. Cerebral Cortex, 18(3): 705-717
- Cavanagh P, Mather G (1989) Motion: The long and short of it. Spatial Vision, 4(2):103-129
- Nishida SY, Ledgeway T, Edwards M (1997) Dual multiple-scale processing for motion in the human visual system. Vision Research, 37(19):2685-2698
- Warren PA, Rushton SK (2007) Perception of object trajectory: parsing retinal motion into self and object movement components. Journal of Vision, 7(11):2-2
- Gregory RL (1958) Eye movements and the stability of the visual world. Nature, 182(4644):1214-1216
- Thiele A, Henning P, Kubischik M et al (2002) Neural mechanisms of saccadic suppression. Science, 295(5564):2460-2462
- Matin E (1974) Saccadic suppression: a review and an analysis. Psychological bulletin, 81(12):899
- Burr D, Thompson P (2011) Motion psychophysics: 1985-2010. Vision Research, 51(13): 1431-1456
- Hirsch J, Miller WH (1987) Does cone positional disorder limit resolution? Journal of the Optical Society of America, 4: 1481-1492
- Gegenfurtner KR, Mayser H, Sharpe LT (1999) Seeing movement in the dark. Nature, 398(6727):475-476
- Gegenfurtner KR, Mayser HM, Sharpe LT (2000) Motion perception at scotopic light levels. Journal of the Optical Society of America A, 17(9): 1505-15
- Anstis S (2004) Factors affecting footsteps: contrast can change the apparent speed, amplitude and direction of motion. Vision Research, 44: 2171-2178
- Thompson P, Brooks K, Hammett ST (2006) Speed can go up as well as down at low contrast: Implications for models of motion perception. Vision Research, 46(6):782-786
- Pretto P, Bresciani JP, Rainer G et al (2012) Foggy perception slows us down. eLife, 1: e00031