Incoming data from the retina is routed into two pathways within the brain’s visual system: one that processes color and fine spatial detail, and another that handles spatial location and rapid changes over time. A recent study from MIT offers insights into how these two pathways may be shaped by developmental factors.
Newborns typically have poor visual acuity and poor color vision because their retinal cone cells are not well-developed at birth. As a result, early in life they see blurry, color-impoverished imagery. The MIT team proposes that this degraded input may drive some brain cells to specialize in low spatial frequencies and weak color tuning, corresponding to the magnocellular system. As visual acuity and color vision improve, other cells may tune themselves to finer detail and richer color, corresponding to the other pathway, known as the parvocellular system.
To validate their hypothesis, the researchers trained computational vision models on a sequence of inputs resembling those received by human infants during early development—initially low-quality images, followed by full-color, sharper visuals later on. They discovered that these models developed processing units with receptive fields bearing some resemblance to the differentiation of magnocellular and parvocellular pathways in the human visual system. Models trained solely on high-quality images did not acquire such distinct characteristics.
“The results may imply a mechanistic explanation for the emergence of the parvo/magno distinction, a principal organizing feature of the visual pathway in the mammalian brain,” states Pawan Sinha, an MIT professor of brain and cognitive sciences and the lead author of the study.
MIT postdoctoral researchers Marin Vogelsang and Lukas Vogelsang are the primary authors of the study, published today in the journal Communications Biology. Sidney Diamond, an MIT research affiliate, and Gordon Pipa, a professor of neuroinformatics at the University of Osnabrueck, are also contributors to the paper.
Sensory input
The notion that low-quality visual input might facilitate development emerged from research on children who were born blind but later had their sight restored. Sinha’s laboratory, through Project Prakash, has screened and treated thousands of children in India, where reversible forms of vision impairment like cataracts are relatively prevalent. After regaining their sight, many of these children volunteer for studies where Sinha and his team monitor their visual development.
In one of these studies, the researchers found that children who had cataracts removed showed a marked drop in object-recognition performance when shown black-and-white images rather than color ones. These findings led the researchers to propose that the limited color input typical of early development, far from being a hindrance, helps the brain learn to recognize objects even in images with impoverished or altered color.
“Restricting access to rich colors at the beginning appears to be an effective strategy for building resilience to color variations and enhancing the system’s robustness to color loss in images,” Sinha says.
In that study, the researchers also discovered that when computational vision models were first trained on grayscale images, followed by color images, their object-recognition capability was more robust compared to models trained solely on color images. Similarly, another investigation from the lab noted that models performed better when initially trained on blurry images, followed by clearer images.
To expand upon these discoveries, the MIT team aimed to investigate the implications of having both color and visual acuity constrained at the start of development. They hypothesized that these restrictions could influence the formation of the magnocellular and parvocellular pathways.
Besides being finely tuned to color, cells in the parvocellular pathway possess small receptive fields, which allow them to receive input from more compact clusters of retinal ganglion cells. This aids in processing fine details. On the other hand, cells in the magnocellular pathway gather information from broader areas, enabling them to handle more global spatial details.
To examine their hypothesis that developmental trajectories could affect the selectivity of magno and parvo cells, the researchers trained models on two distinct sets of images. One model was fed a standard dataset utilized for training models to categorize objects. The alternative dataset was crafted to roughly replicate the input the human visual system experiences from birth. This “biomimetic” data comprises low-resolution, grayscale visuals in the initial training phase, succeeded by high-resolution, colorful visuals later.
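The staged training regimen described above can be sketched as a simple curriculum schedule that controls how much blur and color each training phase receives. The sketch below is purely illustrative: the blur levels, the linear ramp, and the point at which input reaches full quality are assumptions, not the study’s actual parameters.

```python
def biomimetic_stage(epoch, total_epochs=100):
    """Return (blur_sigma, color_saturation) for a given training epoch.

    Early epochs mimic newborn vision: heavy Gaussian blur and no color.
    Both constraints relax linearly until input reaches full quality by
    mid-training. All numbers are illustrative, not the study's schedule.
    """
    progress = min(epoch / (total_epochs / 2), 1.0)  # mature by mid-training
    blur_sigma = 4.0 * (1.0 - progress)  # strength of Gaussian blur, in pixels
    saturation = progress                # 0 = grayscale, 1 = full color
    return blur_sigma, saturation
```

In a real training loop, these two values would parameterize image transforms (for example, a Gaussian blur kernel and a grayscale-to-color interpolation) applied to each batch before it is fed to the model.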
After training the models, the researchers analyzed their processing units, the nodes within the network that loosely resemble clusters of visual-processing cells in the brain. They found that models trained on biomimetic data developed a distinct subset of units tuned to low-color, low-spatial-frequency input, akin to the magnocellular pathway. These biomimetic models also developed groups of more heterogeneous, parvocellular-like units tuned predominantly to higher spatial frequencies or richer color signals. No such distinctions emerged in models trained from the outset on full-color, high-resolution images.
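One simple way to quantify whether a unit is color-indifferent (magno-like) or color-tuned (parvo-like) is a contrast-style selectivity index comparing the unit’s responses to color versus grayscale versions of the same stimuli. This kind of index is a common convention in visual neuroscience; it is not necessarily the exact metric used in the paper.

```python
def color_selectivity(resp_color, resp_gray):
    """Color-selectivity index in [-1, 1] for one unit.

    +1 means the unit responds only to color stimuli, 0 means it is
    indifferent to color, and -1 means it responds only to grayscale.
    Inputs are mean (non-negative) response magnitudes to matched
    color and grayscale image sets.
    """
    total = resp_color + resp_gray
    if total == 0:
        return 0.0  # unresponsive unit: call it indifferent
    return (resp_color - resp_gray) / total
```

Units near zero on this index, combined with a preference for low spatial frequencies, would be the magnocellular-like candidates described above.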
“This provides some backing for the notion that the ‘correlation’ observed in the biological system could result from the types of inputs available simultaneously during normal development,” notes Lukas Vogelsang.
Object recognition
The researchers conducted further tests to uncover the strategies the differently trained models use for object recognition. In one experiment, they asked the models to categorize objects whose shape and texture were mismatched, such as an animal with the shape of a cat but the texture of an elephant.
This technique has been utilized by several researchers in the field to identify which image features a model relies on for categorizing objects: the overall shape or the detailed textures. The MIT team discovered that models trained on biomimetic input were significantly more inclined to utilize an object’s shape for making these decisions, mirroring human behavior. Furthermore, when the researchers systematically eliminated the magnocellular-like units from the models, the models quickly lost their propensity to rely on shape for categorization.
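A shape-bias score for such cue-conflict tests is typically computed as the fraction of cue-following classifications that match the shape label rather than the texture label. The sketch below illustrates that scoring scheme; the data format is hypothetical.

```python
def shape_bias(decisions):
    """Shape-bias score from cue-conflict trials.

    `decisions` is a list of (predicted_label, shape_label, texture_label)
    tuples, one per cue-conflict image. The score is the fraction of
    cue-following answers (those matching either cue) that matched the
    shape. Predictions matching neither cue are ignored.
    """
    shape_hits = sum(1 for pred, shape, _ in decisions if pred == shape)
    texture_hits = sum(1 for pred, _, texture in decisions if pred == texture)
    followed = shape_hits + texture_hits
    return shape_hits / followed if followed else 0.0
```

On this score, a value near 1.0 indicates shape-driven categorization, as reported for humans and for the biomimetically trained models.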
In another round of experiments, the researchers trained models on videos rather than still images, introducing a temporal aspect. In addition to low spatial resolution and color sensitivity, the magnocellular pathway reacts to high temporal frequencies, allowing for the rapid detection of object position changes. When models were trained on biomimetic video input, the units most sensitive to high temporal frequencies were indeed those that also displayed magnocellular-like characteristics in the spatial domain.
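A crude way to flag units sensitive to high temporal frequencies is to compare how strongly a unit’s response fluctuates from frame to frame against its overall response level. The measure below is a rough illustration of that idea, not the analysis performed in the study.

```python
def temporal_modulation(responses):
    """Crude temporal-modulation index for one unit.

    `responses` is the unit's activation over successive video frames.
    The index is the mean frame-to-frame change divided by the mean
    response magnitude; higher values suggest faster, magno-like
    dynamics. Illustrative only.
    """
    if len(responses) < 2:
        return 0.0
    deltas = [abs(b - a) for a, b in zip(responses, responses[1:])]
    mean_resp = sum(abs(r) for r in responses) / len(responses)
    if mean_resp == 0:
        return 0.0
    return (sum(deltas) / len(deltas)) / mean_resp
```

A unit with a steady response scores near zero, while one that flips sign every frame scores high, mirroring the finding that the temporally sensitive units were the same ones with magnocellular-like spatial tuning.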
Overall, the researchers say, the results support the idea that early exposure to low-quality sensory input can shape the organization of the brain’s sensory processing pathways. The findings do not rule out innate specification of the magno and parvo pathways, but they show that visual experience over the course of development may also play a significant role.
“The overarching theme appears to be that the developmental journey we undergo is intricately structured to provide us with specific kinds of perceptual proficiencies, and it may also impact the very arrangement of the brain,” Sinha concludes.
The research received funding from the National Institutes of Health, the Simons Center for the Social Brain, the Japan Society for the Promotion of Science, and the Yamada Science Foundation.