Picture this: you’re in the passenger seat of a car, weaving through an urban metropolis – say, New York City. As expected, you see plenty of people: some rushing, some lingering, tourists and locals, old and young. But let’s zoom in: take just about any one of those individuals, and you will find 86 billion nerve cells, or neurons, in their brain carrying them through daily life. For comparison, the number of neurons in a single human brain is roughly ten thousand times the number of residents in New York City.
But let’s zoom in even further: each one of those 86 billion neurons is ever-so-slightly different from the others. For example, while some neurons make decisions extremely quickly to drive the brain’s most basic processes, others work more slowly, basing their decisions on the activity of surrounding neurons. This variation in decision-making time among our neurons is called heterogeneity. Researchers have long been certain that heterogeneity exists, but until recently they were unsure how much it matters to our lives. It is just one example of the almost incomprehensible detail that makes human thinking so complex, and so difficult for even modern researchers to fully understand.
Now, let’s zoom in again, but this time not on the person’s brain. Instead, let’s zoom into the cell phone this individual might have in their pocket or their hand. While a cell phone does not function in exactly the same way as the human brain, aspects of the device are certainly modeled after human thinking. Virtual assistants like Siri or Cortana, for instance, compose responses to general inquiries in a way that resembles human interaction.
This type of highly advanced digital experience is the result of artificial intelligence. Since the 1940s, elements of artificial intelligence have been modeled after features of the human brain, fashioned as a neural network composed of nodes, some serving as inputs and others as outputs. The nodes are comparable to brain cells, and they communicate with each other through a series of algorithms to produce outputs. However, in these technological brain models, every node is typically modeled the same way in terms of the time it takes to respond to a given situation (ScienceDaily 2021). This is quite unlike the human brain, where heterogeneity ensures that each neuron responds to stimuli at a different speed. But does this even matter? Do intricate qualities of the brain like heterogeneity really make a difference in our thinking, or in digital functioning if incorporated into artificial intelligence?
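To make the node idea concrete, here is a minimal sketch of how a single artificial node works – a weighted sum of inputs passed through a squashing function. The weights, layout, and sigmoid choice below are illustrative assumptions, not any particular production system:

```python
import math

def node_output(inputs, weights, bias):
    """One artificial 'node': a weighted sum of its inputs passed
    through a squashing function (here, the logistic sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A tiny two-layer network: three input values feed two hidden nodes,
# whose outputs in turn feed a single output node.
inputs = [0.5, -1.0, 2.0]
hidden = [
    node_output(inputs, [0.1, 0.4, -0.2], 0.0),
    node_output(inputs, [-0.3, 0.2, 0.5], 0.1),
]
output = node_output(hidden, [0.7, -0.6], 0.0)
print(round(output, 3))
```

Notice what the sketch leaves out: nothing in it says how long a node takes to respond – every node answers instantly and identically, which is exactly the uniformity the passage describes.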
The short answer is yes, at least in the case of heterogeneity. Researchers have recently investigated how heterogeneity influences an artificial neural network’s performance on visual and auditory classification tasks. In the study, each neuron in the network was assigned its own “time constant” – how long the cell takes to respond to a situation given the responses of nearby cells. By varying these time constants across neurons, the researchers controlled the heterogeneity of the artificial neural networks. The results were astonishing: once heterogeneity was introduced, the networks completed the tasks more efficiently and accurately. The strongest result was a 15-20% improvement on auditory tasks once heterogeneity was introduced to the artificial neural network (Perez-Nieves et al. 2021).
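The effect of a time constant can be sketched with simple leaky integrators. This is a conceptual illustration only, not the spiking model the study actually trained; the time-constant range and number of units are assumptions made for the example:

```python
import random

def simulate(taus, signal, dt=1.0):
    """Leaky integrators: each unit's state v decays toward the current
    input at a rate set by its own time constant tau.
    Euler step of dv/dt = (input - v) / tau."""
    v = [0.0] * len(taus)
    for x in signal:
        v = [vi + dt * (x - vi) / tau for vi, tau in zip(v, taus)]
    return v

signal = [1.0] * 20  # a constant stimulus over 20 time steps

# Homogeneous: every unit uses the same time constant,
# so all units respond in lockstep.
homo = simulate([10.0] * 5, signal)

# Heterogeneous: each unit draws its own time constant,
# so some respond quickly while others lag behind.
random.seed(0)
hetero = simulate([random.uniform(2.0, 20.0) for _ in range(5)], signal)

print([round(vi, 2) for vi in homo])    # identical values
print([round(vi, 2) for vi in hetero])  # a spread of response speeds
```

The heterogeneous population ends up with a spread of states for the same stimulus, which is the raw material the study suggests networks can exploit to represent time-varying signals more richly.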
This result indicates that heterogeneity helps us think systematically, improve our task performance, and learn in changing conditions (Perez-Nieves et al. 2021). So perhaps it would be advantageous to incorporate heterogeneity into standard artificial intelligence models. With this change, technology’s way of “thinking” would come one step closer to functioning like a human brain, adopting a similar level of complexity and intricacy.
So, why does this matter? If parts of artificial intelligence are modeled closer and closer to how the human brain works, real-world benefits abound – and on a level grander than virtual assistants. One prominent example is head and neck cancer prognosis. Clinical predictors of head and neck cancer prognosis include factors like age, pathological findings, HPV status, and tobacco and alcohol consumption (Chinnery et al. 2020). With so many factors at play, physicians spend considerable time analyzing patients’ histories and lifestyles to deduce an accurate prognosis. Artificial intelligence could instead model this complex web of factors, freeing physicians’ time for other endeavors.
This type of clinical application is still far from implementation, but remains in sight for modern researchers. As the brain is further explored and understood, more and more of the elements that comprise advanced human thinking can be incorporated into technology. Now, put yourself in the shoes of our New York City passerby: how would you feel if the small cell phone in your pocket were just as intelligent and efficient as the 86 billion neurons in your head? What if that cell phone solved problems like you do and thought like you think, in essence serving as a smaller version of your own brain? It is almost unfathomable! Yet, by harnessing heterogeneity, researchers have come one step closer to realizing this goal.
References
Chinnery, T., Arifin, A., Tay, K. Y., Leung, A., Nichols, A. C., Palma, D. A., Mattonen, S. A., & Lang, P. (2020). Utilizing artificial intelligence for head and neck cancer outcomes prediction from imaging. Canadian Association of Radiologists Journal, 72(1), 73–85. https://doi.org/10.1177/0846537120942134.
Perez-Nieves, N., Leung, V. C. H., Dragotti, P. L., & Goodman, D. F. M. (2021). Neural heterogeneity promotes robust learning. Nature Communications, 12(1). https://doi.org/10.1038/s41467-021-26022-3.
ScienceDaily. (2021, October 6). Brain cell differences could be key to learning in humans and AI. ScienceDaily. Retrieved February 27, 2022, from https://www.sciencedaily.com/releases/2021/10/211006112626.htm.