Press Release

Researchers Directly Observe Concepts in Human Brain

[Illustration: a person processing concepts in the brain]

While subjects name familiar objects, activation of semantic attributes is directly observable using electrocorticography. Credit: APL

When people see objects in the world, they probably are not thinking explicitly about the objects’ semantic characteristics: Is it alive? Is it edible? Is it bigger than a bread box? But activation of these kinds of semantic attributes in the human brain is now directly observable, according to recently published findings from Johns Hopkins University, its Applied Physics Laboratory, and its School of Medicine.

“Most research into how the human brain processes semantic information uses noninvasive neuroimaging approaches like functional magnetic resonance imaging, which indirectly measures neural activity via changes in blood flow,” said Nathan Crone, a neurologist at Johns Hopkins Medicine and a contributing author on the research. “Invasive alternatives like electrocorticography, or ECoG, can provide more direct observations of neural processing, but they can be used only in rare clinical settings where implanting electrodes directly on the surface of the cortex is a medical necessity, as in some cases of intractable epilepsy,” he explained.

Using ECoG recordings in epilepsy surgery patients at the Johns Hopkins Hospital, the team found that semantic information could be inferred from brain responses with very high fidelity while patients named pictures of objects. The findings were published in the article “Semantic attributes are encoded in human electrocorticographic signals during visual object recognition,” which appears in the March issue of NeuroImage and is now available online.

Researchers recorded ECoG while patients named objects from 12 different semantic categories, such as animals, foods and vehicles. “By learning the relationship between the semantic attributes associated with objects and the neural activity recorded when patients named these objects, we found that new objects could be decoded with very high accuracies,” said Michael Wolmetz, a cognitive neuroscientist at the Johns Hopkins Applied Physics Laboratory and one of the paper’s authors. “Using these methods, we observed how different semantic dimensions — whether an object is manmade or natural, how large it typically is, whether it’s edible, for example — were organized in each person’s brain.”
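The approach Wolmetz describes can be pictured as a regression from neural features to semantic attributes. The following Python sketch is purely illustrative: the feature dimensions, attribute vectors and random data are placeholder assumptions, not the study’s actual recordings or analysis pipeline.

import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical data, for illustration only.
# X: ECoG features per naming trial (e.g., high-gamma power at each electrode).
# Y: semantic attribute vector of the named object (alive? edible? manmade? ...).
rng = np.random.default_rng(0)
n_trials, n_features, n_attributes = 200, 64, 20
X = rng.standard_normal((n_trials, n_features))
Y = rng.standard_normal((n_trials, n_attributes))

# Learn a linear map from neural activity to semantic attributes.
model = Ridge(alpha=1.0).fit(X[:150], Y[:150])

# Decode a held-out trial: predict its attribute vector, then pick the candidate
# object whose known attribute vector correlates best with the prediction.
candidates = {name: rng.standard_normal(n_attributes) for name in ("dog", "apple", "truck")}
predicted = model.predict(X[150:151])[0]
decoded = max(candidates, key=lambda name: np.corrcoef(predicted, candidates[name])[0, 1])
print("decoded object:", decoded)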

Building on previous brain–computer interface research at Johns Hopkins showing that individual finger movements could be inferred from ECoG to control a prosthetic hand, this work demonstrates that individual concepts can also be inferred from similar brain signals. “This paradigm provides a framework for testing theories about what specific semantic features are represented in the human brain, how they are encoded in neural activity, and how cognitive processes modulate neurosemantic representations,” said Kyle Rupp, a doctoral student at Johns Hopkins and an author on the paper. “Likewise, from a decoding perspective, models that decompose items into semantic features are very powerful in that they can interpret neural activity from concept classes they have not been trained on.”
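Rupp’s point about unseen concept classes can be illustrated the same way: because the model predicts semantic attributes rather than object labels, it can be scored on objects withheld entirely from training. Again, every name, shape and number below is a placeholder assumption rather than the published analysis.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
# Known semantic attribute vectors for a few objects (hypothetical values).
attributes = {name: rng.standard_normal(20) for name in ("dog", "apple", "horse", "truck")}
train_objects = ["dog", "apple"]     # classes seen during training
test_objects = ["horse", "truck"]    # classes never seen during training

# Simulated ECoG features: 30 trials per object, 64 features per trial
# (random here, so the name is ignored; real features would differ by object).
def trials_for(name, n_trials=30):
    return rng.standard_normal((n_trials, 64))

X_train = np.vstack([trials_for(name) for name in train_objects])
Y_train = np.vstack([np.tile(attributes[name], (30, 1)) for name in train_objects])
model = Ridge(alpha=1.0).fit(X_train, Y_train)

# Zero-shot decoding: classify a trial from a withheld class by matching the
# predicted attribute vector against the withheld classes' known attributes.
trial = trials_for("horse", n_trials=1)
predicted = model.predict(trial)[0]
decoded = max(test_objects, key=lambda name: np.corrcoef(predicted, attributes[name])[0, 1])
print("zero-shot decoded object:", decoded)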

While today’s methods for using brain–computer interfaces for communication are extremely limited, these results, which show that semantic information can be studied and recovered using ECoG, suggest that improvements may be on the way.