They found that the brain classifies these objects along three dimensions, each encoded in three to five different regions of the brain, and that those regions were the same across participants. What's more, the researchers were able to predict which parts of the brain would light up when they introduced a new noun. Going back over the data, they could tell which noun a person was looking at from the fMRI alone, with an average accuracy of 72%.
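That kind of decoding can be sketched as a nearest-prototype classifier: compare a new scan against a stored activation pattern for each noun and pick the best match. Everything below is synthetic and illustrative (random "voxel" data, five nouns instead of the study's actual stimulus set), not the researchers' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 nouns, each with a characteristic activation
# pattern across 100 voxels (real fMRI data would stand in here).
n_nouns, n_voxels = 5, 100
prototypes = rng.normal(size=(n_nouns, n_voxels))

def decode(observed, prototypes):
    """Return the index of the prototype most correlated with the scan."""
    scores = [np.corrcoef(observed, p)[0, 1] for p in prototypes]
    return int(np.argmax(scores))

# Simulate a noisy scan of noun 3 and try to recover which noun it was.
scan = prototypes[3] + rng.normal(scale=0.5, size=n_voxels)
print(decode(scan, prototypes))
```

Accuracy in such a scheme is just the fraction of held-out scans whose decoded label matches the true one, which is how a figure like 72% would be computed.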
From the Science Daily article:
(T)he three codes or factors concern basic human fundamentals:
1. how you physically interact with the object (how you hold it, kick it, twist it, etc.);
2. how it is related to eating (biting, sipping, tasting, swallowing); and
3. how it is related to shelter or enclosure.

I wonder whether the same thing would happen with a native speaker of Chinese, which is said to emphasize verbs more than nouns.
The researchers point out that their list of nouns didn't include any involving sex, love, or reproduction; there would likely, they said, be a similar way of coding those relationships. I predict they will look at that soon.
They also suggest clinical applications: agoraphobia, for example, might involve an exaggerated shelter dimension, and autism might involve weaker coding of social contact.