It’s a wise life form that knows enough to take good care of its environment. Does Homo sapiens fill the bill? The jury is still out. This weekend, cast your vote for taking care of Earth by participating in Earth Hour: between 8:30 and 9:30 P.M. on Saturday, March 28, turn the lights off. (In fact, I’d say to consider ramping down your power use as much as possible: shut down the TV, the stove, other appliances, your computer…shoot, don’t even read this blog—just for that one hour anyway.) It’s a symbolic action to call attention to the need to address global climate change. Our species is good at attaching meaning to symbols, and this particular symbol could speak volumes to elected officials. Learn more and sign up at the Earth Hour web site. (I’m happy that Indiana University Bloomington is a flagship campus for this year’s Earth Hour and will be taking action to reduce the university’s power usage during that hour; I hope that’s the start of long-term energy reduction measures on campus.)
You’ve probably heard about research into the universality of facial expressions, which has revealed that some emotions are associated with particular facial expressions that are recognizable around the world. It turns out that music evidently has some of that same universal ability to express emotions. Twenty-one members of a Cameroonian ethnic group, the Mafa, were able to identify happiness, sadness, and fear in Western music on their first exposure to it. Furthermore, the clues they used to identify the emotions were similar to those used by Westerners: temporal patterns and musical mode. I’ve long been curious about what makes music sound happy or sad, not to mention how it expresses a host of subtle emotional hues and shades, so it’s interesting to see that whatever it is that makes Western music expressive, it may be something about people in general, not just about people who were raised on this music. This article from Science Daily has the details.
A recent experiment at Ohio State, described in this story from Science Daily, looked at how depressed and non-depressed people view positive and negative things in their environment. To examine how people form positive or negative attitudes, researchers used a computer game that neatly sidesteps any possible confusion from pre-existing attitudes about particular topics. The game introduces players to a variety of beans with different appearances. They can accept or reject each bean as it appears on the screen; some beans are good beans, adding points to a player’s score, while others are bad beans, resulting in points being lost. The goodness or badness of a bean is reliably indicated by its appearance, and players have to learn to identify beans based on their experience with the game.
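For readers who like to see the mechanics spelled out, here's a toy sketch of how a task like the bean game might work. To be clear, this is my own reconstruction, not the actual Ohio State software: the bean names, point values, and learning rule are all assumptions I've made up for illustration. The key features from the study are there, though: a bean's appearance reliably signals its value, and the player learns which beans to accept purely from feedback.

```python
import random

def run_bean_game(rounds=200, seed=1):
    """Simulate a learner in a bean-game-style task.

    Each bean's appearance reliably signals its value; the learner keeps
    a running value estimate per appearance and accepts only beans it
    currently estimates as good. Names and numbers are illustrative.
    """
    rng = random.Random(seed)
    bean_value = {"speckled": 10, "striped": -10}   # appearance -> points
    estimate = {kind: 0.0 for kind in bean_value}   # learner's estimates
    score = 0
    for _ in range(rounds):
        kind = rng.choice(list(bean_value))
        if estimate[kind] >= 0:   # accept only beans estimated as good
            outcome = bean_value[kind]
            score += outcome
            # Simple delta-rule update: nudge the estimate toward the outcome.
            estimate[kind] += 0.2 * (outcome - estimate[kind])
    return score, estimate
```

In a sketch like this, one bad experience with a "striped" bean is enough to start rejecting it, while "speckled" beans keep getting accepted and their estimated value climbs; the interesting part of the real experiment is that depressed participants learned the bad beans just as well but were slower to lock onto the good ones.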
In this particular experiment with the bean game, depressed and non-depressed people were equally good at identifying the bad beans. However, depressed people didn’t do as well as the non-depressed at identifying the good beans. This seems to me to present an interesting chicken-and-egg question: Are people slower to spot the good things because they are depressed, or are they depressed because they’re slower to spot the good things? (I suspect the answer might be “Yes”; i.e., both are true.) The Science Daily article seems to come down on the latter side; it concludes by suggesting that therapists who are treating depressed people might try to make them more aware of the good things in their lives. This is probably excellent advice, but I think there’s more to it than that.
It seems to me—based, I hasten to note, on nothing more than my own experiences with depression—that maybe the crucial missing piece in a depressed person’s experience of the game is that to a depressed person, good things don’t reliably feel good. The word “anhedonia” describes the lack of pleasure in normally enjoyable activities that forms, for me, the core experience of depression, and I think it may be what’s at work in the depressed people’s poorer performance in recognizing the good beans. They just don’t always feel whatever it is that identifies experiences as being positive, pleasurable, or worthwhile. Reminding myself of the many blessings in my life is always a good thing to do, but sometimes it seems like an intellectual exercise that doesn’t really do much to bring back the normal feeling of enjoying those blessings. I wish I knew better what it is that brings that feeling of enjoyment back, or makes it go away, but I’d bet that its absence is at the heart of the difference in performance on the bean game.
Characters in novels, movies, and other fictions can seem quite real (we root for one and boo another, for example, and cry sometimes when one of them dies). Yet for all that, we can easily distinguish them from real people, people that we know personally. But how do you know that your mother is real, for example, but Scarlett O’Hara is not?
An ingenious recent fMRI study compared brain activity in cases where people contemplated scenarios involving fictional characters, famous people that they didn’t know personally, and friends or family members. Participants had to determine the plausibility of actions like dreaming about a fictional character (possible), talking with a fictional character (impossible), or having dinner with a real person (possible).
Two brain areas appeared to be involved in the activity of distinguishing flesh-and-blood people from the purely mental constructs that are fictional characters: the anterior medial prefrontal cortex and the posterior cingulate cortex. These are parts of the brain’s default network, which kicks in when we’re not doing anything in particular and our minds go wandering over an internal landscape; both areas are believed to be important in self-referential thought and the recall of autobiographical memories. These brain areas were most active in the tasks involving friends and family, moderately active in tasks involving famous people who were not personally known, and least active in tasks involving a fictional character. The idea is that perhaps you know your mother is real because your brain codes her as being more personally relevant to you than a fictional character is.
The paper is available on PLoS ONE: Reality = Relevance? Insights from Spontaneous Modulations of the Brain’s Default Network when Telling Apart Reality from Fiction, Anna Abraham and D. Yves von Cramon. It’s got lots of interesting background, and some fascinating material on the possible relevance of this work and ways it could be extended. I’d love to know, for example, how particularly well-known and loved fictional characters fall on the spectrum of brain activity, and also what a writer’s brain looks like when it’s contemplating characters it has created. Meanwhile, it’s time for me to immerse myself in a fictional world and a hot bath.
It appears that musical training will do more than enhance your understanding and perception of music. In a study at Northwestern, musicians were better than non-musicians at detecting emotion in the sound of a baby’s cry. An examination of brain activity revealed that musicians’ brains appear to be better able to focus on the more complicated part of the sound, which conveys the emotional meaning, while giving less attention to the simpler, less emotion-laden part. This article from PhysOrg.com gives some details. (The article, by Dana Strait et al., will appear in the European Journal of Neuroscience.) I remember hearing a relatively young Joshua Bell play Mendelssohn’s Violin Concerto with what seemed to me an emotional depth and richness beyond his years; I wonder if his years of musical training had anything to do with his sensitivity to emotional nuance.
The next time a particularly memory-laden song from your teen years comes across your iPod playlist and you suddenly start remembering people and places from long ago, thank your medial prefrontal cortex. A recent fMRI study at UC Davis indicates that this area links our autobiographical memories and our emotional response to the music associated with them. Because this area is one of the last to be affected by Alzheimer’s disease, the findings could explain why people with Alzheimer’s still recognize and respond to music even after other memories are gone. This press release from EurekAlert gives an overview; the complete article is also available online, at least at the moment. (The Neural Architecture of Music-Evoked Autobiographical Memories, Petr Janata. Cerebral Cortex, Advance Access published online Feb. 24, 2009)