Questionnaire

In his half-century career, Jerome Kagan, PhD, became one of the world's pre-eminent researchers in developmental psychology, best known for his studies finding that some aspects of our temperament—such as anxiety and shyness—are inborn, appearing as early as infancy and staying with us throughout our lives.

Never one to shy away from controversy, Kagan lays out in his new book of essays, "Psychology's Ghosts: The Crisis in the Profession and the Way Back," what he sees as the major problems facing psychological research—problems he says hold psychology back from making the kinds of grand discoveries possible in other scientific fields such as biology and physics.

Kagan spoke to the Monitor about his vision for making psychology research more productive.

What are "Psychology's Ghosts?"

"Ghosts" refer to many of the [unfounded] assumptions that psychologists make as they conduct their research.

For example, there are many studies in which a team of psychologists uses one procedure in one setting, gets a result and assumes that that result would hold no matter where you did the study, no matter what the procedure was and no matter what the population was. That strategy tempts the investigator to assume that the concept inferred from the data applies broadly.

Another ghost is our approach to research on mental illness. The DSM categories for mental illness are the only disease categories in all of medicine that do not take etiology or cause into account. In psychiatry, we have disease categories based only on symptoms. That would never occur in cancer or cardiology or immunology, where you always diagnose on the basis of both the symptoms and the cause.

So what that means is that every category today in the DSM-IV, and in the DSM-5, which will be published next year, has a heterogeneous etiology. So the only way to make progress is to collect other psychological and biological evidence, not just reports of symptoms.

If we did that for a category like major depressive disorder, we'd see that it's actually—I'm going to make up a number here—maybe six different diseases with six different causes.

So what's holding us back from developing these kinds of multiple categories in the DSM, based on genetic or other causes?

We don't know enough. But then the committee preparing the DSM and the psychiatrists and psychologists who use it should be sensitive to that. That is to say, a psychologist or psychiatrist treating a depressed person should be sensitive to the possible cause and try to discern it, by gathering psychological data, by requesting biological data, so that he or she can provide better treatment. But few clinicians are doing that.

Let's talk about treatment. In another essay in the book, you argue that cognitive behavioral therapy may be on the verge of losing its effectiveness. Why is that?

A new therapy usually is effective because it's new, and the patient and the doctor think it's very effective. It usually takes about 50 years for a therapeutic ritual to lose its effectiveness. For example, psychoanalysis was very effective for about 50 years, and then the therapists lost faith in it, and therefore it stopped working.

The recent reviews of cognitive behavioral therapy, which is 50 years old, are beginning to show the same thing. It is not the specific ritual that matters; it is whether the therapist has faith in this ritual, whether it's valid or not, and whether the therapist communicates that faith to the patient.

A wonderful example is that, under Mao Zedong, psychoanalysis never took hold in China. Now that China is growing more capitalist, suddenly psychoanalytic theory is new there. And so psychoanalysts are practicing in China and patients are getting better. Meanwhile, in the United States, very few therapists practice psychoanalysis, because the 50 years are up.

So what type of therapy do you think will be big next in the United States?

Therapy over the Internet. I can predict what's going to happen. You're going to have papers showing how effective therapy over the Internet is, because it's new. And then in 50 years, no one will be doing it anymore.

But the larger point is that obviously one size treatment can't fit all. I'm willing to bet that there are some depressives for whom cognitive behavioral therapy is the perfect therapy. But until we find out for which depressives that is true, we're going to continue this way, where some people get better and some don't.

At the end of the book, you lay out what you call "modest suggestions" to improve psychological research. Where did they come from?

Right now, I'm reading the Nobel laureate acceptance speeches in biology from 1996 to 2000. Now, in every one of these cases, what did these investigators do? They picked a problem that was puzzling and they began to probe it. And their first probes were rarely successful—it took them 10, 15, 20 years. The man who studied hemoglobin worked for 30 years before he discovered its structure.

Now go to a typical psychological journal, our best journals. Most investigators spend a few years [on a problem] and when they get frustrated they shift to another topic. They become impatient.

Psychologists also need to use varied methods and combine all kinds of techniques to learn. But 70 percent or more of the research on humans by research psychologists uses one method, and most of the time that method is a verbal report on a questionnaire.

Now think about that. How could an investigator make a significant discovery by examining only what people say? Humans have been listening to humans for 150,000 years, and we still don't understand human behavior.

It's absolutely necessary to gather more than one source of data, no matter what you're studying. You have to combine verbal report with behavioral observations, and, better yet, combine it with behavioral observations and some biology: fMRI, muscle tension, heart rate, blood pressure, skin conductance. The point is the more data you have, the more likely you are to understand the puzzle you're trying to solve. Every biologist understands that.

Do you have any favorite examples of psychologists who have gotten it right?

A nice example would be Marta Kutas at the University of California, San Diego, who has studied the N400 waveform in the event-related potential. Marta has clarified what that waveform means, because she stuck with it. Because every phenomenon in nature is difficult to detect accurately, she realized she had to vary all kinds of contexts to learn what she did.

Another example is Gerard Bruder of Columbia University. Bruder examined the patients that were sent to him with a diagnosis of depressive disorder and tested them with EEG. He found that the ones who got better with SSRIs were left-frontal active and the ones who didn't were right-frontal active. That's a beautiful example of combining the verbal description of the symptoms with a biological measure. That strategy helps the clinician and scientist separate two kinds of depressives. One profits from SSRIs, the other doesn't.

Are more people looking at these physiological measures as the technology improves?

Yes. Compared with 20 years ago, many more psychologists are gathering fMRI [data]. They gather fMRI data and some verbal evidence—that's better than gathering one measure. But often psychologists and neuroscientists invest a great deal of time and money gathering the brain data and then they relate this evidence to the replies on a 15-minute questionnaire, or the answer to one question, "How are you feeling?" This strategy is unlikely to make a major discovery. So, I think we have to get a little more complicated in the verbal reports that we get.

We also have to look at patterns of data. Too often, psychological studies look at one dependent variable. For example, the answers to a personality questionnaire are only one measure. So, too, is cortisol concentration in the saliva or an event-related potential. Nature does not work that way. Natural phenomena consist of patterns of features. A single measure can be the product of more than one condition. Therefore, one cannot know the meaning of any single measure—it is necessary to combine it with other measures. That is what biologists do.

If you look at my book "The Long Shadow of Temperament," which I wrote with Nancy Snidman, it was the patterns that helped us understand our data. When we began the work, we didn't know that there were high-reactive infants. But when we examined videotapes of many infants responding to a series of unfamiliar stimuli, we found that some infants combined a pattern of three responses: vigorous limb movement, arches of the back and crying. This pattern defines a high-reactive infant.

If we had only used one of these measures, Nancy and I would not have made the discoveries we did. Only the infants who combined limb activity, arches and crying became timid, shy children and adolescents with social anxiety. I couldn't ask for a better example of the utility of looking for patterns of evidence.