To Daniel Kahneman, PhD, the human mind is a marvel, but a fallible one.

Kahneman, who is best known as the only psychologist to win a Nobel Prize (in economics), has spent decades investigating people's automatic thought processes. He has found that what he calls our "System 1"—our automatic, intuitive mind—usually lets us navigate the world easily and successfully. But, when unchecked by "System 2"—our controlled, deliberative, analytical mind—System 1 also leads us to make regular, predictable errors in judgment.

Considering those errors in the 1970s led Kahneman and his longtime collaborator Amos Tversky, PhD, who died in 1996, to develop the Nobel Prize-winning theory that explains why human beings often make economic decisions that aren't perfectly rational—in contrast to what economists had long believed.

Kahneman spoke to the Monitor about his new book, "Thinking, Fast and Slow," which sums up his life's research on human judgments, decision-making and, most recently, happiness.

Can you give an example of how System 1 and System 2 work?

I should begin by saying that I don't believe they are really systems. They are expository fictions, and I write the book as a psychodrama between two fictitious characters.

System 1 is in charge of almost everything we do. Most of what we do is skilled, and skilled activities are largely carried out effortlessly and automatically. That even includes routine conversation; it's very low effort. So System 1 is a marvel, with some flaws. System 2 is slow and clunky but capable of performing complicated actions that System 1 cannot carry out.

For example, if I say 2 plus 2, a number comes to your mind. That is System 1 working. You didn't have to compute it, you didn't have to do anything deliberate, it just popped out of your associative memory.

If I say 17 times 24, no number comes to your mind—you'd have to compute it. And if you computed it, you'd be investing effort. Your pupils would get larger, your heart rate would accelerate, and you'd be working. That's System 2.

What is the significance of these two systems? What are the implications for psychologists and laypeople?

They're really two modes of thinking. And everybody recognizes the difference between thoughts that come to mind automatically and thoughts that you need to produce. That is the distinction.

The main point that I make is that System 1 is very efficient and highly skilled, and in general it's monitored by System 2. But we're generally experts at what we're doing, we do most of what we do well, so System 2 mostly endorses the actions that System 1 generates. System 2 is, in part, a mechanism for second-guessing or controlling yourself. But most of the time, we don't have to do much of that.

But, System 1 can sometimes lead us astray when it's unchecked by System 2. For example, you write about a concept called "WYSIATI"—What You See Is All There Is. What does that mean, and how does it relate to System 1 and System 2?

System 1 is a storyteller. It tells the best stories that it can from the information available, even when the information is sparse or unreliable. And that makes stories that are based on very different qualities of evidence equally compelling. Our measure of how "good" a story is—how confident we are in its accuracy—is not an evaluation of the reliability of the evidence and its quality, it's a measure of the coherence of the story.

People are designed to tell the best story possible. So WYSIATI means that we use the information we have as if it is the only information. We don't spend much time saying, "Well, there is much we don't know." We make do with what we do know. And that concept is very central to the functioning of our mind.

There is a very nice example of this, and it's actually the thing that impressed Malcolm Gladwell when he wrote the book "Blink." We form an impression of people within less than a second of meeting them, in some cases. We decide whether they're friendly, hostile or dominant, and whether we're going to like them. And clearly, we form that impression with inadequate information, just based on their facial features or movements. This is WYSIATI—we don't wait for more information, we form impressions on the basis of what is available to us.

Gladwell emphasized that there was some accuracy to those impressions, but they are very far from perfectly accurate. They're better than nothing … but what is striking is that you form them immediately, in the absence of adequate information.

Can a person train himself or herself to say, "Wait, what other information is out there that I'm missing?"

Well, the main point that I make is that confidence is a feeling, it is not a judgment. And that feeling comes automatically; it itself is a product of System 1. My own intuition and my System 1 have really not been educated to be very different. Education influences System 2, and enables System 2 to pick up cues that "this is a situation where I'm likely to make those mistakes." So on rare occasions, I catch myself in the act of making a mistake, but normally I just go on and make it.

When the stakes are very high, I might stop myself. For example, when someone asks me for an opinion and I'm in a professional role, and I know that they are going to act on my opinion or take it very seriously, then I slow down. But I make very rash judgments all the time. I will make a long-term political prediction, then a little voice will remind me, "but you've written that long-term political predictions are nonsensical." But you know, I'll just go on making it, because it seems true and real at the time I'm making it. And that's the WYSIATI part of it. I can't see why it wouldn't be true.

You've mentioned that it might be easier for organizations to overcome these cognitive errors than individuals. Could you talk a little about why that is?

Well, it's very difficult for people to overcome their biases. Organizations, by their very nature, think slowly, and they have an opportunity to set machinery in place to think better. I'm not terribly optimistic about that either; I'm not generally known for optimism. But one could imagine an organization deciding to improve its decision-making, and we have some ideas about how it might do that.

Could you give an example?

An example I mentioned in the book is psychologist Gary Klein's "premortem" method. To use the method, an organization would gather its team before making a final decision on an important matter. Then, all the team members are asked to imagine that the decision led to disastrous failure, and to write up why it was a disaster. The method allows people to overcome "groupthink" by giving them permission to search for potential problems they might be overlooking.

And we published some other ideas in the Harvard Business Review this year. We presented a whole checklist of questions that could be asked, and recommendations.

You've more recently moved into the study of happiness, and you've found that life satisfaction and day-to-day happiness are different things, with different causes. Children, for example, seem to contribute to life satisfaction, but not day-to-day happiness. And in a study last year, you found that any income over $75,000 doesn't increase people's day-to-day "experienced happiness," but that more money can increase life satisfaction. Can you talk about this line of research?

Yes, people don't really distinguish between life satisfaction and happiness. At one point I was saying the word "happiness" should be retired because it's so ambiguous. But if it's to be retained, it should be applied to experience, and what you think about your life should be called "satisfaction" or some other word.

There's such a dilemma about defining subjective well-being. All of us prefer unitary definitions to definitions that are all over the place. But for this one, a unitary definition doesn't work, because you cannot ignore life satisfaction as a measure of well-being. And the reason you cannot ignore it is that this is what people want to achieve. People have goals, and they want to achieve those goals. And the misery of diapers and measles and all that, and the joy of children, are incommensurate. I mean, it's not clear that one of them is vastly more important than the other. So, this is one way of looking at it.

On the other hand, you really cannot ignore the experiencing self either. So how do you find a balance between these two not entirely compatible ways of looking at life and happiness and well-being? That's unsolved. I haven't solved it. I take some pride in having raised the question, but I haven't solved the question.

One last question. Your book was dedicated to Amos Tversky, and painted a vivid portrait of how you two worked together. Could you talk a bit about that collaboration and how it made your research possible?

We were really exceptionally lucky. What we were doing and the way we were doing it meshed very well. It turned out we could do a lot of research while walking, by testing each other's intuitions, and so we didn't need a lab. If the two of us agreed that we shared the same intuition, collecting the data and showing that other people shared the intuition became almost secondary. We were pretty sure, and we were almost always right.

Each of us found the other extremely interesting. That was a joy and we knew it. We were particularly fortunate because our skills overlapped enough that we understood each other immediately, but we kept surprising each other, and that's because we had somewhat different skills. I was more intuitive and he was more formal; he had the clearer mind. And the combination really worked extremely well. We were very lucky. That comes through, doesn't it?