The Future of Ego Depletion Research
ROY BAUMEISTER and MICHAEL INZLICHT
Moderated by WILHELM HOFMANN
HOFMANN: Nearly 20 years ago, as you all know, Roy and colleagues devised a theory that turned out to be highly influential. It basically states that self-control draws upon a limited pool of resources. And, when that limited pool of resources is low, self-control is typically impaired or compromised, resulting in a state called ego depletion. Literally hundreds of studies since then have shown the effect across many different domains and tasks, as summarized in the 2010 meta-analysis by Martin Hagger and colleagues, who arrived at an average effect size estimate - across 198 data points - of 0.62. Past ego depletion research has also uncovered many boundary conditions or moderators of this effect. And that research has led to refined insights into the underlying mechanisms of the ego depletion effect and has also led to reformulations of the theory, with the current state of the art being that there are capacity-based theories on the one hand and more motivational accounts on the other. And Michael Inzlicht, of course, has been a key figure in promoting a motivation-based perspective.
A couple of months ago, however, the field was troubled by a registered replication report published in Perspectives on Psychological Science. This RRR, led by Martin Hagger, failed to replicate the ego depletion findings reported in the 2014 Psychological Science paper by Sripada and colleagues. The replication team used the Sripada paradigm, and across 23 labs and more than 2,000 participants the reported overall effect size was very close to zero.
In the next 40 minutes or so, Roy Baumeister and Michael Inzlicht will share their assessments and perspectives on this failed replication. But also, on a more general note, they will share their ideas regarding what ego depletion is about, what the underlying mechanisms might be, and how the field should be developing in the future – that is, what types of research they would like to see, what types of explanations they think account best for the overall state of affairs. And so we are extremely happy to have Roy and Michael here today with us. Thanks so much for joining in.
I would like to start with Roy. So, Roy, what is your current assessment? What do you think of the recent RRR, what does it mean to you?
BAUMEISTER: Well, as you said, the theory started 20-some years ago. I have continued to revise my thinking on it with new data. That RRR, I think, has no impact on my thinking about it. They did a poor study and showed that it didn't work, and they did it 20 times and it didn't work 20 times. But it's still one study, and it's unclear that it tested the hypothesis. They used this weird computerized procedure. They took out the things that make it work. I made up the E-crossing procedure because you form a habit and then you break it, as a number of the talks here have said. You need a conflict situation, so you have an impulse to cross something out and then you override it. They got rid of the habit-forming stage. And so just the weird computerized version without the habit-forming thing didn't capture it. The manipulation checks they had looked like they were not getting ego depletion. So it really shouldn't be called, in my view, a replication failure. If you don't manipulate the independent variable, you're not testing the hypothesis.
As you said, there are hundreds of studies finding ego depletion. In that context, the idea that there is no such effect is at this point preposterous. I don't know what I would believe in social psychology if it were true that there is no ego depletion effect. So, to me it's more an outcome of all the constraints they put on the procedure. Several investigators told me they complained and said this is not ego depletion and it's not going to work, but there was no flexibility. You know, I am sorry all those people went through all this work; it's going to make it harder to get funding and it has this weird effect on the field. That's too bad. But in terms of changing my thinking or shaking my faith in ego depletion: no.
HOFMANN: Michael, how does it make you feel?
INZLICHT: So, many of you probably don't know this. I know that Roy does, because I have told him many times. Roy is one of my psychology heroes. I study self-control today because of Roy. I'm pretty sure I'm not the only one for whom that's true. So, I am tremendously honored to be sharing a table with Roy. It's one of the highlights of my career to be here. Despite my admiration, I've taken the failures to replicate a little bit differently. In a way, I feel I'm in a strange position here today. If this were a professional wrestling match, I would be the bad guy, the villain. If this were a Hollywood movie, I would have a black eyepatch or a black cowboy hat or something. Because I think my opinions are going to be unpopular here. I'll say this upfront: I still believe depletion is a real phenomenon. I know some media outlets have suggested that the theory of depletion has been debunked, but that's just silly and is not reflective of the way science works.
But my faith in the robustness and replicability of the ego depletion effect has been shaken. I feel less confident about it today than I did six months ago. So I do take the RRR seriously. So where do my doubts come from? They come from a couple of places. And I want to start, not with the RRR, but with some meta-analytic difficulties. These difficulties were first uncovered by Evan Carter and Michael McCullough a few years ago, and some of you might be wondering, "meta-analytic difficulties," what are you talking about? I mean, Hagger, who is the same person who spearheaded the RRR, conducted a meta-analysis in 2010 of nearly 200 studies. And he concluded that depletion was robust, it was real and, most certainly, one would think it was replicable, with a d of 0.62. Meta-analyses are often described as the best way for us to understand the robustness and size of an effect, and that's mostly true, save for one important caveat. And that caveat is that the sample that goes into the meta-analysis must be unbiased. By unbiased I mean that all the studies, every single one, that have been run on the topic are included and represented in the meta-analysis. If they are all included, then a meta-analysis is safe and it will reveal an unbiased estimate of the true effect size. If, however, it systematically excludes studies, we're in deep trouble. In this case, meta-analyses can lead us seriously astray. And it's crystal-clear that Martin Hagger's original meta-analysis missed many studies. Ok, how do I know this for sure? Because Hagger did not include a single unpublished study in his meta-analysis. And this is problematic because the published record, we now know, is biased. There was one survey of the published record that suggested that over 90% of psychology findings are significant and positive. And as scientists we know that this is simply not possible; many, many of our studies are neither significant nor supportive of our hypotheses.
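For readers following along, the d = 0.62 under discussion is Cohen's d, the standardized mean difference between the depletion and control conditions. A minimal sketch of how it is computed (all group means, SDs, and sample sizes below are hypothetical numbers chosen only to illustrate the metric):

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2) \
                 / (n_treat + n_ctrl - 2)
    return (mean_treat - mean_ctrl) / math.sqrt(pooled_var)

# Hypothetical study: the depleted group scores 0.62 pooled-SD units lower
# on the time-2 self-control task than the control group.
d = cohens_d(mean_treat=4.38, mean_ctrl=5.00,
             sd_treat=1.0, sd_ctrl=1.0,
             n_treat=30, n_ctrl=30)
print(round(d, 2))  # -0.62
```

A meta-analytic estimate such as Hagger's 0.62 is, roughly, a precision-weighted average of many such per-study d values.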
So if these null results don't make it into the pages of our journals, where do they go? Simply put, they end up in our proverbial file drawer. So what I am afraid of here is something called publication bias. This is the very simple idea that journals and editors do not accept, or tend not to accept, null results. (That's changed a little bit lately.) And authors don't submit null findings. So when a meta-analysis is conducted, we need to account for this publication bias. The problem is that this is really hard to do. But there are ways to do it, and there are clues that there could be problems. One clue, one tool that a meta-analyst could use, is something called a funnel plot. [pointing to funnel plot on slide] So this is a funnel plot. A funnel plot is a simple idea. Each one of these dots represents a study. And what you have on the x-axis is the size of the effect, the Cohen's d if you will. And on the y-axis you have the size of the study - how many participants, or how precise your study is. An unbiased meta-analysis will look like a triangle, a pyramid, a funnel. We have a wide base of studies. The reason the base is wider at the bottom is that random noise is more likely to affect smaller than larger studies; sometimes you get significant results that support your hypothesis, sometimes you don't. The larger your study, the more precise your estimate will be. In fact, some people have suggested that a good estimate of the size of any body of literature is the highest-N study. That's a simple heuristic: the larger your study, the more likely that study is to reflect the true population effect size. And here, for this hypothetical example, you've got one study that's right on the centroid. But the point is, the bottom needs to be symmetrical for us to be confident in a meta-analysis. Now, what happens with publication bias is represented with this transparent pink triangle here. Anything now shaded pink is in the land of nonsignificance.
This would be studies that, for whatever reason - typically publication bias, because journals reject these papers and authors don't submit them - don't make it into the published record.
And what ends up happening when you do not include the non-significant, typically unpublished, studies is that you get an association between the size of the study and the effect size of the study. When you see such an association, it's a clue that studies might be missing. And that should not be the case: in an unbiased sample, there should be no association between sample size and effect size. But, simply because of publication bias alone, you might find this association. What also happens is that the meta-analytic estimate moves. It shifts toward larger effects and becomes inflated. So now the problem - and this is a serious problem - is how to correct for these missing studies. How do I adjust for this? How do I fix this? Put simply: we just don't know. [showing funnel plot for the Hagger, 2010 meta-analysis, as produced by Carter & McCullough, 2014] This is the funnel plot for Hagger's meta-analysis. The solid bar here is the meta-analytic effect, which is about a d of 0.62. And there are a couple of things to note. This is the largest-N study over here - I think it has about 500 participants - and it produced a d of 0.18. So I'm happy with a d of .18. That might perhaps be a good estimate of the meta-analytic effect. But let us note that this is considerably smaller than d = 0.62. The other thing to note is that there is this correlation between sample size and effect size: big studies show small effects, and small studies show big effects. So that is a sign that something is not right here. It's a clue that studies are missing and that there is a file drawer. Where are all those studies? [pointing to bottom left quadrant of funnel plot] Where are all the studies that should be over here? There is nothing nefarious here, there is nothing that unusual; this is just the way the field is. We don't publish null results.
So meta-analyses need to include unpublished studies, they need to scour through people's file drawers, and they need to apply corrections for bias. The problem is, we don't have good ways of correcting for this bias. There are a few corrections out there, and some of them suggest the estimate should be around a d of 0.2, which incidentally corresponds with that one large-N study. Other estimates say it should be closer to zero. That is, some bias-correction estimates suggest the size of the ego depletion effect is no different from zero.
To be clear, I'm skeptical of a meta-analytic effect of zero. I think that's unlikely. But there is no principled way, no a priori way, to determine what the best estimate is. So again, I'm not saying that there is no effect; I'm just saying that there is no principled way of determining which is the best correction to use: is the effect close to d = 0.2, or is it more likely that it hovers somewhere around zero? We simply don't know. And that's kind of crazy, when you think about it. Because I think we all believe in fatigue, in some form of cognitive depletion. Yet this meta-analysis cannot determine whether we are dealing with a real (albeit small) effect or one that is non-existent. Considering that nearly 200 studies went into this meta-analysis, this situation is grave.
The second issue is this RRR - Roy already talked about it - led by the same Martin Hagger who did the 2010 meta-analysis. I agree with some of the criticisms that Roy mentioned. Was this a perfect study? Not even close. Can we do better? Absolutely! Should we do better? There is no doubt about it, and I hope that we will. But one thing I want to convey is that I think this was an earnest attempt, an honest attempt to replicate an effect. This is not a case of so-called replication bullies going out to disprove the effect. Martin Hagger has been on record as a defender of ego depletion. He has gone on record being skeptical of the bias-corrected meta-analysis conducted by Evan Carter and Michael McCullough. Before the RRR, he maintained that the effect was real. It's also not the case that it was so-called second-stringers running the study. There were a number of experts on self-control who ran the study, including a few people in this room.
BAUMEISTER: I think we agree on this, that it was an honest attempt. It was a competent job of showing that this procedure does not work.
INZLICHT: Right. So the other thing I want to note is that 23 out of 24 labs predicted an effect. What would you predict? What's going to happen here? Do you think it is going to work or not? They saw the protocol, they saw the task, the IV, the DV. 23 out of 24 labs said, "yes, this is going to work." They put their necks on the line and said, this is going to work. But, the thing is, it didn't. For me, this suggests that we need to update our beliefs. Again, I'm not saying depletion is not real. But my confidence in the strength of the effect has been lessened by this. Like I said, I think there are excellent studies out there and amazing demonstrations of depletion - including some studies that we heard about yesterday. My current favorite is a study published last year by Katherine Milkman and her students. She had nearly 5,000 health care workers around the U.S., with over 13 million data points, and what she did was look at the compliance of these health care workers with an externally mandated rule. The rule was the extent to which they sanitized their hands: you have to sanitize your hands before and after you see patients. And it turns out that the longer these health care workers worked, the less compliant they were with that rule. However, this drop in compliance occurred only after two or so hours. So I believe there is something there, I believe in fatigue; I'm just less confident in the lab-based phenomenon. I think we can do a lot better at creating a robust and replicable depletion paradigm.
BAUMEISTER: A couple of things on that. First, I agree that we cannot conclude that there is an effect of a definite size, but I'm not sure there should be one. It's like: what's the effect of being tired? Well, it depends on how tired you are. The notion that there is a single effect size is dubious, and I've been on record before saying that effect sizes in lab studies are inherently both inflated and deflated. So, I don't think we learn anything about the size of an effect from laboratory studies. The lab experiment is the best method in the social sciences for saying: is there a causal relationship or not? But for estimating the size of an effect out in the real world, a lab finding is useless. Those models, I think, are borrowed from medical kinds of testing, where you give people a pill and they take it; but there it doesn't matter whether they believe in the pill, or whether the person giving it to them is nice, or whether they've done other things that day. So, not all tests are the same thing. They are highly heterogeneous. You said Hagger, using just published studies, estimated a d of .6. Again, I'm not sure the effect size is meaningful. Carter and McCullough had this critique, and their biggest analysis said it was .4. So, well, probably it is somewhere in between .4 and .6. Carter and colleagues threw out most of the published studies and put in a lot of unpublished ones. Something like five graduate students were responsible for half of the studies that Carter had included. So, they were doing their best to get a lower estimate, and they did further sophisticated things. You [Michael] wrote a critique of that. But, you know, an effect size d between .4 and .6 - to the extent that a lab effect is meaningful at all - indicates there's a real signal. I doubt that there is no effect; again, I don't know if you need to take that seriously. But if everything being published on ego depletion had just capitalized on chance, there would be just as many effects in the opposite direction.
And I would think those would get priority from journal editors. So where are all these studies showing that self-control improves in the so-called depleted state? That seems preposterous to me. So, those are, I think, the main reasons why I don't make that much out of the RRR. Yes, it was an honest attempt, but it's too bad they picked this weird procedure; but they were constrained. I mean, they asked me initially for suggestions, and I said, well, the radish and chocolate one works, and they said, well, we can't do any of those; things need to be computer-administered, culture-neutral, easy.
INZLICHT: So, I think I agree. The actual meaning of an effect size is unclear, because you create these lab conditions and you are artificially trying to get an effect. So those numbers, I agree, are not necessarily meaningful.
BAUMEISTER: And even if you deplete them for five minutes versus 10 minutes, is that going to produce the same effect size? Just before the RRR thing, a paper came across my desk - and I saw the hand-washing thing, too - where they had kids playing soccer, and they depleted them with a Stroop task for half an hour. Ok, those kids were really depleted. And, sure enough, they missed more shots. Meanwhile, on the nonregulated things, such as how fast they ran, they did fine. Clearly that was a serious depletion.
Turning from effect sizes to the theoretical issues, which are more interesting, we again see that things change with the extent of depletion. One pattern that has arisen, because so many people are doing ego depletion studies, is that some people find a moderator that eliminates ego depletion effects and say, "Ok, I can explain all of ego depletion with this." Kathleen Vohs spearheaded a set of experiments where we showed that several of these moderators work fine if you're just slightly depleted, but when you're seriously depleted, after doing multiple tasks, they don't work anymore. I always fall back on the analogy of physical tiredness: when you're running, or whatever, and you're just starting to get tired, you can overcome it by believing you have unlimited energy, or by a sense of responsibility, or by some incentive or something like that. But when you're really tired, those things don't work as well. So, again, the idea that there is one effect size for all levels and procedures of ego depletion seems to me theoretically incoherent, if not downright absurd.
HOFMANN: So if I may summarize at this point: it seems like you two agree that the initial estimate - the Hagger .62 - was probably an overestimation, because of file drawer problems and potentially other issues, but also that the Sripada RRR was probably an underestimation, because only one, limited procedure was used, and it might not have been used perfectly well. The truth may be somewhere in the middle, if we ask about the average effect. We don't know yet where it is. But there is also that issue of variation, right? Sometimes depletion works, sometimes it doesn't. In meta-analyses there are typically two possible sources of that variation or heterogeneity in effect sizes. The first is measurement/sampling error - and faulty procedures may contribute to that. The second source of heterogeneity is brought about by deep-level moderators of the effect, and I think that's where we should put our emphasis or attention at this point in the discussion. You already mentioned some of the moderators. For instance, the length of depletion may matter. Maybe the depletion effect is much stronger when people get depleted longer. Maybe what we have typically been doing so far as a field sits at the small end of the spectrum of that moderator - namely, just five or ten minutes of depletion. So, to some extent we, as a field, may have been tricking ourselves into believing that it is easy to get the depletion effect. So my question to you guys at this point is: what do you think are the substantial moderators of that variance in ego depletion effects we see when we look at the overall picture? What do you think are the key theoretical moderators?
INZLICHT: So, the length, I think, is an important moderator to consider - how long people have been working - and the idea of severe or extreme depletion is a good one. But in that meta-analysis, for example, I think the modal length of the depleting task (the IV) is probably in the area of five to seven minutes. I have seen some studies with an IV as short as one minute. I repeat: some depleting tasks were one minute long, involving as little as 20 Stroop trials! And these studies showed downstream effects. How is this even possible? In contrast, there is a paper that just came out, and this is mind-blowing to me. The paper came out in PNAS, by this French group, Blain and colleagues. These are cognitive neuroscientists, and they had about 60 participants - not a massive sample - but they had a within-subjects paradigm. They had participants come in for an entire six-hour day. They did an executive function task - two of them, over the course of this six-hour day. And the amazing thing here is that their performance barely moved. It did move, but barely, after six hours. I find that crazy. So, that's severe depletion, and yet we do not see these big effects. They did a second task - a delay discounting task - and they did find a nice, robust effect there, and that was good, but only after four and a half hours. So, to me it's like, well, how can these two things live together? How can you get depletion effects with a really short depleting task and barely get them with a very, very long depleting task? Again, the modal time in the Hagger meta-analysis is probably five to seven minutes, producing these effects. And then you have these "crazy" studies that don't produce anything. So this heterogeneity is probably due to some task differences. But to me it's a puzzle. I don't get how these things can live together.
BAUMEISTER: Alright. Well, I think we are all invested in thinking this out. My perspective is that I just want to end up knowing what's right. So, why don't we talk about the motivation issue, including your [Michael's] work. One would think the motivation for the second task should change after the first task, if only because the person is trying to conserve the depleted resource. But that hypothesis has repeatedly and pretty consistently failed. We haven't found studies showing that ego depletion causes a significant decrease in the motivation to do well on the second task, and I'm wondering what to do about that. It seems like motivation would be in there. I mean, our theoretical disagreement [between Michael and me] was whether the motivational changes should replace the idea of limited resources, which I don't really see. But, you know, we both think the change in motivation ought to be there. So that's one puzzle: why no change in motivation for the second task?
The other puzzle to me in terms of the resource and so on is how ego depletion involves glucose and how that operates. We continue to find pretty reliable effects of giving people a dose of glucose. That part works. Now, the first glucose paper also found that blood glucose dropped from before to after the first depleting task. We, however, don't reliably replicate that.
That’s another thing. I do sincerely believe in the phenomenon of ego depletion. One reason is that many graduate students have come through my laboratory and run these studies and a few things like the drop in glucose showed up once and never happened again, whereas the basic ego depletion effect student after student seemed able to get.
So figuring out the role of glucose: there's this central governor, a revived idea. I know you're a bit skeptical of that. The central governor idea is that there's a real resource being depleted, and there's a mental brain system that makes you feel fatigued and makes you withdraw effort, but these two are only very loosely related. There's glucose stored all over your body, and maybe you can't keep an updated inventory. The idea we're working on now is that adenosine, which is a byproduct of glucose metabolism, leaves a signal. I think of it as "counting the ashes": after you burn glucose, adenosine is the "ashes." Whatever is keeping track - the internal glucose monitor - doesn't know how much glucose you've used, it doesn't know how much you have left, but it sees these piles of ashes and says, "oh, better cut back." I don't know if that's right, but that's what we're trying to get a handle on now. So, in terms of the physiology of it - that's really outside of my wheelhouse. I think people act as if there is a limited resource. That's good enough for me and, parsimoniously, we should assume there is some kind of limited resource unless we can come up with a fully viable alternative theory that replaces it. There are a couple of other ideas that may actually pan out. What are your thoughts on it?
INZLICHT: I want to talk about a broader theory and what I think depletion is. So, my position - which, I think, some people have alluded to earlier - is that I don't believe that self-control wanes over time because of some loss of energy or because some resource has been depleted, either completely or even partially. Instead, I suggest that self-control wanes over time because people's desires, motives, and priorities change. It's actually a subtle change from the original formulation, but the change is simply that I think depletion is less about an incapacity and more about a lack of willingness. And why do I think this? A few studies that popped up led me to believe this. And in fact, EJ Masicampo, who is Roy's former student, had a paper in 2014 where he listed nearly 40 studies that were inconsistent with a strict resource account. There are a couple of flavors of these studies, but one flavor was that when you added some motivational incentive to the typical depletion paradigm, some sort of reward - things like money, self-affirmation, cigarettes, television, prayer - depletion would go away. So, in other words, motivational incentives appeared to moderate the effect; motivation matters. Now, at the same time as you had these 40 studies, there is actually not one powerful bit of evidence that resources are involved. Even Roy just said that there is no good evidence that initial acts of self-control deplete glucose. Granted, glucose might be involved; it might be a moderator. But in terms of glucose as a mediator, we'd both agree that that's probably not right. So there is no direct evidence for a resource as a mediator; no evidence that something is actually depleted. At the same time, there is all this circumstantial moderator evidence coming from motivation. Both of those things are inconsistent with a strict resource account.
In my mind, the resource account needs to be updated, it needs to be changed, and, in fact, it has been. Roy has updated it. This updated, revised resource model now suggests that it's not the case that people's resources or energy have run out - the tank is not on empty. You know, that fuel-tank-on-empty metaphor is not correct; it's that resources are plentiful but diminished. So when people are depleted, it's not that they have run out of energy - energy is there - it's that they simply decide not to use the energy. They want to hoard it, they want to conserve it. I think this saves the resource model from those forty inconsistent studies, but the problem now, I think - and Roy, tell me if I'm mischaracterizing - is that the model I prefer and the revised resource model are identical at the level of the proximal mechanism. So, according to both models, people, after engaging in effortful control at time 1, do not engage in effortful control at time 2; Roy would say this is because people want to save their resources, and I would say it's for some other reasons. But, at the proximal level, I think they are identical. At the ultimate level, I think they are different. And I will tell you how I think they are different in a moment.
So, Roy asked - and I think he is absolutely correct in pointing this out - what is the evidence for motivation as a mediator? We have lots of evidence for motivation as a moderator. And moderation can often give us clues about mediation, but what is the direct evidence that motivation is a mediator? And the truth is: there is not very good evidence out there. There are not many studies - though more and more are coming out - that examine the motivation for the time 2 task, examining whether motivation or desires change after people engage in effortful acts at time 1. And, typically, those results are consistent with the null. That's a problem for my preferred model. But I think it's also a problem for the revised resource model. So now, what evidence do we have? I just ran one of my first ever pre-registered ego depletion studies: a high-powered, pre-registered study using a paradigm called the effort discounting paradigm as the DV. This paradigm is very similar to the delay discounting paradigm, where you assess the subjective value of time: how much is time worth to people, or delaying gratification worth to people? In the effort discounting task, we assess how much effort is worth to people. So you get people to make a series of choices - how much money you would need to be paid to do this kind of boring, effortful task. And what you see, generally, is that people do not like effort - they avoid it. But in our pre-registered study, we find that effort avoidance, this desire not to engage in work, increases for those who have been previously depleted. It's only one study, the effect is not particularly large, and it needs to be replicated; but it was pre-registered and it was highly powered. So, here is one initial piece of evidence. But I agree with Roy that, at some level, if there are more and more studies that don't find evidence for motivation as the mechanism, this model has been falsified and we have to find something else.
I agree with that completely. But, even if the motivation model is falsified, this does not affirm the resource model. We need direct evidence for that.
So the models aren't the same. They do diverge, and they diverge at the level of ultimate explanation. And ultimate explanations are really fun to think about, but they are hard to confirm. So, the ultimate explanation for the resource model, again, is about energy. It's about a pool of resources that gets diminished with use. People can supposedly hoard this resource, they conserve this resource, and they only use it for tasks that are important, things that they care about. I think that's actually a really intuitive idea; at some level, it just feels right. But just because it feels right doesn't mean it is right. There is this text-based analysis conducted by Robert Hockey, who is probably the world's expert on fatigue. He's been studying this for his entire career. He wrote a book a couple of years ago called "The Psychology of Fatigue". And at the beginning of his book he discussed this one little factoid, which I find so interesting. He analyzed the use in the English language of words that relate to fatigue, to being tired. There is evidence of words related to fatigue going back hundreds and hundreds of years in the English language. But it is only in the 19th century, around the time of the industrial revolution, around the time of the introduction of energy-using machines, that the description of being tired became associated with a loss of energy. Before then, notions of being tired were not tied to notions of energy loss. The idea here is that we had this machine, this new steam engine, that ran on energy, and when the energy wasn't there, when it ran out, it wouldn't power the machine any more; the machine seemed tired, it would die out. Hockey's argument is that this energy-using machine metaphor captured us, enticed us, and it is now what we think is natural. Now, I can't prove that is wrong or right. I'm not sure.
It's an interesting little factoid suggesting that what we think is natural and normal (that fatigue is the product of losses of energy) is a cultural construction.
The ultimate account that I prefer is one based on competing goals and the desire to balance between competing goals. All organisms - humans and other animals - have multiple goals to pursue in order to live a long, happy, and healthy life. We don't just have one thing that we need to do. And self-control fatigue, ego depletion, might urge us to take a “break.” It might be a signal, a phenomenal signal, suggesting that maybe we should switch course and do something else. So, for example, I was preparing for this talk at home, and I love work, but it also takes effort, takes control. And at some point, a competing goal became salient: hanging out with my son, who loves to play soccer. Eventually, I got tired of my work goal and switched to playing with my son. So without this self-control break, I might not have the flexibility, I might not have the spontaneity, to do the multiple things one needs to do in order to live a full and complete life. Now again, this is an ultimate explanation that is hard to confirm. And it's probably wrong, but I think Robert Kurzban's idea of there being opportunity costs is an important insight: the time you spend on one goal means you have less time for another goal. So you don't want to perseverate too much on one goal, because you lose opportunities. The other thing that happens when you perseverate on a goal is that you might work too hard on a goal that is not going to work out. You can imagine working really hard and trying really hard on a relationship that ultimately is doomed and is going to fail. Those are other opportunities that you've wasted if you stick too doggedly to one and only one goal. For the most part, I think both Roy and I agree that motivation is the proximate mechanism, but we disagree as to why people might prefer not to self-regulate after initial bouts of effort; we disagree at the level of ultimate explanation.
BAUMEISTER: The idea that it's a signal to stop what you’re doing and do something else has a certain appeal and elegance. Except it doesn't fit all the dual-task paradigms that we have used to study this, because participants have already switched to doing something else. Why, then, does the fatigue carry over to the next thing? I mean, to use your example, if you're depleted from working too hard and then you go and do stuff with your son, you're more likely to curse in front of him or say something mad or smack him when he misbehaves, or something like that. There’s a JPSP paper by Finkel et al. showing that domestic violence increases when people are depleted. You know, how is this an opportunity cost that needs to be evolutionarily protected, or something like that? And so I wonder, as you say, whether the motivation for the first task wanes, and whether that might be a better way for us to go forward. We're not finding the loss of motivation for the new task, which is a bit of a puzzle to all of us.
INZLICHT: Yes, I think you’re getting at the domain-general aspect of fatigue. So, you know, why should I be switching from one goal to another ...
BAUMEISTER: But that's the point, that ego depletion is really domain-general.
INZLICHT: Yeah, that's right. And I guess one difference between the depletion literature and the fatigue literature - at least one that I've noticed - is that fatigue is generally within one domain. So people work on something for a long, long time and they do worse in that domain. And they don't necessarily always show this domain generality. Sometimes they do, but not always.
BAUMEISTER: With ego depletion you almost always pick the second task to be as different from the first task as possible. So, you know, they suppress thoughts and then they have to not laugh at comedy, or something.
INZLICHT: Right. So now, what if the value of work itself declines? One economic perspective is that we put value on effort, value on work, but then that value declines in a domain-general way, such that we start valuing work less and valuing leisure more. And also, what is defined as work and what is defined as leisure are subjective. And that can change. But I agree that there are holes. It's not by any means a perfect explanation.
BAUMEISTER: In evolutionary history, I mean, work was not nearly as big a part of the day, and so on. My thinking - again, to go back to Segerstrom's work - is that glucose is not used just for self-control but for all the body’s functions, including the immune system. I mean, the big threat then was that if you got a cut on your foot and it got infected, that could kill you unless your immune system was strong enough to fight it off. So, it's probably true that the human mind evolved to conserve its glucose as much as possible. When it sees “I’m expending, I should save it here,” or “I’m channeling too much into one area, I need to get it back to fund the other organs’ activity.” So I'm suggesting that we conserve too much. If you give people an incentive to do well on something else, then they can suck it up and do it, but your priorities become more stringent toward conserving as you expend a lot, because if you don't have enough and you get bitten by an animal or something like that, that could be fatal.
HOFMANN: Thanks so much to the two of you for sharing your thoughts with us today!