Previously In Series: No Clarity Around Growth Mindset…Yet // I Will Never Have The Ability To Clearly Explain My Beliefs About Growth Mindset // Growth Mindset 3: A Pox On Growth Your Houses
Last month I criticized a recent paper, Paunesku et al.'s Mindset Interventions Are A Scalable Treatment For Academic Underachievement, saying that it spun a generally pessimistic set of findings about growth mindset into a generally optimistic headline.
Earlier today, lead author Dr. Paunesku was kind enough to write a very thorough reply, which I reproduce below:
I.
Hi Scott,
Thanks for your provocative blog post about my work (I’m the first author of the paper you wrote about). I’d like to take a few moments to respond to your critiques, but first I’d like to frame my response and tell you a little bit about my own motivation and that of the team I am a member of (PERTS).
Good criticism is what makes science work. We are critical of our own work, but we are happy to have help. Often critics are not thoughtful or specific. So I very much appreciate the intent of your blog (to be thoughtful and specific).
What is our motivation? We are trying to improve our education system so that all students can thrive. If growth mindset is effective, we want it in every classroom possible. If it is ineffective, we want to know about it so we don’t waste people’s time. If it is effective for some students in some classrooms, we want to know where and for whom so that we can help those students.
What is our history and where are we now? PERTS approached social psychological interventions with a fair amount of skepticism at first. In many ways, they seemed too good to be true. But, we thought, “if this is true, we should do everything we can to spread it”. Our work over the last 5 years has been devoted to trying to see if the results that emerged from initial, small experiments (like Aronson et al., 2002 and Blackwell et al., 2007) would continue to be effective when scaled. The paper you are critiquing is a step in that process — not the end of the process. We are continuing research to see where, for whom, and at what scale social psychological approaches to improving education outcomes can be effective.
How do I intend to respond to your criticisms? In some cases, your facts or interpretations are simply incorrect, and I will try to explain why. I also invite you to contact me for follow up. In others cases, we simply have different opinions about what’s important, and we’ll have to agree to disagree. Regardless, I appreciate your willingness to be bold and specific in your criticism. I think that’s brave, and I think such bravery makes science stronger.
First, what is growth mindset?
This quote is from one of your other blog posts (not your critique of my paper):
If you’re not familiar with it, growth mindset is the belief that people who believe ability doesn’t matter and only effort determines success are more resilient, skillful, hard-working, perseverant in the face of failure, and better-in-a-bunch-of-other-ways than people who emphasize the importance of ability. Therefore, we can make everyone better off by telling them ability doesn’t matter and only hard work does.
If you think that’s what growth mindset is, I can certainly see why you’d find it irritating — and even destructive. I’d like to assure you that the people doing growth mindset research do not subscribe to the interpretation of growth mindset you described. Nor is that interpretation of growth mindset something we aim to communicate through our interventions. So what is growth mindset?
Growth mindset is not the belief that “ability doesn’t matter and only effort determines success.” Growth mindset is the belief that individuals can improve their abilities — usually through effort and by learning more effective strategies. For example, imagine a third grader struggling to learn long division for the first time. Should he interpret his struggle as a sign that he’s bad at math — as a sign that he should give up on math for good? Or would it be more adaptive if he realized that he could probably get a lot better at math if he sought out help from his peers or teachers? The student who thinks he should give up would probably do pretty badly while the student who thinks that he can improve his abilities — and tries to do so by learning new study strategies and practicing them — would do comparatively better.
That’s the core of growth mindset. It’s nothing crazy like thinking ability doesn’t matter. It’s keeping in mind that you can improve and that — to do so — you need to work hard and seek out and practice new, effective strategies.
As someone who has worked closely with Carol Dweck and with her students and colleagues for seven years now, I can personally attest that I have never heard anyone in that extended group of people express the belief that ability does not matter or that only hard work matters. In fact, a growth mindset wouldn’t make any sense if ability didn’t matter because a growth mindset is all about improving ability.
One of the active goals of the group I co-founded (PERTS) is to try to dispel misinterpretations of growth mindset because they can be harmful. I take it as a failure of our group that someone like you — someone who clearly cares about research and about scientific integrity — could walk away from our work with that interpretation of growth mindset. I hope that PERTS, and other groups promoting growth mindset, can get better and better at refining the way we talk about growth mindset so that people can walk away from our work understanding it more clearly. In that spirit, I hope you will continue to engage with us to improve that message so that people don’t continue to misinterpret it.
Anyway, here are my responses to specific points you made in your blog about my paper:
Was the control group a mindset intervention?
You wrote:
“A quarter of the students took a placebo course that just presented some science about how different parts of the brain do different stuff. This was also classified as a “mindset intervention”, though it seems pretty different.”
What makes you think it was classified as a mindset intervention? We called that the control group, and no one on our team ever thought of that as a mindset intervention.
The Elderly Hispanic Woman Effect
You wrote:
Subgroup analysis can be useful to find more specific patterns in the data, but if it’s done post hoc it can lead to what I previously called the Elderly Hispanic Woman Effect…
First, I just want to note that I love calling this the “elderly Hispanic woman effect.” It really brings out the intrinsic ridiculousness of the subgroup analyses researchers sometimes go through in search of an effect with a p<.05. It is indeed unlikely that “elderly Hispanic women” would be a meaningful subgroup for analyzing the effects of a medicine (although it might be a fun thought exercise to try to think of examples of a medicine whose effects would be likely to be moderated by being an elderly Hispanic woman).
In bringing up the elderly Hispanic woman effect, you’re suggesting that we didn’t have an a priori reason to think that underperforming students would benefit from these mindset interventions and that we just looked through a bunch of moderators until we found one with p<.05. Well that’s not what we did, and I hope I can convince you that our choice of moderator was perfectly reasonable given prior research and theory.
There’s a lot of research (and common sense too) to suggest that mindset — and motivation in general — matters much more when something is hard than when it is easy. Underachieving students presumably find school more difficult, so it makes sense that we’d want to focus on them. I don’t think our choice of subgroup is a controversial or surprising prediction. I think anyone who knows mindset research well would predict stronger effects for students who are struggling. In other words, this is obviously not a case of the elderly Hispanic woman effect because it is totally consistent with prior theory and predictions. What ultimately matters more than any rhetorical argument, however, is whether the effect is robust — whether it replicates.
On that front, I hope you’ll be pleased to learn that we just ran a successful replication of this study (in fall 2014) in which we again found that growth mindset improves achievement specifically among at-risk high school students (currently under review). We’re also planning yet another large scale replication study this fall with a nationally representative sample of schools so that we can be more confident that the interventions are effective in various types of contexts before giving them away for free to any school that wants them.
Is the sense of purpose intervention just a bunch of platitudes?
You wrote:
Still another quarter took a course about “sense of purpose” which talked about how schoolwork was meaningful and would help them accomplish lots of goals and they should be happy to do it.
[Later you say that those “children were told platitudes about how doing well in school will “make their families proud” and “make a positive impact”.]
I wouldn’t say those are platitudes. I think you’re under-appreciating the importance of finding meaning in one’s work. It’s a pretty basic observation about human nature that people are more likely to try hard when it seems like there’s a good reason to try hard. I also think it’s a pretty basic observation about our education system that many students don’t have good reasons for trying hard in school — reasons that resonate with them emotionally and help them find the motivation to do their best in the classroom. In our purpose intervention, we don’t just tell students what to think. We try to scaffold them to think of their own reasons for working hard in school, with a focus on reasons that are more likely to have emotional resonance for students. This type of self-persuasion technique has been used for decades in attitudes research.
We’ve written in more depth about these ideas and explored them through a series of studies. I’d encourage you to read this article if you’re interested.
Our paper title and abstract are misleading
You wrote:
Among ordinary students, the effect on the growth mindset group was completely indistinguishable from zero, and in fact they did nonsignificantly worse than the control group. This was the most basic test they performed, and it should have been the headline of the study. The study should have been titled “Growth Mindset Intervention Totally Fails To Affect GPA In Any Way”.
I think the title you suggest would itself have been misleading, for two reasons.
First, we did find evidence that mindset interventions help underachieving students — and those students are very important from a policy standpoint. As we describe in the paper, those students are more likely to drop out, to end up underemployed, or to end up in prison. So if something can help those students at scale and at a low cost, it’s important for people to know that. That’s why the word “underachievement” is in the title of the paper — because we’re accurately claiming that these interventions can help the important (and large) group of students who are underachieving.
Second, the interventions influenced the way all students think about school in ways that are associated with achievement. Although the higher performing students didn’t show any effects on grades in the semester following the study, their mindsets did change. And, as per the arguments I presented above about the link between mindset and difficulty, it’s quite feasible that those higher-performing students will benefit from this change in mindset down the line. For example, they may choose to take harder classes (e.g., Romero et al., 2014) or they may be more persistent and successful in future classes that are very challenging for them.
A misinterpretation of the y-axis in this graph
You wrote:
Growth mindset still doesn’t differ from zero [among at-risk students].
This just seems to be a simple misreading of the graph. Either you missed the y-axis of the graph that you reproduced on your blog or you don’t know what a residual standardized score is. Either way, I’ll explain because this is pretty esoteric stuff.
The zero point of the y-axis on that graph is, by definition, the grand mean of the 4 conditions. In other words, the treatment conditions are all hovering around zero because zero is the average, and the average is made up mostly of treatment group students. If we had only had 2 conditions (each with 50% of the students), the y-axis “zero” would have been exactly halfway in between them. So the lack of difference from zero does not mean that the treatment was not different from control. The relevant comparison is between the error bars in the control condition and in the treatment conditions.
You might ask, “why are you showing such a graph?” We’re doing so to focus on the treatment contrast at the heart of our paper — the contrast between the control and treatment groups. The residual standardized graph makes it easy to see the size of that treatment contrast.
We’re combining intervention conditions
You wrote:
Did you catch that phrase “intervention conditions”? The authors of the study write: “Because our primary research question concerned the efficacy of academic mindset interventions in general when delivered via online modules, we then collapsed the intervention conditions into a single intervention dummy code (0 = control, 1 = intervention).”
[This line of argument goes on for a long time to suggest that we’re unethical and that there’s actually no evidence for the effects of growth mindset on achievement.]
We collapsed the intervention conditions together for this analysis because we were interested in the overall effect of these interventions on achievement. We wanted to see if it is possible to use scalable, social-psychological approaches to improve the achievement of underperforming students. I’m not sure why you think that’s not a valid hypothesis to test, but we certainly think it is. Maybe this is just a matter of opinion about what’s a meaningful hypothesis to test, but I assure you that this hypothesis (contrast all treatments to control) is consistent with the goal of our group to develop treatments that make an impact on student achievement. As I described before, we have a whole center devoted to trying to improve academic achievement with these types of techniques (see perts.net); so it’s pretty natural that we’d want to see whether our social-psychological interventions improve outcomes for the students who need them most (at-risk students).
You’re correct that the growth mindset intervention did not have a statistically significant impact on course passing rates by itself (at a p<.05 level). However, the effect was in the expected direction with p=0.13 (or a 1-tailed p=.07 — I hope you’ll grant that a 1-tailed test is appropriate here given that we obviously predicted the treatment would improve rather than reduce performance). So the lack of a p<.05 should not be interpreted — as you seem to interpret it — as some sort of positive evidence that growth mindset “actually didn’t work.” Anyway, I would say it warrants further research to replicate this effect (work we are currently engaging in).
To summarize, we did not find direct evidence that the growth mindset intervention increased course passing rates on its own at a p<.05 level. We did find that growth mindset increased course passing rates at a trend level — and found a significant effect on GPA. More importantly for me (though perhaps less relevant to your interest specifically in growth mindset), we did provide evidence that social-psychological interventions, like growth mindset and sense of purpose, can improve academic outcomes for at-risk students.
We’re excited to be replicating this work now and giving it away in the hopes of improving outcomes for students around the world.
Summary
I hope I addressed your concerns about this paper, and I welcome further discussion with you. I’d really appreciate it if you’d revise your blog post in whatever way you think is appropriate in light of my response. I’d hate for people to get the wrong impression of our work, and you don’t strike me as someone who would want to mislead people about scientific findings either.
Finally, you’re welcome to post my response. I may post it to my own web page because I’m sure many other people have similar questions about my work. Just let me know how you’d like to proceed with this dialog.
Thanks for reading,
Dave
II.
First of all, the obvious: this is extremely kind and extremely well-argued, and a lot of it is correct and makes me feel awful for being so snarky in my last post.
In particular, here are the things I want to endorse as absolutely right about the critique:
I wrote “A quarter of the students took a placebo course that just presented some science about how different parts of the brain do different stuff. This was also classified as a “mindset intervention”, though it seems pretty different.” Dr. Paunesku says this is wrong. He’s right. It was an editing error on my part. I meant to add the last sentence to the part on the “sense of purpose” intervention, which was classified as a mindset intervention and which I do think seems pretty different. The placebo intervention was never classified as a mindset intervention and I completely screwed up by inserting that piece of text there rather than two sentences down where I meant it to be. It has since been corrected and I apologize for the error.
If another successful replication finds that growth mindset continues to help only the lowest-performing students, I will withdraw the complaint that this is sketchy subgroup mining, though I think that in general worrying about this is the correct thing to do.
I did misunderstand the residual standardized graph. I suggested that the control group must have severely declined, and got confused about why. In fact, the graph was not about difference between pre-study scores and post-study scores, but difference between group scores and the average score for all four groups. So when the control group is strongly negative, that means it was much worse than the average of all groups. When growth mindset is not-different-from-zero, it means growth mindset was not different from the average of all four groups, which consists of three treatment groups and one control group. So my interpretation – that growth mindset failed to change children’s grades – is not supported by the data.
(In my defense, I can only plead that in the two hundred fifty comments I received, many by professional psychologists and statisticians, only one person picked up on this point (admittedly, after being primed by my own misinterpretation). And the sort of data I expected to be seeing – difference between students’ pre-intervention and post-intervention scores – does not seem to be available. Nevertheless, this was a huge and unforgivable screw-up, and I apologize.)
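To make the misreading concrete, here is a toy example with invented group means (emphatically not the paper's actual numbers), showing why three treatment groups can all "hover around zero" on a grand-mean-centered graph even when every one of them beats control:

```python
# Invented group means on some outcome (NOT the paper's numbers):
means = {"control": 2.0, "growth_mindset": 2.4,
         "sense_of_purpose": 2.4, "combined": 2.4}

# The graph's zero point is the grand mean across all four conditions
grand_mean = sum(means.values()) / len(means)  # about 2.3
residuals = {k: round(v - grand_mean, 2) for k, v in means.items()}

# residuals: control -0.3, each treatment +0.1.
# Every treatment beat control by the same 0.4 points, yet each
# treatment sits near zero, because zero is mostly made of them.
```

Since three of the four conditions are treatments, the grand mean sits close to the treatment means, and the control group absorbs most of the visible distance from zero.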
III.
But there are also a few places where I will stick to my guns.
I don’t think my interpretation of growth mindset was that far off the mark. I explain this a little further in this post on differing possible definitions of growth mindset, and I will continue to cite this strongly worded paper by Dweck as defense of my views. It’s not just an obvious and innocuous belief that you can always improve; it’s a belief about very counterintuitive effects of believing that success depends on ability versus effort. It is possible that all sophisticated researchers in the field have a very sophisticated and unobjectionable definition of growth mindset, but that’s not the way it’s presented to the public, even in articles by those same researchers.
Although I’m sure that to researchers in the field statements like “Doing well at school will help me achieve my goal” don’t sound like platitudes, the result still seems important to me in the context of discussions about growth mindset. Some people have billed growth mindset as a very exciting window into what makes learning tick, and how we should divide everyone into groups based on their mindset, and how it’s the Secret To Success, and so on. Learning that a drop-dead simple intervention – telling students to care about school more – actually does as well as or better than growth mindset seems to me like a damning result. I realize it would be kind of insulting to call sense-of-purpose an “active placebo” in the medical sense, but that’s kind of how I can’t help thinking of it.
I’m certainly not suggesting the authors of the papers are unethical for combining the growth mindset intervention with the sense of purpose intervention. But I think the technique is dangerous, and this is an example. They got a result with p = 0.13, short of significance. Dr. Paunesku suggests in his email to me that this should be one-tailed (which makes it p = 0.07) and that it clearly trends towards significance. This is a reasonable argument. But this wasn’t the reasonable argument made in the paper. Instead, they make it look like it achieved classical p < 0.05 significance, or at least make it very hard to notice that it didn’t.
Even if in this case it was – I can’t even say white lie, maybe a white spin – I find the technique very worrying. Suppose I want to prove homeopathy cures cancer. I make a trial with one placebo condition and two intervention conditions – chemotherapy and homeopathy. I find that the chemotherapy condition very significantly outperforms placebo, but the homeopathy condition doesn’t. So I combine the two interventions into a single bin and say “Therapeutic interventions such as chemotherapy or homeopathy significantly outperform placebo.” Then someone else cites it as “As per a study, homeopathy outperforms placebo.” This would obviously be bad.
I am just not convinced that growth mindset and sense of purpose are similar enough that you can group them together effectively. This is what I was trying to get at in my bungled sentence about how they’re both “mindset” interventions but seem pretty different. Yes, they’re both things you tell children in forty-five minute sessions that seem related to how they think about school achievement. But that’s a really broad category.
But doesn’t it mean something that growth mindset was obviously trending toward significance?
First of all, I would have had no problem with saying “trending toward significance” and letting readers draw their own conclusions.
Second of all, I’m not totally sure I buy the justification for a one-tailed test here; after all, it seems like we should use a one-tailed test for homeopathy as well, since as astounding as it would be if homeopathy helped, it would be even more astounding if homeopathy somehow made cancer worse. Further, educational interventions often have the opposite of their desired effect – see eg this campaign to increase tolerance of the disabled which made students like disabled people less than a control intervention. In fact, there’s no need to look further than this very study, which found (counterintuitively) that among students already exposed to sense-of-purpose interventions, adding on an extra growth-mindset intervention seemed to make them do (nonsignificantly) worse. I am not a statistician, but my understanding is you ought to have a super good reason to use a one-tailed test, beyond just “Intuitively my hypothesis is way more likely than the exact opposite of my hypothesis”.
Third of all, if we accept p < 0.13 as “trending towards significance”, we have basically tripled the range of acceptable study results, even though everyone agrees our current range of acceptable study results is already way too big and some high percent of all medical studies are wrong and only 39% of psych studies replicate and so on.
(I agree that all of this could be solved by something better than p-values, but p-values are what we’ve got)
I realize I’m being a jerk by insisting on the arbitrary 0.05 criterion, but in my defense, the time when only 39% of studies using a criterion replicate is a bad time to loosen that criterion.
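For concreteness, the arithmetic behind the one-tailed figure quoted above is just a halving, plus a check of how far the implied test statistic sits from the conventional cutoff. This is a sketch using the standard normal approximation; the paper's actual test statistic may differ:

```python
import math

p_two = 0.13  # the reported two-tailed p for the passing-rate effect

# For a symmetric test statistic, the one-tailed p is half the
# two-tailed p, *provided* the effect went in the predicted direction:
p_one = p_two / 2  # 0.065, i.e. roughly the .07 quoted in the letter

# Recover the implied z-score by inverting the two-tailed normal
# p-value with bisection (erfc of z/sqrt(2) is decreasing in z):
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if math.erfc(mid / math.sqrt(2)) > p_two:
        lo = mid
    else:
        hi = mid
z = lo  # about 1.51, short of the 1.96 needed for two-tailed p < .05
```

Halving is exactly why the one-tailed framing is tempting: it costs nothing computationally, only in the (disputed) assumption that a backfiring intervention was impossible.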
IV.
Here’s what I still believe and what I’ve changed my mind on based on Dr. Paunesku’s response.
1. I totally bungled my sentence on the placebo group being a mindset intervention by mistake. I ashamedly apologize, and have corrected the original post.
2. I totally bungled reading the residual standard score graph. I ashamedly apologize, have corrected the original post, and have put a bold link to this post at the top.
3. I don’t know whether the thing I thought the graph showed (no significant preintervention vs. postintervention GPA improvement for growth mindset, or no difference in change from controls) is true. It may be hidden in the supplement somewhere, which I will check later. Possible apology pending further investigation.
4. Growth mindset still had no effect (in fact nonsignificantly negative) for students at large (as opposed to underachievers). I regret nothing.
5. Growth mindset still failed to reach traditional significance criteria for changing pass rates. I regret nothing.
The “First, what is growth mindset?” section appears to just be them restating your own definition of growth mindset, but in many more words.
The two definitions are very different. Think of it this way: One student is taught growth mindset, the other is not (and has more traditional / informal understandings of accomplishment). Do they make different predictions about this scenario?
“Yao Ming practices basketball two hours a day. Verne Troyer practices twelve hours a day. Avoiding direct competition, and instead using general metrics for skill (percentage from the 3-point line), who would have more skill after two years?”
The researcher who mailed in says, well, we taught them that people can always improve, but said nothing about the relative importance of practice and innate talent. Therefore the two kids would largely agree.
The quote from Dweck seems to imply that growth mindset is premised on the idea that the kids would disagree, and that having the growth mindset means you believe the primary factor determining success is effort, and so the control kid would say Yao Ming, and the growth mindset kid would put his money on Troyer.
I don’t mean this to be a realistic example per se, but merely to highlight the style of questions they would disagree about. For instance, it could be that control kids think innate talent:practice is like 2:1 and the growth mindset kids think it is 1.5:1 (so practice is valued more), so there aren’t any ridiculous disagreements, but there MUST be some watered-down version of the above that is true, at least statistically, for growth mindset holders. [It’s also important to note that, depending on the field, either student could be more technically accurate, and therefore better at predicting success. That might be irrelevant to the researchers, because I guess I don’t see it come up in the context of “students whose views of success more accurately reflect our statistical models perform better”. It is probably worth using that as a control so we can talk about whether there are any marginal benefits to, well, lying to children and giving them false hope of success.]
Your position on the functional definition of growth mindset is remarkably similar to the one you were criticizing on the serotonin model of depression from a couple weeks back.
I took Scott’s position there to be that references to “chemical imbalances” represent a noble act of misleading whose nobility derives from the truth of reductionism/materialism and the fact that people don’t recognize these truths.
In the case of growth mindset, the nobility of the act of misleading depends on the truth of the sophisticated growth mindset theory, which is what is being called into question here.
Sometimes claims like “X says Y” are true. Other times they are false.
This is really admirable. I wish more people could correct their beliefs as readily as you do.
Self-esteem also comes to mind as an example where harmful effects seem to happen (‘Does high self-esteem cause better performance, interpersonal success, happiness, or healthier lifestyles?’, Baumeister et al 2003 http://ardenm.us/p710/baumeister.pdf ), and a reminder of why, despite how reasonable the choice may seem as one navigates the garden of forking paths in one’s analysis, one-tailed tests are questionable except as part of a pre-registered design – because failure is always an option.
He’s going for the Bailey Metaphor – hit the deck!
When and why did we stop using the bailey metaphor? Did I miss some sort of tumblr event? Do we now have a substitute?
I think you’re misunderstanding what Null meant, but to answer your question, Motte/Bailey was just getting overused and becoming annoying and watered-down. People have suggested using “bait and switch” wherever applicable instead, since that seems to cover a good bit, though not all, of the same territory and isn’t a neologism.
When all you have is a motte and bailey, everything looks like an… invading army?
Well that’s a shame. I still like “Motte & Bailey” because it implies emergent behavior. “Bait & Switch” sounds like something an individual does deliberately, like a car salesman. Maybe we can salvage the term if we strengthen the requirements for its invocation.
The link is missing from the edit at the top of Growth Mindset 3
Dave and Scott’s tones and styles of writing are so different that it is difficult to imagine them agreeing in writing even if they agreed in their heads. It also makes it hard to compare arguments. I love the writing on this blog. But it is sobering to see Scott’s clever, usually sarcastic turns of phrase coldly picked apart by a clinician who is used to describing the same things only in the most dry, exact language possible.
I also think that the bottom third of the class is exactly who you would expect to benefit from a hearty “you can doo it!”, because they’re getting a lot of signals that they can’t. That might undermine growth mindset as a mass movement, but perhaps not as a useful tool, as Dave says. Hope he responds…
There are two separate questions here:
1) Would the bottom third benefit if they honestly thought, “I can do it”? Yes, of course.
2) Would giving someone “a hearty ‘you can doo it!'” (or something similar) make her believe she can do it? Probably not.
Part of the appeal of “growth mindset interventions” is that they are generally short and simple. But because of that, they are probably also ineffectual.
I don’t think we can say anything like “probably ineffectual”.
The most one can say is “not yet proven effective”.
After seven years as a teacher and a lot of reading in education, “probably ineffectual” is my present prior.
The bottom third of the class gets a lot of “you can do it!”
The only plausible way this intervention could make a difference is that the kids take the researchers unusually seriously (relative to other adults in their lives) because they are strange and impressive seeming.
I found the author’s response entirely unconvincing (as well as condescending and deliberately verbose). Among other things (the lack of the non-residualized data, for example), the author fails to address at all one of Scott’s meta points, which is what Andrew Gelman calls “the garden of forking paths.” Even if the authors of a scientific paper can make an ex post justification for a set of analytical decisions that are structured to yield statistically significant results, we as consumers of science shouldn’t let them get away with it. If the raw data doesn’t speak for itself, there’s more or less nothing there.
If the whole point was to study at-risk students, why didn’t they just study them? Why are 2/3 of the sample population not the supposed objects of study? It is extremely hard to believe that the original intention was to discard 2/3 of the subjects for most of the analysis. Also, there is the issue that there are many degrees of freedom for defining at-risk students after the fact.
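Gelman’s “forking paths” worry can be made concrete with a small simulation. This is a hedged sketch: the sample size, GPA cutoffs, and effect model below are invented for illustration, not taken from the paper. It shows that even when the intervention does nothing, allowing yourself to report either the full sample *or* whichever “at-risk” cutoff happens to work pushes the false-positive rate well above the nominal 5%.

```python
import math
import random
import statistics

random.seed(0)

def z_test_p(sample):
    """Crude two-tailed p-value for mean != 0 (normal approximation, illustration only)."""
    n = len(sample)
    z = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

trials, false_positives = 2000, 0
for _ in range(trials):
    # Null world: the "intervention" has zero true effect for everyone.
    effects = [random.gauss(0, 1) for _ in range(300)]
    gpa = [random.uniform(0, 4) for _ in range(300)]   # covariate unrelated to the effect
    candidate_analyses = [
        effects,                                        # the full sample
        [e for e, g in zip(effects, gpa) if g < 2.0],   # one "at-risk" cutoff...
        [e for e, g in zip(effects, gpa) if g < 2.5],   # ...or this one
        [e for e, g in zip(effects, gpa) if g < 3.0],   # ...or this one
    ]
    # A forking-paths researcher reports whichever analysis "works".
    if any(z_test_p(s) < 0.05 for s in candidate_analyses):
        false_positives += 1

rate = false_positives / trials
print(rate)  # noticeably above the nominal 0.05
```

Pre-registration closes these paths: with the analysis fixed in advance, only one of the four tests is available, and the error rate stays at its nominal level.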
I was amused by the paragraph about platitudes. It in no way addressed the question of whether the intervention consisted of platitudes. But who cares? If platitudes work, I want to know that they work.
The thing about platitudes is that one man’s platitude is another’s inspiration.
About a decade ago, I was terribly annoyed over “motivational” posters at work. I looked up some research (a quick google failed to find it today) which found that people who saw through to the platitudes didn’t change their intrinsic motivation for work, but those who took the inspirational message at face value had improved intrinsic motivation. (as far as I can recall, they asked about work satisfaction, then installed posters, then asked about work satisfaction and what they thought about the posters, then assumed being happy @ work == intrinsic motivation)
I would not be surprised if something similar is at work with the lowest performing students.
+1 to this. Psychology is weird: so research it.
I think the sense of purpose intervention is actually a good idea. Unmotivated students do need a reason as to “Why do I have to go to school/learn this subject?” other than “Because it’s the law” or “You have to do it because we adults say you have to do it”. Finding personal reasons for themselves, rather than imposed from on high, as to why they might get something out of attending school (rather than mitching and getting involved with petty criminality) would be useful.
The example used about the student learning long division has me caught between laughing and crying, because that was me (and as I’ve said before, the ‘more effective strategy’ for learning long division was my Victorian-era granny teaching me how she’d learned it). I did put effort into learning maths and I did seek help – to the point that it almost damaged my relationship with my father, as his attempts to help me with my maths homework resulted in frustration on both our parts and always ended badly.
The best success in maths exams I ever had (and this was true for my entire class) was when Sister Alphonsus taught us for the Intermediate Certificate year (that would be ninth grade in the U.S. if I’m interpreting the Wikipedia article correctly). Her strategy was instilling fear and terror and by cracky, it worked. But in the main I’ve never understood maths, I’ve just blindly learned off the formulae which I then plugged in to get the result and I’ve forgotten everything other than basic arithmetic once I left school and wasn’t using anything more complicated in my working or personal life (ironically, the only practical applications of problems such as the one you may have seen in textbooks were in the industrial food processing sector and I wasn’t on the factory floor so didn’t have to calculate rates of flow etc.)
So, were I that hypothetical third grade student, my conclusion would indeed be that I should interpret my struggle as a sign that I was bad at math — because I did seek help, put in effort, adopt and practice alternative study strategies, and they’ve not improved my abilities much if at all.
Yes, you can improve ability by hard slog, but a low level of ability or talent or capability to start off with will always be low, even if “better than it would be without intervention”. Thinking that growth mindset teaching will make the likes of me (failing pass maths) turn into someone able to take and get a reasonable grade in honours maths is probably not on, and will only make the student feel worse (“I should be getting better, I’m expected to get better, and the teachers don’t believe I’m putting in the work because the model predicts if I put in the work I will do better than I’m actually doing so they think I’m lazy and lying about it”).
Why are 2/3 of the sample population not the supposed objects of study?
I wonder if that has to do with the fact that the schools involved selected the students for the study, and that probably they picked the kids most likely to stick with and complete the sessions (the really unmotivated ones either won’t volunteer to be part of the project in the first place or will drop out halfway through). If that were so, then the mix won’t be the same as if the schools picked straight up “who are your worst students, or the ones you consider most at-risk?” for the study.
The proper way to do it would have been to ask schools to select only students with GPAs below 2.0. But I don’t think they had planned to make such students the focus of the study beforehand.
Yes, exactly. From my reading of Dweck’s work I’ve never got the impression that growth mindset interventions are expected to benefit only poorly performing students. If Paunesku et al. believed that only those with GPAs below 2.0 would benefit, why did they mostly recruit those with above 2.0 GPAs? Two thirds of the sample had baseline GPAs above 2.0.
It’s plausible that the researchers expected the intervention to work better for “at-risk” students, but I doubt that they anticipated that it would not work at all in the general student population. The almost exclusive emphasis on poorly performing students in the paper is surely post hoc. The full-sample study, with its impressive sample size, did not support the hypothesis that growth mindset interventions work in unselected samples, but this disappointing result is de-emphasized in the paper, getting no mention in the abstract. In the abstract, they claim a sample size of 1,594 and report positive results without divulging that the positive results apply only to a subsample of 519.
If they had pre-registered their study, stating their hypotheses before recruiting subjects, I think the paper would have been very different.
— David Foster Wallace (source)
(Scott, is there a way to get the “cite” attribute in the blockquote tag working in the comments?)
Cite works perfectly fine. It is defined to do nothing. Relatedly, you’re using it wrong. It is for citing webpages, not people.
Wait, why is it designed to do nothing? That seems like an odd decision
It is meant to be read by machines. Google may choose to give it some juice. You could write an add-on that follows the link. Or you could write an add-on that verifies the accuracy of the quotation.
You could wrap your citation in cite tags, but that’s really intended for titles.
I must say, your apologies put all other apologies to shame.
The way to settle disputes over definitions is to look at the intervention that was actually used in the study.
Did they sit kids down and spend 45 minutes telling them things like “Ability matters and effort doesn’t. The kids who do best in school do it by working really hard, not by having natural talent”?
Or, did they spend 45 minutes telling the kids things like “People change a lot as they learn. Even if you start out bad at something, you can grow to become really good at it”?
If it’s the former, then the thing which was actually investigated by this study is kids’ beliefs about the relative importance of ability and effort. If it’s the latter, then the thing which was actually investigated by this study is kids’ beliefs about their potential for improvement.
(In studies that measure “growth mindset” rather than manipulating it, you instead look at how they measured it. Did they measure it by asking the kids questions like “Agree or disagree: Ability matters a lot more than effort” or did they ask questions like “Agree or disagree: If you’re bad at something, then you’ll always be bad at it”?)
Scott documented this in great detail for several studies in the first post. One legitimate and definite complaint is that the studies themselves are wildly inconsistent.
Different studies are about different things.
The study that this blog post is about, Paunesku et al. (2015), describes its “growth mindset” intervention as follows:
And,
I see no problem with a 1-tailed test. It’s perfectly appropriate if you’re willing to state the expected direction of the intervention ahead of time. Which the author didn’t, technically, but it’s clear which direction they were looking for.
2-tailed tests test for “X is different than Y” as the alternate hypothesis. 1-tailed tests test for “X is better than Y”.
The whole frequentist framework is pretty flawed, in my view, but there’s nothing wrong with 1-tailed tests given that framework.
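The gap between the two conventions is easy to see numerically. A minimal sketch using a normal approximation follows; the z value of 1.81 is illustrative, not a statistic from the paper. The same test statistic that clears p < .05 one-tailed sits at roughly twice that two-tailed.

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_values(z: float):
    """Return (one_tailed, two_tailed) p-values for a z statistic, where the
    one-tailed test asks "is X better than Y?" (positive z expected)."""
    one_tailed = 1.0 - normal_cdf(z)
    two_tailed = 2.0 * (1.0 - normal_cdf(abs(z)))
    return one_tailed, two_tailed

one, two = p_values(1.81)
print(round(one, 3), round(two, 3))  # ~0.035 one-tailed vs ~0.070 two-tailed
```

The two-tailed p is always exactly double the one-tailed p (for a positive z), which is why a result “significant one-tailed” at p ≈ .035 and a result “tending towards significance two-tailed” at p ≈ .07 are the same result described two ways — the honest move is to pick the tail before seeing the data.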
One tailed test is fine.
P=0.07 is not. P=0.05 is the absolute, bare minimum level of significance, which people are beginning to see is nowhere near strict. Anything *more* than that….
The phrase “tending towards significance” captures one of the central problems in social psychology.
P-tests are dumb and arbitrary, but then, at least they’re consistent (Barring tricks and manipulation like the Elderly Hispanic Woman Effect). But once you start giving people flexibility on arbitrary bright lines, the whole system falls apart.
Using a two-tailed test and then, when your intervention does indeed have the sign you expected, saying “oh but we really should have used a one-tailed test” is, uh, much less okay.
I’m sympathetic to frequentism, but Fisherian null hypothesis significance testing needs to be thrown out the window. The reason it’s the dominant paradigm isn’t because it’s especially good for anything, but because Fisher was an excellent and ruthless politician who basically broke off academic contact with anyone who tried to do statistical work outside of that paradigm. At least this is my understanding.
Alternatively, Fisher was well ahead of his time, and his methods were a big improvement over previous ones even if they weren’t as good as what would have shaken out as the best methods if Fisher had had more competition at the time.
Analogously, Newton’s way of doing calculus was really hard for anybody not as smart as Newton. His fellow Brits stuck with Newton’s ways for a century, during which they didn’t make much progress because it was so awkward. Fortunately, Newton had had a great rival in developing calculus in Leibniz, and Leibniz’s more tractable notation caught on in Europe. Eventually, the British swallowed their pride and came around to the Continental customs, but it took a long time.
I think both can be true? It’s difficult to overstate Fisher’s influence on the development of statistics, of course, and in many ways he’s a personal hero. On the other hand, his academic fights with Neyman and (the younger) Pearson were legendary, and I’m not the first to argue that these fights, and the subsequent unpopularity of the Neyman-Pearson approach, were detrimental to the development of statistics.
So TL;DR: 90 minutes of growth mindset intervention significantly increases GPA and non-significantly increases passing rates, with p ~ .07–.13.
I suppose I agree that the authors overstated their results by failing to mention the non-significance of the intervention in the abstract. But I don’t think this paper demonstrates that growth mindset fails to do anything. Even if all of the results were nonsignificant, it would mean only that the results are not significant; you can’t draw any conclusions about whether the results were due to random chance.
Frankly I’m surprised they got any result out of two 45-minute sessions. From just this paper I think it’s quite plausible that a fuller immersion into the growth mindset would improve students’ performance more.
There are tons of papers that get academic results a year later out of a 45-minute intervention. So it is not at all surprising that this paper does.
Could you share some more not associated with PERTS? I am trying to build a list.
https://www.perts.net/about#team
Dave Paunesku’s dissertation has some very interesting results from other experiments, including tests of a banner ad (saying you can improve your brain) that increased the average number of questions people answered on Khan Academy before quitting by ~3% with a large sample (250,000 across 5 conditions).
https://web.stanford.edu/~paunesku/paunesku_2013.pdf
Also see cost-effectiveness estimates in chapter 7.2.
Isn’t his growth mindset comment mixing two levels? Growth mindset isn’t what Carol Dweck believes; growth mindset is what Carol Dweck wants children to believe. She may know perfectly well that success is, say, 80% talent and 20% effort. But the belief of growth mindset advocates seems to be (I could be wrong about this) that if you teach child A that success is “100% effort”, child B that it’s “80% talent 20% effort”, and child C that it’s “100% talent”, then child A will do better in life than child B, who will do better than child C. Thus growth mindset, the thing that you want to instill in the child, is in fact the belief that success is all about effort.
The standard suggestion by growth mindset advocates isn’t that we praise a child for talent 80% of the time and for effort 20% of the time; it’s that we never praise a child for talent and always praise for effort. It could of course be because praising for talent doesn’t increase talent, while praising for effort is likely to increase effort.
A thousand times this comment. It’s even funny how Dave in his reply says “Growth mindset is not the belief that ‘ability doesn’t matter and only effort determines success.’” That’s not what was said. What was said was that growth mindset is the belief that people who believe that ability doesn’t matter, but only effort does, do better. So Dave is defending against a statement that wasn’t made at all.
It was certainly foreboding for the rest of the reply that Paunesku’s very first point of the whole critique consisted of a horrific misreading of Scott.
Wait, what? How can a quote from one of Scott’s posts consist of a misreading of Scott?
Or is the problem that there’s one less level of indirection, in the Paunesku rephrasing, than in the quote of Scott’s he’s replying to? But that’s an improvement, a polite and implicit correction – because “Growth Mindset” names a student’s belief/attitude, not a researcher’s belief about that belief. (I think. I didn’t check.)
The second paragraph is exactly what Obrigatorio said. He is very clear.
Ah, right. I get it now. Scott made a very different mistake, a trivial one confusing the levels. He called the researchers’ beliefs “growth mindset”, whereas that term is supposed to apply to students’ beliefs.
Dave Paunesku read Scott as if Scott was talking about the level of student beliefs, in which case the nearest (but way off!) interpretation is that Scott thinks growth mindset is the belief that talent doesn’t matter and only effort does. So by ignoring the possibility of a trivial level confusion, Dave attributed a much deeper confusion to Scott.
Wow! I thought such snafus were only possible in philosophy. Well, and Sesame Street. (“Without the letter R, all your friends would be fiends.”)
Maybe. But a much simpler hypothesis is: Scott made a trivial mistake; Paunesku silently corrected it and objected to something else; Obrigatorio didn’t understand a word that was written, but saw the correction and thought it was the disagreement under discussion; Probably the same with Thecommexokid.
(And it isn’t an error to say “evolution” to mean “the theory of evolution.” But it’s more confusing in psychology because theories are about ideas.)
This is completely correct and I was disappointed Scott did not respond directly. Scott’s definition did not at all predict that Dweck et al would believe “ability doesn’t matter”… only that “the (possibly false) belief that ability doesn’t matter is helpful for students”.
Reposting a comment by LW user ’emr’, because I found it very insightful (hopefully that’s okay with emr):
—————–
The discussion itself is a good case study in complex communication. Look at the levels of indirection:
A: What is true about growth, effort, ability, etc?
B: What do people believe about A?
C: What is true about people who hold the different beliefs in B?
D: What does Dweck believe about C (and/or interventions to change B)?
E: What does Scott believe about C (by way of discussing D, and also C, and B, and A)?
Yikes! Naturally, it’s hard to keep these separate. From what I can tell, the conversation is mostly derailing because people didn’t understand the differences between levels at all, or because they aren’t taking pains to clarify what level they are currently talking about. So everyone gets that E is the “perspective” level, and that D is the contrasting perspective, but you have plenty of people confusing (at least in discussion) levels ABC, or A and BC, which makes progress on D and E impossible.
Source is one of the discussions of SSC for LessWrong readers only which ‘tog’ (Tom Ash of the Effective Altruists bunch) has started: http://lesswrong.com/r/discussion/lw/m12/ssc_discussion_growth_mindset/
I have seen Dweck speak at a conference and she started with this proposition: intelligence is highly malleable, and believing you can change it is one of the many levers of this malleability. Under questioning she moved towards the position that of course talent exists, but overall outcomes depend on attitude. At the time I categorised her view as: there exists a small class of people with “talent”, the Terence Taos and Maryam Mirzakhanis, but for everyone else mindset is more important than talent.
Of course this is separate from believing that telling children mindset is more important than ability will be beneficial. Although if you didn’t believe the former you would be unlikely to believe the latter.
I’m always struck by how people act like “growth mindset” is some sort of subversive radical underpublicized new theory undermining the suffocating American conventional wisdom that motivation doesn’t matter, as exemplified by how Professor Linda Gottfredson is a huge celebrity who is on TV all the time recounting the latest depressing findings from psychometrics.
Oh, wait, I’m sorry, that’s only in Bizarro America … In this America, television is absolutely jammed with motivational speakers telling you that all that matters is motivation and you can be anything you want to be and so forth and so on. Heck, motivationalism is pretty much the characteristic mainstream of American ideology going back maybe to Ben Franklin’s “Autobiography.”
What’s improbable from a Bayesian standpoint about these growth mindset studies is how small a 45-minute motivational sermon is relative to all the other hundreds of hours of motivational sermons kids have heard in their lives and will hear in the rest of their lives.
How great that Dr. Paunesku took the time to compose this response! This is a gold-standard example of healthy debate between informed and well-intentioned people, and I feel privileged to have a front-row seat for it.
Kudos for taking the corrections on the chin. However
1. The graph was, at best, “confusing”. (https://www.youtube.com/watch?v=J7QNw1LRJv4) This looks a lot like an attempt to make the effect look larger than it was by stretching the Y axis. I think the authors can share a lot of the blame on this point.
2. As others pointed out there was a lot of room to move in the study to get a significant effect. With 1500 students this should not have been too hard, and they basically failed. They only got a ‘significant’ result by aggregating growth mindset *with* goals on a *subset* of students. They did not, as far as I can tell, announce their intention to do this before the event so I am calling this data mining.
3. The reason for the insignificant result is that the alleged effect is *tiny*: 1/10 of a grade point! Are students going to continue to have growth mindset etc. with such puny results on average? I doubt it.
One way that the graph was confusing was that it was a bar chart, and thus strongly implied that the zero point was meaningful, which it was not.* Even when that is true, I recommend the simple rule: never use bar charts.
* OK, technically, the zero point was the population average. But that isn’t meaningful. For example, if you eliminate one of the arms of the study, the population average moves.
A single 45-minute online course that improves GPA by 0.1 may well be worth it. Imagine if we found ten separate such interventions and they all added linearly.
Perhaps I’m misunderstanding or misinterpreting, but it seems like your true criterion has less to do with p-values and more to do with replication. Perhaps we’d be better off with “Don’t tell me what the p-value is until you replicate… THEN p-values can perhaps explain how much I care about the replication”.
P.S. props to Dr Paunesku for engaging a critic with reasonable conversation rather than the alternative we see all too often these days. Also for “owning” the message communication problem. Seems like it’s a positive thing.
On the subject of replication … . It’s more convincing when done by people other than the people whose initial result is being replicated.
Yup.
By the way, it shouldn’t be too surprising if it turns out that some people are better at motivational inspiration than other people and thus their efforts replicate while attempts by others to imitate them don’t replicate. Notre Dame football coach Knute Rockne, for example, was famous for inspirational halftime harangues that got big results. But that doesn’t mean every coach who tries to do what Rockne did can replicate the effect. Motivational propagandizing is notoriously driven by individual charisma and fortuitous intersections of the man and the moment.
Need more social media / blog peer review in our lives.
Given what I know of parents, I suspect the non-underachievers are already basically getting a bunch of “growth mindset” and “make your parents proud” and “go you!” type coaching at home, and so giving them additional such interventions via a workshop is like adding salt to the sea. These are just things that the kinds of parents who tend to have average to above average performing kids already tend to do. By contrast, the underperforming kids probably don’t generally hear a lot of positive feedback about their educational skills, period, so it may be fairly easy to make them feel a little better and boost their willingness to try harder.
Scott, I think you’re unfairly misreading Dweck, because she equivocates between growth mindset is true (“New work in psychology and neuroscience is demonstrating the tremendous plasticity of the brain—its capacity to change and even reorganize itself when people put serious labor into developing a set of skills”) and growth mindset is useful (“Other groundbreaking work (for example, by Anders Ericsson) is showing that in virtually every field—sports, science, or the arts—only one thing seems to distinguish the people we later call geniuses from their other talented peers. This one thing is called practice.”)
She also says “Which mindset is correct? Although abilities are always a product of nature and nurture, a great deal of exciting work is emerging in support of the growth mindset.”
That’s all from the same paragraph.
Just pulling out the parts where she says that growth mindset is true gives an unfair perception of her argument. Attacking her for equivocation would be fine, attacking her for a view she spends some time defending but doesn’t seem to fully hold isn’t. Dave’s response, that Dweck doesn’t believe that only training matters, seems like it should clarify which of the two positions she holds. Continuing to be annoyed at the other seems unreasonable.
I can heartily endorse this, because I see the ‘graduates’ of the secondary school and early school leaver programme I worked for regularly in the court pages of the local paper. Just this week, a girl who was fourteen when I was working in the school office and is now just twenty-one was again up in court and is now looking at doing jail time. She has a raft of convictions for petty crime, a two-year old child who is in care, and a heroin addiction. Her older sister has gone much the same way. It’s sad, is all I can say.
And if this study can find effective interventions, it’s no exaggeration to say it will add to the sum of human happiness if it can keep girls (and boys) like that fourteen-year-old, from a disadvantaged and troubled background, from ending up on this path. My fear is that it will get boiled down to the simplistic notion of growth mindset that Dr. Paunesku criticises as a misunderstanding and misrepresentation (“ability doesn’t matter, hard work does!”), especially when it comes to being implemented as a wide-scale government intervention.
And that in the name of cost-cutting and increasing productivity, the very necessary ancillary supports will be reduced or taken away (“Sally doesn’t need a Special Needs Assistant, just the three online sessions of Growth Mindset Implementation to make her pay attention in class, work harder at home when doing her homework, and pass her exams with good grades so she can go on to get a good job! And if she doesn’t, it’s because Sally is to blame for not working hard enough like the programme teaches her, not because she’s in the position of parenting her younger siblings because her mother suffers from psychological problems preventing her from being an engaged and involved parent and she’s never had contact with her father, there’s no model of engagement with education in her family for her to emulate, and all her friends are urging her to ditch school, hang out with them, and get involved in petty crime!”)
I was very surprised by this paragraph. Either I’m understanding it incorrectly, or after all this discussion growth mindset turns out to be something completely and utterly trivial.
It is a belief that… abilities can be improved through work or in some other way? Was there a single human being in history who did not believe that? Or that somebody who tries to improve will do better than somebody who has given up?
I don’t think anyone out there believes that they have an improvement-to-effort ratio of zero. But many people believe (often quite reasonably, I think) that they have a lower ratio than others.
Let’s think again about our hypothetical student, let’s call him Bob. Bob wants to become a mathematician. He studies very hard and, indeed, learns a lot of new things. However, he has noticed that it always takes him 10 times as much effort to learn a new theorem or an algorithm as it does for his many friends, who also want to become mathematicians, and since they all invest equal amounts of effort his friends are 10 times more knowledgeable as a result. Bob decides that he will not succeed in a competition like that and becomes a dancer instead.
Are Bob’s actions consistent with the growth mindset? If we take the quoted paragraph literally, apparently they are. After all, Bob never doubted that he could improve his abilities. But it also feels like in some way they should not be.
Is there something more to the growth mindset than the paragraph tells us, some extra assumptions? Or is Bob’s decision entirely in line with it?
Given the bits about “improving intelligence”, I am rapidly coming to the conclusion that nobody knows what the hell the “growth mindset” thing really is, and the popular notion of “it’s not about ability, it’s hard work!” is the nearest to a common understanding, despite the caveats of the researchers.
Possibly growth mindset (a.k.a. “hard work and practice with talent will do more than talent alone”) was or is a response to the notion that sorry, you have a fixed IQ and nothing can change it and if you’re only in the range 100-120 IQ points, get used to being a peon because that’s all you’ll ever be.
“Even if you don’t score as a genius on IQ tests, by working hard, learning skills and techniques, and applying these to problem-solving means that you can be successful both at school and work” is a lot more hopeful and encouraging a message than “You have a fixed IQ and there’s nothing you can do about it”.
Part of the problem here is that we have no useful definition of intelligence; what the hell are we talking about when we talk about “intelligence is malleable”? Basic hard-wired IQ? There is/isn’t such a thing? What we might call “functional IQ” which is learning techniques which you then apply to problems and tasks you encounter so you don’t need to be a genius who can identify and solve from first principles a problem they’ve only clapped eyes on for the first time now? I mean, that’s the difference between someone who’s a maths whizz who can look at a problem and re-invent the wheel (so to speak) about solving it from first principles, as apart from an idiot like me who needs to be given a formula and told “This is how you tackle the problem”.
Yes, the idea that “you can always improve” is certainly a platitude. I would be shocked if even the most demoralized students believe that their effort will yield literally 0% improvement. Perhaps they just don’t see a way forward because they feel like they aren’t cut out for the “conventional” routes to success that they are aware of from their peers, teachers, and school system, and so they just live life for the moment until they meet their inevitably unsuccessful conclusion in life. If their effort only yields 10% of the progress that it yields most others, then it is a perfectly rational choice to salvage what happiness you can from life in the short-term until you die on the street…
…That is, unless you can find your “comparative advantage” in some non-mainstream career path.
Perhaps need to teach kids two things:
1. Discover your “comparative advantage” in terms of talent.
2. Apply effort to cultivating this talent.
Only then can kids expect to be successful.
Let me explain with examples:
Imagine that StudentA has an IQ of 60. StudentA does not have much talent at arithmetic or writing. However, the student shows comparatively more talent at sorting objects of different shapes into matching piles.
Should StudentA be taught, “Just work really hard, and you can realistically become a NASA engineer!”? No.
Should StudentA be taught, “If you work really hard, you have a good chance at outcompeting the other IQ-60 people out there one day for a decent job at a recycling sorting facility, rather than freezing on the street or having to be cooped up in assisted care for the rest of your life. On the other hand, if you don’t work hard, you might not be able to outcompete the other IQ-60 people out there who did work hard for the recycling sorter jobs.” Yes.
Let’s look at StudentB. StudentB has an IQ of 140, but is very very short (let’s say…150 centimeters, or about 4’10” tall).
Should StudentB be taught, “Since you have an IQ of 140, you are destined to be a NASA engineer no matter how much or how little effort you put into things.”? No, obviously not.
Should StudentB be taught, “Since you have an IQ of 140, if you work really hard to cultivate your innate math talents, then you have a decent shot at outcompeting the other 140-IQ students out there who will also be working hard for that job at NASA.”? Yes.
Should StudentB be taught, “If you try hard enough, you can become the world’s best NBA basketball player!” No.
If the theory of comparative advantage is correct, then there should be something that just about anyone can contribute to society, if they add at least a little effort to the equation. Perhaps this is what growth mindset should be telling kids.
Came here to say that, but you’ve said it very well already. It’s troubling, as it’s such a basic, fundamental point.
Carol Dweck is the co-founder and board director of Mindset Works, a company selling mindset products to schools. However, the COI statement in the Paunesku et al. paper says that “The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article.”
Intellectuals tend to be only weakly aware of how immense the motivation industry is.
By the way, the sheer amount of money spent on stuff that claims to boost motivation suggests to me that some of it does work. But it also suggests that the marginal impact of one more 45-minute sermon from the motivational industry is likely, on average, to be small.
If you get the chance, could you ask them, in more standard terms, to taboo “ability”, because they appear to be using a different meaning.
The letter writer, Dave, appears to be using it to mean “What I am capable of doing at this moment” but in your arguments you use it more similarly to “natural ability” or your starting position from before you start really trying at all.
Also, Dave says that they’re “planning yet another large scale replication study this fall”. Has this trial been preregistered, or will it be? All well-run human trials ethically should be, and it makes them dramatically more valuable to everyone.
The study design, and exactly what analysis is planned, should be pre-registered. It means that if there’s any question about why a subgroup was chosen for analysis, you can point to the pre-registered plan and prove without a shadow of a doubt that it’s not just p-hacking and that you planned to analyse that subgroup from the start.
Also that interpretation of the chart sets my stats sense tingling. You can’t just point at error bars on the post-intervention groups and say that they don’t overlap, otherwise you risk falling foul of a version of one of the most common stats errors in research:
http://www.sandernieuwenhuis.nl/pdfs/NieuwenhuisEtAl_NN_Perspective.pdf
It may actually be significant or it may not but that looks like an extremely bad way of looking at the data.
I think this is exactly right. Scott is using “ability” to mean what Dave meant by “talent,” which is getting this all confused. Scott is saying: results are a function of effort and ability. Dave is saying ability is a function of effort and talent. But Scott’s “ability” is the same as Dave’s “talent.” And thus this part of the disagreement is purely semantic.
I also had this impression, and I see a couple of people beat me to mentioning it.
There’s ability[1]: How good you are at the task at hand, and ability[2]: How easily you improve at the task at hand, and these are not the same thing. Broadly, ability[1] = practice * ability[2], and ability[2] = some function of IQ and other stuff.
I find it quite likely that ability[1] can be improved by growth mindset simply because it will encourage additional practice. On the other hand, conflate the two and you get people saying dumb things like “these findings about growth mindset mean IQ doesn’t matter!”
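The two senses can be made concrete with a toy model. The linear form and the numbers below are illustrative assumptions, not anything from the mindset literature:

```python
# Toy model of the two senses of "ability" (illustrative only):
#   ability2 = how easily you improve (a fixed, IQ-like learning rate)
#   ability1 = how good you currently are, which grows with practice
def ability1(practice_hours, ability2):
    # Assumed linear form: ability[1] = practice * ability[2]
    return practice_hours * ability2

# Same learning rate, different amounts of practice:
slacker = ability1(10, ability2=2.0)    # 20.0
grinder = ability1(100, ability2=2.0)   # 200.0

# An intervention that only increases practice raises ability[1]
# without touching ability[2] -- which is why "growth mindset works"
# would not imply "IQ doesn't matter".
```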
I also think there is an implication about a) what the upper bound of ability[1] is for “most” people, and b) the idea that ability[2] is non-trivially greater than zero.
I think the hypothesis is that some significant portion of at risk students believe that they cannot succeed at school, much in the way I am convinced I cannot succeed at professional basketball. Convincing them that effort can result in successful results seems like a pre-requisite for success.
I think the IQ60 example given above is non-helpful. This is not the student population we are talking about.
Statistics: (1) They do a t-test. (2) Yes, you can just point at error bars and say that they don’t overlap. That is a more stringent test than a t-test. (3) This has nothing to do with the paper you cite.
http://www.graphpad.com/guides/prism/6/statistics/stat_relationship_between_significa.htm
It’s hard to say since the error bars aren’t labeled and I’ve not gone through the paper to see whether it says whether they’re SE, SD or Confidence interval error bars.
If the error bars are definitely confidence-interval error bars, and if we know the two groups are of approximately equal size (it doesn’t work if one is 250 people and the other is 125), then it’s probably safe to say that the difference is statistically significant.
I thought that they were confidence intervals, but, actually, the paper does say that they are SE bars. (Oddly, the caption on the image is not the caption in the paper.)
If confidence intervals don’t overlap, then the difference is significant, regardless of the population sizes or variance.
The sample size bit is important: from the same page, you can try this example out in your favorite stats package:
otherwise it is a good rule of thumb.
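The example from that page isn’t reproduced above, but the general point — overlapping ~95% intervals do not by themselves imply a non-significant difference — can be checked with a stdlib-only sketch. The summary statistics here are made up for illustration and have nothing to do with the paper:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Made-up summary statistics for two equal-sized groups.
m1, s1, n1 = 10.0, 1.0, 50
m2, s2, n2 = 10.5, 1.0, 50

se1, se2 = s1 / math.sqrt(n1), s2 / math.sqrt(n2)
ci1 = (m1 - 1.96 * se1, m1 + 1.96 * se1)   # ~95% CI for group 1
ci2 = (m2 - 1.96 * se2, m2 + 1.96 * se2)   # ~95% CI for group 2
cis_overlap = ci1[1] > ci2[0]              # the intervals overlap

# Large-sample z test for the difference in means.
se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
z = (m2 - m1) / se_diff                    # 2.5
p = 2.0 * (1.0 - norm_cdf(z))              # ~0.012: significant at .05
```

So the intervals overlap, yet the difference in means is significant at the .05 level — which is why eyeballing bar overlap is only a rough heuristic, and a conservative one at that.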
Let’s just say that graphpad is not my favorite statistics package.
First of all, the obvious: this is extremely kind and extremely well-argued and a lot of it is correct and makes me feel awful for being so snarky on my last post.
Here’s what I still believe and what I’ve changed my mind on based on Dr. Paunesku’s response.
You’re great, and I love your blog and the way you think.
(Dr. Paunesku is pretty great too.)
I appreciate Dr. Paunesku’s comments as well as Scott’s thoughtful response. With regard to combining the interventions I remain unconvinced by Dr. Paunesku’s justification for this and agree with Scott.
Dr. Paunesku said:
We wanted to see if it is possible to use scalable, social-psychological approaches to improve the achievement of underperforming students.
Combining the results was not necessary to achieve the aim stated here. One of the interventions did produce a significant result on its own, so presenting combined results only obscures things.
Perhaps more importantly, the sense of purpose and the growth mindset interventions involve different mechanisms of change, hence they are heterogeneous. The authors used a manipulation check to determine if the interventions affected participants’ beliefs about the malleability of intelligence. They reported that the growth mindset intervention did have this effect (p = .005), while the sense of purpose intervention did not (p > .24). Additionally, they assessed meaningfulness of schoolwork. The sense of purpose intervention produced a significant (p = .018) effect on this, while growth mindset was only “marginally” significant (p = .078). Hence, the two interventions operate by theoretically different mechanisms. If either of these interventions does have a real effect on student grades, it is important to understand their respective theoretical bases for producing results. Combining results from two such different interventions makes it difficult to understand the theoretical importance of the results. (I’m sorry if I seem to be repeating myself; it’s late at night where I am, and I am hoping to make my point clear.)
I thought that was very clear (and an important point!)
We wanted to see if it is possible to use scalable, social-psychological approaches to improve the achievement of underperforming students.
First, though, I’d really appreciate a clarification of what is meant by “underperforming students”.
Bright but lazy? Learning difficulties? Not academically oriented but would be whizzes at sports, landscape gardening, or working in the construction industry (my brother was plenty smart if only he worked, but he could not wait to leave school and only wanted to get a job working with his hands)? Children from disadvantaged backgrounds?
There’s not really going to be a one-size-fits-all solution here, much as I’d love to see one. And I rather fear that any government would love to have one it could work up into a programme and use for every school in the country, under the mantra of “Get our kids all getting A+ in their exams so they can be the future STEM employees and innovators our economy needs!”
If the Growth intervention had an effect on the measure of Purpose, that suggests that the good done by both interventions came from changing Purpose, and that the strong effect of the Growth intervention on growth mindset (or the desire to adopt it) had no effect on grades. This would also explain why combining the interventions did not produce additive effects.
I love these kinds of thoughtful, detailed conversations and I wish they happened more often. This is a good mode of discourse.
I will state up front that I am not a professional psychologist or statistician, just a bored computer programmer with an outsized sense of self-importance. Nevertheless, I see a couple problems here. One of them is relatively small; one of them, despite having only been mentioned once or twice by Scott and in the comments, I think is much much larger.
First of all, I think it only sort of matters that focusing on low-performing students “was perfectly reasonable given prior research and theory.” Even if one has the best of intentions, it is in general very easy to fool oneself — something that psychologists should know especially well! Therefore we see studies where only a couple of subgroups are analyzed, but the subgroups were chosen after “eyeball analysis”; these are especially insidious because the “eyeball analysis” isn’t usually reported. More troublingly, in the absence of preregistration or other public commitment to a study design, it’s not really possible to tell genuinely well-supported subgroup analyses from elderly Hispanic woman effects. Paunesku says that research and theory support the decision to focus on low-performing students — but this is meaningless if I can find different research and theory that state that growth mindset is disproportionately good for high achievers. (Important note: I have no idea whether this is actually the case.)
The much larger problem hanging over all of this is that even if growth mindset interventions do absolutely nothing, it would be unsurprising for Paunesku et al. to find a significant effect, and then replicate it again and again. After all, parapsychological researchers can do the same thing. Especially when experimental results feature small effect sizes, effects moderated by multiple variables, and effects “approaching significance” in some conditions, the prospects for clarity are dim. What I’d really like to see is for Dave Paunesku or Carol Dweck to perform a preregistered and tightly controlled experiment working with growth mindset skeptics (like the water memory experiments with James Randi), and see if it replicates then.
“Growth mindset skeptic” is not a useful category. Psychology consists of communities that ignore each other. In a sense, almost all psychologists are growth mindset skeptics, but they’re also skeptics about all the rest of psychology and don’t care enough to get involved in any particular area.
There are people who don’t care about growth mindset, and then there are people (like Scott; presumably there are others) who do care, but are skeptical of the usual conclusions reached. I am referring to the latter group.
That Dweck paper is super weird, and it almost seems like she’s strawmanning the “fixed mindset”. The way she describes it, I think almost everyone subscribes to the growth mindset. The fixed mindset describes someone who thinks that abilities are not malleable at all. If you believe that, you very likely have a false belief about reality, and it’s not surprising that you have some problems as a result.
If you define fixed mindset as (100% innate ability, 0% effort), and growth mindset as (0-99% innate ability, 1-100% effort), it’s completely unsurprising that the growth mindset people do better.
These things can be assessed with numerical scales of the form “1 constitutes agreement with X, 7 agreement with Y, and 4 is neutrality midway between them.” And I have seen these in some mindset studies.
Carl, your comments in this thread are thoroughly, quietly rational and constructive. Not to imply that your comments elsewhere are any worse. It’s just that I’m noticing it right now.
What did the students get for completing these tasks? Did they get told to do them, did they volunteer? Did they get paid or get course credit (by their schools)?
Were the students all grouped together for the tasks? (I know that there were several locations around the US for the 13 schools, but in each condition did the low ranked and high ranked get mixed together for the task)
Has another study been done where the tasks at hand were done from the 1st-person perspective? The tasks described in the article for growth mindset stress thinking about the self in the 3rd person or imagining helping another party. It might be that the effect is caused by this imagined separation, in a similar way to how you can help with phobias, etc. The growth tasks basically set this dynamic up – “Imagine what a person much smarter than you would do with their friends to help them. Now do that yourself.”
As an aside, the reason that the growth mindset and life purpose approaches don’t gel together might be that they have different loci of control, and you can only use one locus of control at a time.
“For example, imagine a third grader struggling to learn long division for the first time. Should he interpret his struggle as a sign that he’s bad at math — as a sign that he should give up on math for good?”
If you are struggling when others are not, this is a sign you are bad at math (relative to them). If you are sufficiently bad, maybe you should “give up” on math. At least if “give up” means accepting that you will probably never be skillful at math and should work toward minimizing the amount of damage math will do to your life.
Deciding you are not talented at something is often wise. I have very little musical or artistic ability. In order to be skilled at music/art I would need to expend massive amounts of effort. And even then I would probably just be mediocre. I wish I was skilled at music but I do not have enough talent for practice to be worthwhile.
There are some skills that are so useful it’s worth “pressing on” even in the face of pretty serious difficulties. Literacy comes to mind as a good example. People should try very hard before they conclude reading is “not for them.” But mathematics is not overly useful. Lots of people get on fine without the ability to handle fractions or percentages. Plus people have phone calculators!
I don’t think you’re reading this the way it was intended. Many kids struggle with long division initially, including some who will go on to be just fine at math relative to their peers.
Emily, as I said, I was that third grader 🙂
Very much struggling to master the subject even when the others in my class had grasped it, and this was exacerbated by being capable in other subjects, so I got the “You’re just not trying hard enough” from teachers and family, when I was bashing my brains in trying to understand it.
I’m not good at maths. I don’t have the ability. And dyscalculia is as genuine as dyslexia. We’ve thankfully moved on from telling dyslexic children they’re either stupid or not trying hard enough; if we could recognise that dyscalculia* also exists – well, for one thing, my school life would have been less miserable when it came to maths and I might actually have understood some of the subject.
*Honestly, reading the list of indications gave me so many “aha!” moments – including confusing left and right, something I have a tendency to do to this day; being hopeless at directions; transposing numbers; remembering names (again, I am useless at names, despite all the cute memory tricks people advise me to try; I can recall faces very well, but I’ve forgotten the names of people I’ve worked with for five or more years three months after leaving the job).
Wait, “confusing left and right, being hopeless at directions; transposing numbers; remembering names”. I’m a math professor and 1, 2, and 4 all describe me. Transposing digits depends on context: all the time if I am trying to remember a phone number or credit card number, basically never if I am doing a computation. (It’s the difference between memorizing “Whose woods these are I think I know” and “When llamas that is he swallow me run”. Same parts of speech, but one of them has rhythm and meaning.) I always loved math because I could work out all the things I couldn’t remember. Does dyscalculia mean the inability to apply algorithms mindlessly?
If you are in academia, you probably believe that it is immoral for K-12ers to give up on any schoolwork. Everyone should “press on” because success in school is the way and the truth and the light, the alpha and omega.
Okay, I exaggerate. A little.
If at first you don’t succeed, quit?
Seems like pretty bad advice. I’m not even sure how to steel-man your argument.
Commenters above have already rightly pointed out that it is obvious that there can always be *some* improvement at something with any given amount of effort and practice, even if it is infinitesimal. Yes, kids might sometimes say, “Practicing doesn’t help me AT ALL,” and of course, they are exaggerating, and I bet if you really nailed them down on the issue and got them to clarify their views, they would re-phrase their hyperbole to something like, “Practicing helps me so little that the added benefit is insignificant.”
So, is the growth mindset model merely changing the magnitude of this assessment on a sliding scale? Such as: “You, StudentX, falsely estimated that practicing 2 hours each day would improve your math scores by 1%, but actually it will improve your scores by 10%” (which might or might not be empirically true, by the way…in which case, if the student then was motivated to practice math 2 hours a day and his scores improved merely 5%, would this make the encouragement a sort of “noble lie”?) or is the growth mindset trying to do something more fundamental than changing this magnitude?
I get the sense that the growth mindset people think in a fundamentally different way about effort because the growth mindset researchers are not the ones having to expend the effort.
To students, effort is a cost, and improvement is the “revenue” that one obtains from putting down the up-front cost. At a certain point, if you aren’t getting enough improvement (“revenue”) from contributing your up-front effort (“cost”), then the investment is actually not worthwhile, and you are operating at a loss rather than a profit. In that case, it would be perfectly rational to “cut costs” (cut down on effort).
To the researchers, however, effort is not a cost. Researchers can never envision any scenario in which cutting down on effort would be rational or beneficial. The researchers are like Soviet managers who have the goal of maximizing one output (grades, or graduation rate, etc.), and they personally themselves don’t have to pay any of the costs of the inputs, so their view is automatically, “Maximize gross tonnage of steel!” (Maximize grades!) without any consideration of whether this production is creating a net profit in terms of human happiness.
In saying “Improvement is always possible!” the growth mindset people seem to be saying, “Generating revenue is always possible!” And yes, both are technically true. By slashing prices, for example, one’s goods could always be sold at some (very low) price level. At some point the market will clear. But that doesn’t necessarily mean that generating that revenue will translate into making a profit. If you had to spend more in costs (effort) than you get in revenue (improvement), then it was not rational to expend those costs.
Really, the growth mindset people might do better to teach kids actual formulae, like, “Study 2 hours a day now, and your lifetime income is likely to improve by $1 million compared to where it would have been, given the same level of talent but no studying.” I know that when I was a student, whenever counselors “got down to brass tacks” and actually started throwing out numbers like that, that I always found that more motivating than platitudes about “you can always improve!” (Of course, typical mind fallacy, etc.)
Tom Loveless of Brookings had a good post recently showing that, at the cross-country level, students in countries that do well on the PISA international math exam say they enjoy and are engaged in math less than students in countries that do poorly on the exam. There are similar within-country patterns. The natural conclusion is that improved learning has a cost, and diminishing returns.
Many aspects of our society are run in tournament fashion, so at any one time for any one kid there may be a lot of benefits to getting out a little ahead of the next guy. For example, when I met the founder of the KIPP schools around 2000, he was regularly sending lots of kids to private boarding schools on scholarship, since he had a near monopoly on 8th graders from poor neighborhoods who had just finished a super-academically intense middle school. Now there are a gazillion schools on the KIPP model, so it seems likely the expected benefit to any one kid is lower.
Kids are generally doing their best to find their place in society, and investing in aspects of life other than academics is probably correct for most of them.
This Tom Loveless post.
I’m glad you objected to the one-tailed argument. Good call, Scott.
UCLA has a FAQ that addresses this question specifically: http://www.ats.ucla.edu/stat/mult_pkg/faq/general/tail_tests.htm.
Dave’s knowing enough to bring it up without knowing enough to reject it himself is another point of evidence suggesting they might be engaging in data-fishing, whether or not they notice themselves doing it.
As much as I want there to be a real thing that works here, I only want them to find it if it’s real, and I hope they catch on before it’s too late.
Here’s the relevant quote:
“When is a one-tailed test appropriate?
Because the one-tailed test provides more power to detect an effect, you may be tempted to use a one-tailed test whenever you have a hypothesis about the direction of an effect. Before doing so, consider the consequences of missing an effect in the other direction. Imagine you have developed a new drug that you believe is an improvement over an existing drug. You wish to maximize your ability to detect the improvement, so you opt for a one-tailed test. In doing so, you fail to test for the possibility that the new drug is less effective than the existing drug. The consequences in this example are extreme, but they illustrate a danger of inappropriate use of a one-tailed test.
So when is a one-tailed test appropriate? If you consider the consequences of missing an effect in the untested direction and conclude that they are negligible and in no way irresponsible or unethical, then you can proceed with a one-tailed test. For example, imagine again that you have developed a new drug. It is cheaper than the existing drug and, you believe, no less effective. In testing this drug, you are only interested in testing if it less effective than the existing drug. You do not care if it is significantly more effective. You only wish to show that it is not less effective. In this scenario, a one-tailed test would be appropriate.
When is a one-tailed test NOT appropriate?
Choosing a one-tailed test for the sole purpose of attaining significance is not appropriate. Choosing a one-tailed test after running a two-tailed test that failed to reject the null hypothesis is not appropriate, no matter how “close” to significant the two-tailed test was. Using statistical tests inappropriately can lead to invalid results that are not replicable and highly questionable–a steep price to pay for a significance star in your results table!”
I am not a researcher in this area, but you are the only one I hear with this definition of growth mindset. As long as you continue to use a different definition than everyone else, you will continue to be frustrated. The question is quite simple: whether believing that ability can be improved through effort allows for more improvement of ability than believing that effort cannot improve ability does.
You read way too much into this line: “The more a player believed athletic ability was a result of effort and practice rather than just natural ability, the better that player performed” without understanding it must have a parenthetical [relative to other players of the same natural ability level]. I would suggest that Dweck does not hold the Controversial Position. In the same article you like, she says ” They don’t believe that everyone has the same potential or that anyone can be Michael Phelps, but they understand that even Michael Phelps wouldn’t be Michael Phelps without years of passionate and dedicated practice.”
I think the real problem is too much being made of something obvious, but there is still a real psychological problem with people that think they can’t improve, or stop trying to improve, because they are “bad at math”. Finding some interventions to help people understand that being bad at something doesn’t mean you can’t improve seems harmless. It’s not really a groundbreaking change in how the world is understood though…
Yes, but the problem is that the popular version of “growth mindset” is not “Okay, you’re bad at maths/playing the tuba/basketball, but putting in effort and practice means you’ll improve. Starting from a base level of pure wojous, you will be somewhat less worse than you would be if you didn’t practice”.
Growth mindset is touted as “You too can play golf like Tiger Woods if you just practice, practice, practice and put in that effort!” For schools, it is “Turn those D grade students into A grade students by teaching them it’s got nothing to do with their intelligence, success is down to hard work and effort!”
Yes, of course success is down to hard work and effort. But try motivating Johnny with “The only reason Bill gets all As is that he studies four hours a night; if you study like that, you too will get all As.” Well, Johnny might – or he might get all Cs. All Cs might be a huge improvement on his previous grades. Now if we’re telling Johnny that ONLY As count, or that if he’s not getting As then he’s just not working hard enough, that’s every bit as bad as telling him that it’s all down to pure brains and he’s just too dumb to ever improve.
I would need a citation that said your interpretation is the one being used in experiments. The “popular understanding” is not being debated here.
Here’s a guy who read about Ericsson’s 10,000 Hour research in a Malcolm Gladwell book and is devoting himself to trying to make the pro golf tour by spending 10,000 hours on focused practice:
http://thedanplan.com/statistics-2/
He went from never playing golf before 2010 to a 2.6 handicap (which is better than 95 to 99% of all golfers) by last summer, but now he’s up to a 5.5 handicap. He needs to cut about 10 more strokes off his average round and that goal seems unlikely.
I don’t think this is a nitpick – conclusions 4 and 5 are wrong. A 45-minute growth mindset intervention failed to demonstrate a significant effect on pass rates. This is very, very different from “growth mindset had no effect on students”. I strongly suspect that people who actually take the growth mindset to heart – always looking for ways to improve, figuring out what worked and what didn’t* – do better than someone who merely attended two lectures on the growth mindset. And I agree with Dweck that everyone I know who is at the top of their game is both talented and adopts a growth mindset.
*I don’t think this is the bloody obvious position. It’s not a matter of practicing more, or of the fact that practice + talent is better than just talent; it’s a matter of how you think about practice and growth. Figuring out exactly what separates you from people who are better than you, the specific ways in which you are falling short, and correcting them – that is the attitude that I observe in people who are very skilled. This is stated very well in this popular article and this more obscure one, but it’s one of my favorites.
P.S. Of course it is possible that the growth mindset itself is heritable so once you control for genetics, it would seem to have little explanatory power.
You are not a jerk by insisting on a 0.05 significance level. You wouldn’t even be a jerk insisting on a 0.01 level imho. And you should not accept anything as “trending towards significance”. That is exactly what people say when they want to overstate the certainty of their results, or simply don’t know how to use p-values. This is anyway something that has caught on, and something we statisticians cringe at. One of my favorite things on the Internet is this list of quotes describing non-significant results.
Big thanks to Dr. Paunesku for such a thorough reply. Love the back and forth; it really brings the issue to life.
And the references to follow-up studies and ongoing and future research also add a nice perspective.
Sometimes this issue strikes me as a kind of Schrödinger’s cat, where kids really do have ability floors and ceilings, but we can’t find the ceiling unless we get them to ignore ability and focus on effort.
Particularly in America with a very meritocratic focus, people WANT growth mindset to be true. Even as I quibble with the theory I definitely raise my kids that way.
I don’t disagree with you on the statistics or the particulars of this experiment. But as others have said, I’m not sure you’re right as regards Dweck’s understanding of the growth mindset. I think it is much closer to the “effort matters a lot” side of things than you’re making it out to be. Certainly, in the paper you link, there are sentences that could support any position on the continuum of growth mindset definitions from “Effort Matters A Lot” to “Caring About Ability At All Means You’re Essentially Cursed”. But I think a lot of the trouble comes not from the discussion of the definition of growth mindset, but from an exaggeration of the effects it has. That is to say, saying that “people with ability mindset are more likely to cheat” is not something that actually touches the definition of ability mindset or growth mindset, whether or not it’s true.
Second, like many, my initial reaction to the concept of growth mindset was that, if all it really meant was believing that effort was important, it was incredibly banal and almost meaningless. But I’m less and less sure that’s true, and I’m starting to suspect that it might be – I believe the customary term in these parts is a typical mind fallacy? Where most people commenting on this blog have internalized the growth mindset to a pretty great extent, and so it seems pretty obvious to us because these are lessons we’ve learned. But I don’t know, maybe it actually isn’t as blindingly obvious to everyone as it seems to be to me.
Everyone commits the typical mind fallacy. At least, I do. (h/t: Steven Brust and Scott Alexander)
I wonder if the growth mindset intervention could be summarized as “Many individuals consistently overestimate how close they are to their personal growth ceiling. It is almost certain you can improve a great deal, but only with effort.”
This is not the bloody obvious position, and is actually significantly different, but it incorporates it.
I am not sure that “become an NBA player” or “become a NASA engineer” should be the most central examples for looking at whether growth mindset is useful or not.
If you have okay vision, okay fine motor skills, and okay intelligence, you can probably learn a basic knit stitch in less than half an hour. When I teach knitting to teens and pre-teens, I get a lot of people saying “This is too hard, I’m bad at this, I can’t do this.” And I tell them: Everyone is bad at first, but if you keep going, you’ll get good at this. It doesn’t, frankly, seem to help very much. It’s the same thing with other sorts of craft projects, the kind you don’t have to be really talented or artistic to at least have a basic competence with. So if you have this sort of experience, it’s only natural to think “how much better could these kids be doing if they weren’t SO QUICK to give up on themselves?”
But I wonder if teaching kids how to tolerate discomfort, failure, and disappointment in the early stages of learning isn’t a more fruitful path than just telling them that they can get better with practice.
The one-sided test is appropriate. The two-sided test is a crazy hybrid: it’s really doing two one-sided tests and applying a Bonferroni correction. The problem is that people are not doing corrections for the many unrelated tests that they do in the analysis. Arbitrarily imposing this correction on a pair of related tests doesn’t seem to me like a good solution to that problem, especially since most people don’t really do both of them. It’s just a back-door way of lowering the significance threshold. Yes, we should lower it, but I am skeptical of ad hoc surreptitious methods.
Sure, lots of interventions cause negative effects. But these are usually reported as failures, not as statistically significant findings worthy of study. People studying interventions just want to know that their intervention worked. If they were scientists, trying to understand the world, they’d be interested in surprising reversals, but they’re not, which is the basic problem.
I feel one concern with this is that, even if the one-tailed version is the right way to compute the p of some null hypothesis, the p-value cutoff itself is arbitrary.
Like, maybe the community has inappropriately standardized on computing two-tailed tests when one-tailed tests would be appropriate, but it has *also* accepted that .05 is a good cutoff that strikes the right balance between false positives and false negatives. Unilaterally switching to a different test is mathematically equivalent to unilaterally changing the significance cutoff.
Except that t-tests aren’t the only kind of test.
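To make the equivalence the thread is arguing about concrete: for a symmetric test statistic like the t, the two-sided p-value is exactly twice the smaller of the two one-sided p-values, so running a one-sided test at α = .05 admits the same results as a two-sided test at α = .10. A minimal sketch (the simulated data, sample sizes, and +0.3 SD effect are illustrative choices of mine, not figures from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(0.3, 1.0, size=100)  # simulated +0.3 SD effect
control = rng.normal(0.0, 1.0, size=100)

# Two-sided test, and both one-sided tests
p_two = stats.ttest_ind(treatment, control, alternative="two-sided").pvalue
p_greater = stats.ttest_ind(treatment, control, alternative="greater").pvalue
p_less = stats.ttest_ind(treatment, control, alternative="less").pvalue

# The two-sided p is exactly twice the smaller one-sided p --
# i.e. two one-sided tests with a Bonferroni correction.
assert np.isclose(p_two, 2 * min(p_greater, p_less))
print(f"two-sided p = {p_two:.4f}, one-sided p = {p_greater:.4f}")
```

So a result with one-sided p = .04 “passes” at the conventional .05 threshold while its two-sided p of .08 does not, which is exactly the back-door threshold change the commenter describes.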
The world of professional golf can provide a perspective on these kinds of questions of motivation and self-confidence. Pro golfers are interesting because they tend to be a type of Red State Republican person that academics and intellectuals don’t have much contact with: highly effective, very focused, diligent, not very imaginative individuals of moderately above average intelligence. They tend to have a lot in common with the corporate executives whom they socialize with at pro-ams and charity golf events.
Golf is a game where slumps often seem to be psychological — the golfer has to start the ball rolling himself rather than react, so golfers notoriously can get into moods that are self-destructive to their performance and even their careers.
Over the last generation, it’s become common for tour pros to have a sports psychologist that they regularly consult with. Here’s Golf Digest’s list of the top 10 golf psychologists:
http://www.golfdigest.com/golf-instruction/2013-07/top-10-golf-psychologists#slide=1
What this suggests to me is that motivation professionals can earn their keep. Maybe this is just a pointless fad, but I doubt it. And if you have a lot of money, you would ideally pay for a motivational speaker who is very much on your specific wavelength, who is talented at resolving your doubts in one-on-one conversation with you.
Scott, I’m surprised you didn’t respond to Dr. Paunesku’s remark that “a growth mindset wouldn’t make any sense if ability didn’t matter because a growth mindset is all about improving ability.” There’s a slippery double-meaning to the word “ability” here. It appears that the entire point of “growth mindset” is to contrast the ability to “grow” one’s abilities against one’s innate “starting” abilities.
Everyone believes that “total ability” can be improved over time with hard work. But everyone also believes that different people have different levels of (presumably innate) talent for various things, resulting in different levels of “starting ability” among different people for different skill-sets. The “growth mindset” argument appears to be that the “total ability” is almost entirely determined by hard work, independent (or nearly independent) of “starting ability.”
(I suppose a slightly stronger reading would be the argument that “total ability” is almost entirely determined by one’s belief in hard work, rather than by the hard work itself.)
As a piano teacher, my experience is that there really is such a thing as talent, and it really does have a large effect on people’s overall ability, independently of their “belief” in talent vs “growth.” (I believe this lines up pretty well with your own experience as a piano student, given what you’ve previously written about your brother.)
The most significant thing I can see here is that this (seemingly very nice) guy (who wants to help children) got results that say “growth mindset interventions do not work, sorry,” and instead of clearing the table for the next idea he is grasping at straws to continue working on growth mindset. Which is kind of sad.
Yep, this was my impression, too. Not a lot of real self-reflection in the post, but that’s the cost of our intellectual-academic system, where social science researchers are really cheerleaders for ideas (and in this case, cheerleaders for an idea that is already pervasive in our “stick-to-it-ive-ness” obsessed America) rather than critical observers of phenomena.
In summary, there are no doubt some number of children who hold the self-undermining attitudes of the kind Dr. Dweck is out to eradicate. But of course there are other children who are monsters of self-esteem, the Dunning-Kruger Effect in living, breathing form.
Further “growth mindset” research should be focused on figuring out how to identify those children who would actually benefit from Dweck-style interventions so that limited resources could be focused where they are likely to do the most good.
So to expand on Scott’s “Bloody Obvious” position, both effort and genes matter. Genes shape an invisible curve determining how quickly you improve with effort and where your skills plateau. Effort determines where you are on that curve. No one knows their own personal curve until they try their hardest.
When students struggle, teachers should say: “Everyone can improve with practice. Maybe this does not come as quickly, but we’re going to put in extra effort, and try different techniques, and we’re going to get you to pass this class.” If the student struggles for multiple years, and a basic life proficiency has been reached, then teachers should say: “There is no shame in dropping this subject, and choosing an area where you have more of a comparative advantage. Not everyone can reach the same heights in every subject.”
When students excel while barely trying, teachers should say: “You have a gift in this subject. I am going to give you more challenging material. You have a responsibility to work hard, and push this skill as far as you can. No more coasting, you could make something special if you actually try.”
This messaging should not be delivered in one 45-minute session; it should be conveyed many times, from teachers, from parents, from coaches, all during the course of life.
Also, it is obviously a bad idea to heap praise on gifted kids solely for their genes. Praise them for putting in effort and maximizing their genes.
It is also obviously dumb to tell a student who struggles in math for one term, “Sorry, you just suck at math. It is not worth it for you to try, there is nothing to be done. We’re tracking you in the bottom class and there is nothing you can ever do to get out of it.”
It is also dumb when people say something like, “I am just bad at art, I don’t have the genes/talent for it,” when they never even tried to practice art, never worked through tutorials and learned the basics of sketching and proportion, etc.
So returning to “Growth Mindset”…
Was “growth mindset” ever tested against this “Bloody Obvious” messaging? If not, why not?
If “growth mindset” is not claiming that only effort matters – then what exactly is “growth mindset” claiming that is both novel and true? How does “growth mindset” messaging differ from the “bloody obvious” messaging?
Not my area of expertise, and apologies if I’m repeating anyone, but:
“Growth mindset is the belief that individuals can improve their abilities — usually through effort and by learning more effective strategies”
As already mentioned by others, this is so encompassing it’s hard to see how anyone could not have a growth mindset. The disagreement is clearly about the ratio of work put in to increase in ability, and this glosses over that.
“I can personally attest that I have never heard anyone in that extended group of people express the belief that ability does not matter or that only hard work matters. In fact, a growth mindset wouldn’t make any sense if ability didn’t matter because a growth mindset is all about improving ability. “
This equivocates between different meanings of “ability” between the two sentences, as well as within the second sentence.
“One of the active goals of the group I co-founded (PERTS) is to try to dispel misinterpretations of growth mindset because they can be harmful.”
I’m always a little concerned when a person studying whether X is true or false got into that field in order to prove X is true. This is especially the case when empirical measurement and experimentation is involved.