Training-Based Research Studies: The Biggest Con in Sport Since the Muffin
Remember how, when we were kids, everyone liked to eat cupcakes?
Then, when we got older and a bit more health conscious, we were told to give them up because of the sugar, flour and other stuff in them.
Then along came a sports nutritionist who said, “Muffins are a great food for athletes – nutritious, high-carbohydrate energy foods”. So we all started eating them again, even though they are basically still just big cupcakes.
What a big con.
Almost as big a con as Training Studies in Sports Science Research.
So here’s a typical training study – i.e. a research study that changes something in an athlete’s training program, measures the difference between pre- and post-change, and concludes with a finding recommending that, by implementing a similar change, other coaches and athletes will see similar effects.
What’s wrong with this picture?
It all seems logical – I measured each athlete’s power output. I changed one element of their training program. When I re-measured power output at the end of the training period that incorporated the new element, power output had increased. Therefore, what we did (assuming the stats were done correctly) worked, and if other coaches and athletes do what we did, they will also see an improvement in power output.
Seems to make sense.
No it doesn’t.
It doesn’t – and most training studies don’t – make any sense. And they don’t make any sense for one reason: Assumptions (note the first three letters of the word assumptions, by the way).
The big hole in all training studies is that they assume everything else – i.e. everything other than the training variable being manipulated – is equal and constant.
It is impossible to control and/or measure all the variables that could potentially impact the results of a training study in one athlete’s life, let alone in the lives of the several athletes involved in a typical training study. Even more importantly, it is impossible to control every relevant aspect of the lives of the other athletes who are not involved in the research study but who will try to apply its findings in their own training.
Yet, time and time again, we see researchers present the findings of training studies at conferences and in peer-reviewed journals that are, for all intents and purposes, useless and irrelevant.
Here are just ten of the problems with training studies – there are hundreds:
- You can’t control the diet variables of the athletes;
- You can’t control the quality or quantity of their sleep;
- You can’t control their emotional / psychological well-being;
- You can’t control their other activities (i.e. those physical activities not directly involved with the training study);
- You can’t control their rest or recovery activities;
- You can’t control their overall life workload, e.g. work, family activities, study etc;
- You can’t control their engagement with / commitment to executing the training variable being measured;
- You can’t control their hydration;
- You can’t control their tolerance to pain or discomfort;
- You can’t control their honesty – and honesty is critical – because if they are not complying exactly with the researcher’s requirements, the results are even more meaningless.
And let’s not even begin to think about individual genetic variation – which on its own renders most training studies irrelevant – and let’s not even begin to look at socio-economic factors, cultural differences, differences in training backgrounds and athletic history… You just can’t do a simple training study, then get up at conferences and tell coaches and athletes it will also work for them.
The bottom line is that the vast majority of training studies are useless because the subjects are subjects for a relatively short time – they are human beings 24 hours a day and, as such, everything they do will potentially impact everything else… including the research study.
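To make the point concrete, here is a minimal, purely illustrative simulation of the pre/post design described above – it is not from the article, and every number in it is an assumption. The “new training element” is given zero true effect; the only thing that changes between the two tests is an uncontrolled confounder (say, better sleep during the study period), yet the design still reports a convincing “improvement”.

```python
# Hypothetical sketch: a pre/post "training study" where the manipulated
# variable has NO true effect, but an uncontrolled confounder (better sleep,
# less life stress, etc.) still produces an apparent gain in power output.
import random
import statistics

random.seed(1)

N_ATHLETES = 12              # small sample, typical of training studies (assumed)
TRUE_TRAINING_EFFECT = 0.0   # watts gained from the new training element: none
CONFOUND_EFFECT = 15.0       # watts gained from uncontrolled life factors (assumed)
MEASUREMENT_NOISE = 10.0     # day-to-day variation in a power test (assumed)

pre, post = [], []
for _ in range(N_ATHLETES):
    baseline = random.gauss(700, 60)  # individual baseline power in watts
    pre.append(baseline + random.gauss(0, MEASUREMENT_NOISE))
    post.append(baseline + TRUE_TRAINING_EFFECT
                + CONFOUND_EFFECT                 # the uncontrolled confounder
                + random.gauss(0, MEASUREMENT_NOISE))

gains = [b - a for a, b in zip(pre, post)]
mean_gain = statistics.mean(gains)
sd_gain = statistics.stdev(gains)
t_stat = mean_gain / (sd_gain / N_ATHLETES ** 0.5)  # paired t statistic

print(f"Mean 'improvement': {mean_gain:.1f} W (paired t = {t_stat:.2f}, n = {N_ATHLETES})")
print("What the conference slide would say: 'the intervention works'")
print("What actually happened in this simulation: an uncontrolled confounder.")
```

Run it and the stats come out “significant” – exactly the situation the list above describes, where the design simply cannot tell the training change apart from everything else going on in the athletes’ lives.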
And even more damning of training-based research studies is this – as a researcher you can pass on the results of the research study to coaches, athletes and sports science colleagues, but unless they can duplicate the research conditions exactly and precisely, with identically matched athletes, protocols, levels of engagement and environments, the results are meaningless.
Now we come to the real issue. The majority of researchers do the training research study and then try to apply it to the sport, rather than developing a real understanding of the sport, asking the sport what it wants to know, and then solving real performance problems for coaches and athletes.
The only people who really benefit from the traditional way of doing training research studies are the researcher (who gets to publish the work in a peer-reviewed journal) and the individual athlete/s involved in the study itself. But to take the results of a carefully controlled study and try to apply them to the broader sporting community is just plain wrong.
The results may be relevant to the person/s directly involved in the study but they are not necessarily generalizable to the broader population.
There are millions of potentially great research projects out there – but they involve working with real athletes and real coaches in real situations to solve real problems.
There are three different types of research and each one has its place and role in the overall scheme of understanding and learning:
- Academic-driven research, i.e. usually conducted by Universities for the purpose of generating research papers and obtaining grants;
- Practitioner-driven research, i.e. conducted by sports science professionals in settings such as Academies and Institutes of Sport and High Performance Centres;
- Coach- and athlete-driven research, i.e. simple, practical, applied research projects to solve actual performance problems.
Each of these research methods has strengths and weaknesses. Academic-driven research is high in reliability and validity but often lacks practical understanding and relevance. Coach- and athlete-driven research is simple, practical and immediately relevant but often lacks accuracy, reliability and validity measures.
Clearly, the one simple message is for the Academic, the Practitioner and the Athlete and Coach to work together in a performance partnership committed to finding the best possible solutions for performance problems in the shortest possible time.
So, the next time you come across a journal page boasting the breakthrough findings of a new training study, let me know. I will send you the link to a great site which teaches you how to make over 100 different types of paper airplane so you can put the page to good use.
And stick the muffin on top so we get rid of the two biggest cons in sport at the same time!
Wayne Goldsmith
6 Comments
Ben Rattray · February 8, 2010 at 2:59 pm
what shitty journals do you read?
Can’t say I agree with much here. Rarely does research make wild claims; the media interpretation of it, however, often twists and sensationalises, as do many of the less informed coaches/practitioners/athletes (similar to what you have just done). I don’t think the research is bad at all; the communication of it is no doubt crappy most of the time.
Wayne Goldsmith · February 9, 2010 at 7:26 am
Thanks Ben.
I think we agree to disagree on this.
More than once I have listened to researchers present claims about their research and its capacity to enhance the competition performance of athletes. Yet, most of the time, they lack an understanding of the multi-disciplinary nature of competition performance and are coming at it from such a narrow perspective that what they present is of little relevance.
Everything is linked and inter-related: a change in technique might be supported by the findings of a research paper showing it increases speed in athletes, but all it takes is for the athlete to have a few nights of poor sleep, an emotional disturbance, a family issue, poor nutrition or some other complication and the benefits of the technique change are negated.
This is what the single discipline researchers don’t get – when it comes to competition performances, everything is inter-related and dependent on everything else.
Thanks again Ben – I value your contribution.
WG
Jeremy Pryce · February 8, 2010 at 6:49 pm
Good on ya Wayne! Any kind of research has to consider as many of the variables that influence the subject as possible, and the findings must always refer to the environment (circumstances) under which it has been conducted.
As a track coach, most of what I know and practice is based on trial and error and I’ve made a mess of many athletes along the way. However, evaluation has led me to my current beliefs and anything I “add on” taken from research has to concur with my value system. Not to say that I hold universal truth. But my truth is still my own.
Wayne Goldsmith · February 9, 2010 at 7:18 am
Thanks Jeremy,
I am a huge fan of research, innovation, creativity and new ideas in the pursuit of excellence in sport, but some stuff I read is of such limited value to athletes and coaches that I cannot believe it has been published or funded. Research for research’s sake, which aims to increase the knowledge base of humanity, should be done – but in a University setting. Research which aims to enhance the competition performance of athletes should be practical, applied, commonsense, relevant and targeted at solving performance problems quickly – so that the athletes gain a competitive advantage over their opposition. Isn’t that what it’s all about?
WG
mike · September 14, 2010 at 12:05 am
“Assumptions (note the first three letters of the word assumptions by the way).
The big hole…”
Not sure that was an intentional transition but it made me shoot Dr. Pepper out my nose.
Great points – there is so much good that research can do, and at times researchers put time and energy into trying to reinvent the wheel.
Wayne Goldsmith · September 14, 2010 at 7:26 am
Hi Mike.
The point of the article was not that all research is bad but that training studies conducted from a single-discipline point of view are ridiculous.
We test. We change one thing. We re-test and find some sort of effect. Then we make outrageous claims about the impact of the measured effect on competition performance.
For example: we test an athlete’s power. We add some power exercises to their gym work. We re-test and find an increase in power. Then a sports scientist gets up at a conference and says, “Our results suggest that doing these exercises will improve competition performances” – it is insane.
It is like telling someone that reducing their salt intake will increase their life expectancy when they are 200 pounds overweight, smoke, have high levels of stress, suffer from depression and do no exercise. Everything is connected, inter-related and multi-factorial.
It does not make any sense, yet I have received more negative emails from this post than from any other I have ever written – all the negatives coming from either Academics or institutionalised sports scientists. No big surprises there!
As for the “hole” comment – I don’t dare do anything like that. There is a group of Zealots in the south of the US who monitor the internet and send me a rude email threatening to blackball me anytime I use words they don’t like so it was unintentional.
WG