by Dominique P Bureau, member of the IAF Editorial Panel
First published in International Aquafeed, July-August 2015
Ten heads and ten tails: Dr Young Cho’s parables about making sure results are adding up
Each year, I have the chance to supervise many graduate students, carry out peer review of scientific publications, host foreign scientists and visit the research and development personnel of various public and private institutions and research facilities in different parts of the world. During these numerous interactions, I am given the chance to review the results of exciting research projects.
I enjoy discussing results: what they mean, how they are making the field of aquaculture nutrition evolve, and so on. Strangely enough, however, I now find that most of my attention and time is devoted to verifying the reliability of results and to troubleshooting problems. I am slowly but surely becoming highly skeptical right from the start!
As a PhD student at the University of Guelph a couple of decades ago, I studied under the mentorship of Dr C Young Cho, a colourful “no-nonsense” scientist who taught me much about the process of science and research. Dr Cho retired 15 years ago and I often reminisce about the things he used to tell the young grad student I was. He always had many vivid and compelling real-life stories or fables to share.
When discussing research results, he once told me:
“Someone has 10 fish and this person cuts each fish in half and throws them in a cooking pot. The person should therefore have 10 heads and 10 tails in his pot. Now, the person counts the pieces and finds 11 heads and 9 tails. He may only be off by 10 percent, but there is something fundamentally wrong going on!”
That was Dr Cho’s whimsical way of telling me that results, whether from a chemical analysis or from a research trial, should be logical, and that biological or analytical variability is sometimes a convenient excuse for relatively poor work.
To illustrate with an example: in recent months, I have had the chance to review the results of a number of digestibility trials, carried out by my own research group or by collaborators, or examined during peer review of scientific manuscripts for journals. Until a few years ago, I had not realised all that could go wrong with estimating the apparent digestibility of nutrients in diets and feed ingredients! And no, I am not talking about the methods used for collecting the fecal material. The fish nutrition community has been debating fecal collection methods for years, and yet sometimes overlooks more basic issues.
When carrying out a digestibility trial, a digestion indicator (e.g. chromic oxide, yttrium oxide) is generally carefully incorporated into the experimental diets at a pre-determined concentration (e.g. 0.5 percent, 100 ppm). However, for a good 30 percent of the digestibility results (sample analyses) that I review each year, the concentration of the digestion indicator measured (or reported by the lab) in the experimental diets does not concur with the level that was incorporated into the diet. How can this be?
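This kind of check is trivial to automate. Below is a minimal sketch (all values are hypothetical, and the 10 percent tolerance is an assumption on my part, not a formal standard) that flags diets whose measured indicator concentration strays too far from the target inclusion level:

```python
# Sanity check: does the measured digestion-indicator concentration in each
# diet agree with the level that was actually weighed into the mix?
# All values below are hypothetical, for illustration only.

target_pct = 0.5  # e.g. yttrium oxide included at 0.5% of the diet

measured_pct = {"Diet 1": 0.49, "Diet 2": 0.51, "Diet 3": 0.38}  # lab results

for diet, measured in measured_pct.items():
    deviation = (measured - target_pct) / target_pct * 100
    flag = "OK" if abs(deviation) <= 10 else "CHECK: mixing or analysis problem?"
    print(f"{diet}: measured {measured:.2f}% vs target {target_pct:.2f}% "
          f"({deviation:+.0f}%) -> {flag}")
```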
In digestibility trials, as in most other nutrition trials, the experimental diets are combinations of different ingredients, each included at a pre-determined level and blended to form a homogeneous mix. Consequently, the nutrient content of a diet sample should reflect the weighted average of that nutrient's concentration in the different ingredients used. Again, it is surprisingly common to see chemical analysis values for experimental diets that are not a reflection of the weighted average of the nutrient composition of the ingredients!
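The same logic can be expressed as a quick weighted-average calculation: given the formulation and the analysed composition of each ingredient, the expected nutrient content of the diet is easy to predict and compare against the analysed value. The formulation and composition figures below are hypothetical, purely for illustration:

```python
# Expected crude protein content of a diet as the weighted average of the
# crude protein content of its ingredients (hypothetical formulation).

formulation = {          # inclusion level (fraction of diet)
    "fish meal":    0.30,
    "soybean meal": 0.25,
    "wheat":        0.25,
    "fish oil":     0.15,
    "premix":       0.05,
}
crude_protein = {        # analysed crude protein of each ingredient (%)
    "fish meal":    68.0,
    "soybean meal": 48.0,
    "wheat":        12.0,
    "fish oil":      0.0,
    "premix":        0.0,
}

expected_cp = sum(formulation[i] * crude_protein[i] for i in formulation)
print(f"Expected diet crude protein: {expected_cp:.1f}%")
# An analysed value far from this weighted average signals a problem with
# diet preparation, sampling, or the chemical analysis itself.
```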
Every nutritionist knows that (gross) energy is a property of nutrients. Consequently, the apparent digestibility coefficient (ADC) of gross energy (GE) should be the weighted average of the ADCs of the crude protein, lipids and carbohydrate of the feed, each weighted by that nutrient's contribution to dietary gross energy. In several digestibility studies I have reviewed in recent years, the ADC of GE is not a reflection of the weighted average of the ADCs of protein, lipids and carbohydrate.
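Put in numbers, the check looks like the sketch below. The diet composition and ADC values are hypothetical; the gross energy densities (roughly 23.6, 39.5 and 17.2 kJ/g for protein, lipid and carbohydrate) are commonly cited approximations, not values from any particular study:

```python
# ADC of gross energy predicted as the energy-weighted average of the
# macronutrient ADCs. Composition and ADCs are hypothetical; gross energy
# densities (kJ/g) are commonly cited approximations.

ge_density = {"protein": 23.6, "lipid": 39.5, "carbohydrate": 17.2}
diet_fraction = {"protein": 0.45, "lipid": 0.20, "carbohydrate": 0.25}
adc = {"protein": 0.90, "lipid": 0.95, "carbohydrate": 0.60}  # measured ADCs

energy = {n: diet_fraction[n] * ge_density[n] for n in ge_density}
total_ge = sum(energy.values())
predicted_adc_ge = sum(energy[n] * adc[n] for n in energy) / total_ge

print(f"Predicted ADC of GE: {predicted_adc_ge:.2f}")
# A measured ADC of GE well away from this prediction is the nutritional
# equivalent of 11 heads and 9 tails in the pot.
```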
Where is the problem? Does it lie in the (careless) preparation of the experimental diets or in the poor reliability of the chemical analyses carried out? The latter is generally the most probable cause. Mathematical or calculation errors are also not uncommon.
I learned from Dr Cho that one has to be skeptical about one's own results and that every researcher is responsible for ensuring that the results are logical. This doesn't mean that one has to be omniscient or know from the start what results to expect in every case. However, there are a number of aspects that need to add up. The process by which someone determines whether the different elements add up can actually be an effective method for verifying the quality and reliability of research endeavours.
Agree or disagree? Let me know! dbureau@uoguelph.ca