
More questions for the statisticians! Coefficients of variation

This topic has been archived, and won't accept reply postings.
 koalapie 06 Apr 2014
Hi!
Just wondering whether there are accepted ranges for CV that might indicate what counts as a very reliable/accurate, reliable, suspect, horrendous, or fatal way of taking measurements? Say <5% is very reliable, <15% reliable, etc.?

I realise there are probably at least a few other factors to consider, but a basic general guideline for this, or a pointer to a reference giving a simple overview for a non-statistician, would be greatly appreciated.
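
For concreteness, by CV I mean the coefficient of variation: the standard deviation divided by the mean, usually quoted as a percentage. A minimal sketch of the calculation in Python with numpy (the readings are made up):

import numpy as np

# Five repeated readings of the same quantity (made-up numbers)
readings = np.array([10.2, 9.8, 10.5, 10.1, 9.9])

# CV: sample standard deviation as a percentage of the mean
cv = readings.std(ddof=1) / readings.mean() * 100
print(f"CV = {cv:.1f}%")  # about 2.7%, 'very reliable' by a <5% rule of thumb
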
Cheers
 Banned User 77 06 Apr 2014
In reply to koalapie:

I think it depends on what you are measuring and how.. what subject etc. In biology, especially in wild populations, we can get huge variability between individuals, or even in the same animal over time.. moult cycles, reproduction, season etc..
OP koalapie 09 Apr 2014
In reply to IainRUK:

Googling turned up <5% good, >10% bad.
Anyone with more credible sources for such figures?
 Doug 09 Apr 2014
In reply to koalapie:

As Iain said, it depends on the context. I used to work with data on root production & the errors were always huge due to technical problems. At other times the 5 & 10% you found are pretty valid, although 5% seems to be widely accepted in many fields.
 pneame 09 Apr 2014
In reply to koalapie:

A 95% confidence level is the usual figure quoted - and statisticians seem to be rubbishing p-values en masse as being misused. Personally, I'm a huge fan of box plots, which give a nice graphical view of how tight / all-over-the-place the data is.
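
If anyone wants to try it, a minimal sketch with Python and matplotlib (the data are invented):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
tight = rng.normal(100, 5, 50)    # a low-variability measure (invented)
loose = rng.normal(100, 25, 50)   # an all-over-the-place one (invented)

# A box plot shows median, quartiles and outliers side by side
plt.boxplot([tight, loose])
plt.xticks([1, 2], ["tight", "all over the place"])
plt.ylabel("measurement")
plt.show()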
 Banned User 77 09 Apr 2014
In reply to koalapie:

Yeah, I did my PhD on stress in wild populations of lobsters.. lactic acid rises from almost zero, so its variability was much smaller; glucose varied from a mid level which was dependent on many factors, so it was hugely variable..

I'd be suspicious of any definitive statement..

I have also worked with mouse biologists, where every mouse is almost genetically identical, fed and exercised exactly the same.. the variability they dealt with was incomparable to that faced by people working on wild populations, where we have little idea of the animals' history.
 Shani 09 Apr 2014
In reply to koalapie:

Check out these links which you may find interesting/useful:

Pt1: http://www.dcscience.net/?p=6473
Pt2: http://www.dcscience.net/?p=6518
 Banned User 77 09 Apr 2014
In reply to pneame:

Yeah box plots are great as a first look to see where the data lies..
OP koalapie 09 Apr 2014
In reply to Doug: I guess it's to do with the statistical validity of a measure for research purposes: whether the measurement technique itself has too much variability to be used to detect potentially real change. So in my example, two measurement techniques are compared across 5 tests; one measure consistently has a CV of 5% and the other is consistently around 13%, but the latter is much simpler, quicker and less expensive. There are no significant differences between either measure and the different conditions within each of the 5 test conditions, just the consistent difference in CV described above. I'm trying to build a case that the second measure isn't great, but that it's consistently OK-ish given its other advantages. Is this reasonable given the above numbers and trends?
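
To make the worry concrete, here's a rough simulation in Python (all numbers invented; suppose there were a real 10% shift between conditions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5                # five tests, as above
baseline = 100.0     # invented baseline value
shift = 0.10         # a hypothetical real 10% change

for cv in (0.05, 0.13):
    # Each technique's noise is proportional to its CV
    a = rng.normal(baseline, cv * baseline, n)
    b = rng.normal(baseline * (1 + shift), cv * baseline, n)
    t, p = stats.ttest_ind(a, b)
    print(f"CV {cv:.0%}: t = {t:.2f}, p = {p:.3f}")

With only five tests, the 13% CV technique will often fail to show a shift that the 5% one picks up, which is the sense in which it's 'OK-ish but not great'.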

OP koalapie 09 Apr 2014
In reply to Shani:

Wasn't there an article recently proving the Atkins diet was worse for your health than smoking!?
OP koalapie 09 Apr 2014
In reply to IainRUK:

Thanks, that sounds quite interesting! Sorry I didn't make that clear: the context was the statistical variability (CV) of the measurement procedure used to measure the change, rather than the change itself, if that makes sense.
 Shani 09 Apr 2014
In reply to koalapie:
> (In reply to Shani)
>
> Wasn't there an article recently proving the Atkins diet was worse for your health than smoking!?

I wouldn't know as I don't buy into The Atkins Diet. But I'd doubt the quality of any study equating diet to the negative impact of smoking! (Unless you can prove otherwise)
 Banned User 77 09 Apr 2014
In reply to koalapie:

So you mean repeated measures of, say, the same animal at the same time..

So taking 100 blood tests on the same batch of blood and looking at the variability?

That was always the first step in validating any new assay.. I'd generally look at 90-95% as being acceptable, but again it depended on the magnitude of the change I was likely to be measuring...

My PhD was very industrial, so it was about making quite cheap assays of stress which we could use in the field/industrial setting.. so if I could do x% more tests with 80% accuracy in a given time, that was preferable to another test which may have been 99% accurate but could only be done later on.

For me, lactic acid would rise 10-20 fold in stressed animals, while unstressed animals were at 0-0.5 mmol/L.. so I'd sometimes use blood sticks, which were much less reliable and more variable but would give me enough to classify animals as unstressed or stressed. For them I think I accepted around 75% accuracy.. much less reliable than a blood test, but fit for purpose.
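
In code, that first validation step looks something like this (Python; the batch values are invented):

import numpy as np

rng = np.random.default_rng(42)
# Say 100 assays run on the same batch of blood
# (invented values, nominally glucose in mmol/L)
batch = rng.normal(4.0, 0.3, 100)

# Within-batch CV: how variable is the assay when the sample isn't?
cv = batch.std(ddof=1) / batch.mean() * 100
print(f"Within-batch CV = {cv:.1f}%")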
 pneame 09 Apr 2014
In reply to IainRUK:

Is it a good rule of thumb when looking at variables that go from near zero to much larger numbers to transform to a log scale?

Of course you can usually see this right away when you do the transformed vs. untransformed analyses!
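
For example, in Python (invented right-skewed data):

import numpy as np

rng = np.random.default_rng(7)
# Values spanning near zero to much larger numbers (invented)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# log1p = log(1 + x), which copes with values at or near zero
logged = np.log1p(raw)
print(f"raw spans {raw.min():.2f} to {raw.max():.2f}; "
      f"logged spans {logged.min():.2f} to {logged.max():.2f}")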
 Oujmik 09 Apr 2014
In reply to koalapie:

The important thing is the relative magnitude of the effect you are looking to observe and the error (by which I mean random variation in your measurements resulting purely from imprecision in the measurement technique).

For example, if dehydration can kill a person when they have 5% less body water than usual, a measure with an inherent CV of 10% is not going to be very useful clinically.

However, if people can actually tolerate a reduction of 50% before dying, the same measure could be a life-saver.
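
A back-of-the-envelope version of that example in Python (units arbitrary):

normal_water = 100.0   # baseline body water, arbitrary units
effect = 0.05          # the 5% drop we need to detect
measure_cv = 0.10      # the technique's CV

signal = effect * normal_water          # 5 units of real change
noise_sd = measure_cv * normal_water    # 10 units of measurement noise
# One SD of noise is twice the signal: a single reading can't
# distinguish dangerous dehydration from instrument error
print(f"signal {signal:.0f} vs noise SD {noise_sd:.0f}")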
 Banned User 77 09 Apr 2014
In reply to pneame:

It depends; always try to transform and see how the data looks.. tbh I just do what the reviewers want.. one minute they want it transformed, the next raw data.. as long as it's published I don't care. The data is the data.

As long as it's continuous data you can do what you want with it.

Sometimes I do fold changes from control.. that can make differences between species easier to analyse.
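
For example (Python, invented numbers):

import numpy as np

control = np.array([0.30, 0.40, 0.35])   # invented control values
treated = np.array([4.1, 5.2, 4.7])      # invented stressed values

# Fold change relative to the control mean puts different
# species or assays on a comparable scale
fold = treated / control.mean()
print(fold)   # roughly 12-15 fold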
 pneame 09 Apr 2014
In reply to IainRUK:

Agreed - always best to be nice to reviewers, unless they are spectacularly wrong. I think this is one of the benefits of open sharing of data; it doesn't matter quite so much what mad ideas reviewers have....
OP koalapie 10 Apr 2014
In reply to Oujmik:

Thanks, yes, that seems a really good way of describing it. I think I have an article that deals with this in great detail, but I can't be sure. Cheers
