In the world of the so-called “hard sciences,” including medicine, biology, and human physiology, the general notion is that more is better; that is, sample sizes must be large enough to produce reliable results. But does this principle discount the quality of the sample, especially if the sample is small?
Consider, for example, that just about everything known in the hard science of paleoanthropology about the origins of the genus Homo is based on the analysis of fewer than 220 bones found since modern Homo sapiens thought it was a good idea to study our past (Wong, 2012). An adult human body, by the way, has 206 bones … so that’s a pretty small sample size, don’t you think? Yet even in this “hard science,” it is widely accepted that those 220 bones tell us how modern humans came to be in the evolutionary sense. Well, that is, until recently, when another find of a similar number of bones sparked a debate. But the point is, small samples are not necessarily bad, especially when the quality of the sample is better than any alternative.
So why is it that social scientists like myself are sometimes pressured to have an abundance of quantitative data to complement the qualitative data when we conduct program evaluations? This is not usually a problem for me, as I am more of a quant guy anyway, but at times my colleagues and I struggle to strike a balance between quantitative and qualitative data in our work on program evaluations. In general, a mixed-methods approach (using both quantitative and qualitative data) serves everyone well, but is there a balancing point? Should we use more of one than the other?
The programs we work with often dictate the types of data collection we conduct. But we sometimes hear from clients that “we need more numbers” or “we need more stories.” Obviously, setting up a data collection plan prior to actual data collection is prudent, and discussions that take place at this point in the evaluation will save everyone headaches later.
However, what do you do with a client who insists that one type of data should have priority over the other when the situation does not warrant it? At this point, as social scientists and professional evaluators, our professionalism and expertise must take over. Educating the client is a good start; subtle (and even not-so-subtle) reminders that we were hired for our expertise and experience in these matters, and that our professional judgment should be trusted, are another option.
I understand there is no magic formula for striking the balance between quantitative and qualitative data, but for social scientists, a mixed-methods approach is always better. Careful consideration of all data sources and ensuring high-quality (meaning good) data will always trump the effort to strike a mythical balance between data types.
Wong, K. (2012). First of our kind: Sensational fossils from South Africa spark debate over how we came to be human. Scientific American, 36(4), 31–39.