The concepts of Equality and Equity relate to Fairness and Bias in credential program development and evaluation in ways that are not always clear, especially since the former two are commonly treated as societal issues, while the latter two are technical concepts that live in the credentialing world. I aim here to clarify the connections.
We read about Equality and Equity a lot these days, and an insightful graphic explaining the difference between the two has circulated widely on social media. Both are noble approaches, but they seek markedly different outcomes. On the one hand, Equality is about sameness: it promotes fairness and justice by giving everyone the same thing. But this works only if everyone starts from the same place, so in the graphic, Equality works only if everyone is the same height. On the other hand, Equity is about fairness: making sure people have access to the same opportunities. Sometimes our history and differences create barriers, and as the graphic shows, if we provide people with what they need to overcome those barriers so that all people have the same opportunity (to view the game, in this example), then we are being equitable. But how do these concepts relate to the world of credentialing?
Very simply, both concepts come into play. For example, licensure and certification bodies, like my clients, want to make sure that all candidates have the same learning and testing materials – that’s equality. But on a larger front, these credentialing organizations need to make sure that all groups of candidates – across geographic regions, languages, socio-economic status, and so on – have the same opportunity to access the learning and testing materials – that’s equity. These concepts, as explored so far, are more social than technical. In the credential testing world, we have some important technical considerations to think about as well.
Two very important technical concepts in credentialing focus on the validation of scores from associated tests: Fairness and Bias. These terms are sometimes mistakenly used interchangeably, especially by the public (Zieky, 2006). Test fairness is a value-laden judgement based on several factors, including moral, ethical, social, and sometimes legal standards (Thorndike & Thorndike-Christ, 2010), and is often expressed in guidelines for fairness in testing (Zieky, 2006). Tests that are unfair produce systematic differences in scores for predefined groups of examinees – that is bias. So, test bias is a psychometric property of test scores; it is the quantitative evidence that supports claims of test unfairness (Furr & Bacharach, 2008).
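To make "systematic differences in scores" concrete, here is a rough, purely illustrative sketch (not drawn from the cited sources) of a first-pass screening statistic: the standardized mean difference (Cohen's d) between two predefined groups of examinees. The score lists are hypothetical.

```python
import math
import statistics

def standardized_mean_difference(group_a, group_b):
    """Cohen's d: the gap between group mean scores, expressed in
    pooled-standard-deviation units.

    A large value flags a systematic group difference worth
    investigating; on its own it does NOT prove bias, since genuine
    differences in ability also move group means."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * statistics.variance(group_a)
                  + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
    return (statistics.fmean(group_a) - statistics.fmean(group_b)) / math.sqrt(pooled_var)

# Hypothetical exam scores for two candidate groups.
group_a = [72, 78, 81, 85, 90, 76]
group_b = [65, 70, 68, 74, 79, 71]
d = standardized_mean_difference(group_a, group_b)
print(f"standardized mean difference: {d:.2f}")
```

In practice such a screening is only the starting point: as the next paragraphs note, observed gaps must be traced back to their source before any claim of unfairness can be supported.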
Fairness, in the credential testing realm, is a type of validity evidence resting on the principle that test results are based solely on the ability of candidates to provide safe, competent practice. It is important to note here that not only examination items but also exam policies may hinder the performance of pre-defined groups of candidates based on factors such as gender, language, culture, ethnicity, and disability. Any exam item or examination policy that systematically advantages or disadvantages groups of candidates is said to be unfair. Thus, test fairness is more akin to the social concept of equity.
On the left side of the graphic, the crates provided to boost sightlines may represent equal access to learning material provided to boost test performance. But if learners – potential examinees – start their learning in different places (just as the figures in the graphic differ in height), the crates, or the learning materials, do not help everyone. For example, if learning material is made available online only, then prospective examinees with limited internet access are helped less than those with ready internet access. The provision of the online learning materials is equal but not equitable; therefore it is not fair, and it could bias the test scores of examinees with limited internet access. For everyone to benefit from the learning material, symbolized by the crates, prospective examinees must be given the learning materials in a way that addresses their needs – the right side of the graphic.
How might we identify an unfair or inequitable exam? Statistical comparisons of the performance data of pre-defined groups of examinees (grouped by gender, ethnicity, age, and the like) who should, in theory, have equivalent levels of ability can help identify the presence of bias in the exam results. The source of the bias then needs to be found and addressed. It is the responsibility of the credentialing body to provide evidence that exam results are not biased against subgroups; ultimately this requirement is manifested first in test development and analysis guidelines that require fairness studies to be conducted periodically (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014).
When I work with my clients, whether they be bankers, lawyers, IT professionals or accountants, I always provide them with guidance to promote equity in their credentialing programs – if not for the social and altruistic reasons, then for the simple reason that an inequitable or unfair program creates bias in results that seriously undermines the validity of any inference drawn from them. These credentialing programs are too expensive to be threatened by avoidable unfairness; moreover, the cost on a societal level of unfair programs is even greater.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Furr, R. M., & Bacharach, V. R. (2008). Psychometrics: An introduction. Thousand Oaks, CA: Sage.
Thorndike, R. M., & Thorndike-Christ, T. (2010). Measurement and evaluation in psychology and education (8th ed.). Upper Saddle River, NJ: Pearson Education.
Zieky, M. J. (2006). Fairness review in assessment. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 359-376). Mahwah, NJ: Lawrence Erlbaum Associates.