RAECON BLOG POSTS

THOUGHTS ON AI, BIAS AND THE NEED FOR HUMAN SUBJECTIVITY

About a year ago, I had the privilege of being invited to be a panel discussant for the Humber College Research Analysts’ Program Spring Symposium. Each discussant was asked to comment on the following question: What new ethical concerns are being raised in market, social, and evaluation research due to the advent of AI and automation? The points I made in my remarks are below. But after a year, do I think differently? The answer is “No.” I just returned from a trip to New York, where I was working with a client that specializes in trade and economic sanctions. The client invited several subject matter experts for me to work with over two intense days. We were working on the foundational steps of building a certification exam for international sanctions specialists. I learned much from these highly educated, intelligent, seasoned experts in the field. They worked for […]

EQUITY AND EQUALITY, FAIRNESS AND BIAS: MAKING CONNECTIONS IN CREDENTIALING

The topics of Equality and Equity relate to the concepts of Fairness and Bias in credential program development and evaluation in ways that are not always clear, especially since the former two are commonly treated as societal issues, while the latter are more technical concepts that live in the credentialing world. I aim here to clarify the connections. We read about Equality and Equity a lot these days, and in recent times a very insightful graphic explaining the difference between the two has appeared in various social media outlets. Both are noble approaches, but they seek to accomplish markedly different outcomes. On the one hand, Equality is about sameness: it looks to promote fairness and justice by giving everyone the same thing. But this works only if everyone starts from the same position. So, in the graphic, Equality works only if everyone is the same height. On the other hand, Equity is about fairness, […]

PUTTING EVALUATION USERS FIRST TO BUILD EVALUATION CAPACITY

Recently I had the opportunity to write a piece for Canadian Government Executive Magazine, an electronic and print publication that reaches some 70,000 readers per month. Many of these readers are key players in evaluation in Canada, from program managers to evaluation sponsors and consumers. Since we work predominantly in the private sector, many of the organizations we work with rely heavily on performance measurement metrics to gauge success on key initiatives. There is nothing wrong with this approach, but sometimes a different lens is necessary to gain a more complete look at program operation and outcomes. This evaluative lens is new to many of our clients, so we help them build the capacity to use the lens and ultimately use the data to make informed program-related decisions. Program evaluation can be a valuable source of business intelligence for our clients, and we are best equipped with our experience […]

TIPS ON DATA VISUALIZATION: THE NEXT STEP ON THE JOURNEY

I have posted previously about my Transformation in Data Visualisation, and without that paradigm shift, I believe I would not be adding as much value to clients as I do now. This post is about the next step on the journey for me. I believe a lot of you are in the same boat, so the message of this post is aimed at helping you along, as we are in this together. I am a very visual person, from my learning style to my photography hobby to how I see the world. So for years, when I was writing reports, I was actually out of my comfort zone. Then, after exploring Garr Reynolds’s Presentation Zen (http://www.presentationzen.com/) and the works of Edward Tufte (http://www.edwardtufte.com/tufte/), participating in Stephanie Evergreen’s data visualisation workshop (http://stephanieevergreen.com), and reading the works of various other data visualisation bloggers, I realized that most people are out of their comfort zone when reading text, and visualisation of […]

WHY DO WE CONDUCT JOB ANALYSES FOR CERTIFICATION EXAM DEVELOPMENT?

In one word: Validity. When working with my clients, from professional accountants to bankers to IT specialists, whenever we are building new or updated certification exams, we conduct a job analysis early in the project. Now, our focus is a little different from that of the Human Resources world’s job analyses, but our methods are pretty much the same. Our perspective as credentialing program consultants is to build the validity evidence for the scores associated with the given certification exam. In other words, we want to make sure that if examinees are credentialed as a result of a successful score on the exam, then those scores reflect that the examinees actually demonstrated the requisite knowledge, skills, and attitudes required to perform a given job in a real-world setting. “So how does that work?” you may be asking. Our job analysis helps us identify the content domains for the exam, and from these content […]
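To make the step from job-analysis ratings to exam content concrete, here is a minimal sketch of one common way subject-matter-expert ratings can be turned into blueprint weights. The domains, rating values, and the importance-times-frequency weighting scheme are all hypothetical illustrations, not our actual methodology or any client’s data.

```python
# Illustrative sketch: deriving exam blueprint weights from hypothetical
# job-analysis ratings. Domains, values, and the weighting scheme are
# assumptions for this example only.

# Mean SME ratings per content domain on 1-5 scales (made-up numbers).
domains = {
    "Financial Reporting":   {"importance": 4.6, "frequency": 4.2},
    "Management Accounting": {"importance": 4.1, "frequency": 4.5},
    "Ethics and Governance": {"importance": 4.8, "frequency": 3.1},
    "Taxation":              {"importance": 3.9, "frequency": 3.4},
}

# One common criticality index: importance multiplied by frequency.
criticality = {d: r["importance"] * r["frequency"] for d, r in domains.items()}
total = sum(criticality.values())

# Each domain's share of total criticality becomes its share of exam items.
for domain, score in sorted(criticality.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: {score / total:.1%} of exam items")
```

A table like this is typically the bridge between the job analysis and the test blueprint: the more critical a domain is to real-world practice, the more items it earns on the exam.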

BIRTH OF A CERTIFICATION EXAM

As a consultant, I work on various projects, and when the work is completed, I move on to the next ones, often without being able to see the fruits of my labour, such as improved programs, new program growth, and the like. But recently I had the tremendous pleasure of seeing the culminating product of a project I was fortunate enough to be a contributing part of. My consulting business started out in 2008, and one of my first clients was the Society of Certified Management Accountants of Canada (CMA Canada). My team and I conducted some work on their national certification program’s assessments. Over the years, I worked on several smaller projects for them, all relating to their national assessments. In 2011, my work with CMA Canada was instrumental in landing a contract with CICA (the Canadian Institute of Chartered Accountants). All the while, I was hearing talk of “unification” between Canada’s three governing bodies […]

PUTTING “PRESENTING DATA EFFECTIVELY” TO WORK: MY TRANSFORMATION IN DATA VISUALISATION

As a trained educator, I certainly see the intrinsic value in continuous learning. As an entrepreneur and practitioner in an evolving field, continuous learning through professional development (PD) is not only essential to survival; it offers opportunity for competitive advantage. I regularly participate in PD courses at industry conferences, looking to stay sharp and stay on the leading edge. A while back, at the American Evaluation Association annual conference, I enrolled in the highly sought-after Presenting Data Effectively two-day workshop conducted by Stephanie Evergreen. I had purchased her book of the same name and figured the workshop would be a good way to familiarize myself with the concepts. Before the conference and workshop, Stephanie suggested we bring a report or presentation to work with. An actual working workshop certainly appealed to me! I was going to take full advantage of this opportunity, and I had just the type of report […]

CERTIFICATION OR CERTIFICATE? … DON’T CONFUSE THEM

In my previous blog post, I discussed the difference between “Accreditation” and “Certification.” Without repeating that post, its essence was that an individual cannot be accredited. Individuals, however, can be certified, meaning they earn a certification. But what can they be certified for, and what do they get a certificate for? One of the problems organizations experience when they begin to develop certification programs is the lack of a common understanding of what “to certify” actually means. The Institute for Credentialing Excellence has the best and most recognized definitions under the credentialing umbrella, including certification. My last blog post detailed some of these definitions. That blog’s impetus was the claim of a notable organization, advertising on one of my professional association web sites, that it would “accredit” participants of a weekend workshop. I have addressed why they can’t “accredit” individual participants (see previous blog post), but now will tackle […]

DO YOU KNOW THE DIFFERENCE BETWEEN ACCREDITATION AND CERTIFICATION?

As I sit with clients and they speak to me about their “accreditation” needs, I listen intently, because the next part of the conversation tells me a great deal. The vast majority of the time, my clients have “certification” needs, not “accreditation” needs. I then know I must do some capacity-building with them, sometimes subtly, so we are all on the same page with our “credentialing” terminology… There is a profound difference between “accreditation” and “certification,” and those who work in the professions or in professional training and development, especially, should be aware of it. I have seen more than a few times that organizations offer “accreditation” to members or individuals who take their courses. For example, a professional association I belong to allows other organizations, as a resource, to post advertisements on its web site for training that may be of interest to the membership. I saw […]

MANAGING EVALUATIONS IN THE CORPORATE SECTOR: USING A MULTI-STEP METHOD

Please refer to the AEA365 blog for my short piece on managing evaluations and evaluation capacity building in the corporate sector: http://aea365.org/blog/?p=7047 The slide deck for the presentation at AEA 2012 is also available here: https://www.rae-consult.com/wp-content/uploads/2012/10/AEA-2012-Presentation-Ali.pdf

ERROR IN TESTING (AND SPORTS)

With the spring weather upon us (well, not really, but it’s supposed to be) and the start of baseball season, hockey season gearing up for the playoffs, and Champions League soccer (yes, I tend to watch a lot of sports), I have seen the element of human error by officials (whom we in the testing business will call judges at this point) come into play, sometimes leading to outcome-deciding moments. In the testing industry, or educational and psychological measurement, we understand that no test is perfect and that the element of measurement error ALWAYS comes into play! However, we think proactively and endeavour to minimize the impact of measurement error through the various methodologies available to us in the measurement field. So why, then, does the world of professional sports, with millions of dollars involved, fail to use the methodology at its disposal to minimize human error in officiating? We […]
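To give a concrete sense of what “no test is perfect” means in practice, here is a minimal sketch of the standard error of measurement (SEM) from classical test theory. The reliability and score values below are made up purely for illustration; they are not from any real exam.

```python
import math

# Classical test theory: SEM = SD * sqrt(1 - reliability).
# All values below are hypothetical, for illustration only.
reliability = 0.91     # e.g., an internal-consistency estimate for the exam
sd = 12.0              # standard deviation of observed scores
observed_score = 74.0  # one examinee's observed score

sem = sd * math.sqrt(1.0 - reliability)

# An approximate 95% band around the observed score: the examinee's
# "true" score plausibly lies somewhere in this interval.
low, high = observed_score - 1.96 * sem, observed_score + 1.96 * sem
print(f"SEM = {sem:.2f}; approximate 95% band: [{low:.1f}, {high:.1f}]")
```

Even with a reliability as high as 0.91, the band around a single score is several points wide on each side, which is exactly why high-stakes testing builds in safeguards rather than trusting one raw number.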

IS THERE A BALANCE BETWEEN QUANTITATIVE AND QUALITATIVE DATA IN THE SOCIAL SCIENCES?

In the world of so-called “hard science,” including the medicine, biology, and human physiology realms, the general notion is that more is better; that is, sample sizes must be sufficiently large to produce reliable results. But does this principle discount the quality of the sample, especially if the sample is small? Consider, for example, that just about everything that’s known about the hard science of paleoanthropology and the origins of the genus Homo is based on the analysis of fewer than 220 bones found since modern Homo sapiens thought it was a good idea to study our past (Wong, 2012). An adult human body, by the way, has 206 bones, so that’s a pretty small sample size, don’t you think? Yet, even in this “hard science,” it is widely accepted that those 220 bones tell us how modern humans came to be in the evolutionary sense. Well, […]
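For context, the statistical root of the “more is better” notion is that the standard error of an estimate shrinks with the square root of the sample size. A minimal illustration, using a made-up population standard deviation:

```python
import math

# Standard error of a sample mean: SE = sigma / sqrt(n).
# sigma is a made-up population SD, purely for illustration.
sigma = 15.0
for n in (10, 100, 1000):
    print(f"n = {n:>4}: SE of the mean = {sigma / math.sqrt(n):.2f}")
```

What the formula does not capture, and what the paleoanthropology example above illustrates, is how much information each individual observation carries.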

OUTSIDE THE BOX THINKING PAYS DIVIDENDS

Allowing a diversity of minds and creativity in the problem-solving process can lead to more effective solutions, including solutions to problems previously viewed as “impossible” to solve. A recent article in the Globe and Mail attests to this philosophy, one that we employ with our clients: http://www.theglobeandmail.com/report-on-business/economy/growth/how-outsiders-solve-problems-that-stump-experts/article2420003/ We have put the ideology presented in this article to use with our clients to develop cost-effective and productive solutions. As the article attests, it is often the “outsider” who has a clear vision of what the solution to a long-standing, perplexing problem should be.

THOUGHTS ON USING COLLABORATIVE EVALUATION STRATEGIES

April 9, 2010 I have been asked on several occasions when I would employ collaborative evaluation strategies and when other evaluation methods would suffice. I cannot think of a straightforward, simple answer. My choice of evaluation strategies depends on a number of factors. An important factor to consider is the anticipated or articulated level of involvement the client organization wants or needs for the evaluation to be successful. Here, the concept of success, in my mind and in the minds of some evaluation theorists, such as Michael Quinn Patton, is the utilization of evaluation results. But I digress, so let’s get back on track. In my experience, I have found that collaborative evaluation methods, such as participatory evaluation (see Earl & Cousins) and empowerment evaluation (see Fetterman), have been quite fruitful in enhancing the utility of evaluation findings. In fact, I prefer these methods to others just based on the […]

SCHOOL RANKINGS: TOO GOOD TO BE TRUE!

Recently the annual rankings for Ontario elementary and secondary schools were published by the Fraser Institute. The key message of this post is that any published ranking of schools is too good to be true. First of all, the Fraser Institute does not employ the psychometric or educational measurement professionals capable of formulating an educated, informative, and, most of all, accurate rating system that takes into account the myriad variables that play into student achievement, one of the key outcome measures the Fraser Institute utilizes in its rankings. Second, any credible psychometrician or educational measurement professional would not put their name on a system that ranks schools while relying heavily on parent-income and single-sex-environment variables. These variables, while fairly easy to measure, are difficult to account for in statistical models without the proper use of multilevel statistical models, using hierarchical linear […]
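For readers unfamiliar with the multilevel approach mentioned above, here is a minimal sketch of a two-level hierarchical linear model with students nested in schools. The particular predictors (a student-level covariate and school-level mean parent income) are illustrative assumptions, not the Fraser Institute’s model or any model we have fit:

```latex
% Level 1: student i in school j, with an illustrative
% student-level covariate X (e.g., prior achievement)
Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + r_{ij},
  \qquad r_{ij} \sim N(0, \sigma^2)

% Level 2: the school intercept varies with an illustrative
% school-level variable W (e.g., mean parent income)
\beta_{0j} = \gamma_{00} + \gamma_{01} W_j + u_{0j},
  \qquad u_{0j} \sim N(0, \tau_{00})
```

By partitioning variance into within-school and between-school components, a model like this can separate what a school contributes from the background characteristics of the students it happens to enroll, which is precisely what a simple ranking cannot do.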

PROGRAM EVALUATION UPDATE

This is a very exciting time for me as we move into the evaluations of three York University programs, all funded by the Ontario Ministry of Citizenship and Immigration. I will be blogging more on these education and training programs in the near future. We are also continuing our engagements with professional organizations and corporations for training/education and credential testing. Our work has been met with much positive feedback from clients. Each corporate or educational testing project is unique in terms of needs, but the general processes involved are very similar. Test development is a process that adheres to the same guidelines and best practices in many contexts. I will be delving into some exciting professional development in the near future. First, I will be attending the National Council on Measurement in Education (NCME) conference that is part of the larger American Educational Research Association (AERA) conference in […]