By Kent Haden, Founder and CEO of Resilience Software
This week I had the pleasure of training a group of fellowship directors on how to use our T-Res software. The focus was on trialing mobile assessments in about 10 U.S. training programs.
One topic I got a bit fired up about was the way some organizations encourage the use of counts as a way to ensure competency. In many training programs, people learning technical skills are required to perform each skill a minimum number of times, as a way to track competence.
In discussing this, I was reminded that performance counts are an inefficient and sometimes ineffective way to ensure competence, partly because there is so much variation in learner experience and aptitude.
At the end of the day, performance counts can really only tell you a couple of things:
1) Does the learner have any experience at all? If they have none, the directors have no documented evidence and no idea whether the learner is competent at that skill.
2) Have they practiced the skill as much as you want them to? The target counts are often arbitrary, though some are based on evidence.
If we want performance counts to be used as a universal way of measuring competency, then they have to be based on an average learner. As Todd Rose pointed out in his article ‘When U.S. Air Force Discovered the Flaw of Averages’, very few people are ‘average’.
As a very rough analogy: imagine if a restaurant rating service only told you how often the rater had eaten there, not how good the experience was. Sure, it tells you something, but not what you really want to know.
Here are some of the risks of using counts to ensure competence:
- The learner takes more time than anticipated to learn some skills. They may complete the count and still not be competent.
- The learner takes much less time than anticipated to learn some skills, and could be spending that time learning other skills instead.
- The cases did not represent the range of risks (e.g., comorbidities) the learner will have to deal with in practice.
- The experiences did not represent the range of situations they may face in the real world; e.g., none were emergency cases or performed under high pressure.
- They may be proficient at performing technical skills but lack the range of skills needed to be a broadly competent practitioner.
What is the solution? I believe an assessment system like T-Res, which allows the documentation of teacher-observed competence, gives directors a better alternative to counts. This approach, also called Workplace-Based Assessment (WBA), is still relatively new, but it addresses a problem that has challenged medical education since it began.
While we won’t be able to change the way programs measure competence overnight, I felt really proud of what we have built, and we got very positive feedback from the participants. This topic is important to me, so I will provide a follow-up post that goes into more detail about using observations to ensure competence.
Read Our Follow-Up Post – Assessing Competency, Beyond Counts