I was listening to a discussion about a study titled “Bullshitters. Who Are They and What Do We Know about Their Lives?”. Most of the coverage of the study has focused on which groups of people turned out to be more or less truthful, while merely describing the method used to find that out. The discussion on The Weeds podcast actually dug into the study’s method and some of its weaknesses. This got me thinking a bit further.
‘Bullshitters’ are individuals who claim knowledge or expertise in an area where they actually have little experience or skill.
The study made use of the PISA test, a standardised test taken by teenagers in many countries around the world. The researchers focused on just over 40,000 teenagers from different English-speaking countries. While the main test assessed mathematical ability, students were also given a background questionnaire to fill in. One of the questions was:
Thinking about mathematical concepts: how familiar are you with the following terms?
This was followed by a list of 16 mathematical terms. The trick was that three of them were made up. The researchers identified the bullshitters by looking at which students rated themselves as familiar with the terms that didn’t exist.
Would you lie in an exam?
As was pointed out in the podcast, an exam is a very specific and odd environment. During an exam, I think I would find myself more bullish and confident, even if that confidence is misplaced.
In most exams, there is no downside to getting something wrong rather than stating ‘I don’t know’. If you don’t know something but have a guess, you might get lucky and pick up some points. The worst case is that you get no points, which is the same as not answering at all. From a game theory perspective, you really ought to answer.
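The arithmetic behind this can be sketched in a few lines. Assume (hypothetically) a question with several options, a fixed mark for a correct answer and no penalty for a wrong one; under those rules a blind guess always has a higher expected score than abstaining:

```python
def expected_guess_score(options: int, correct_mark: float, wrong_penalty: float = 0.0) -> float:
    """Expected score from a uniformly random guess on one multiple-choice question."""
    p_correct = 1 / options
    return p_correct * correct_mark + (1 - p_correct) * wrong_penalty

# With 5 options, 1 mark for a correct answer and no penalty for a wrong one,
# a blind guess is worth 0.2 marks on average, versus 0 for not answering.
print(expected_guess_score(options=5, correct_mark=1))
```

The option count and mark values here are illustrative, not taken from any particular exam; the point is only that with a zero penalty the expected value of guessing is always positive.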
I have used this technique in many exams over the years and even did so last year at work. One of our vendors provided our team with a 40 minute online test to help determine which training courses would be useful¹. It was a multiple-choice test with five answers per question. Critically, one of them was “Don’t know/not applicable to my role”. I assumed I wasn’t going to get any points for selecting that option, so I may as well take a punt at one of the others. The worst case was that I was back where I started. And it worked: I did pretty well in the test, even though there was one area where, if you had asked me directly, I would have said I had no experience. The biggest failure was the test itself.
But exams are not like real life
If I select the wrong box in an exam, there is no harm. If I select the wrong option as an engineer… well, there are consequences to that. And this goes for many other professions as well. You don’t want doctors, lawyers, journalists or any professional just taking a guess. If they don’t know, you want them to say so and go speak to someone who does. That is pretty much the definition of being competent.
A more useful exam style
Perhaps we should shift our exam styles to include a downside for getting things wrong. And I am not even talking about symmetric rewards and penalties: in the real world, getting something wrong is often far worse than the reward for getting it right. So I could see an exam question worth 5 marks for a correct answer, 0 for writing ‘don’t know’ and −20 for a (dangerously) wrong answer.
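The asymmetric scheme above can be made concrete. Using the illustrative numbers from this article (+5 correct, 0 for ‘don’t know’, −20 wrong), answering only beats abstaining when your confidence clears a breakeven threshold:

```python
def expected_answer_score(p_correct: float, reward: float = 5, penalty: float = -20) -> float:
    """Expected marks from answering, given your probability of being correct."""
    return p_correct * reward + (1 - p_correct) * penalty

def breakeven_confidence(reward: float = 5, penalty: float = -20) -> float:
    # Answering beats writing 'don't know' (worth 0) only when
    # p*reward + (1-p)*penalty > 0, i.e. p > -penalty / (reward - penalty).
    return -penalty / (reward - penalty)

# With +5 / -20, the breakeven is 0.8: only answer if you are more
# than 80% sure, otherwise 'don't know' has the higher expected score.
print(breakeven_confidence())
```

Under this scheme a rational student guesses far less, which is exactly the behaviour the article is arguing for; the specific reward and penalty values remain the article’s illustration, not a real marking scheme.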
I suspect the grade bands would need to be altered to accommodate such a system. There are probably many other consequences I haven’t considered. But I think it is something we ought to think more about.
1. I don’t think many in the team took it too seriously.