People are frequently amazed at the accuracy of their personality test reports. These reports can be powerfully enlightening, describing an individual’s tendencies and character traits from what appears to be an objective point of view. When given the opportunity to review their report, I haven’t had one person decline. Everyone wants to know what their report says about them – whether they agree with it or not.
But sometimes personality test results are misleading and of no use at all. And it happens more often than you’d think.
In an experiment with college sophomores, a traditional favorite for academic researchers, the accuracy of personality tests was put to its own test. Following completion and scoring of a personality test given to all of the students in the class, the researcher asked for a show of hands from those for whom the test report accurately described them. A sizeable majority of hands went up – the report was an accurate depiction. There’s one thing they didn’t know:
Everyone got exactly the same report.
Yep. {I wish I’d thought of this first.}
Despite everyone completing the test in their own distinctive manner, a single report was copied and distributed to the entire class of subjects. No matter how similar you may think college sophomores are, they’re not so identical as to yield precisely identical personality profiles. And yet a generic “J. Doe” report was viewed as a perfect fit by most. How does this happen?
Take a read of one of your personality test results. If you’re like most, you’ve completed several of these assessments and probably still have a report or two lying around. When reading your report, take note of the following indicators of BS reports:
- Conditional Statements: The number of times the words “may,” “might,” “sometimes” show up
Example: “You may be unsure of yourself in a group.”
How “may”? Like, maybe “90% unsure”? Or maybe “completely confident”? The reader typically fills in this blank, unwittingly giving the report a “pass.”
- Compensatory Observations: The number of times opposing behaviors are presented next to each other
Example: “You have a hard time sharing your feelings in a group. However, with the right group you find it refreshing to get your emotions ‘off your chest.’”
So which are you? A paranoid prepper? Or a chest-pounding demonstrator? Either of these opposing types fits this example.
- General Statements: The specificity of the descriptions, or lack thereof
Example: “You maintain only a few close friends.”
This statement is pretty much true by definition – how few is “a few”? How close is “close”? It’s so open to interpretation that it fits just about everyone.
- Differentiating Statements: {fewer is worse} The uniqueness of the descriptions.
Example: “Privately, you feel underqualified for the things others consider you an expert at.”
The lack of differentiating statements is not quite the same as making general statements – a specific statement may still fail to differentiate. The example above is specific, but not distinctive: a fairly large percentage of people feel underqualified even in their own profession.
The point is, anyone can be right when they:
- Speak in couched probabilities,
- about “both-or” samples of a given behavior,
- in very general terms,
- about things that many people experience.
These four “hacks” provide all the latitude needed for ANY report to make you think it has “nailed you.”
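To make the first of these “hacks” concrete, here’s a minimal sketch in Python that measures how heavily a report leans on hedge words. The word list and the sample text are my own illustrations, not drawn from any actual test report – the idea is simply that a higher hedge density means the report is giving itself more Barnum-style latitude.

```python
import re

# Hypothetical hedge-word list -- illustrative, not exhaustive.
HEDGES = {"may", "might", "sometimes", "often", "tend", "perhaps"}

def hedge_density(report_text: str) -> float:
    """Fraction of words that are hedges; higher = vaguer report."""
    words = re.findall(r"[a-z']+", report_text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in HEDGES)
    return hits / len(words)

sample = "You may be unsure of yourself in a group. Sometimes you might prefer solitude."
print(round(hedge_density(sample), 3))  # 3 hedges in 14 words
```

A real vagueness check would need more than a word count, of course – but even this crude ratio makes a couched, anyone-fits report stand out from a specific one.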
Beyond these tactics, many give too much credit to the personality test. Frequently, reports simply feed back EXACTLY what you put in via your responses. For example, the item “I like to organize things” may show up in a report as, “You like to organize things.” There were probably more than a hundred items on the test – you probably don’t remember every response you gave.
Another way folks give too much credit to the personality test is by believing that the instrument must be right. Whatever your general position on the validity of personality tests, publishers use various tactics to make the test report look more “scientific”:
- Lots of statistics
- Lots of figures
- Distinguished endorsers
- Techno-babble
None of these things necessarily has anything to do with the actual validity of the test – but research shows they enhance people’s opinion of its validity.
What’s a good report look like?
- Good reports take a point of view. They provide specific summaries of behavioral style that really are uniquely you. If you gave the report to a friend and told them this was their report, they’d honestly say that it doesn’t accurately depict them – even if the two of you are inseparable. Fit is determined by both accommodation and exclusion. A good report speaks to you and no one else.
- Better reports don’t provide any narrative at all. They simply provide normative scores on the various dimensions (i.e., characteristic behaviors) covered by the test. This type of report allows an expert to interpret the full spectrum of dimensions in the broader context. Good interpreters know what to look for in terms of how the dimensions interact with each other and can further specify the evaluation with just a bit of extra information on the respondent. This does not mean that they already know the subject. It may be as little as knowing why or when the person completed the assessment.
- Great reports present just the facts. The report is a fairly straightforward summary of your responses, organized by dimension (trait) and compared against a group of others’ responses/scores. Better still, great reports provide more than a single average score per dimension – they also give some indication of the variation in responses within each dimension. This lets the interpreter know how much confidence to place in a given score. No variance = high confidence. Wide variance = low confidence.
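The variance idea in that last point can be sketched numerically. Assuming, hypothetically, that a dimension is scored from several 1–5 item responses, the spread of those responses tells you how much to trust the mean; the cutoffs below are purely illustrative, not from any actual scoring manual.

```python
from statistics import mean, pstdev

def dimension_summary(responses: list[int]) -> dict:
    """Mean score plus a crude confidence label from the spread of item responses."""
    avg = mean(responses)
    spread = pstdev(responses)
    # Hypothetical cutoffs -- purely illustrative.
    confidence = "high" if spread < 0.5 else "moderate" if spread < 1.0 else "low"
    return {"mean": round(avg, 2), "spread": round(spread, 2), "confidence": confidence}

# Consistent responses: tight spread, so the mean is trustworthy.
print(dimension_summary([4, 4, 4, 5, 4]))
# Scattered responses: a similar mean, but far less trustworthy.
print(dimension_summary([1, 5, 2, 5, 4]))
```

Two respondents can land on the same average score while answering in completely different patterns – which is exactly why a single number per dimension hides information an interpreter needs.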
So, what does your report really say about you? Depending on the factors I’ve outlined – it may say nothing at all (or worse).
It really helps to know some of this stuff.