1. Subjective marking.
I have actually seen an “IQ” test on the Internet which required essay-type answers. Imagine the scenario where a testee is very much more able than the person who wrote the test. Such a person may submit answers that the examiner fails to understand or appreciate, and hence be marked down accordingly.
Test questions that require the testee to write or draw out the answer in full can fall prey to subjective marking.
2. Lack of instructions or illustrative examples.
Even if many or most of the questions are well within the testee’s ability, he/she has zero chance if no instructions are given, or if the instructions are unclear or incomplete. I saw a test a few days ago in which one section consisted of various lines, some of which had numbers against them and others of which did not. The “example” consisted of two lines, one bearing the number 5, the other blank. The line with the 5 did not appear to be 5 inches, 5 centimetres, or 5 of any other recognisable unit of measurement. I am assuming that the test author expected test takers to work out the values of the other lines as ratios to the ones with numbers. But there again, I could be barking up entirely the wrong tree, thanks to the insufficient instructions.
3. Questions not clearly printed.
All right, it seems ridiculously obvious. But the same offending organisation that published the aforementioned test had also scanned and uploaded another test to their site in such poor quality that the intended shapes of certain graphic elements could not be discerned.
4. Questions not well-defined.
Questions which do not include all the information needed for their solution, or which fail to specify all the necessary parameters (giving rise to the possibility that some testees will identify two or more logical answers), are poorly defined. An example I saw asked what the probability would be of a second coloured ball pulled out of a bag at random being a certain colour; the question, however, neglected to specify whether or not the first ball had been placed back in the bag. Knowing this would have made all the difference to the answer. The question was thus unanswerable, except by a pure guess as to which scenario the examiner had intended.
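To show how much that one missing detail matters, here is a minimal sketch in Python with made-up figures (the original question’s numbers were not given): a bag of 3 red and 2 blue balls, where the first ball drawn was red and we want the probability that the second ball is also red.

```python
from fractions import Fraction

# Hypothetical bag: 3 red and 2 blue balls (made-up figures for illustration).
red, blue = 3, 2
total = red + blue

# Suppose the first ball drawn was red; what is P(second ball is red)?

# Scenario A: the first ball is replaced before the second draw.
p_with_replacement = Fraction(red, total)

# Scenario B: the first ball is NOT replaced.
p_without_replacement = Fraction(red - 1, total - 1)

print(p_with_replacement)     # 3/5
print(p_without_replacement)  # 1/2
```

The two scenarios give genuinely different answers, so a testee who cannot tell which one the author intended can only guess.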
5a. A required answer is the wrong answer, or is not the only possible answer.
5b. Correct (alternative) answers are marked as incorrect because they were not considered by the author.
The above are down to bad test authorship and insufficient beta-testing to flag up flaws and bugs in the test. I saw a “wheel” question which I felt fit this category: each segment held a number double the value of the segment opposite, with the last segment left blank. Continuing clockwise, the number in the blank segment would logically have been double the value on the other side; continuing anti-clockwise, it would logically have been half the value on the other side. There was also a third option, in which all the numbers provided appeared to form a progression of multiples. Which one was the testee supposed to choose? The question had one required answer; the other two perfectly justifiable alternatives would presumably have received no credit.
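The wheel itself is hard to reproduce here, but the underlying problem is easy to illustrate with a made-up sequence (not the actual puzzle): given 1, 2, 4, ?, both “double the previous term” and “increase the gap by one each time” fit the terms shown, yet they demand different answers. A quick sketch:

```python
# Made-up sequence for illustration only (not the original wheel puzzle).
given = [1, 2, 4]

def rule_double(seq):
    # Rule A: each term is double the previous one.
    return seq[-1] * 2

def rule_growing_gap(seq):
    # Rule B: the gap between terms grows by 1 each step (+1, +2, then +3).
    return seq[-1] + (seq[-1] - seq[-2]) + 1

# Both rules reproduce the terms that were given...
assert all(b == a * 2 for a, b in zip(given, given[1:]))
assert [b - a for a, b in zip(given, given[1:])] == [1, 2]

# ...yet they disagree about the "correct" next term.
print(rule_double(given))       # 8
print(rule_growing_gap(given))  # 7
```

A test that credits only one of these answers is penalising a perfectly sound line of reasoning.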
6. Specific academic knowledge being assumed.
Unless it is the specific intention of that part of the test to gauge factual knowledge, none should be assumed, and any factual information required to solve the questions should be provided to the test taker.
Especially in the case of certain high-range tests, I have seen some fairly advanced academic knowledge (particularly in mathematics) being required to solve certain questions. It seems that such questions slip in because, in trying to discriminate at the upper levels, test authors find it increasingly hard to come up with sufficiently difficult questions. That is one of the reasons I am generally sceptical of these tests: a person can be exceptional in terms of raw potential, but not have benefited from a particularly thorough education.
7. Time limits too punitive.
What do I mean by “too punitive”? Limits that put nervous testees under unnecessary pressure, or that place testees who prefer to work carefully and check their answers at an unnecessary disadvantage.
Suppose an impulsive, reckless person answers many questions within the time limit, while another testee of above-average ability achieves the same score but answers far fewer questions before time runs out, having checked his work as he went along. Do these two candidates have equal ability? Such a test would not differentiate between them.
One member of a forum argued at length that the lower the intelligence, the slower the person would work, because they would make more mistakes in their reasoning. Conversely, he reasoned, the brighter the person, the faster they would work, because they would make fewer reasoning errors.
This assumption is not borne out in actual psychometric statistics. The gifted do not necessarily score the highest in processing speed. This may be owing to personality factors such as perfectionism or extra careful work habits, which are not directly linked with intelligence.
My own theory is that the higher the intelligence, the more possibilities occur to the person as they examine the question. Each needs to be sifted through to see whether it fits. Although a very gifted testee may be making few or no reasoning errors, the time saved is balanced against the fact that more possible solutions have to be examined. I have personally watched my more average classmates start writing straight away when presented with a problem, while I could see multiple nuances that I had to consider first in order to make sense of what the question was actually asking.
Please comment if you would like to add to the list!