When constructing a test, for whatever purpose, the question of test validity needs to be addressed. That’s because there is a definite link between how accurate our tests are and how effective our seminar design and training approach will be. Test validity asks to what extent the assessment tool being used truly measures what it purports to measure (known as construct validity in testing jargon). Take, for example, the classic I.Q. test that many of us took as adolescents. The test used to be administered in print form, to be completed by the examinee within a given time limit. Generations of young teenagers were tested to determine their level of intelligence in order to predict their likelihood of success during the rest of their educational careers and beyond. These tests, however, showed only a weak predictive correlation between success on the test and later success in school and life. In time, test experts began testing the test itself. They found that these tests were “invalid” because they were essentially testing reading skills under time constraints, not intelligence. The tests often failed to spot intelligent children who simply possessed poor reading skills due to inadequate instruction, lack of practice, low motivation, dyslexia, and so on. Over time, the development of I.Q. tests changed to include, alongside reading, oral questions and responses, drawings, visual recognition, tactile exercises, and discussions.
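To make the idea of “testing the test” concrete, here is a minimal sketch of how predictive validity is often estimated: pair each person’s test score with a later outcome measure and compute the correlation between the two. The data below are entirely hypothetical and only illustrate the calculation, not any real study.

```python
# Minimal sketch (hypothetical data): estimating predictive validity by
# correlating test scores with a later outcome measure.
import numpy as np

# Hypothetical paired observations: I.Q.-style test scores taken in adolescence
# and a later "success" measure (e.g. an academic outcome on a 4-point scale).
test_scores   = np.array([95, 110, 102, 88, 120, 105, 98, 115, 92, 108])
later_outcome = np.array([2.8, 3.1, 3.4, 2.9, 3.2, 2.7, 3.5, 3.0, 3.3, 2.6])

# Pearson correlation: values near 0 suggest weak predictive validity,
# values near +/-1 suggest a strong linear relationship.
r = np.corrcoef(test_scores, later_outcome)[0, 1]
print(f"Predictive correlation: {r:.2f}")
```

A low correlation, as in the I.Q. example above, is a signal that the test may be measuring something other than the trait it claims to measure.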

Criterion Validity
But the story continues. Researchers found that even though tests could be constructed to measure a given criterion more accurately, there was still the question of whether the criterion itself was meaningful. In other words, was the context (the big picture) also consistent with what was being measured?

For example, for years language classes were taught with a heavy emphasis on learning grammar, the so-called rules of a language. The assumption was based on the surface observation that articulate native speakers of a language knew and could use their own grammar adeptly. Therefore, learning the grammatical rules (along with vocabulary) should transfer into acquiring the language for the second-language learner as well. Teachers were hired to teach “grammar” lessons in the same way Latin or Greek were taught. Students struggled to learn it, tests were designed to test their knowledge of it, and grades were assigned to determine their level of proficiency. The result? Once out of the classroom, “high-proficiency” second-language learners were barely able to formulate even the simplest sentences in a foreign language they had studied for years, let alone comprehend that language from native speakers speaking at a normal tempo.

So what went wrong? The tests were indeed accurately measuring the second-language learner’s grammar comprehension (construct validity). But these tests missed the big picture. A closer look revealed that many native speakers of a language were also unable to consciously explain their own grammar. Therefore, “knowing” grammar explicitly was not a prerequisite for using it proficiently. Something else was at play here, which the original observations had also missed. When researchers looked at how children learn their own language, they found that children do not consciously “learn grammar”. Rather, they learn to hear and then speak their language in highly contextual, highly meaningful situations where the parts of language are always used in an integrated way (i.e. articles, nouns, verbs, and tenses are not experienced separately but used together). Rules exist, but they are learned implicitly.

What needs to be asked is whether what we are measuring truly exists in the concrete world. Getting it right (or wrong) has great implications for the way we design our curriculum and develop our training methods.

Tests and Training in the Business World
In the business world, just consider how many marketing departments have designed product training seminars almost exclusively around knowing the features of the product (similar to knowing the rules of a language). When asked, “Why is this important?”, the response is typically, “Because technical sales people have to know the features of the product in order to offer our customers the right solution for their needs.” As we have seen, on the surface a test can be designed to assess whether a salesperson has learned the features of a certain product (construct validity). But at a deeper level it may be missing the bigger picture, the reality that exists in the concrete world. In the area of sales, an important question to ask is whether product knowledge is truly the key criterion for success. The content and training method betray the underlying assumption we hold about what makes for a successful encounter between a sales rep and a customer (i.e. product knowledge).

The bigger picture, when analyzing sales, revealed that product knowledge was not the key factor in most sales encounters. Rather, the most successful sales reps were able to listen attentively to the customer’s situation; they were able to understand the problem in a big-picture context, ask meaningful questions, handle doubts, build rapport with the customer, and cooperate on finding a genuine solution. Product knowledge was one component of the sales encounter, but only a peripheral one. This understanding has since looped back and informed how many companies design training workshops for sales reps. Sales training now emphasizes soft skills and problem-solving, along with plenty of hands-on practice.

Summary
Valid testing methods and training go hand in hand. Testing our assumptions means looking at what is really happening in the real world, and doing so is crucial to seminar design and training methodology. It requires the skill of asking good questions at multiple layers. We must then be prepared, based on our findings, to change both the content of our training and the approach we use with our staff.
