Blinding Me With Science!
The last class lecture was called “Measurement, Validity and Reliability.” Riveting stuff, I know. Essentially, the conversation boiled down to how to design tests, specifically selection tests. There are four critical goals: validity, legality, utility and acceptability. Because our class discussions tend to go wildly off track, we really only got to validity last night, but it was a really interesting topic to me.
There were all sorts of charts about inferences of validity, with predictor constructs, predictor measures, performance constructs and performance measures. It’s about the leaps we have to take to figure out what trait we think a candidate should have in order to be successful, how we measure that success, and how we can measure the trait itself.
For example, maybe you think that in order to be a good salesperson, you need to be extroverted (the predictor construct). So then you have to figure out how to measure something like “extrovertedness” (the predictor measure). Then, how does being extroverted help you in your sales job (the performance construct)? And finally, how do you measure the result of all these inferential leaps (the performance measure)? Is it in the sales numbers of people who scored highly on the tests you created to measure extroversion?
Essentially, it takes these really abstract ideas about what we believe, dissects them, measures them, and checks whether our intuitive leaps actually make sense. And if they do, they have to be measured to find out how much success they actually have, and whether they are worth using for selection. And then it hit me: this stuff is actually science!
Sure, psychology is a social science – some cling to that label, others dismiss it out of hand. And when I tell people I am studying industrial-organizational psychology, you can almost see their eyes glazing over. I admit that I too have felt it to be very theoretical, touchy-feely kind of stuff. And I am sure a lot of it is – but for the first time I realized that this actually has a basis in science and math, and I have to start paying attention to things like correlation coefficients. Forget ropes courses and trust falls – this stuff actually makes sense! You can test it, repeat it, prove validity and back up your research. I thought I would hate this stuff (and just may, in practice) but right now – I am pretty excited about it.
I couldn’t help but think back to my undergraduate archaeology studies when I realized that it’s more than just digging around and display cases. There is actual science in archaeology. Sure, there are leaps of fantasy, and a lot of creative filling-in of unknowns, but there is a lot of good, actual, practical science in it. It legitimized it for me, and made it that much more fulfilling. I think I am feeling the same way about this field as well.
The similarities don’t end there. I can already tell that a large part of an I/O practitioner’s job is figuring out how to make people care – how to explain the conclusions, how to educate people so that they understand that this has real-world implications and that these tests and hours of research are worth doing. The aggregate, not the individual. These are the same issues archaeologists have when trying to get grant money, or educate the public about not destroying sites, or asking politicians to protect land. It all comes back to marketing.