Against Job Interviews
Author: Dana, Jason
Full text:
A friend of mine once had a curious experience with a job interview. Excited about the possible position, she arrived five minutes early and was immediately ushered into the interview by the receptionist. Following an amicable discussion with a panel of interviewers, she was offered the job.
Afterward, one of the interviewers remarked how impressed she was that my friend could be so composed after showing up 25 minutes late to the interview. As it turned out, my friend had been told the wrong start time by half an hour; she had remained composed because she did not know she was late.
My friend is not the type of person who would have remained cool had she known she was late, but the interviewers reached the opposite conclusion. Of course, they could instead have concluded that her calm reflected a flippant attitude, which is not a trait of hers either. Either way, they would have been wrong to assume that her behavior in the interview was indicative of her future performance at the job.
This is a widespread problem. Employers like to use free-form, unstructured interviews in an attempt to "get to know" a job candidate. Such interviews are also increasingly popular with admissions officers at universities looking to move away from test scores and other standardized measures of student quality. But as in my friend's case, interviewers typically form strong but unwarranted impressions about interviewees, often revealing more about themselves than the candidates.
People who study personnel psychology have long understood this. In 1979, for example, the Texas Legislature required the University of Texas Medical School at Houston to increase its incoming class size by 50 students late in the admissions season. The 50 additional students the school admitted had reached the interview phase of the application process but had initially been rejected after their interviews. A team of researchers later found that these students did just as well as their classmates in terms of attrition, academic performance, clinical performance (which involves rapport with patients and supervisors) and honors earned. The judgment of the interviewers, in other words, added nothing of relevance to the admissions process.
Research that my colleagues and I have conducted shows that the problem with interviews is worse than irrelevance: They can be harmful, undercutting the impact of other, more valuable information about interviewees.
In one experiment, we had student subjects interview other students and then predict their grade point averages for the following semester. The prediction was to be based on the interview, the student's course schedule and his or her past G.P.A. (We explained that past G.P.A. was historically the best predictor of future grades at their school.) In addition to predicting the G.P.A. of the interviewee, our subjects also predicted the performance of a student they did not meet, based only on that student's course schedule and past G.P.A.
In the end, our subjects' G.P.A. predictions were significantly more accurate for the students they did not meet. The interviews had been counterproductive.
It gets worse. Unbeknownst to our subjects, we had instructed some of the interviewees to respond randomly to their questions. Some of our interviewers were allowed to ask any questions they wanted, while others were told to ask only yes/no or this/that questions. In half of these restricted interviews, the interviewees were instructed to answer honestly; in the other half, they were instructed to answer randomly. Specifically, they were told to note the first letter of each of the last two words of any question, and to see which category, A-M or N-Z, each letter fell into. If both letters were in the same category, the interviewee answered "yes" or took the "this" option; if the letters were in different categories, the interviewee answered "no" or took the "that" option.
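To make the rule concrete, here is a minimal sketch in Python of the random-answer procedure as described above. The function names, the example question and the choice to print "yes"/"no" rather than "this"/"that" are illustrative assumptions, not material from the study itself.

def category(letter):
    # Classify a letter as falling in A-M or N-Z.
    return "A-M" if letter.upper() <= "M" else "N-Z"

def random_answer(question):
    # The rule from the experiment: take the first letter of each of the
    # last two words of the question; same category -> "yes" (or "this"),
    # different categories -> "no" (or "that").
    words = question.rstrip("?").split()
    first, second = words[-2][0], words[-1][0]
    if category(first) == category(second):
        return "yes"  # or take the "this" option
    return "no"       # or take the "that" option

# "Do you prefer working alone?" -> "working" (N-Z) and "alone" (A-M)
# fall in different categories, so the scripted answer is "no".
print(random_answer("Do you prefer working alone?"))

Applied to any list of interview questions, the rule yields answers that are arbitrary with respect to the questions' content, which is the point of the manipulation.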
Strikingly, not one interviewer reported noticing that he or she was conducting a random interview. More striking still, the students who conducted random interviews rated the degree to which they "got to know" the interviewee slightly higher on average than those who conducted honest interviews.
The key psychological insight here is that people have no trouble turning any information into a coherent narrative. This is true when, as in the case of my friend, the information (i.e., her tardiness) is incorrect. And this is true, as in our experiments, when the information is random. People can't help seeing signals, even in noise.
There was a final twist in our experiment. We explained what we had done, and what our findings were, to another group of student subjects. Then we asked them to rank the information they would like to have when making a G.P.A. prediction: honest interviews, random interviews, or no interviews at all. They most often ranked no interview last. In other words, a majority felt they would rather base their predictions on an interview they knew to be random than on background information alone.
So great is people's confidence in their ability to glean valuable information from a face-to-face conversation that they feel they can do so even if they know they are not being dealt with squarely. But they are wrong.
What can be done? One option is to structure interviews so that all candidates receive the same questions, a procedure that has been shown to make interviews more reliable and modestly more predictive of job success. Alternatively, interviews can be used to test job-related skills rather than to chat idly or ask personal questions.
Realistically, unstructured interviews aren't going away anytime soon. In the meantime, we should be humble about the likelihood that our impressions will provide a reliable guide to a candidate's future performance.
Illustration: Drawing by Jun Cen
Subject: Employment interviews; Students
Company / organization: Name: Legislature-Texas; NAICS: 921120
Publication title: New York Times, Late Edition (East Coast); New York, N.Y.
Pages: SR.6
Publication year: 2017
Publication date: Apr 9, 2017
Column: Gray Matter
Section: SR
Publisher: New York Times Company
Place of publication: New York, N.Y.
Country of publication: United States
Publication subject: General Interest Periodicals--United States
ISSN: 03624331
Source type: Newspapers
Language of publication: English
Document type: Op-Ed
ProQuest document ID: 1885391438
Document URL: http://search.proquest.com.proxy-ub.researchport.umd.edu/docview/1885391438?accountid=28969
Copyright: Copyright New York Times Company Apr 9, 2017
Last updated: 2017-04-09
Database: New York Times