I wrote a Sunday column about the rise of what is being called "work force science." Lots of companies are embracing the trend, but anyone familiar with business history might reasonably ask: What's really new here?
Certainly, the current enthusiasm for worker measurement and trait testing has its echoes in the past. Frederick Winslow Taylor's time-and-motion studies of physical labor, like bricklaying and shoveling coal, became the "scientific management" of a century ago.
And for decades, major American corporations employed industrial psychologists and routinely gave job candidates personality and intelligence tests.
Companies pulled back from such statistical analysis of employees in the 1970s, amid questions about its effectiveness, worker resistance and a wave of anti-discrimination lawsuits. Companies apparently figured that if any of their test results showed women or minorities doing poorly, it might become evidence in court cases, said Peter Cappelli, director of the Center for Human Resources at the University of Pennsylvania's Wharton School.
Today, worker measurement and testing is enjoying a renaissance, powered by digital tools.
What is different now, said Mitchell Hoffman, an economist and postdoctoral researcher at the Yale School of Management, is the amount and detail of worker data being collected. In the past, he said, studies of worker behavior typically might have involved observing a few hundred people at most, the traditional approach in sociology or personnel economics.
But a new working paper, written by Mr. Hoffman and three other researchers, mines data from companies in three industries (telephone call centers, trucking and software) on a total of more than one million job applicants and more than 70,000 workers over several years.
The measurements can be quite detailed, including call "handle" times and customer satisfaction surveys (call centers), miles driven per week and accidents (trucking), and patent applications and lines of code written (software).
Their subject is worker referrals, and the paper is titled "The Value of Hiring Through Referrals."
Selecting new workers who are recommended by a company's current employees has long been seen as a way to increase the odds of hiring productive workers. It makes sense that the social networks of a company's workers would be a valuable resource to tap, and many companies pay their employees referral bonuses.
The researchers found that referred employees, across the three industries, were 25 percent more profitable than nonreferred workers. But the referral payoff comes entirely from recommendations from a company's best workers, whose productivity is above average.
"A recommendation from Joe Shmoe the dud is worse than hiring a nonreferred worker," Mr. Hoffman noted.
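To make that split concrete, here is a minimal sketch in pandas, with invented numbers and column names; nothing below comes from the paper's actual data sets. It compares nonreferred hires against referred hires grouped by whether the referring employee's own productivity is above average.

```python
# Toy illustration only: the figures below are made up, not drawn from the study.
import pandas as pd

workers = pd.DataFrame({
    "referred":           [True, True, True, True, False, False, False, False],
    "referrer_above_avg": [True, True, False, False, None, None, None, None],
    "profit_per_worker":  [130.0, 125.0, 90.0, 95.0, 100.0, 105.0, 98.0, 102.0],
})

# Baseline: average profit generated by nonreferred hires.
baseline = workers.loc[~workers["referred"], "profit_per_worker"].mean()

# Referred hires, split by the productivity of the employee who made the referral.
referred = workers[workers["referred"]]
by_referrer = referred.groupby("referrer_above_avg")["profit_per_worker"].mean()

print(f"Nonreferred baseline: {baseline:.2f}")
print(by_referrer)
# In this toy table, only referrals from above-average employees beat the
# baseline, which is the shape of the pattern the researchers describe.
```

The study itself works with far richer measures and controls, of course; the sketch is only meant to make the split between good and poor referrers visible.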
The paper suggests that companies might want to rethink across-the-board referral policies.
"The previous work on worker referrals has been mostly anecdotal and impressionistic," said Stephen Burks, an economist at the University of Minnesota, Morris, who was a co-author of the paper. "It hasn't been quantified in this way before, the way you can with these rich data sets."
But another co-author, Bo Cowgill, points to a challenge for work force science, and for much of the emerging social science using Big Data. Mr. Cowgill, a doctoral student at the University of California, Berkeley, spent six years as a quantitative analyst at Google, so he has plenty of first-hand experience in sophisticated data handling.
The data in work force science is observational rather than experimental, and experiments are the gold standard in science. What much of Big Data research lacks, Mr. Cowgill said, is the equivalent rigor of randomized clinical trials in drug testing: controlled experiments.
Observing how large numbers of people behave, Mr. Cowgill noted, can be extremely valuable because it points to powerful correlations. But without controlled experiments, he added, you often do not get to a deeper understanding of the causes of observed behavior: to causation rather than mere correlation.
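That distinction can be illustrated with a small simulation, using a hypothetical "training program" example with made-up numbers rather than anything from the paper: a hidden trait that drives both a worker's choices and a worker's output can produce a strong observational correlation where a randomized experiment would show no effect at all.

```python
# Hypothetical simulation: illustrates confounding, not any real data set.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A hidden trait (say, motivation) drives BOTH whether a worker opts into a
# voluntary training program and how productive the worker is.
motivation = rng.normal(size=n)
opted_in = (motivation + rng.normal(size=n)) > 0
productivity = 50 + 5 * motivation + rng.normal(size=n)  # training has no true effect here

# Observational comparison: trained vs. untrained workers as they self-selected.
obs_gap = productivity[opted_in].mean() - productivity[~opted_in].mean()

# Controlled experiment: training assigned at random, independent of motivation.
assigned = rng.random(n) < 0.5
exp_gap = productivity[assigned].mean() - productivity[~assigned].mean()

print(f"Observational gap: {obs_gap:.2f}")  # large and spurious (confounded)
print(f"Randomized gap:    {exp_gap:.2f}")  # close to zero, the true effect
```

Data collected in the ordinary course of business is almost always the first kind of comparison; getting the second requires the sort of deliberate, controlled experiment Mr. Cowgill has in mind.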
"Some people feel that knowing correlations are enough," Mr. Cowgill said. "Not me, and most economists would agree."
But other economists say this kind of Big Data research is just getting under way and is already yielding significant results. "I wouldn't sell short being able to see the correlations," said Erik Brynjolfsson, an economist at M.I.T.'s Sloan School of Management. "That is a big step in itself. And this is the way science works. You start with measurement and it progresses to experiment."