Artificial intelligence (A-I) gathers and analyzes huge troves of data to recognize patterns and predict future behavior. This is accomplished by creating algorithms that attempt to model high-level abstractions. For example, feed a computer a million images of cats labeled “cat,” along with a similar number of images of other animals without the “cat” label, and the machine will “learn” through trial and error to distinguish cats from other four-legged creatures. Feed enough medical images to a computer and the job of radiologist may become obsolete. Key to the recent explosion of A-I are the ubiquity of raw data, rapidly increasing computing power, and the decreasing cost of using computers to analyze that data.
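The cat-versus-not-cat example above can be sketched in a few lines of code. The sketch below is purely illustrative and greatly simplified: real image classifiers use millions of labeled examples and deep neural networks, whereas here each hypothetical “image” is reduced to two made-up numeric features, and the learned “model” is a simple nearest-centroid rule.

```python
# Toy illustration of supervised learning: the labels in the training
# data teach the program to separate "cat" from "not-cat" inputs.
# The features (ear pointiness, whisker length) are invented for this sketch.

def train(examples):
    """examples: list of (features, label). Returns the average feature
    vector (centroid) for each label seen in the training data."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(centroids, features):
    """Classify new features by the label whose centroid is closest."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hypothetical labeled training data: [ear_pointiness, whisker_length]
training = [
    ([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
    ([0.2, 0.1], "not-cat"), ([0.1, 0.3], "not-cat"),
]
model = train(training)
print(predict(model, [0.85, 0.75]))  # a cat-like input → "cat"
```

The same pattern-from-labeled-examples principle, scaled up enormously, underlies the hiring, monitoring, and prediction systems discussed below.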
A-I has become an indispensable tool in the hiring process for many companies. It is used not just to sort and scan applications, but also to identify characteristics of applicants that correlate with job tenure, employee attitude, upward advancement, disciplinary record, or personality fit with the company.
Another way A-I is used in the hiring process is through pre-hire video-recorded interviews. Applicants are asked questions tailored to the open position. They digitally video-record their answers, usually online from home or their current office, using their desktop or laptop computer. The video is then transmitted to a company that uses A-I to analyze the applicant’s language patterns, verbal skills, and emotions by, for example, identifying facial expressions, intonation, gestures, and word choice. It then uses its machine learning algorithms to evaluate the candidates’ work styles, predict their ability to work with others, and assess general cognitive ability.
After a company uses A-I to hire an employee, it may use A-I to track performance, determine pay, and make decisions about promotions or dismissal. One human-resources service provider claims it can examine some sixty factors—such as the time an employee lets elapse between vacation days, changes in an individual’s supervisor, and other seemingly innocuous considerations—to predict which employees are likely to quit, which are likely to be disgruntled, and how the employer might retain its best employees.
A-I is also being used in increasingly sophisticated ways to monitor workers’ every movement and thought. Examples include algorithms that listen to customer-service calls and grade workers on empathy; that record everything that happens on a worker’s keyboard to flag poor productivity, misconduct, and negative attitudes; and that use sensors on employees’ chairs to indicate how often an employee is at her desk and how long she is on breaks.
Ultrasonic wristbands issued by Amazon track workers’ precise locations and hand movements, gauging workers’ productivity and vibrating to nudge workers into being more efficient. Companies have also updated the classic employee badge into a monitoring device. Humanyze requires its employees to wear an ID badge containing a microphone that records conversations, a Bluetooth and infrared sensor that monitors where they are (how long do they spend in the break room? Outside the building smoking?), and an accelerometer that notes when they move. The company’s software collects data on how much time each worker spends talking with people, the gender of the person the worker is talking with, and the proportion of time spent speaking versus listening.
In addition to monitoring on-duty conduct, A-I enables employers to monitor off-duty online conduct continuously and extensively. Employers may have good reason to take advantage of this new technology. Racist or sexist posts may indicate a proclivity to racist or sexist conduct or harassment in the workplace. Aggressive posts may indicate a bullying personality. Yet ubiquitous monitoring of off-duty conduct raises significant privacy concerns.
A-I not only creates the potential for highly intrusive monitoring, but also raises questions about how employers will use the data they collect about employees’ performance, with whom they will share it, and how long they will keep it. A-I-enhanced data collection, retention, and analytic capabilities threaten to create a permanent record of employee productivity, activity, and medical and physiological attributes—a “virtual resume” that could follow a worker throughout her career. This raises a host of open legal and policy issues. First, do workers have an ownership interest in data compiled about them? If so, under what circumstances can they exclude others from seeing or using it? If not, do they have a right to access the data? Second, do they have any protection against this data being shared with others—such as prospective employers—or does their data travel with them as a lifetime electronic resume that they can neither see nor rebut? And third, do workers have recourse if their data is incorrect and is used in an adverse employment action or shared with others?
In the U.S., the development and use of A-I in the workplace is, to date, almost completely unconstrained by law. Europeans enjoy somewhat more protection thanks to the General Data Protection Regulation (GDPR), which includes a right to access one’s data and a right to have personal data deleted. The GDPR, however, was enacted with consumers—not employees—in mind.
The use of A-I in the workplace also raises significant labor law issues. A-I monitoring of workers can be used to listen to employees’ conversations, record employee movements, monitor biological reactions, identify unhappy workers, and identify participants in employee gatherings. These uses enable an employer to pinpoint union supporters, predict workers who might become union supporters, and intimidate others.
A-I can serve legitimate and even salutary needs in the workplace. However, it also raises significant legal and privacy issues that will escalate as the technology becomes more effective and pervasive.