Leveraging AI to Manage Talent: 5 Pitfalls to Avoid
The application of Artificial Intelligence (AI) continues to grow, transforming how organizations deliver customer experiences and manage internal operations. When it comes to managing talent, AI has tremendous potential to help companies manage their most important "asset" – one that is also often their most elusive and costly. Yet for all its promise to drive efficiency and promote fairness, AI in talent management has raised questions about efficacy and introduced ethical concerns. This paradox leaves talent management leaders somewhat stuck: they want to leverage the power of AI as effectively as their colleagues in other functions, but they carry an added burden to be thoughtful, because the consequences of these applications affect people's lives and livelihoods.
Artificial intelligence is typically defined as the use of digital technology to perform tasks thought to require human intelligence; a subset of AI, machine learning, helps classify information and make predictions. In talent management, we are trying to predict and influence human behavior in the workplace – not an easy endeavor. We are trying to answer questions such as: Who is likely to leave the organization? Where can we identify our next senior leaders? What leadership behaviors will lead to true commitment and engagement? Who should we hire? These are challenging questions with imperfect prediction and costly consequences when you get it wrong. Given that context, below are some pitfalls to avoid in the race to predict human behavior:
Lack of Transparency
If your Netflix algorithm suggests a movie you don't enjoy, you probably aren't too concerned about the black-box algorithm that recommended that terrible action movie. However, if you don't get the job you want or the promotion you were expecting, you may have more questions about the algorithm involved. When developing an AI application, begin with the end in mind: you will need to explain to end users how it works and how it is used. In fact, there is increasing legislation requiring employers to make their AI algorithms transparent to end users (e.g., applicants) when those algorithms are used to make employment decisions.
Bad or Biased Data
Unfortunately, a fancy algorithm cannot make up for bad – or biased – data. Garbage in, garbage out; with big data, landfill in, landfill out. HR and talent management data are notoriously messy, incomplete, and rife with subjectivity. There are well-known stories of renowned companies that had to scrap AI applications for selection because the criteria used to identify top performers were too intertwined with race and gender. If your top performers are mostly white males, then the algorithm, without any conscious or intended bias, will pick more white males. Utilizing experts with domain knowledge to clean, understand, and interpret the data is key to a successful AI application.
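A minimal sketch makes the mechanism concrete. The records, the proxy feature, and the label skew below are entirely hypothetical; the point is that removing the protected attribute itself does not remove the bias when a correlated proxy remains in the data.

```python
# Hypothetical historical records: (proxy_feature, is_top_performer).
# The proxy (say, a zip-code flag) is correlated with a protected group,
# and historical "top performer" labels were skewed toward that group.
history = [
    (1, True), (1, True), (1, True), (1, False),   # proxy = 1
    (0, False), (0, False), (0, False), (0, True), # proxy = 0
]

def rate(records, proxy):
    """Historical top-performer rate for one value of the proxy feature."""
    matching = [top for p, top in records if p == proxy]
    return sum(matching) / len(matching)

# A naive model that learns the historical rate per proxy value
# simply reproduces the skew in the labels:
print(rate(history, 1))  # 0.75 -> proxy group favored
print(rate(history, 0))  # 0.25 -> others disfavored
```

Any model fit to these labels will favor the proxy group, even though the protected attribute never appears as an input.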
No Scientific Basis
According to one of my favorite sites, Spurious Correlations, there is a very strong correlation between mozzarella cheese consumption and civil engineering degrees awarded by year. While this is a ridiculous example, taking a data-heavy, science-lite approach to AI applications can lead to spurious relationships that cannot be replicated. Beyond limiting replication and generalization, if you can't explain the logic or the "why" behind a prediction, you are left at a disadvantage – often with legal implications. There are decades of behavioral science research to draw on for talent management insights – don't ignore them; leverage those studies to inform and explain your application.
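To see how easily a "strong" relationship appears between unrelated quantities, consider this sketch. The yearly figures are made up for illustration; any two series that both happen to trend upward over the same period will show a high Pearson correlation, with no causal link whatsoever.

```python
# Pearson correlation between two series, computed from first principles.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up yearly figures: both series simply grow over the same decade.
cheese_lbs_per_capita = [9.3, 9.7, 9.7, 9.7, 9.9, 10.2, 10.5, 11.0, 10.6, 10.6]
degrees_awarded = [480, 501, 540, 552, 547, 622, 655, 701, 712, 708]

r = pearson(cheese_lbs_per_capita, degrees_awarded)
print(f"r = {r:.2f}")  # a strong correlation, despite no causal link
```

Shared trends over time, not any real relationship, drive the high value of r here – which is exactly why a theory-free search through big data will surface patterns that fail to replicate.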
No Clear Problem Definition
Are you trying to use AI just to use it? Several functions in organizations have quickly realized gains in automation and efficiency by leveraging AI, but rushing to show your credibility as a talent management leader without first defining the problem you're trying to solve can lead to a faulty application. Are you trying to increase efficiency? Improve prediction? Uncover insights more quickly? After framing the core problem, think about how the solution will benefit each stakeholder impacted by the application. For instance, leveraging AI for selection may help your overworked recruitment staff, but it can raise questions among candidates about how they are being evaluated for roles.
Not Doing Ongoing Audits/Evaluating Outcomes
Congrats, you have successfully deployed an AI application! How do you know it's working? How do you know it's not causing unintended consequences for some groups but not others? Many AI applications start with the best intentions, but without careful monitoring to demonstrate efficacy and fairness, those best intentions can result in negative consequences. Involving stakeholders from multiple groups with multiple perspectives can help you promote the positive impact of AI applications while containing any adverse outcomes.
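One concrete audit check, sketched below under hypothetical group names and counts, is the "four-fifths rule" commonly used as an initial screen for adverse impact in selection decisions: if the lowest group's selection rate falls below 80% of the highest group's, the outcome warrants closer investigation.

```python
# A minimal sketch of one ongoing-audit check for a deployed
# selection model: the four-fifths (80%) rule for adverse impact.
def selection_rate(selected, applicants):
    return selected / applicants

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from a deployed screening model.
outcomes = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(30, 100),  # 0.30
}

ratio = adverse_impact_ratio(outcomes)
print(f"impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("below the four-fifths threshold - investigate further")
```

A check like this is a screen, not a verdict: a low ratio doesn't prove bias, but it tells you where monitoring and stakeholder review should focus.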
At Summit, we use tools and partners that harness AI to help us accelerate speed to insight for our clients, evaluating the effectiveness of their leaders and how their organizations perform. We think of AI as "augmented intelligence": by combining its capabilities with our expertise and experience, we get the best of both worlds. As we explore other uses for AI – in executive coaching, team development, and beyond – we will be thoughtful about applying it in the best way possible. Predicting human behavior is a challenging endeavor, but it is a challenge worth addressing for our clients.