The stunning rise of AI over the past year has touched just about every sector of the global economy, so it’s no surprise that DC plan sponsors are also seeking ways to leverage this technology in their own plans. Jennifer Weeks, vice-president of strategy with BEworks, is a leading researcher exploring the link between behavioural science and artificial intelligence – and she’s been working to apply this research to the DC pension space. She will be speaking about her work at the upcoming DC Plan Experience Forum, April 16-17, 2024, at the Shangri-La Hotel in Toronto. In advance of her session, we had the opportunity to ask her a few questions about her research.
CLC: The topic of your session at the upcoming DC Plan Experience Forum is how to build algorithms that reflect human psychology – can you explain what this means?
Jennifer Weeks: Yes, this comes out of our work with plan administrators who are looking to build AI algorithms. These clients have come to us wanting predictive algorithms that will anticipate investor behavior and help members make better decisions about their investments. What we saw is that they were relying on readily available data points – easy things like demographics, the number of assets an investor holds and their tenure on the plan – and they just weren’t finding any success with those data points in predicting critical investor behaviors. They want to predict things like whether a member will take advantage of advice, and those data points simply couldn’t do it. We help by uncovering the factors that are actually affecting people’s decision-making and behavior – the psychological motivators. How confident are they in their ability to make decisions about their portfolios? Are they overconfident? To what extent are they showing an overconfidence bias, an optimism bias, or information avoidance? We know from the science that all of these biases affect investor behavior, but they were not showing up in the models at all.
We’ve been creating models based not just on readily available data points but on psychological constructs. If we know overconfidence bias is affecting investor decision-making – how likely members are to take advice, for example – then we’ll look at how we can measure or get a read on an investor’s level of overconfidence and build that into the model. We’ve done that over the last year and a half with one of our major clients in this space and found that our psychologically informed algorithm predicts an investor’s likelihood to seek and take advice well beyond what the more demographic-based models can do.
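As a rough illustration of the idea Weeks describes – psychological constructs entering a predictive model alongside demographics – the sketch below scores a member’s likelihood to seek advice. The feature names, weights, and scales are entirely hypothetical and are not drawn from BEworks’ actual algorithm.

```python
import math

def advice_seeking_score(age, tenure_years, overconfidence, info_avoidance):
    """Logistic score in (0, 1); higher = more likely to seek advice.

    age, tenure_years: demographic inputs (weak predictors, per the interview)
    overconfidence, info_avoidance: psychological constructs on a 0-1 scale
    All weights below are illustrative assumptions, not fitted values.
    """
    z = (0.01 * age
         + 0.02 * tenure_years
         - 2.5 * overconfidence     # overconfident members seek less advice
         - 1.8 * info_avoidance)    # information avoiders ignore advice offers
    return 1 / (1 + math.exp(-z))

# Two members with identical demographics: the overconfident one
# scores lower on likelihood to seek advice.
print(advice_seeking_score(45, 10, overconfidence=0.9, info_avoidance=0.5))
print(advice_seeking_score(45, 10, overconfidence=0.2, info_avoidance=0.5))
```

In a real model the weights would be estimated from member data rather than hand-set; the point is simply that measured psychological constructs can carry most of the predictive signal.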
CLC: Let’s look at the employee’s perspective. Can AI help deepen an employee’s experience with their benefits, particularly with their DC pension plan?
JW: With a DC pension plan specifically, we know that to get the most out of the plan experience, members have to be making good decisions. They can’t just set it and forget it. They have to update their risk profile, for example, or increase their contribution rate over time. There are a lot of behaviors we want to see from them – behaviors they would really benefit from – and that’s where behavioral science comes in. If we understand the psychological factors and biases affecting those behaviors, then we can start delivering psychologically informed messages at the right time – for example, at the point when a member needs to update their risk profile. Those messages are going to be more effective at driving the behaviors a plan member needs to get the most out of their plan experience.
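To make the “right message at the right time” idea concrete, here is a minimal, hypothetical trigger: it fires an annual risk-profile reminder and picks a message variant matched to a member’s psychological profile. The profile labels, message wording, and 365-day rule are illustrative assumptions only, not a description of any plan administrator’s system.

```python
from datetime import date

# Hypothetical message variants keyed to psychological profiles.
MESSAGES = {
    "overconfident": "Most members who review their risk profile annually "
                     "find at least one setting worth changing.",
    "avoidant": "A two-minute check-in now can prevent surprises later.",
    "default": "It's time to review your risk profile.",
}

def pick_nudge(last_review, today, profile):
    """Return a tailored reminder when a risk-profile review is due,
    or None if the member reviewed their profile within the past year."""
    if (today - last_review).days < 365:
        return None  # no nudge needed yet
    return MESSAGES.get(profile, MESSAGES["default"])

print(pick_nudge(date(2023, 1, 15), date(2024, 4, 16), "overconfident"))
```

A production system would of course draw the timing and the profile from the kind of model discussed above rather than from a fixed rule.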
CLC: What pitfalls should DC plan sponsors/employers avoid when it comes to integrating AI into their processes?
JW: That’s a really good question, because there’s a lot of excitement about AI right now, and the hype can make us feel like, what can’t AI do? I would say there is a risk in overusing AI with your plan members. We know from the research that if you automate too many of the decisions an investor needs to make – for example, if you set default contributions that they never change, so everything is automated and the experience is as smooth as possible – then the investor becomes passive. You induce this passivity through over-automation, and that can have really undesirable consequences. Members think they don’t need to worry about their portfolio anymore, so they don’t check it. They don’t update their profile when they need to. They don’t increase their contributions when they should. They just become very passive and outsource these important personal decisions to AI. Plan sponsors should be aware of this possibility and not over-automate – make sure there is an appropriate level of friction in the enrollment process so that the plan member retains some ownership and agency that they’ll carry forward. That will ultimately lead to long-term benefits for the plan member.
Interested in learning more about the Canadian Leadership Congress and attending the DC Plan Experience Forum? Contact Joanne Boccia – firstname.lastname@example.org