Transcript: Low-Tracking-Error Strategies – Managing Risk Budgets and Driving Excess Returns
Air Date: September 15, 2025 | 12:00 PM EST
Caroline: Good afternoon, everyone, and thanks for joining us. I'd like to welcome you to this week's Monday Minute Chat hosted by the Canadian Leadership Congress.
The topic today is low-tracking-error strategies and how they can help you manage risk budgets and, of course, drive excess returns.
Here to talk about it is Arup Datta, Head of the Global Quantitative Equity Team at Mackenzie Investments. And here to moderate the conversation is David Wong, Chief Investment Officer and Managing Director, Head of Total Investment Solutions at CIBC Asset Management.
Welcome to both of you. David, I’ll hand things over to you to get us started.
David: Thanks very much, Caroline. Pleasure to be here on the Monday Minute. I’m looking forward to an exciting discussion on a very interesting topic—low-tracking-error investing.
Before we get started, I just want to set the table for the audience on tracking error in general, to make sure we're on common ground. In the media, there can be confusion about what tracking error actually is. Sometimes it's referenced as "tracking difference," which is the relative return of a strategy versus its benchmark.
In reality, tracking error is the volatility of that difference. For example, imagine a strategy that consistently outperforms its benchmark by 10%. By media definitions, that might look like “high tracking error.” But if that 10% came very consistently, year in and year out, it would actually be low tracking error—essentially no tracking error.
So, it’s important to contextualize tracking error alongside excess return. All that said, there seems to be a positive connection between tracking error and excess returns.
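David's distinction can be made concrete with a few lines of Python. The numbers below are hypothetical, not from the discussion: tracking difference is the average of the active (strategy-minus-benchmark) returns, while tracking error is their volatility, so a manager who beats the benchmark by exactly 10% every year has a 10% tracking difference but zero tracking error.

```python
import numpy as np

# Hypothetical annual active returns (strategy minus benchmark), in percent.
# A manager who beats the benchmark by exactly 10% every year:
steady = np.array([10.0, 10.0, 10.0, 10.0, 10.0])
# A manager with the same average outperformance but volatile delivery:
volatile = np.array([25.0, -5.0, 20.0, 0.0, 10.0])

def tracking_stats(active_returns):
    """Return (tracking difference, tracking error) for annual active returns."""
    tracking_difference = active_returns.mean()   # average excess return
    tracking_error = active_returns.std(ddof=1)   # volatility of excess return
    return tracking_difference, tracking_error

for name, series in [("steady", steady), ("volatile", volatile)]:
    td, te = tracking_stats(series)
    print(f"{name}: tracking difference = {td:.1f}%, tracking error = {te:.1f}%")
```

Both managers show the same 10% tracking difference, but only the volatile one has meaningful tracking error, which is exactly the point being made above.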
So, Arup, the first question for you: can you better define the relationship between tracking error and excess returns?
Arup: Thank you, David. My pleasure to be here. As we learned back in business school—many decades ago for me—there’s always a risk/return trade-off. You gave a good stylized example: steady 10% alpha each year results in zero tracking error, which of course rarely happens, but it illustrates the point.
To me, risk-taking must lead to alpha. There’s no point in taking a lot of risk and producing negative alpha. I’ve always seen the two as connected: generally, if you take on more risk and you have the right skill set, you can deliver positive alpha. If you don’t, you end up with negative alpha.
Low-tracking-error strategies have become increasingly popular in recent years. This is my 33rd year working in quantitative equity out of Boston, and I can say allocators who like these strategies value the consistency of alpha delivery.
By definition, if you’re running lower risk versus the benchmark, your deviation from it will also be lower. So, a 0.5% tracking error product won’t see the same swings in alpha as a 5% tracking error product. Ultimately, it depends on what the asset owner wants: more risk with potentially higher alpha, or tighter performance around their allocation targets with less deviation.
David: Great, thanks for that answer. So yes, it’s about trade-offs. You take on more risk, and hopefully you get more return. But beyond a certain point, the additional return tends to fade—information ratios start to decline.
I’d be curious, Arup, are there particular markets—say EAFE, Canada, the U.S., or emerging markets—where you see that relationship hold up longer before information ratios begin to decay?
Arup: Yes, absolutely. Having run strategies across all markets, I’d say the U.S. large-cap space is the most efficient market, while emerging markets and small caps are the least efficient.
So, if you have skill, you want to run more actively in less efficient markets like emerging markets and small caps. EAFE and Canada fall in between, and U.S. large caps require a tighter tracking error.
One thing I’ve observed: when you run with lower tracking error, the information ratio tends to improve across all markets, because you’re delivering alpha more consistently. Again, it comes down to both the manager’s and client’s preferences in terms of positioning.
David: Okay, excellent. At the outset, I mentioned that tracking error on its own lacks meaning—it needs that excess return number. Ideally, if you’re hiring an active manager, you want that number to be positive.
You’ve spent three-plus decades cultivating quantitative signals and thinking about inefficiencies in markets. Could you walk us through how your signal generation has evolved over time, what inefficiencies you try to exploit, and why you think they’ll persist?
Arup: Thank you, David. I’ve always believed that without consistent alpha, there’s no business. Asset managers need to aim for consistency, because it makes life easier both for themselves and for allocators.
My approach is what I call the All-Weather Core strategy. Over my career, I’ve seen markets rotate among three dominant styles:
● Growth (which dominated in 2024 and early 2025),
● Value (which outperformed in 2022), and
● Quality (which mattered in early 2023 when U.S. banks failed).
Rather than betting on just one style, I build all three into the process. So, there’s always a measured dose of value (cheaper than the benchmark), growth (faster growing than the benchmark), and quality (higher free cash flow margins than the benchmark). That way, no matter which style is in favor, the portfolio can navigate the environment.
Capacity management is also crucial. I cap assets under management for each strategy, because it’s easier to deliver alpha with less capital. Once a strategy reaches its limit, I won’t expand it further.
Over time, we’ve improved by learning from mistakes. For example, I missed Amazon in the 1990s and Starbucks in the 2000s—but in the last couple of years, we correctly held Nvidia.
That’s because we’ve evolved, adding longer-term growth and quality metrics since joining Mackenzie about eight years ago. These helped us capture category leaders like Nvidia and Microsoft.
Finally, quant is about avoiding pure data mining. Any metric we adopt must work conceptually and across all eight of our stock-selection models—Canada, China, U.S. large cap, and others—over two to three decades. If it holds up across time and markets, then I trust it will continue to work.
So, my framework is: All-Weather Core, capacity discipline, and conceptual rigor.
David: That’s excellent. I’ve looked at your U.S. track record—it’s ahead of the benchmark in recent years, which is no small feat. Kudos to you and your team.
Let’s circle back to tracking error. From an asset owner’s perspective, sometimes maximizing returns isn’t the only goal. For example, they may prefer lower drawdowns versus the benchmark, even if it means giving up some return.
How do you actually customize tracking error levels in your portfolios?
Arup: Great question. We’ve done this in practice, including earlier this year for your team.
It starts with agreeing on the client’s risk budget—say, 0.5% or 1%. In my view, low-tracking-error strategies are typically under 2%, and often closer to 1% or below.
To get there, we “tighten the knobs”:
● Smaller size deviations from the benchmark,
● Tighter beta controls,
● Stricter position limits,
● Narrower sector and country ranges,
● Reduced turnover.
Essentially, everything is constrained more tightly, which naturally reduces risk. We backtest to ensure the strategy still delivers the desired alpha profile.
In live practice, it plays out as expected. On strong alpha days, the active strategy outperforms more, while the low-tracking-error version still delivers alpha but to a lesser degree. On bad alpha days, the active strategy underperforms more, while the low-tracking-error version cushions the impact.
So, it’s a matter of fine-tuning to the client’s comfort zone, while maintaining consistency.
David: That makes sense. It’s about aligning with the purpose of the asset class and avoiding unintended risks.
Can you talk about the tools you use to measure tracking error on an ex ante basis? Are they off-the-shelf, or proprietary?
Arup: Everything we do is proprietary—stock-selection metrics, risk models, transaction-cost models.
For risk, we use two models:
1. Fundamental risk model – defines risk factors like size or momentum.
2. Statistical risk model – based on price co-movement, using Principal Component Analysis (PCA).
We adopted the statistical model after lessons from 2007–08, when GE behaved more like a financial stock than an industrial. The statistical approach helps capture risks fundamental models may miss.
We run both side by side—sometimes one picks up risks earlier than the other. This dual approach gives me more confidence and helps me sleep at night.
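The statistical model Arup describes can be sketched in a few lines: run PCA on a history of stock returns and let the dominant principal components stand in for risk factors, with no fundamental labels attached. The data below is simulated with a single common driver purely to show the mechanics; this is an illustration, not the firm's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns for 10 stocks over 250 days, driven by one
# common "market" factor plus idiosyncratic noise (all made-up data).
n_days, n_stocks = 250, 10
market = rng.normal(0, 0.01, n_days)
betas = rng.uniform(0.5, 1.5, n_stocks)
returns = np.outer(market, betas) + rng.normal(0, 0.005, (n_days, n_stocks))

# Statistical risk model: principal components of the return covariance.
demeaned = returns - returns.mean(axis=0)
cov = demeaned.T @ demeaned / (n_days - 1)
eigenvalues, eigenvectors = np.linalg.eigh(cov)   # ascending order
eigenvalues = eigenvalues[::-1]                   # sort descending
explained = eigenvalues / eigenvalues.sum()

# With one dominant common driver, the first principal component should
# explain most of the co-movement across the 10 stocks.
print(f"variance explained by PC1: {explained[0]:.0%}")
```

The appeal is exactly the GE example above: the components are estimated from how prices actually co-move, so a stock that trades like a financial gets grouped with financials, whatever its industry classification says.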
For low-tracking-error strategies, the target might be around 1%, while more active ones allow 4–5%.
David: Very interesting. Let’s talk about 2025. We’ve seen many disruptive events, often driven by policy decisions. How have ex ante and realized tracking errors compared this year? And how do you handle unforecastable risks?
Arup: This year has certainly been eventful, though I’d say risks were more extreme during the GFC or the pandemic.
Still, we monitor ex ante versus realized risk closely. For example, after Jerome Powell hinted at potential rate cuts at Jackson Hole, the market reaction was textbook: value and small caps outperformed. Because our process embeds value and quality, we were well positioned.
The key is maintaining discipline, monitoring risks daily, and tightening constraints when necessary. Over decades—including the tech bubble, GFC, pandemic, and now this year—we’ve built strategies resilient enough to handle different shocks.
David: Excellent. Of course, no discussion in 2025 is complete without AI.
How are you incorporating AI—machine learning, NLP—into your signal generation and portfolio construction?
Arup: I can’t be a quant in my fourth decade without addressing AI. Ten years ago, AI/ML/NLP had zero weight in our process. Today, 15–20% of our stock-picking metrics rely on them.
● NLP (Natural Language Processing): We analyze MD&A sections of financial statements and earnings call transcripts in multiple languages—not just English, but also Korean, Chinese, and others. This levels the playing field with fundamental managers, but at scale—20,000 stocks daily.
● Machine Learning: We focus only on high signal-to-noise problems. Predicting short-term returns is too noisy. Instead, we use ML to forecast revenues and other fundamentals, which are more stable and useful. These models have been in live use for three years across all eight of our stock-selection models.
So, AI is now a core component, but used selectively and conceptually.
David: That’s very illuminating. Arup, thank you for sharing your insights today. Caroline, back to you.
Caroline: I thought that was fantastic. I’d love to hear the debates that must happen at your firm between quantitative and fundamental approaches—must be fascinating!
Thank you, David and Arup, for this timely and insightful discussion. It’s especially relevant given today’s global market dynamics.
That wraps this week’s Monday Minute. We’ve got more exciting content and interviews lined up that you won’t want to miss. In the meantime, visit our website, leadershipcongress.ca, for our newsletter.
Thanks again, Arup and David, and thank you all for joining us. See you next time!
Arup: Thank you.