(Excerpted from The Professional Financial Advisor IV by John De Goey)
In the 1990s … it seemed everyone had a take on how to identify top-performing mutual funds. At the time, consumers couldn't get enough information about what mutual funds were, how they worked, and how to build portfolios using them. Annual fund-ranking books were presumably helpful in allowing consumers to make smart investment decisions. No one publishes books that rank funds anymore. Why not? And why did the books so often disagree on which funds were actually best?
If the research was indeed empirical and predictive, shouldn’t they all have identified the same funds? And if the books were so committed to a long-term perspective, why did so many of the recommended funds change from one year to the next—even from the same authors?
Those authors weren't selling timeless, useful information at all; they were simply selling books. And books that need annual updates have the handy attribute of built-in obsolescence: they could be tweaked, repackaged, and sold anew twelve months later. From the authors' perspective, the best thing about these books was their imminent disposability. The second-best thing was likely the near-total lack of accountability they entailed.
Why would any consumer bother to check the long-term track record of a book from, say, 1996 to see how the recommended funds actually performed by 2016? After all, the thinking goes, whatever was recommended back in 1996 must surely no longer be relevant given all that has happened since. In those days, consumers were always on the lookout for the latest investment idea and could always be counted on to run out and buy the latest version of their favourite rating book the next year. Remember that all the authors told their readers that mutual funds were long-term investments. In reality, many of the recommended funds from a generation ago no longer exist, primarily because their performance was poor.
Perhaps more than anything, these books legitimized stock picking and fund picking as valuable pursuits. These books made no mention of the fact that there was no credible research to support this presumptive value proposition. Specifically, although fund picking had never been done reliably in the past, they implied that it could indeed be done reliably—and people believed them.
In short, the books lent credence to the notion that security selection can reliably be used to outperform the markets, even though there is no evidence to support it. That lack of reliability is precisely what is disclosed in prospectuses and advertising campaigns around the world. The books implied that the prospectus disclaimers were worthless when, in fact, it was the other way around.
I took on the challenge of sifting through the most prominent fund-picking books from 1996 (with rankings based on results from June 30, 1995) to see how the ten-year numbers stacked up as of June 30, 2005, and published my findings in The Globe and Mail.
The results were stunning. In all four books, the majority of recommended funds lagged their benchmarks over the ten-year period. In fact, a large proportion of the funds weren’t even good enough to survive the entire ten-year period. Many studies have shown that survivorship bias causes current performance numbers to look considerably better than they really are.
These days, most observers agree that only about 60% of all mutual funds that are launched are around to celebrate their tenth anniversary. Given this attrition rate, it should be obvious that it’s much easier to have a respectable class average if you don’t have to take a massive dropout rate into account. Regardless, the recommended funds had a collective performance record that could only be described as awful. Perhaps even more disconcerting was that the authors generally used only three years’ worth of data to make a pronouncement on a fund’s relative merit. In their minds, thirty-six monthly data points were all that was required to make an informed decision regarding performance.
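The arithmetic of survivorship bias can be made concrete with a toy illustration. The numbers below are purely hypothetical (they are not drawn from any of the books or studies discussed here); they simply show how dropping the worst performers before averaging flatters the "class average":

```python
# Toy illustration of survivorship bias (hypothetical numbers only).
# Suppose 10 funds launch and the 4 worst performers close before
# their tenth anniversary, roughly matching the ~60% survival rate
# cited above.
annualized_returns = [9, 8, 7, 7, 6, 5, 2, 1, 0, -1]  # % per year

# Only the best six funds survive to report a ten-year record.
survivors = sorted(annualized_returns, reverse=True)[:6]

avg_all = sum(annualized_returns) / len(annualized_returns)
avg_survivors = sum(survivors) / len(survivors)

print(f"Average of all launched funds:   {avg_all:.1f}%")    # 4.4%
print(f"Average of surviving funds only: {avg_survivors:.1f}%")  # 7.0%
```

In this sketch, reporting only the survivors makes the average look more than two and a half percentage points better per year than the full cohort actually delivered, which is exactly the distortion the studies on survivorship bias describe.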
Of course, having most funds lag their benchmarks after accounting for expenses would make little difference if people could reliably identify the handful that would ultimately outperform. Alas, this can't be done. In fact, Mark Carhart's ground-breaking 1997 study on persistence in mutual fund performance showed that superior funds could not be reliably identified in advance and that outperformance does not persist. I should also add that near the beginning of some of these books, some authors even included passages saying that they believed good managers could be reliably identified.
However, no rationale was ever given as to why the authors held this opinion. Then, near the end of these books, they sometimes added astonishing admissions such as “research puts the contribution of security selection—that is, choice of specific investments—at only 2% when discussing performance.” In other words, when combining these comments, the translation comes out as something like “we think superior managers can be reliably identified in advance, but we can’t prove it, and it is of almost no consequence anyway.”