
Alpha Theory Blog - News and Insights

51 posts categorized "External Articles"

December 29, 2021

A Lack of Freshness Spoils Returns

 

Alpha Theory can’t tell you how to do your research, but it can tell you when. Using insights from the Alpha Theory All-Manager dataset, we can provide guidance on some of the basics in managing your research process. 

 

Managers understand intuitively that producing investment research and updating that research regularly (i.e. freshness) is important. But how frequently? Should I update my research every 60 days? Every two weeks? Do I need to produce scenarios for all my positions?

 

Key conclusions: 

1. Assign price targets and probabilities to every investment

2. Update them once a month

 

To determine the impact of freshness and coverage on returns, we measured the one-year forward return for the optimal long portfolio for each fund in the Alpha Theory All-Manager dataset on a quarterly basis¹. We then put each fund into four buckets based on its average freshness (days since the last update, or DSLU) and coverage (percentage of positions with price targets). Next, we calculated the return of each quartile bucket to see if returns correlated with freshness and coverage.
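For readers who want to run a similar cut on their own data, here is a minimal sketch in Python. The column names and numbers are hypothetical stand-ins (the Alpha Theory dataset itself is not public); the point is simply how the quartile bucketing and per-bucket averaging work.

```python
import pandas as pd

# Hypothetical fund-quarter data: one row per fund per quarter.
# Columns assumed: 'fund', 'avg_dslu' (days since last update),
# 'coverage' (% of positions with price targets), 'fwd_alpha_1y'.
df = pd.DataFrame({
    "fund": ["A", "B", "C", "D", "E", "F", "G", "H"],
    "avg_dslu": [20, 35, 60, 120, 25, 200, 45, 90],
    "coverage": [1.00, 0.95, 0.80, 0.55, 0.98, 0.40, 0.85, 0.70],
    "fwd_alpha_1y": [0.06, 0.04, 0.02, 0.01, 0.05, 0.00, 0.03, 0.015],
})

# Quartile 1 = freshest (lowest DSLU) and best-covered funds.
df["freshness_q"] = pd.qcut(df["avg_dslu"], 4, labels=[1, 2, 3, 4])
df["coverage_q"] = pd.qcut(-df["coverage"], 4, labels=[1, 2, 3, 4])

# Average forward alpha by quartile: does it fall monotonically from Q1 to Q4?
print(df.groupby("freshness_q", observed=True)["fwd_alpha_1y"].mean())
print(df.groupby("coverage_q", observed=True)["fwd_alpha_1y"].mean())
```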

 

We found that funds that were diligent enough to place in the top quartile produced more than four times as much alpha as the bottom quartile, increasing monotonically from bottom to top. The median update frequency for the top quartile was 25 days (once a month updates), meaning the top funds updated more than 10x as often as managers in the bottom quartile. Additionally, managers in the top quartile had research on all active positions.  

 

[Chart: A Lack of Freshness Spoils Returns]

 

As a fundamental manager, you may argue that very rarely does something meaningful happen every 30 days that warrants a forecast update. We would counter that price is an important signal. For example, let's say you initiated coverage on a position at $100 with a 70% chance of going to $150 and a 30% chance of going to $50. If the price moves from $100 to $125, wouldn't you say the probability of reaching your bull target has changed? While $150 may still be the price suggested by your model, updating the probabilities of your scenarios to more accurately reflect likely outcomes allows the OPS model to make better sizing recommendations.
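To make the arithmetic concrete (a simplified illustration, not a description of the OPS model's internals): if the two price targets are held fixed, the probability implied by the new market price can be backed out and compared to the original 70/30 split.

```python
# Two-scenario example from the text: bull target $150, bear target $50.
bull, bear = 150.0, 50.0
p_bull = 0.70

# Expected value under the original probabilities.
ev = p_bull * bull + (1 - p_bull) * bear           # $120

# If the stock trades up to $125 and the targets are unchanged, the probability
# consistent with that price solves p*150 + (1-p)*50 = 125.
price = 125.0
implied_p = (price - bear) / (bull - bear)          # 0.75

print(f"Original EV: ${ev:.0f}, implied bull probability at ${price:.0f}: {implied_p:.0%}")
```

In this toy example, the move from $100 to $125 is consistent with the bull-case probability drifting from 70% to roughly 75%, which is exactly the kind of small update that improves sizing recommendations.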

 

In addition, Daniel Kahneman's new book "Noise" describes how the same expert can take the same information and come to different conclusions at different times, and how the best answer is the average of those forecasts. This means that an analyst may come to a different price target and probability conclusion on a different day, and that constant refinement (updating once a month) is healthy and leads to more accurate forecasts.

 

Finally, research from our friends at Good Judgement Inc. shows that over the past six years, their top forecasters (orange) updated roughly 4x as often (11 updates vs. 3 updates per question) as non-Superforecasters. Update frequency has a high correlation with outperformance, and incorporating even small additional bits of information (Superforecaster updates were roughly half the size of non-Superforecasters') that either support or detract from the probability of a given outcome leads to better results over time.

 

[Chart: A Lack of Freshness Spoils Returns, Chart 2]

 

We are always interested in learning more about your research process and where Alpha Theory can help. Alpha Theory is a process enhancement tool, creating a space that systematizes how you conduct and use research for allocation decisions. Please reach out to us with any questions so we can better optimize your workflow to generate more alpha.

 

¹ To normalize for different benchmarks, we calculated alpha on an idio+sector basis using the Axioma World-Wide Equity Factor Risk model, which removes performance derived from all their tracked factors, excluding sector.

 

November 29, 2021

Getting Comfortable with Many, Micro Updates

 

For years we've worked closely with the folks at Good Judgement Inc. of "Superforecasting" fame. One of our friends there, Chris Karvetski, recently published a white paper called "Superforecasters: A Decade of Stochastic Dominance" on Superforecasters' attributes and skills. For the analysis, Chris studied 108 forecast questions with 167,000 forecasts to compare the accuracy and approach of Superforecasters with those of everyone else.

 

From an accuracy perspective, Superforecasters dominate with accuracy that is 36% better (0.166 error for Superforecasters versus 0.259 for general forecasters).
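The 36% figure is simply the relative reduction in forecast error:

```python
# Relative improvement in forecast error (error figures from the white paper).
superforecaster_error = 0.166
general_error = 0.259
improvement = 1 - superforecaster_error / general_error
print(f"Relative accuracy improvement: {improvement:.0%}")   # ~36%
```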

 


 

Alpha Theory clients forecast stock price movement. As such, the question we should ask is “how can we be more like Superforecasters?” Well, Chris broke down the frequency and magnitude of updates and I believe the answer is clear.

 


 

Superforecasters update their forecasts ~4x more often, which leads to adjustments that are about half the size. Imagine steering a boat towards a lighthouse. You can choose to make 3 major adjustments or 11 minor adjustments. Which method is going to get you closer to the lighthouse?

 

As analysts, to gain better forecast accuracy, we should frequently update our price targets and probability forecasts. Obviously, new information warrants updates, but we should still make updates even when there is no new information. As The Verve says, "we're a million different people from one day to the next." We all have what Daniel Kahneman calls occasion noise, which basically means that we change our opinions without the facts changing. Our mood impacts our forecasts. To get a truer sense of our own opinions, we should ask ourselves the same question at different times.

 

Let’s be like Superforecasters and get comfortable with many, micro updates. In our next blog post, we’ll explore the impact that update frequency has on returns.

 

June 30, 2021

Joe Knows: Why Active Managers Should Codify Their Process

 

A colleague of mine forwarded an article titled "All Active Managers Should Run Systematic Replicas of Their Portfolios" by Joe Wiggins, who is head of portfolio management at Aberdeen Standard. Well, Joe, the folks at Alpha Theory agree. I have highlighted a brief section, but the whole article is worth a read:

 

In its most basic form, all that is required is a set of portfolio construction rules (number of positions, position sizes, concentration) and criteria about when to buy or sell securities. This can be as simple or complex as is desired, provided it can be managed and maintained by a computer with minimal human involvement.

There are three key reasons why such an approach should be valuable to active fund managers:

 

Idea Generation: Although not its primary purpose, it can function as a buy and sell idea generation tool that is more sophisticated than a screen or filter. If you continue to hold a stock that the systematic version of our strategy has sold, you should be able to justify why.

 

Noise Cancelling: The most impactful feature of the approach is the ability to observe investment decisions being made absent much of the noise that influences human judgement. There are a multitude of factors that lead us to make inconsistent and erratic choices. Running a systematic version of a fund removes this issue by focusing solely on the rules prescribed.  How much of the potential loss in rigour and detail is compensated for by the removal of noise?

 

Identifying Value-Add: Active fund managers often struggle to convey what their true value-add or edge is. Too often it is overly generic (‘growth at reasonable price’) or suitably vague (some kind of ‘secret sauce’ or ‘art’). This is a problem. If fund managers are attempting to sell a skill at a high price, it would be helpful to know what it is. Running a systematic version of a fund can be incredibly beneficial in this regard. 

 

March 30, 2021

Capital Allocators Book release by Ted Seides

Our friend Ted Seides has recently released a great book titled “Capital Allocators” and WE’RE IN IT! The book distills the learnings and best practices of his 180+ podcasts and is a treasure trove of great insights. There are four things that make the book special:

1. Ted gets amazing people.

2. Many of these people don't publish their thoughts and this is our only access to them.

3. Ted has distilled the best of these learnings into a “toolkit” you can apply to your own investing.

4. ALPHA THEORY IS INCLUDED!

 

See below for the section on Alpha Theory (italicized paragraph is edited to focus on Alpha Theory):

 

Cameron Hight was a frustrated former hedge fund manager at a smaller shop who felt he did not have the requisite tools to improve his own skills. He set aside managing money to create a software company that would help portfolio managers.

 

Cameron Hight had an insight that has helped hedge fund managers big and small optimize portfolio construction. He believed markets move so quickly that a portfolio manager cannot consider all the variables to optimize position sizing in real time. His business, Alpha Theory, strives to make the implicit explicit by putting numbers and probabilities on position sizing decisions.

 

Alpha Theory uses the investment team's research to calculate risk and reward in real time. A thorough analyst already has models and probability scenarios for the potential path a stock might take. Absent new information, each movement in the stock price changes the attractiveness of risk and reward. Alpha Theory models conviction-weighted sizing based on the investment team's research and compares the result to the actual portfolio position size. Over 15 years of operation, Cameron has reams of data showing that his seemingly simple tool has added substantial returns for clients who employ it in their practice.

 

His data also revealed an important conclusion about many fundamental managers. Good active managers perform far better in their larger positions than they do in smaller names. Alpha Theory wrote “The Concentration Manifesto," preaching that managers and allocators would both be better served if managers focus on more concentrated portfolios of their best ideas.

 

Summary

 

Data analysis almost never gives an allocator the answer, but the tools employed are useful in measuring risk and return at the portfolio and manager level, and in making informed judgements about manager selection. The availability of data and the entrepreneurs at the forefront of assessing it enable CIOs to be more informed. Asking the right questions may reveal managers who eschew modern technology and are a step behind the pack.

 

December 29, 2020

Optimizing Usage for Optimal Returns (Part 2) - Position Level Analysis

 

This article was co-written by Billy Armfield, Data Scientist of Alpha Theory, and Cameron Hight, CEO of Alpha Theory.

 

In our last edition, "Optimizing Usage for Optimal Returns", we explored the impact of forecast freshness and portfolio coverage on one-day forward returns at the fund level. Freshness and performance showed a nearly monotonic relationship. Coverage, on the other hand, had a more parabolic shape, with the lowest performance coming from the middle of the range. To explore these relationships further in this edition, we are investigating data at the ticker level instead of the fund level. We ask:

1. Does coverage show more predictive power at the ticker level than it did at the fund level?

2. Is freshness correlated with performance at the ticker level?

 

Coverage

Ten years of data on Alpha Theory clients have shown that process-oriented investing yields higher returns. A large part of the process centers around entering and updating scenarios that forecast future stock prices and the probability of reaching those prices. Implicit in this philosophy is the idea that if you are going to make an investment, it should be supported by research. As part of our exploration of coverage and freshness at the ticker level, we regressed coverage against one-day forward price change. In this case, coverage is treated as a binary variable based on whether a forecast has been made on the position or not.
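A minimal sketch of that regression, using simulated stand-in data (the real panel and column names are Alpha Theory's own), might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical position-day panel standing in for the ticker-level data:
# 'has_target' is the binary coverage flag, 'fwd_ret_1d' the next-day return.
n = 5000
has_target = rng.integers(0, 2, n)
fwd_ret_1d = 0.0002 * has_target + rng.normal(0, 0.02, n)
df = pd.DataFrame({"has_target": has_target, "fwd_ret_1d": fwd_ret_1d})

# Regress one-day forward return on the binary coverage flag.
X = sm.add_constant(df["has_target"])
model = sm.OLS(df["fwd_ret_1d"], X).fit()
print(model.summary().tables[1])   # coefficient and t-stat on 'has_target'
```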

 


 

Unlike the results of measuring coverage by the fund, when measured at the ticker level, there is a distinct positive relationship with one-day forward price change. Simply stated, positions with price targets are more likely to outperform positions without price targets. 

 

Freshness

There is more than one way to bake a Christmas cookie, so we first examined which measure of freshness has a stronger relationship with one-day forward price change. We examined two variations. The first measures freshness in terms of the number of days since a forecast was last updated (DSLU). The second treats it as a binary feature, where its value is 1 if price targets were updated in the last ninety days and zero otherwise. We regressed both features against one-day forward price change for all positions in Alpha Theory's historical database. The results for the DSLU method can be found in Figure 1, and the results for the binary method in Figure 2.

 

Figure 1 (Days Since Last Update)


 

Figure 2 (Binary: Updated in last 90 days)

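A sketch of how the two freshness measures can be constructed and regressed separately, again on simulated stand-in data rather than the actual database, is below:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

# Hypothetical panel: days since last update (DSLU) and next-day return.
dslu = rng.integers(0, 365, n)
fwd_ret_1d = -0.000001 * dslu + rng.normal(0, 0.02, n)
df = pd.DataFrame({"dslu": dslu, "fwd_ret_1d": fwd_ret_1d})

# Measure 1: freshness as a continuous count of days since last update.
m1 = sm.OLS(df["fwd_ret_1d"], sm.add_constant(df["dslu"])).fit()

# Measure 2: freshness as a binary flag (updated within the last 90 days).
df["fresh_90d"] = (df["dslu"] <= 90).astype(int)
m2 = sm.OLS(df["fwd_ret_1d"], sm.add_constant(df["fresh_90d"])).fit()

print(m1.params, m1.tvalues, sep="\n")
print(m2.params, m2.tvalues, sep="\n")
```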

 

The number of days since the last update has a negative coefficient and t-statistic, which makes sense, given that one might reasonably expect a lower degree of certainty of positive future returns when forecasts are out of date. While intuitive, its lower t-statistic and higher p-value mean this relationship deserves further investigation before drawing any conclusions. The binary feature, however, has more conclusive results. Having coverage no older than ninety days does have a positive relationship with one-day forward price change, with a higher t-statistic and lower p-value.

 

The importance of creating price targets and keeping coverage fresh can be summarized by the annualized price change of fresh positions vs. stale and uncovered positions.
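Assuming you have average one-day price changes for each group (the numbers below are illustrative only, not our measured results), annualizing is just compounding over roughly 252 trading days:

```python
import pandas as pd

# Hypothetical average one-day price changes by group (illustrative numbers only).
avg_daily = pd.Series({"fresh": 0.0004, "stale": 0.0001, "uncovered": 0.00005})

# Compound the average daily change over ~252 trading days to annualize it.
annualized = (1 + avg_daily) ** 252 - 1
print(annualized.round(4))
```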

 


 

Scattered, Fresh, and Covered

Based on the analysis above, it is well worth the time to create price targets and keep them fresh as they are both important predictors of future returns. When thinking through how your firm is going to improve in 2021, use these empirical proofs as evidence that the team should seek process as a proven way to improve returns.

 

October 16, 2020

Best Ideas Update

 

The Cohen, Polk, and Silli "Best Ideas" paper was first released in 2005, and Alpha Theory incorporated the 2010 draft in the Concentration Manifesto as an empirical proof (#3, to be exact) of why managers should concentrate. An updated version of the "Best Ideas" paper was released in June. It expands the data set from 24 to 37 years and reconfirms the earlier findings that active managers are 1) good at selecting and sizing a few "Best Ideas" but 2) then dilute those "Best Ideas" with a bunch of positions that are basically random noise.

 

The “Best Ideas” portfolio outperforms the rest of the portfolio and benchmarks by 2.8% to 4.5% per year with high statistical significance, across a thousand-plus mutual and hedge fund managers, and with consistency amongst managers and from year-to-year.

 

This abnormal performance appears permanent, showing no evidence of subsequent reversal, even several years later. Interestingly, cross-sectional tests indicate that active managers’ best ideas are most effective in illiquid, growth, momentum stocks, or for funds that have outperformed in the past.

 

Given the strong empirical evidence for concentration, why don’t managers concentrate more on their best ideas? The “Concentration Manifesto” highlights myriad reasons managers should concentrate but does not investigate why they do not. The “Best Ideas” paper does:

 

We identify four reasons managers may overdiversify.

 

1. Regulatory/legal. A number of regulations make it impossible or at least risky for many investment funds to be highly concentrated. Specific regulations bar overconcentration; additionally, vague standards such as the “Prudent man” rule make it more attractive for funds to be better diversified from a regulatory perspective. Managers may well feel that a concentrated portfolio that performs poorly is likely to lead to investor litigation against the manager. Anecdotally, discussions with institutional fund-pickers make their preference for individual funds with low idiosyncratic risk clear. Some attribute the effect to a lack of understanding of portfolio theory by the selectors. Others argue that the selector’s superior (whether inside or outside the organization) will tend to zero in on the worst-performing funds, regardless of portfolio performance. Whatever the cause, we have little doubt that most managers feel pressure to be diversified.

2. Price impact, liquidity, and asset-gathering. Berk and Green (2004) outline a model in which managers attempt to maximize profits by maximizing assets under management. In their model, as in ours, managers mix their positive-alpha ideas with a weighting in the market portfolio. The motivation in their model for the market weight is that investing in an individual stock will affect the stock's price, each purchase pushing it toward fair value. Thus, there is a maximum number of dollars of alpha that the manager can extract from a given idea. In the Berk and Green model, managers collect fees as a fixed percentage of assets under management, and investors react to performance so that in equilibrium each manager will raise assets until the fees are equal to the alpha that can be extracted from their good ideas. This choice leaves the investors with zero after-fee alpha. Clearly, in the world of Berk and Green (and in the real world of mutual funds), managers with one great idea would be foolish to invest their entire fund in that idea, for this would make it impossible for them to capture a very high fraction of the idea's alpha in their fees. In other words, while investors benefit from concentration as noted above, managers under the most commonly used fee structures are better off with a more diversified portfolio. The distribution of bargaining power between managers and investors may therefore be a key determinant of diversification levels in funds.

3. Manager risk aversion. While the investor is diversified beyond the manager’s portfolio, the manager himself is not. The portfolio’s performance is likely the central determinant of the manager’s wealth, and as such we should expect them to be risk-averse over fund performance. A heavy bet on one or a small number of positions can, in the presence of bad luck, cause the manager to lose their business or their job (and perhaps much of their savings as well, if they are heavily invested in their own fund, as is common practice). If manager talent were fully observable this would not be the case – for a skilled manager, the poor performance would be correctly attributed to luck, and no penalty would be exacted. But when ability is being estimated by investors based on performance, risk-averse managers will have an incentive to overdiversify.

4. Investor irrationality. There is ample reason to believe that many investors – even sophisticated institutional investors – do not fully appreciate portfolio theory and therefore tend to judge individual investments on their expected Sharpe ratio rather than on what those investments are expected to contribute to the Sharpe ratio of their portfolio. This behavior is consistent with the general notion of "narrow framing" proposed by Kahneman and Lovallo (1993), Rabin and Thaler (2001), and Barberis, Huang, and Thaler (2006). For example, Morningstar's well-known star rating system is based on a risk-return trade-off that is highly correlated with Sharpe ratio. It is very difficult for a highly concentrated fund to get a top rating even if average returns are very high, as the star methodology heavily penalizes idiosyncratic risk. Since a large majority of all flows to mutual funds are to four- and five-star funds, concentrated funds would appear to be at a significant disadvantage in fundraising. Other evidence of this bias includes the prominence of fund-level Sharpe ratios in the marketing materials of funds, as well as maximum drawdown and other idiosyncratic measures. Both theory and evidence suggest that investors would benefit from managers holding more concentrated portfolios.

Our view is that we fail to see managers focusing on their best ideas for a number of reasons. Most of these relate to benefits to the manager of holding a diversified portfolio. But if those were the only causes, we would be hearing an outcry from investors about overdiversification by managers, while in fact, such complaints are rare. Thus, we speculate that investor irrationality (or at least bounded rationality), in the form of manager-level analytics and heuristics that are not truly appropriate in a portfolio context, plays a major role in causing overdiversification.

 

The reasons for diversification (not concentration) are real and will require systematic change and mutual agreement from both funds and LPs. Given the state of flows from active to passive, there may be a strong enough catalyst for that change.

 

February 8, 2020

Doing More with Less – Cliff Asness Illiquidity Discount Article

 

We're all familiar with controls that point you towards the right decisions because knowing what to do and doing it are not the same thing. This is why our cars ding until we put our seatbelts on, why there are signs reminding servers to wash their hands, and why we hire personal trainers. But what about blinders that help you avoid making bad decisions?

 

There are studies showing that a store offering more options can cause customers to buy less because the extra information overwhelms the buyer and leads to no decision at all. More germane to our field, I know of funds where the PM restricts themselves and their analyst teams from checking P&L because they've found that doing so leads to poor decision making. In this case, less information is more. Would you pay for less information?

 

In Cliff Asness's latest piece, The Illiquidity Discount, he discusses that concept in the context of asset pricing. What is it worth to not know the price of an asset if knowing the price would cause you to sell and buy at exactly the wrong times? In that case, the artificially smoothed volatility of infrequent pricing is a feature, not a bug.

 

The preference for illiquid, infrequently-priced assets that don’t smash you in the face with their volatility (even though it’s really there) could be rational in the same sense. Perhaps a levered small-cap portfolio is a rational investment for long-term investors, but there’s little chance they’d stick with it full-cycle. However, they find PE easy to stick with? It’s not hard for me to imagine these are both true for some (or many).

 

Finally, to address our main topic, what’s the next implication of extreme illiquidity and pricing opacity being a feature, not a bug? Well, you pay up in price (and give up in expected return) for features you value (not bugs you can’t stand). Attractive smoothness of returns may not come for free. If illiquidity is more positive than negative to many investors, it could easily mean paying a higher price and accepting a somewhat lower return to obtain it. Sounds really counter-intuitive, I know. But it also sounds, to me, pretty plausible.

 

I appreciate those who question conventional wisdom, especially when it is done in the pursuit of better decision making. There is something beautiful about simple hacks that help us make better decisions (i.e., that's what we do at Alpha Theory). In the coming months (and years), we'll be spending more time at Alpha Theory thinking about whether the information we present (or the timing of that information) may lead to sub-optimal decision making, and what changes we can make to improve how and when information is delivered.

 

November 1, 2019

Concentrating on Concentration: New Data on Portfolio Concentration

 

As most of our readers know, we are proponents of more concentrated portfolios. In May of 2017, we released our Concentration Manifesto, an attempt to start a critical dialogue between managers and allocators to ultimately improve the active management process. It is a conversation that requires both sides to cast aside outdated thinking and embrace the notion that concentration is in their best interest.

 

And we’re seeing it in external data:

 

[Chart: Exhibit 19]

 

And in our own managers:

 

[Chart: Average Position Size]

 

This conversation began well before our Concentration Manifesto. We recently found an April 2014 study by Cambridge Associates outlining the "Hallmarks of Successful Active Equity Managers."

 

Cambridge Associates analyzed a selection of managers to isolate attributes that lead to success. In their findings, active share and concentration were major contributors. Their analysis¹ found that concentrated portfolios (US equity with fewer than 30 positions, and US small-cap and EAFE equity with fewer than 40 positions) generated between 100bps and 170bps of additional performance over non-concentrated portfolios.

 

[Table 3: Results of Active Share Analysis]

 

The performance difference for concentrated managers held after fees and worked across various strategies. The fractal nature (it still works when you break it into different strategies) lends additional validation to concentration's benefits.

 

In the Cambridge article, we found a reference to another concentration study.

 

Baks, Busse, and Green published “Fund Managers Who Take Big Bets: Skilled or Overconfident” in 2006. The abstract says it all:

 

We document a positive relation between mutual fund performance and managers' willingness to take big bets in a relatively small number of stocks. Focused managers outperform their more broadly diversified counterparts by approximately 30 basis points per month or roughly 4% annualized. The results hold for mimicking portfolios based on fund holdings as well as when returns are measured net of expenses. Concentrated managers outperform precisely because their big bets outperform the top holdings of more diversified funds. The evidence suggests that investors may enhance performance by diversifying across focused managers rather than by investing in highly diversified funds.

 

Their sample covers funds from 1979-2003, and the return advantage per month ranges between +1 and +67 basis points depending on the methodology for measuring fund concentration and how many deciles are included. That equates to a range between +0.12% and +8.34% on an annualized basis for concentrated managers.
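The annualized range follows directly from compounding the monthly advantage, as this quick check shows:

```python
# Compound a monthly return advantage over 12 months to annualize it.
for monthly_bps in (1, 30, 67):
    annualized = (1 + monthly_bps / 10_000) ** 12 - 1
    print(f"{monthly_bps:>2} bps/month -> {annualized:.2%} per year")
```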

 

[Chart: Fund performance vs. portfolio weight]

 

We continue to believe that there is demonstrable skill in equity managers, that this skill could be harnessed in better ways than the average manager typically demonstrates, and that concentration is the simplest way to improve a manager who possesses positive stock-picking skill.

 

¹ eVestment Alliance Database: September 2007 to June 2013; US large-cap core equity, US large-cap growth equity, US large-cap value equity, US small-cap core equity, US small-cap growth equity, US small-cap value equity, and all EAFE equity

 

Download full version of the Concentration Manifesto

 

August 1, 2019

The Concentration Manifesto for Shorts

 

We were recently reading the great research that comes from our friends at Novus and saw a reference to a paper by Della Corte, Kosowski, and Rapanos (2019). This paper analyzes 1.7 million short positions from 585 managers that contributed to the European Union Short Disclosure Information dataset from 2012-2018. They found that the highest-conviction quintile of shorts (P5, as measured by position size) outperformed the lowest-conviction quintile (P1). In fact, the highest-conviction shorts were the only cohort with a mean return that was negative on an absolute basis (a positive contribution for shorts).

 

[Panel A: Equally-weighted Portfolios]

 

After applying a six-factor model, a strategy going long the low-conviction shorts and short the high-conviction shorts had an alpha of 11%. Ideally, the results would show a gradual decline from P1 to P5, but P4 does not follow that trend. Nevertheless, there is demonstrable skill in short selection for the largest position sizes, which provides further support for the Concentration Manifesto.

 

Download full version of The Concentration Manifesto

 

May 3, 2018

Positive Skew…Part 2 – Maybe It’s Not So Bad for Active Managers After All

In my last post, I discussed the negative impact of positive skew on active managers: more than 50% of all stocks in a given market underperform the average because there are stocks that go up more than 100% but no stocks that go down more than 100%. This means that if you pick a random portfolio of stocks from the market, you have a greater than 50% chance of underperforming the market, because most portfolios will not hold the few stocks that went up more than 100%.

 

Because of the popularity of the last post and the TV appearance, we spent time digging further into the data to answer questions posed by readers and viewers. We noticed that the average stock return and the index return tend to differ.

 

And that is the problem with using the average stock return as the hurdle for funds. Investors are not measured against the average stock return; they're measured against the benchmark, typically the S&P 500. Most indexes are market-cap weighted, meaning that the index return and the average stock return are generally different.

 

In the example below, we took the current S&P 500 constituents, calculated their returns since the beginning of 2012, and compared the average (equal-weighted) return to the actual return of the S&P 500. The S&P 500 over that period was up 136% vs. 175% for the average stock (this isn't a perfect analysis because the constituents changed over that time, but it is an approximation).

 

[Chart: Positive Skew, Part 2]

 

The graph above shows the distribution of individual stock returns over that period. You can see the outliers that pull the average stock return (red line) up to a point where 63% of individual securities underperform the average of 175%. But the S&P 500 was up 136% (green line) over that period, so only 51% of stocks underperformed the benchmark. Pretty much a coin flip.
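A simple way to reproduce this kind of comparison on any return series is to count how many constituents fall below the equal-weighted average versus the index return. The sketch below uses a positively skewed toy distribution rather than real constituent data; the 136% index figure is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical total returns for 500 constituents over the period (positively
# skewed: a few large winners pull the mean well above the median).
returns = rng.lognormal(mean=0.8, sigma=0.6, size=500) - 1

equal_weighted_avg = returns.mean()
index_return = 1.36   # cap-weighted index return from the text (+136%)

pct_below_avg = (returns < equal_weighted_avg).mean()
pct_below_index = (returns < index_return).mean()
print(f"Underperform average: {pct_below_avg:.0%}, underperform index: {pct_below_index:.0%}")
```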

 

We brought positive skew up with Andrew Wellington at Lyrical Asset Management. They have done some great analysis comparing the top 1000 stocks by market cap in the US to the S&P 500 each year going back to 1998.

 

[Chart 2: percentage of stocks beating the S&P 500 by year]

Source: FactSet and Lyrical Asset Management

 

As you can see in the chart above, the average stock beating the S&P 500 index is a coin flip. For the past 20 years, the likelihood of any individual stock beating the S&P 500 in any given year has been 50.2%. If I build random portfolios using the top 1,000 stocks in the US, there is a high likelihood that the portfolio return will be close to the S&P 500 return.
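Here is a small Monte Carlo sketch of that claim, using a hypothetical 1,000-stock universe. Note the assumptions: returns are drawn symmetrically, and the index is proxied by the universe mean rather than a cap-weighted construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-year returns for a 1,000-stock universe (illustrative only).
universe = rng.normal(0.08, 0.30, 1_000)
index_return = universe.mean()        # proxy for the benchmark return

# Draw many random 50-stock, equal-weighted portfolios and compare to the index.
n_portfolios, n_holdings = 10_000, 50
picks = rng.choice(universe, size=(n_portfolios, n_holdings), replace=True)
portfolio_returns = picks.mean(axis=1)

beat_rate = (portfolio_returns > index_return).mean()
print(f"Random portfolios beating the index: {beat_rate:.0%}")
```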

 

Some years are clearly better than others. ’98 and ’99 were horrible stock picking years. If you didn’t own the few stocks that had meteoric rises, you had a high likelihood of underperforming the S&P 500. ’01 and ’02 were good stock picking years. Over 60% of stocks beat the index.

 

What this means is that any given fund's batting average should be compared to the batting average of the universe of stocks against the benchmark. A 54% batting average in '98 is heroic; in '03, 54% is just in line. Take a look at 2017: it was the 3rd hardest stock-picking environment of the last 20 years using this metric.

 

But what about other indices? Thankfully, our friend Julien Messias from Quantology Capital Management has done the analysis (1999-2014) comparing the S&P 500 and Russell 2000. Below are thoughts from Julien on the topic:

 

The Russell 2000 components returns exhibit a much more leptokurtic distribution (fat-tailed) than S&P 500, meaning that you have a huge part of the index’s components suffering from huge loss (or even bankruptcies), with an average of more than 60% of the components underperforming the index performance and 2% of the components with huge performance (more than 500% per year). The performance of the index is therefore pulled up by those latter 2%.

Assuming a stock-picker operates at random to choose its investment within the index universe, this means that his performance should be closer to the median performance of the components, than to the index performance itself. Therefore, given that the median performance is almost always lower than the index performance (see chart below), an investor in Russell 2000 securities is very likely to underperform and very unlikely to outperform.

The S&P 500 distribution is much more mean-centered, with very shallow/thin tails, meaning that the average stock picker is much more likely to generate a performance close to the index performance (graph from Lyrical AM) and less likely to underperform.

 

[Chart 3]

Source: Quantology CM

 

The Russell 2000 displays the impact of positive skew more clearly because it is less influenced by the contribution of a few very large companies. AAPL, MSFT, GOOG, and AMZN make up 12.2% of the S&P 500, while the Russell 2000's top 4 positions make up 1.7% of that index. The result is that the average return of all stocks in the Russell 2000 is much closer to the Russell 2000 index return than the average of all stocks in the S&P 500 is to the S&P 500 return (recall the large difference in the 2012 to 2018 analysis, which showed the S&P 500 return was 136% vs. a 175% average across all stocks).

 

This means that the index chosen as the benchmark for your fund has a profound impact on your ability to beat it. More specifically, the probability of beating the S&P 500 with a random portfolio is 50%; for the Russell 2000, it's 42%.

 

There has been quite a bit of press regarding positive skew. It's a great conversation, but for the average fund that is measured against the S&P 500, the impact is overblown. Almost every investor is compared against a benchmark. I recommend that you dig a layer into your benchmark and measure its positive skew, the likelihood of beating the average stock return, and the likelihood of beating the index return, and then compare your hit rate against the universe's hit rate each year to know how difficult or easy any given year was.

 

Quantology Capital Management Russell 2000 and S&P 500 Analysis:

[Tables: Quantology CM analysis of the Russell 2000 and S&P 500]

Does not include management fees

Data is adjusted for index turnover, with updates every year