

Alpha Theory Blog - News and Insights

27 posts categorized "Superforecasting"

June 19, 2020

The Short End of the Stick

 

Over the years, we have consistently heard that the short portion of clients’ portfolios has been a major drag on returns. The problem is that when we do portfolio performance reviews with our clients, we see that the short book, which is consistently negative, is generally less negative than a short position in the S&P 500 or MSCI World would have been.

 

To explore this further, we wanted to test a simple strategy: create an aggregated portfolio of client short positions and see how it performed against the major indices. The absolute result was an average annualized return of -4.02%, which confirms the industry dogma that the short book is a drag. That said, the short portfolios provided consistently positive alpha (the short book’s return minus the return of shorting the index).

 

[Chart: aggregated client short portfolio returns vs. S&P 500 and MSCI World]

Source: Omega Point

 

The total return of clients’ short portfolios is -23.74% over the 5+ year period, or an annualized return of -4.02%. This compares to a 10.20% annualized return for the S&P 500 (6.16% annualized alpha) and 6.35% for MSCI World (2.33% annualized alpha). Taking the midpoint, that is roughly 4% of alpha per year that our clients have generated for over 5 years.
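To make the alpha arithmetic concrete, here is a minimal sketch using the annualized figures above (tiny differences from the quoted 6.16% and 2.33% come from rounding in the source data):

```python
# Short-book alpha = short book return minus the return of shorting
# the index (i.e., minus the negative of the index return).
short_book = -0.0402   # clients' aggregated short book, annualized
sp500 = 0.1020         # S&P 500, annualized
msci_world = 0.0635    # MSCI World, annualized

alpha_vs_sp500 = short_book - (-sp500)
alpha_vs_msci = short_book - (-msci_world)

print(f"Alpha vs. S&P 500:    {alpha_vs_sp500:+.2%}")   # ~ +6.2%
print(f"Alpha vs. MSCI World: {alpha_vs_msci:+.2%}")    # ~ +2.3%
```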

 

Breaking it down by year, the alpha contribution was consistent except for 2016, when it was roughly breakeven, showing that Alpha Theory managers have been dependable alpha generators on the short side.

 

[Chart: short book alpha contribution by year]

 

The sustained positive trend of the overall market over the years makes it almost impossible to create absolute returns from shorting. However, for investors looking to generate a less volatile stream of returns, a short book that has a negative correlation with the long book and provides consistent alpha is extremely valuable. Alpha Theory’s clients are consistent short alpha contributors. This is, of course, because of their stock selection skill, but I would posit that their process discipline is just as important and is one of the reasons their alpha returns have been so consistent.

 

April 4, 2020

A Fundamentalist’s Attempt to Take Advantage of Factor Moves

 

This article is written in collaboration with Omega Point

  

Introduction

For many fundamental investors, factors play little to no role in their day-to-day portfolio construction process. In fact, many fundamental investors aren’t quite sure what factors are. Below we’ll give a basic primer on factors and a potential strategy for fundamental investors to take advantage of irrational movements in their investments caused by factors.

 

What are Factors?

Factors are tools designed to help explain why stocks move. The simplest factor is the market. If the market is down 15% in a month, you would expect the average stock to be down 15% as well. So if a particular stock is down only 10% (ignoring beta for this simplified example), it has generated 5% of positive alpha (to make it really confusing, alpha is also called idiosyncratic return or specific risk). Basically, alpha is the fundamental piece left over.

 

But the market is not the only “factor” that can explain why a stock moved; it was just the first. Academics and quants have been coming up with new factors since Fama and French added Size and Value to Market in the early ’90s. Below are some common ones, but there are many more:

 

[Table: common style factors]

 

How is a Factor Measured?

Let’s use the Value factor as an example. Imagine I take every stock and measure an average price-to-book (P/B) of 3.5x with a standard deviation of 1.5x. Then I take each stock and measure its Z-Score, which is simply that stock’s P/B minus the 3.5x average, divided by the 1.5x standard deviation. If a stock has a P/B of 5.0x, it has a Z-Score of 1.0 ((5.0 - 3.5) / 1.5). A stock with a P/B of 3.5x (equal to the average) would have a Z-Score of 0.0, and one with a P/B of 2.0x would have a Z-Score of -1.0. The Z-Score is simply how many standard deviations away from the average a particular stock sits.
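A minimal sketch of that arithmetic, using the hypothetical 3.5x average and 1.5x standard deviation from the example:

```python
def value_z_score(p_to_b, universe_mean=3.5, universe_std=1.5):
    """How many standard deviations a stock's P/B sits from the average."""
    return (p_to_b - universe_mean) / universe_std

print(value_z_score(5.0))  #  1.0 -> one std dev richer than the average
print(value_z_score(3.5))  #  0.0 -> exactly average
print(value_z_score(2.0))  # -1.0 -> one std dev cheaper than the average
```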

 

To drive the point home, if you hold a 5% position in a stock with a Value Z-Score of 2.0 and another 5% position in a stock with a Value Z-Score of -2.0, your portfolio has no net exposure to the Value factor. The same holds if you are long and short equal amounts of securities that have the same Value Z-Score.
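At the portfolio level, factor exposure is just the position-weighted sum of those Z-Scores; a sketch of the netting example above:

```python
# Net factor exposure = sum of (signed position weight x factor Z-Score).
positions = [
    (+0.05,  2.0),  # 5% long a stock with Value Z-Score +2.0
    (+0.05, -2.0),  # 5% long a stock with Value Z-Score -2.0
]
net_value_exposure = sum(w * z for w, z in positions)
print(net_value_exposure)  # 0.0 -> no net exposure to the Value factor
```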

 

How Have Factors Been Moving Recently?

Market turbulence spurred by the COVID-19 outbreak has pushed many factors far outside their historical averages. To illustrate this dispersion, the table below lists 12 of the most common style factors used by investors and ranks them by the number of standard deviations that last month’s performance represents relative to its 10-year average (Jan 2010 – Jan 2020).

 

[Table: 12 common style factors ranked by standard-deviation move vs. 10-year average]

Source: Omega Point

 

If we highlight a few of the factors that typically see minimal month-to-month movement, Profitability sits at the top of the list at 7.2 standard deviations. Profitability is typically referred to as the ‘Quality’ factor and comprises quality-related metrics such as return on equity, return on assets, cash flow to assets, cash flow to income, gross margin, and sales to assets. It’s abundantly clear that in the present environment, investors have been flocking to quality names.

 

Much as they did during the 2008 financial crisis, investors have been dumping Leverage. Leverage is composed of metrics related to total debt to total assets and total debt to equity, and it is a factor that moves glacially in a normal market environment. It ended last month down 3.44%, which actually represented a pull-back from its nadir of -4.13% during the month. Investors are seeing highly levered companies as much riskier than normal in the current environment.

 

Investors are also punishing companies whose supply and distribution chains are more exposed to global foreign exchange movements (Exchange Rate Sensitivity), while flocking to larger market-cap names (Size).

 

These large factor moves may be presenting great buying opportunities for fundamental investors who have seen their stocks move for non-fundamental reasons. But what are some practical methods you can use to measure this?

 

Factor Movements Create Fundamental Investment Opportunities

As the markets become less and less rational, this may represent a golden opportunity for fundamental investors who track recent factor movements. When looking at their particular universe of stocks, fundamental investors should ask themselves: how much of a recent move is fundamentally driven, and how much is non-fundamental? This environment presents potential arbitrage opportunities they can dig deeper on.

 

Using the Omega Point platform (a tool to measure portfolio risk; Alpha Theory has a partnership driven by clients that use both products), below we will go through three examples of individual stocks that represent different points on the factor opportunity spectrum for fundamental investors:

 

Moderna (MRNA)

[Chart: Moderna (MRNA) return decomposition, factor vs. alpha]

Source: Omega Point

 

Moderna is a biotechnology company focused on drug discovery and drug development. In January, Moderna announced the development of a vaccine to inhibit the COVID-19 coronavirus, with a subsequent announcement of an estimated arrival in 2021. If you look more closely at its recent performance, it’s clear that the most significant component of this stock’s move is related to alpha, and much less to irrational, non-fundamental reasons. Fundamental investors using a factor-based lens to uncover opportunities should skip this one and look at other names in their universe.

 

Avis Budget Group (CAR)

[Chart: Avis Budget Group return decomposition, factor vs. alpha]

Source: Omega Point

 

Avis represents a more middle-of-the-road example, with a mix of factors and alpha driving its recent movement. Approximately 50% of its move has been related to fundamental factors, while the other half is alpha related. This makes sense, as Avis is in an especially difficult situation right now given the global travel environment that impacts its core business. Some further analysis may be in order for Avis, but better opportunities may lie with names whose moves are barely driven by alpha.

 

DuPont (DD)

[Chart: DuPont return decomposition, factor vs. alpha]

Source: Omega Point

 

As shown in the Omega Point screenshot above, DuPont is down over 20% in March, but what interests us most here is that 95% of that movement is purely factor related. If we home in on the sidebar, we can see the breakout of the different factor components. While the market and sector moves seem plausible, there is a 10% downward movement in style factors that may represent instant upside once the factor effect is neutralized. DuPont’s price has been driven almost entirely by factors (i.e., non-fundamentals), and it may be a strong buy candidate if you agree the move is largely for non-fundamental reasons.

 

There may be several names in your coverage universe that share the factor characteristics of a DuPont, but we need to remind readers that fundamental investors shouldn’t take this type of analysis at face value. Although the increasing precision with which factors describe stock movements has been a huge boon to many investors, it’s still an imperfect science; in our view, though, it can give fundamental investors a powerful punch list of ideas to pursue.

 

Identifying Good Buy Candidates in Your Portfolio

The table below sorts a group of stocks by the impact of factors vs. their total overall return. Names higher on the list, such as DuPont, exhibit returns that are much less fundamental and may warrant additional research by you and your team.

 

[Table: stocks sorted by factor impact vs. total overall return]

 

Coupled with a strong fundamental story, either profile is a likely good bet in the long term:

- A buy: a large negative factor move as a percentage of the total move

- A sell: a large positive factor move as a percentage of the total move

 

We encourage you to perform this type of analysis to highlight names that have moved for non-fundamental reasons and compare them to where your position-sizing system suggests you make the biggest adds (see below).

[Chart: position-sizing system suggested adds]

 

While the current depressed market has been more favorable to uncovering potential buying opportunities, this analysis can be effective for finding both buys and sells in a more normal market environment.

 

Special Offer

While uncovering, researching, and selecting superior stocks will always remain the core focus of fundamental investors, a better grasp of how factors are impacting our portfolios can help us take advantage of irrational behavior.

 

And to that end, our friends at Omega Point are offering to provide Alpha Theory clients a customized factor-based analysis including the Factor Move for each security in your portfolio. Reach out to support@alphatheory.com or your Customer Success representative to get the details.

 

 

March 6, 2020

Alpha Theory 2019 Year in Review

 

Alpha Theory clients continue to outperform! Over the past eight years, Alpha Theory clients have outperformed their peers in seven of them, leading to an almost 3% per year performance improvement over the average hedge fund. Over that same period, Alpha Theory’s suggested optimal return outperformed our clients’ actual return every year, by an average of 5.5%!

 

[Chart: cumulative returns of Alpha Theory optimal vs. clients’ actual vs. average hedge fund]

 

What does this mean? Our clients are self-selecting, better-than-average managers who would be world-class if they more closely followed the models they built in Alpha Theory.

 

In fact, over the period, the optimal compound return is twice our clients’ actual performance (174.8% vs. 85.6%) and three times that of the average hedge fund (174.8% vs. 51.3%). *Side note: isn’t compounding amazing?

 

2019 was a really good year for clients: they beat the primary Equity Hedge index by 5.9%, despite missing out on 3.4% of additional return they would have captured had they more closely followed Alpha Theory.

 

[Chart: 2019 total returns, clients vs. optimal vs. Equity Hedge index]

 

Note that the difference in returns between the charts is due to leverage. The chart above shows total return (leverage varies by manager), while the chart below is based on 100% gross exposure per manager (ROIC) and is thus a better apples-to-apples comparison.

 

[Chart: 2019 ROIC at 100% gross exposure, clients vs. optimal vs. Equity Hedge index]

 

PROCESS ENHANCES PERFORMANCE

Alpha Theory clients use process to reduce the impact of emotion and guesswork in their position-sizing decisions. Alpha Theory highlights when good ideas coincide with the largest position sizes in the portfolio. This rules engine codifies a discipline that:

 

1. Centralizes price targets and archives them in a database

2. Provides notifications of price target updates and anomalies

3. Calculates probability-weighted returns (PWR) for assets and the portfolio as a whole

4. Enhances returns

5. Mitigates portfolio risk

6. Saves time

7. Adds precision and rigor to the sizing process

8. Enables real-time incorporation of market and individual asset moves into sizing decisions

 

DISCIPLINED USAGE REDUCES RESEARCH SLIPPAGE

Our clients are a self-selecting cohort who believe in process and discipline; that process orientation goes hand in hand with Alpha Theory software, which serves as a disciplining mechanism to align the best risk/reward ideas with the largest rankings in the portfolio. As shown below, the most active users, measured by frequency of updates, research coverage, and model correlation, have the highest ROIC.

 

Alpha Theory’s research suggests not only that adoption of the Alpha Theory application by itself leads to improved performance, but that usage intensity further enhances results.

 

Usage intensity is determined by:

1. Percent of Positions with Research

2. Correlation with Optimal Position Size

3. Login Frequency

 

[Table: annualized ROIC by usage-intensity quartile]

1. Measured as the annualized ROIC where data was available, for a sample of 48 clients, 12 in each quartile

 

OPTIMAL POSITION SIZING REDUCES RESEARCH SLIPPAGE

Comparing clients’ actual versus optimal returns shows:

 

HIGHER TOTAL RETURNS

ROIC is 4% higher.

 

IMPROVED BATTING AVERAGE

Batting Average is 9% higher. Explanation: many of the assets that don’t have price targets or that have negative probability-weighted returns (PWR) are held by the fund but recommended as 0% positions by Alpha Theory. Those positions underperform, allowing Alpha Theory’s batting average to prevail.

 

[Table: actual vs. optimal ROIC and batting average]

1. Measured as the average full-year return for clients where full-year data was available, adjusted for differences in exposure, net of trading costs

2. Before trading costs

 

PRICE TARGETS REDUCE RESEARCH SLIPPAGE

Alpha Theory has further found that ROIC for assets with price targets is 4.8% higher than for those without price targets. Some investors chafe at price targets because they smack of “false precision.” These investors are missing the point: the key to price targets is not their absolute validity but their explicit nature, which allows for an objective conversation about the assumptions that went into them. Said another way, the requirements of calculating a price target, and the questions that price targets foster, are central to any good process.

[Chart: ROIC for assets with price targets vs. without]

Finding alpha will not become easier. It is imperative that the funds of the 21st century develop plans to evolve to new realities. Data and process are critical to that evolution. Let Alpha Theory help you and your team grow to meet the challenges of tomorrow.

 

December 1, 2019

Capturing Alpha in Risk Rewards - Morgan Stanley

 

Morgan Stanley has one of the most robust sets of scenario-based price target forecasts in the world, with around 70,000 forecasts over 10 years. Naturally, they decided to evaluate the quality of their analysts’ forecasts, and the results were positive. In the chart below, their scenario-based price target strategies consistently created positive alpha.

 

[Chart: alpha of Morgan Stanley scenario-based strategies]

 

The model was built by evaluating analysts’ scenario analyses to determine buy and sell signals, using measurements and trends on the variables of Downside, Tilt, and Uncertainty. The magnitude and number of those signals determined the weighting in the hypothetical portfolio.

 

[Diagram: buy/sell signals derived from Downside, Tilt, and Uncertainty]

 

They determined that there was a demonstrable benefit to using scenario analysis instead of a single price target.

 

[Charts: performance of scenario-based strategies vs. single price targets]

 

Breaking the analysis down to its components (individual scenario analyses) showed consistent predictive quality, as measured by the pre-cost hit ratio (the percentage of long/short signals that generate higher/lower returns than the total return of the equity index). While a mid-50s hit rate may seem marginal, it is substantial: it is enough to create consistent outperformance, as we have measured by observing our managers with consistent hit rates above 50%.

 

[Chart: pre-cost hit ratios by scenario component]

 

The Morgan Stanley analysis is substantive in two ways. First, it supports our research showing that scenario analyses have predictive power that can be used to create positive alpha strategies. Second, it supports our suspicion that buy-side manager scenario analyses are superior to sell-side forecasts because of their real-world application, their lack of administrative constraints, and their lack of investment-banking conflicts. If that is the case, Alpha Theory’s forecast dataset should have predictive power superior to that in the Morgan Stanley analysis.

 

This article is one of a long series of “Empirical Proofs” of active manager skill that we’ve been collecting. To see the full list, download the full version of the Concentration Manifesto.

 

November 1, 2019

Concentrating on Concentration: New Data on Portfolio Concentration

 

As most of our readers know, we are proponents of more concentrated portfolios. In May of 2017, we released our Concentration Manifesto, an attempt to start a critical dialogue between managers and allocators to ultimately improve the active management process. That conversation requires both sides to cast aside outdated thinking and embrace the notion that concentration is in their best interest.

 

And we’re seeing it in external data:

 

[Chart: external industry data on portfolio concentration]

 

And in our own managers:

 

[Chart: average position size among Alpha Theory managers]

 

This conversation began well before our Concentration Manifesto. We recently found an April 2014 study by Cambridge Associates outlining the “Hallmarks of Successful Active Equity Managers.”

 

Cambridge Associates analyzed a selection of managers to isolate attributes that lead to success. In their findings, active share and concentration were major contributors. Their analysis1 found that concentrated portfolios (fewer than 30 positions for US equity; fewer than 40 for US small-cap and EAFE equity) generated between 100bps and 170bps of additional performance over non-concentrated portfolios.

 

[Table: results of Cambridge Associates’ active share analysis]

 

The performance difference for concentrated managers held after fees and worked across various strategies. This fractal quality (it still works when you break the sample into different strategies) lends additional validation to concentration’s benefits.

 

In the Cambridge article, we found a reference to another concentration study.

 

Baks, Busse, and Green published “Fund Managers Who Take Big Bets: Skilled or Overconfident” in 2006. The abstract says it all:

 

We document a positive relation between mutual fund performance and managers' willingness to take big bets in a relatively small number of stocks. Focused managers outperform their more broadly diversified counterparts by approximately 30 basis points per month or roughly 4% annualized. The results hold for mimicking portfolios based on fund holdings as well as when returns are measured net of expenses. Concentrated managers outperform precisely because their big bets outperform the top holdings of more diversified funds. The evidence suggests that investors may enhance performance by diversifying across focused managers rather than by investing in highly diversified funds.

 

Their sample covers funds from 1979 to 2003, and the return advantage per month ranges between +1 and +67 basis points depending on the methodology for measuring fund concentration and how many deciles are included. That equates to a range between +0.12% and +8.34% on an annualized basis for concentrated managers.
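The annualized range follows from compounding the monthly advantage over twelve months:

```python
# Compound a monthly return edge into an annualized figure.
def annualize_monthly(m):
    return (1 + m) ** 12 - 1

print(f"{annualize_monthly(0.0001):.2%}")  # +1 bp/month  -> ~0.12%/year
print(f"{annualize_monthly(0.0067):.2%}")  # +67 bp/month -> ~8.34%/year
```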

 

[Chart: fund performance vs. portfolio weight]

 

We continue to believe that there is demonstrable skill among equity managers, that this skill could be harnessed in better ways than the average manager typically demonstrates, and that concentration is the simplest way to improve results for a manager with positive stock-picking skill.

 

1 eVestment Alliance Database: September 2007 to June 2013 US large-cap core equity, US large-cap growth equity, US large-cap value equity, US small-cap core equity, US small-cap growth equity, US small-cap value equity, and all EAFE equity

 

Download full version of the Concentration Manifesto

 

September 19, 2019

Superforecasting and Noise Reduction in Financial Analysis

 

Alpha Theory and Good Judgment Inc. hosted a Superforecasting workshop this week, with several Alpha Theory clients attending and learning about noise reduction techniques. Warren Hatch, President of GJI, led the discussion on how to reduce noise in forecasting. Warren began with an overview of the characteristics of Superforecasters and what leads to good individual forecasts. We then shifted to how we can use team dynamics to improve forecast accuracy.

 

Warren started with examples pulled from other noise reduction workshops, showing how the team methods reduced noise (measured by the standard deviation of estimates) and increased accuracy (measured by Brier score). We then did our own example, using Good Judgment Inc. software to ask our group questions that led to a valuation of NFLX:

How many subscribers will Netflix have at the end of 2020?

What will be Netflix's revenue per subscriber in 2020?

What will be Netflix's net margin in 2020?

What will be Netflix's PE multiple in 2020?   

 

We compiled the initial results and compared them to current values. We then had a chance to review other contributors’ forecasts and rationales and vote on the ones we thought were best. Next, the “team” discussed the highest vote-getting rationales and quickly identified an expert in the room. Through the noise reduction exercises and discussion, we narrowed our forecast range (reduced noise) and hopefully improved accuracy. We’ll know in a year, when we see whether NFLX is at $296.00.
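For readers who want to see how the four forecasts chain into a price, here is a hypothetical sketch; none of these inputs are the workshop’s actual estimates, and the share count is an added assumption the four questions don’t cover:

```python
# Hypothetical NFLX valuation from the four workshop questions.
# All inputs are illustrative placeholders, not the workshop's numbers.
subscribers = 200e6          # Q1: subscribers at end of 2020
revenue_per_sub = 120.0      # Q2: annual revenue per subscriber ($)
net_margin = 0.12            # Q3: net margin in 2020
pe_multiple = 45.0           # Q4: P/E multiple in 2020
shares_outstanding = 440e6   # assumption, needed to get a per-share price

eps = subscribers * revenue_per_sub * net_margin / shares_outstanding
print(f"Implied price: ${eps * pe_multiple:,.2f}")  # ~ $295 with these inputs
```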

 

[Slide: workshop NFLX forecast results]

 

Thanks to Warren and team for putting on a great workshop for Alpha Theory clients. Please contact us with any questions.

info@alphatheory.com

 

September 5, 2019

Noise: The Maker of Markets and Madness

This article is written in collaboration with Warren Hatch, CFA (Good Judgment Inc.).

 

Reflect on your own investment process and ask these questions: “Given identical sets of information, would two different analysts come to the same conclusion? Would an analyst come to the same conclusion on different days?” If not, there is noise. And where there is noise, there is room for improvement.

 

As investors, we accept that market participants have different opinions about the value of securities. That’s what makes a market. Even within investment teams, a diversity of views is a good thing.

 

The problem is how we combine those views. All too often, different people will use the same process and information yet come up with different (often wrong) answers. You can even ask the same person the same question at different times and get wildly different answers. All this noise is a bad thing: it adds volatility to the data, wastes our time by drawing attention away from the underlying drivers of value, and impairs the quality of our investment decisions.

 

One option is to ignore the noise, take an average of everyone’s different views, and call it a consensus. That’s what we do with consensus earnings estimates. Or GDP forecasts. Or Wall Street dinner surveys. The wisdom of the crowd works, and over time this is certainly better than relying on any one individual.

 

A better option is to reduce the noise, not just ignore it. New and exciting research findings by Nobel laureate Daniel Kahneman and others show us concrete steps we can take to identify and reduce noise.

 

Kahneman often describes noise by citing work he performed for a large insurance company. In that example, two different insurance adjusters were given the same insurance claim and came up with two substantially different payouts. The difference between those claims is noise. The question becomes: how much subjectivity should there be in an insurance claim?

 

The problem is that humans are unreliable decision makers; their judgments are strongly influenced by irrelevant factors, such as their current mood, the time since their last meal, and the weather. We call the chance variability of judgments noise. It is an invisible tax on the bottom line of many companies.

 

A firm whose employees exercise judgment does not expect decisions to be entirely free of noise. But often noise is far above the level that executives would consider tolerable—and they are completely unaware of it.

 

Obviously, investors are different from insurance adjusters. They don’t get an insurance claim form with all the relevant information. But noise has been studied and repeatedly shown to impact decision making across fields. Kahneman further distinguishes noise from cognitive bias:

[Diagram: bias vs. noise]

“To appreciate the distinction, think of your bathroom scale. We would say that the scale is biased if its readings are generally either too high or too low. If your weight appears to depend on where you happen to place your feet, the scale is noisy. A scale that consistently underestimates true weight by exactly four pounds is seriously biased but free of noise. A scale that gives two different readings when you step on it twice is noisy. Many errors of measurement arise from a combination of bias and noise. Most inexpensive bathroom scales are somewhat biased and quite noisy.” – “Noise: How to Overcome the High, Hidden Costs of Inconsistent Decision Making,” Kahneman, Rosenfield, Gandhi, Blaser (HBR, October 2016)

 

So how should an investor reduce noise? By increasing process. Kahneman’s “radical solution” is to go all the way and replace human judgment with algorithms. These models can be straightforward: select six to eight key variables, assign equal weights, and devise the simplest possible rule to combine them. Where replacing human judgment completely with algorithms isn’t viable, or even desirable, there are other solutions.
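As a sketch of what such an algorithm could look like (the variables and scores here are placeholders, not a prescribed checklist):

```python
# Kahneman-style "simple model": a handful of key variables, equal
# weights, and the simplest possible combining rule (an average).
ATTRIBUTES = [  # illustrative placeholders for a firm's key variables
    "management_quality", "balance_sheet", "competitive_moat",
    "valuation", "earnings_visibility", "catalyst_strength",
]

def simple_score(scores):
    """Equal-weighted average of attribute scores (each on a 1-5 scale)."""
    return sum(scores[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

idea = {"management_quality": 4, "balance_sheet": 3, "competitive_moat": 5,
        "valuation": 2, "earnings_visibility": 4, "catalyst_strength": 3}
print(f"Composite score: {simple_score(idea):.2f}")  # same rule, every idea
```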

 

Alpha Theory’s work with clients shows how this process can work. In a matter of 30 minutes, a “simple model” can be created that codifies the 5-10 most important attributes of a firm’s process. Backed by experience and empirical results, these simple models can reduce noise and increase returns, and the improvement is immediate and lasting.

 

Good Judgment Inc., the commercial company built on the research findings of Professor Philip Tetlock and Superforecasting, helps clients boost forecast accuracy with a suite of tools to cut through the noise. At recent Superforecasting workshops, for instance, group accuracy improved by an average of at least 20% after just 15 minutes of structured discussion.

 

Noise is the enemy of good decision making. The separate but complementary work at Good Judgment Inc. and Alpha Theory shows there are steps you can take now to reduce noise and boost your bottom line.

 

August 1, 2019

The Concentration Manifesto for Shorts

 

We were recently reading the great research that comes from our friends at Novus and saw a reference to a paper by Della Corte, Kosowski, and Rapanos (2019). The paper analyzes 1.7 million short positions from 585 managers contributed to the European Union short disclosure dataset from 2012 to 2018. They found that the highest-conviction quintile of shorts (P5, as measured by position size) outperformed the lowest-conviction quintile (P1). In fact, the highest-conviction shorts were the only cohort with a mean return that was negative on an absolute basis (a positive contribution for shorts).

 

[Table: Panel A, equally-weighted portfolios by conviction quintile]

 

After applying a six-factor model, a strategy going long the low-conviction shorts and short the high-conviction shorts had an alpha of 11%. Ideally, the results would show a gradual decline from P1 to P5, but P4 does not follow that trend. Nevertheless, there is demonstrable skill in short selection for the largest position sizes, which provides further support for the Concentration Manifesto.

 

Download full version of The Concentration Manifesto

 

June 1, 2019

Increasing the Probability of Success - Part 2

 

This article is a continuation of Increasing the Probability of Success - Part 1.

 

2. PROBABILITY BUCKETS

In many ways, this is the easiest of all the methods to implement. Predetermine as a firm how many price target scenarios you’re going to forecast per position. For example, let’s say you’re going to do three: Reward, Base, and Risk. For each position, the analyst chooses Low, Medium, or High probability, and you preset the probability distribution. For instance:

 

[Table: preset probability distributions by bucket]

 

In this case, probabilities are constrained within a range but allow for some flexibility. What you should expect from your analysts is a normal distribution of probability choices: mostly Mediums, with a few Lows and Highs. No analyst should have more Highs than anything else. The way to explain this to your team is that Low isn’t bad; it’s just less likely than the average (Medium) name. And vice versa for a High.

 

Probability Buckets are the most common probability recommendation we’ve made for clients. They offer a good combination of flexibility and practicability, are easy to explain to the team, and provide an effective audit for determining whether analysts are being too aggressive.
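A sketch of how preset buckets and that audit might look in code; the percentages are placeholders, since each firm presets its own distribution:

```python
from collections import Counter

# Preset (Reward, Base, Risk) probabilities per bucket; the analyst only
# picks the bucket. These percentages are illustrative placeholders.
BUCKETS = {
    "Low":    (0.20, 0.50, 0.30),
    "Medium": (0.30, 0.50, 0.20),
    "High":   (0.40, 0.45, 0.15),
}

# Audit: flag any analyst whose Highs outnumber their Mediums.
analyst_picks = {
    "analyst_a": ["Medium", "Medium", "Low", "High", "Medium"],
    "analyst_b": ["High", "High", "Medium", "High", "Low"],
}
for analyst, picks in analyst_picks.items():
    counts = Counter(picks)
    if counts["High"] > counts["Medium"]:
        print(f"{analyst} looks too aggressive: {dict(counts)}")
```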

 

3. FLEXIBLE PROBABILITIES

 

The book Superforecasting explains how individuals can improve their forecasting skill (if you are a forecasting practitioner, you should read the book; see our blog post about it here). One primary trait of Superforecasters is micro-updating: they change their probability estimates in small increments as new information arrives, for example, moving their probability of Trump winning the 2016 election from 43% to 45% after getting the latest polling data. These small updates accrue to better forecasting accuracy.

 

Given that fact, you’d think my recommendation would be for Alpha Theory clients to make micro-updates and have lots of flexibility with price targets. My answer is, well, it depends. The incentive for Superforecasters was to maximize accuracy. The incentive for most analysts is P&L, not accuracy, making them reward seekers. This encourages behavior that increases position size (i.e., inflated price targets and probabilities; case in point, the average hit rate across all Alpha Theory clients is 51%, yet analysts assume they’re going to make money 72% of the time).

 

The culture and workflow of the firm determine what makes the most sense. If analysts come up with price targets and probabilities with little to no input from the Portfolio Manager, then use Fixed Probabilities or Probability Buckets to reduce the likelihood of them “gaming the system.” If the opposite is true, and price targets and probabilities are a collaborative exercise with the Portfolio Manager, then Flexible Probabilities are more than likely ideal.

 

One last suggestion as it relates to setting probabilities: know your fund’s historical hit rate. What percentage of positions made money on an absolute basis? On an alpha basis? What percentage hit their upside price target? What percentage hit their downside price target? Use these to set a baseline for the firm’s probabilities. For example, if the historical batting average of the firm is 51%, then the average assumed probability of hitting the reward target should not be 72%. Keep your probabilities realistic, and the portfolio you build will be a more accurate forecast of what you will receive (see the “Probability Inflation” blog post).
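That baseline check is easy to automate; a sketch using the 51%/72% figures from above (the per-position probabilities are illustrative):

```python
# Compare analysts' assumed upside probabilities to the fund's realized
# hit rate; a persistent gap signals probability inflation.
assumed_upside_probs = [0.75, 0.70, 0.65, 0.80, 0.70]  # illustrative
realized_hit_rate = 0.51                               # historical batting average

assumed_avg = sum(assumed_upside_probs) / len(assumed_upside_probs)
print(f"Assumed {assumed_avg:.0%} vs. realized {realized_hit_rate:.0%}: "
      f"{assumed_avg - realized_hit_rate:+.0%} inflation")
```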

 

There is no question that setting probabilities is one of the trickiest parts of scenario-based forecasting. We hope this overview gives you a starting point for how to implement an effective probability setting framework.

 

May 1, 2019

Increasing the Probability of Success - Part 1

 

I was discussing with a new client how analysts should approach probabilities. Probabilities are multiplied by the client’s scenario price forecasts and summed to arrive at a probability-weighted return for each position.
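As a minimal sketch of that calculation (scenario numbers illustrative):

```python
# Probability-weighted return: weight each scenario's price target by its
# probability, then compare the expected price to the current price.
current_price = 50.0
scenarios = [          # (probability, price target) - illustrative
    (0.25, 75.0),      # reward case
    (0.50, 55.0),      # base case
    (0.25, 35.0),      # risk case
]
expected_price = sum(p * target for p, target in scenarios)
pwr = expected_price / current_price - 1
print(f"Probability-weighted return: {pwr:+.1%}")  # +10.0%
```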

 

The probability piece is the most subjective part of the probability-weighted return (see our “False Precision” blog post, which discusses why it is important to set probabilities), so we came up with several approaches to see what fit their firm best. I thought I’d share them with anyone who may be struggling with probabilities:

 

1. Fixed Probabilities (Distribution)

Analysts come up with price targets that match the parts of the forecast distribution associated with the preset probabilities. In this example, all positions have a “fixed” 20%/60%/20% probability framework. The goal is to come up with price targets that match those buckets (i.e., what is the price target for the 20% risk bucket?).

[Chart: fixed 20%/60%/20% probability framework]

 

This method pulls price targets that reflect the probability-weighted outcomes within each probability “bucket.” An analyst would iterate the assumptions in their financial model to estimate the extreme outcomes (the two 20% probability buckets at the ends) and the higher-probability outcomes (the 60% probability bucket in the middle). The result is a price target that blends the possible outcomes in each bucket by their associated probability. Another way to think of this is as a cumulative probability distribution.

[Chart: normal cumulative distribution function with probability buckets]

 

For example, the analyst may associate -5% sales growth and 10% EBITDA margins with the 20% cumulative probability outcome, 25% sales growth and 40% EBITDA margins with the 80% cumulative probability outcome, and 60% growth and 55% margins with the 99% cumulative probability. There would be many other points in between (represented by the green dots) where the analyst would apply different assumptions in their model.
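One way to mechanize the blending is to simulate model outcomes and average within each cumulative-probability bucket; a sketch, with a simulated distribution standing in for iterating the financial model:

```python
import numpy as np

# Blend simulated price outcomes into 20%/60%/20% bucket targets.
rng = np.random.default_rng(0)
outcomes = rng.normal(loc=50, scale=12, size=10_000)  # simulated price outcomes

p20, p80 = np.percentile(outcomes, [20, 80])
risk_target = outcomes[outcomes <= p20].mean()                      # bottom 20%
base_target = outcomes[(outcomes > p20) & (outcomes < p80)].mean()  # middle 60%
reward_target = outcomes[outcomes >= p80].mean()                    # top 20%
print(f"Risk {risk_target:.1f} / Base {base_target:.1f} / Reward {reward_target:.1f}")
```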

 

The benefits of this method are that the probabilities are fixed and require no subjective assessment. It also allows highly sensitive models with extreme outcomes to be reflected in the resultant probability-weighted return. The downside is that it is time-intensive and allows no flexibility in the probabilities.