
## April 4, 2020

### A Fundamentalist’s Attempt to Take Advantage of Factor Moves

Introduction

For many fundamental investors, factors play little to no role in their day-to-day portfolio construction process. In fact, many fundamental investors aren’t quite sure what factors are. Below we’ll give a basic primer on factors and a potential strategy for fundamental investors to take advantage of irrational movements in their investments caused by factors.

What are Factors?

Factors are tools designed to help explain why stocks move. The simplest factor is the market. If the market is down 15% in a month, you would expect the average stock to be down 15% as well. So if a particular stock is down 10% (ignoring Beta for this simplified example), then it has generated 5% of positive alpha (to make it really confusing, it is also called idiosyncratic return and specific risk). Basically, alpha is the fundamental piece left over.
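To make that concrete, here is a minimal sketch of the simplified alpha calculation above (Beta ignored, numbers from the example):

```python
# Simplified alpha: the part of a stock's return the market doesn't explain.
# Beta is ignored, as in the example above; the numbers are illustrative.
market_return = -0.15  # market down 15% in a month
stock_return = -0.10   # a particular stock down only 10%

alpha = stock_return - market_return
print(f"alpha: {alpha:+.1%}")  # alpha: +5.0%
```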

But the market is not the only “factor” that could explain why a stock moved. It was just the first. Academics and quants have been coming up with new factors since Fama-French added Size and Value to Market in the early ‘90s. Below are some common ones, but there are many more:

How is a Factor Measured?

Let’s use the Value factor as an example. Imagine I take every stock and measure an average Price-to-Book (P/B) of 3.5x and a standard deviation of 1.5x. Then I take every stock and measure its Z-Score which is simply the P/B of the stock minus the average of 3.5x divided by the standard deviation of 1.5x. If a stock has a P/B of 5.0x then it has a Z-Score of 1.0x ((5.0x-3.5x)/1.5). To drive the point home, a stock with a P/B of 3.5x (equal to the average) would have a Z-Score of 0.0x and one that had a P/B of 2.0 would have a Z-Score of -1.0x. This is simply how many standard deviations away from the average a particular stock sits.
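The Z-Score arithmetic above can be sketched in a few lines (the 3.5x average and 1.5x standard deviation are the illustrative figures from the example):

```python
def z_score(value, mean, std):
    """How many standard deviations a value sits from the average."""
    return (value - mean) / std

# The Value factor example from the text: average P/B of 3.5x, std dev of 1.5x
mean_pb, std_pb = 3.5, 1.5
print(z_score(5.0, mean_pb, std_pb))  # 1.0
print(z_score(3.5, mean_pb, std_pb))  # 0.0
print(z_score(2.0, mean_pb, std_pb))  # -1.0
```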

Putting this to work in a portfolio: if you hold a 5% position in a stock with a Value Z-Score of 2.0x and another 5% position in a stock with a Value Z-Score of -2.0x, your portfolio has no exposure to the Value factor. This works the same if you are long and short equal amounts of securities that have the same Value Z-Score.
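A quick sketch of how position-level Z-Scores net out at the portfolio level (the positions below are hypothetical):

```python
# Net factor exposure: sum of (position weight x factor Z-Score).
# Shorts carry negative weights. Both books below are hypothetical.
long_offsetting = [(0.05, 2.0), (0.05, -2.0)]   # two longs, opposite Z-Scores
long_short_pair = [(0.05, 2.0), (-0.05, 2.0)]   # long and short, same Z-Score

for book in (long_offsetting, long_short_pair):
    exposure = sum(weight * z for weight, z in book)
    print(exposure)  # 0.0 in both cases
```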

How Have Factors Been Moving Recently?

Market turbulence spurred by the COVID-19 outbreak has pressed many factors to multiples far outside of their historical averages. To illustrate this dispersion, the table below lists 12 of the most common style factors used by investors and ranks them based on the number of standard deviations that last month’s performance represented compared to their 10-year average (Jan 2010 – Jan 2020).

Source: Omega Point

If we highlight a few of the factors that typically see minimal month-to-month movement, we see that Profitability sits at the top of the list at 7.2 standard deviations. Profitability is typically referred to as the ‘Quality’ factor and comprises quality-related metrics such as return-on-equity, return-on-assets, cash flow to assets, cash flow to income, gross margin, and sales-to-assets. It’s abundantly clear that in the present environment, investors have been flocking to quality names.

Much as they did during the financial crisis of 2008, investors have been dumping Leverage. Leverage is composed of metrics related to total debt-to-total assets and total debt-to-equity and is a factor that moves glacially in a normal market environment. It ended down 3.44% last month, which actually represented a pull-back, as its nadir for the month was -4.13%. Investors are seeing highly levered companies as much riskier than normal in the current environment.

Investors are also punishing companies whose supply and distribution chains are more exposed to global foreign exchange movements (Exchange Rate Sensitivity), while flocking to larger market-cap names (Size).

These large factor moves may be presenting great buying opportunities for fundamental investors who have seen their stocks move for non-fundamental reasons. But what are some practical methods that you can use to measure it?

Factor Movements Create Fundamental Investment Opportunities

As the markets become less and less rational, this may represent a golden opportunity for fundamental investors who track recent factor movements. When looking at their particular universe of stocks, fundamental investors should ask themselves, how much of recent moves are fundamentally-driven, and how much are non-fundamental? This environment presents potential arbitrage opportunities that they can dig deeper on.

Using the Omega Point platform (a tool to measure portfolio risk - Alpha Theory has a partnership driven by clients that use both products), below we will go through three examples of individual stocks that represent various sides of the factor opportunity spectrum for fundamental investors:

Moderna (MRNA)

Source: Omega Point

Moderna is a biotechnology company focused on drug discovery and drug development. In January, Moderna announced the development of a vaccine to inhibit the COVID-19 coronavirus, with a subsequent announcement that it could arrive in 2021. If you look more closely at its recent performance, it’s clear that the most significant component of this stock’s move is related to alpha, and much less so to irrational, non-fundamental reasons. Fundamental investors using a factor-based lens to uncover opportunities should skip this one and look at other names in their universe.

Avis Budget Group (CAR)

Source: Omega Point

Avis represents a more middle-of-the-road example based on a mix of factors and alpha driving its recent movement. Approximately 50% of its move has been related to fundamental factors, while the other half is alpha related. This makes sense, as Avis is in an especially difficult situation right now based on the global travel environment that impacts its core business. Some further analysis may be in order for Avis, but better opportunities may lie with names barely being driven by alpha.

DuPont (DD)

Source: Omega Point

As shown in the Omega Point screenshot above, DuPont is down over 20% in March, but what interests us most here is that 95% of that movement is purely factor related. If we home in on the sidebar, we can see the breakout of the different factor components. While the market and sector moves seem plausible, there is a 10% downward movement in style factors that may represent instant upside once the factor effect is neutralized. DuPont’s price has been driven almost entirely by factors (i.e. non-fundamentals) and may be a strong candidate for a buy if you agree the move is largely for non-fundamental reasons.

There may be several names in your coverage universe that share the factor characteristics of a DuPont, but we need to remind readers that fundamental investors shouldn’t take this type of analysis at face value. Although the increasing precision of factors to describe stock movements has been a huge boon to many investors, it’s still an imperfect science, but in our view it can give fundamental investors a powerful punch list of ideas to pursue.

The table below sorts a group of stocks by the impact of factors vs. their total overall return. Names higher on the list, such as DuPont, exhibit returns that are much less fundamental and may warrant additional research by you and your team.

Coupled with a strong fundamental story, a likely good bet in the long term:

- A buy: large negative factor move as a percentage of the total move

- A sell: large positive factor move as a percentage of the total move

We encourage you to perform this type of analysis to highlight names that have moved for non-fundamental reasons and compare them to where your position-sizing system is suggesting you make the biggest adds (see below).

While the current depressed market has been more favorable to uncovering potential buying opportunities, this analysis can be effective for finding both buys and sells in a more normal market environment.

Special Offer

While uncovering, researching, and selecting superior stocks will always remain the core focus of fundamental investors, a better grasp of how factors are impacting our portfolios can help us take advantage of irrational behavior.

And to that end, our friends at Omega Point are offering to provide Alpha Theory clients a customized factor-based analysis including the Factor Move for each security in your portfolio. Reach out to support@alphatheory.com or your Customer Success representative to get the details.

## March 6, 2020

### Alpha Theory 2019 Year in Review

Alpha Theory clients continue to outperform! Over the past eight years, Alpha Theory clients have outperformed their peers seven times, leading to an almost 3% per year performance improvement over the average hedge fund. Over that same period, Alpha Theory’s suggested optimal return outperformed our clients’ actual return every year by an average of 5.5%!

What does this mean? Our clients are self-selecting, better-than-average managers that would be world-class if they more closely followed the models they built in Alpha Theory.

In fact, over the period, the compound return is twice that of their actual performance (174.8% vs 85.6%) and three times that of the average hedge fund (174.8% vs 51.3%). *Side note: Isn’t compounding amazing?
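For the skeptical, a quick sanity check of the compounding figures above (cumulative returns from the text; the annual edge is a rough back-out assuming eight years of compounding):

```python
# Sanity-checking the compounding figures quoted above.
optimal, actual, hedge_fund = 1.748, 0.856, 0.513  # cumulative returns

print(round(optimal / actual, 2))      # roughly 2x the actual performance
print(round(optimal / hedge_fund, 2))  # roughly 3x the average hedge fund

# Implied annual edge of clients over the average hedge fund across 8 years:
years = 8
edge = (1 + actual) ** (1 / years) - (1 + hedge_fund) ** (1 / years)
print(f"{edge:.1%}")  # close to the "almost 3% per year" cited
```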

2019 was a really good year for clients: they beat the primary Equity Hedge index by 5.9%, though they still missed out on 3.4% of return they would have captured had they more closely followed Alpha Theory.

Note that the difference in returns between the charts is due to leverage. The chart above is total return (varying leverage per manager) and the chart below is based on 100% gross exposure per manager (ROIC) and is thus a better apples-to-apples comparison.

PROCESS ENHANCES PERFORMANCE

Alpha Theory clients use the process to reduce the impact of emotion and guesswork as they make position sizing decisions. Alpha Theory highlights when good ideas coincide with the largest position sizes in the portfolio. This rules engine codifies a discipline that:

1.    Centralizes price targets and archives them in a database

2.    Calculates probability-weighted returns (PWR) for assets and the portfolio as a whole

3.    Enhances returns

4.    Mitigates portfolio risk

5.    Saves time

6.    Adds precision and rigor to the sizing process

7.    Enables real-time incorporation of market and individual asset moves into sizing decisions

DISCIPLINED USAGE REDUCES RESEARCH SLIPPAGE

Our clients are a self-selecting cohort who believe in process and discipline. That orientation pairs naturally with Alpha Theory software, which serves as a disciplining mechanism to align the best risk/reward ideas with their rankings in the portfolio. As shown below, the most active users, as measured by frequency of updates, research coverage, and model correlation, have the highest ROIC.

Alpha Theory’s research suggests not only that adoption of the Alpha Theory application by itself leads to improved performance, but that greater usage intensity further enhances results.

Usage intensity is determined by:

1. Percent of Positions with Research

2. Correlation with Optimal Position Size

1 Measured as the annualized ROIC where data was available, for a sample of 48 clients, 12 for each quartile

OPTIMAL POSITION SIZING REDUCES RESEARCH SLIPPAGE

Comparing clients’ actual versus optimal returns shows:

HIGHER TOTAL RETURNS

ROIC is 4% higher.

IMPROVED BATTING AVERAGE

Batting Average is 9% higher. Explanation: many of the assets that don’t have price targets or have negative probability-weighted returns (PWR) are held by the fund but recommended as 0% positions by Alpha Theory. Those positions underperform and allow Alpha Theory’s batting average to prevail.

1 Measured as the average full-year return for clients where full-year data was available, adjusted for differences in exposure, net of trading costs

PRICE TARGETS REDUCE RESEARCH SLIPPAGE

Alpha Theory has further found that ROIC for assets with price targets is 4.8% higher than for those without price targets. Some investors chafe at price targets because they smack of “false precision.” These investors are missing the point: the key to price targets is not their absolute validity but their explicit nature, which allows for an objective conversation about the assumptions that went into them. Said another way, the requirements of calculating a price target and the questions that price targets foster are central to any good process.

Finding alpha will not become easier. It is imperative that the funds of the 21st century develop plans to evolve to new realities. Data and process are critical to that evolution. Let Alpha Theory help you and your team grow to meet the challenges of tomorrow.

## December 1, 2019

### Capturing Alpha in Risk Rewards - Morgan Stanley

Morgan Stanley has one of the most robust sets of scenario-based price target forecasts in the world, with around 70,000 forecasts over 10 years. Naturally, they decided to evaluate the quality of their analysts’ forecasts, and the results were positive. In the chart below, their scenario-based price target strategies consistently created positive alpha.

The model was built by evaluating analysts’ scenario analysis to determine buy and sell signals by using measurements and trends on the variables of Downside, Tilt, and Uncertainty. The magnitude and number of those signals determined the weighting in the hypothetical portfolio.

They determined that there was a demonstrable benefit to using scenario analysis instead of a single price target.

Breaking the analysis down to its components (individual scenario analyses) showed consistent predictive quality from the scenario analyses as measured by the pre-cost hit ratio (the percentage of long/short signals that generate higher/lower returns than the total return of the equity index). While a mid-50s hit rate may seem marginal, it is substantial. It is enough to create consistent outperformance, as we have measured by observing our managers with consistent hit rates above 50%.

The Morgan Stanley analysis is substantive in two ways. First, it supports our research showing that scenario analyses have predictive power that can be utilized to create positive alpha strategies. Second, it supports our suspicion that buy-side manager scenario analyses are superior to sell-side forecasts because of their real-world application, their lack of administrative constraints, and their lack of investment-banking conflict. If that is the case, Alpha Theory’s forecast dataset should have predictive power superior to that in the Morgan Stanley analysis.

## November 1, 2019

### Concentrating on Concentration: New Data on Portfolio Concentration

As most of our readers know, we are proponents of more concentrated portfolios. In May of 2017, we released our Concentration Manifesto, an attempt to start a critical dialogue between managers and allocators to ultimately improve the active management process. That conversation requires both sides to cast aside outdated thinking and embrace the notion that concentration is in their best interest.

And we’re seeing it in external data:

And in our own managers:

This conversation began well before our Concentration Manifesto. We recently found an April 2014 study by Cambridge Associates outlining the “Hallmarks of Successful Active Equity Managers.”

Cambridge Associates analyzed a selection of managers to isolate attributes that lead to success. In their findings, active share and concentration were major contributors. Their analysis1 found that concentrated portfolios (US equity less than 30 positions and US Small-Cap & EAFE Equity less than 40 positions) generated between 100bps and 170bps of additional performance over non-Concentrated portfolios.

The performance difference for concentrated managers held after fees and worked across various strategies. The fractal nature (it still works when you break it into different strategies) lends additional validation for concentration’s benefits.

In the Cambridge article, we found a reference to another concentration study.

Baks, Busse, and Green published “Fund Managers Who Take Big Bets: Skilled or Overconfident” in 2006. The abstract says it all:

We document a positive relation between mutual fund performance and managers' willingness to take big bets in a relatively small number of stocks. Focused managers outperform their more broadly diversified counterparts by approximately 30 basis points per month or roughly 4% annualized. The results hold for mimicking portfolios based on fund holdings as well as when returns are measured net of expenses. Concentrated managers outperform precisely because their big bets outperform the top holdings of more diversified funds. The evidence suggests that investors may enhance performance by diversifying across focused managers rather than by investing in highly diversified funds.

Their sample covers funds from 1979-2003, and the return advantage per month ranges between +1 and +67 basis points depending on the methodology for measuring fund concentration and how many deciles are included. That equates to a range between +0.12% and +8.34% on an annualized basis for concentrated managers.
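The annualized range above follows from compounding the monthly edge; a quick sketch confirms the quoted figures:

```python
# Compounding a monthly edge into an annualized figure, matching the
# +1 to +67 basis-point range quoted above.
def annualize(monthly_bps):
    return (1 + monthly_bps / 10_000) ** 12 - 1

print(f"{annualize(1):.2%}")   # 0.12%
print(f"{annualize(67):.2%}")  # 8.34%
```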

We continue to believe that there is demonstrable skill among equity managers, that this skill could be harnessed better than it typically is, and that concentration is the simplest way to improve results for a manager who possesses positive stock-picking skill.

1 eVestment Alliance Database: September 2007 to June 2013 US large-cap core equity, US large-cap growth equity, US large-cap value equity, US small-cap core equity, US small-cap growth equity, US small-cap value equity, and all EAFE equity

## September 19, 2019

### Superforecasting and Noise Reduction in Financial Analysis

Alpha Theory and Good Judgment Inc. hosted a Superforecasting workshop this week, with several Alpha Theory clients attending and learning about noise reduction techniques. Warren Hatch, President of GJI, led the discussion on how to reduce noise in forecasting. Warren began with an overview of the characteristics of Superforecasters and what leads to good individual forecasts. We then shifted to how we can use team dynamics to improve forecast accuracy.

Warren started with examples pulled from other noise reduction workshops and showed how the team methods reduced noise (measured by the standard deviation of estimates) and increased accuracy (measured by Brier Score). We did our own example using Good Judgment Inc. software to ask questions of our group that led to a valuation of NFLX:

How many subscribers will Netflix have at the end of 2020?

What will be Netflix's revenue per subscriber in 2020?

What will be Netflix's net margin in 2020?

What will be Netflix's PE multiple in 2020?

We compiled the initial results and compared them to current levels. We then had a chance to review other contributors’ forecasts and rationales and vote on the ones we thought were best. Next, the “team” discussed the highest vote-getting rationales and quickly identified an expert in the room. Through the noise reduction exercises and discussion, we narrowed our forecast range (reduced noise) and hopefully improved accuracy. We’ll know in a year when we see if NFLX is at $296.00.
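The four questions chain together into a back-of-envelope valuation. A sketch of that chain follows; every input below is a hypothetical placeholder, not the workshop’s actual answers:

```python
# Chaining the four workshop questions into a rough NFLX price estimate.
# Every input is a hypothetical placeholder, not the workshop's answers.
subscribers = 200e6           # year-end 2020 subscribers
revenue_per_sub = 120.0       # annual revenue per subscriber (USD)
net_margin = 0.15             # net margin
pe_multiple = 45              # P/E multiple
shares_outstanding = 440e6    # assumed share count

earnings = subscribers * revenue_per_sub * net_margin
implied_price = earnings * pe_multiple / shares_outstanding
print(f"implied price: ${implied_price:,.2f}")
```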


## September 5, 2019

### Noise: The Maker of Markets and Madness

Reflect on your own investment process and ask this question: “Given identical sets of information, would two different analysts come to the same conclusion? Would an analyst come to the same conclusion on different days?” If not, there is noise. Where there is noise, there is room for improvement.

As investors, we accept that market participants have different opinions about the value of securities. That’s what makes a market. Even within investment teams, a diversity of views is a good thing.

The problem is how we combine those views. All too often, different people will use the same process and information to come up with the same (often wrong) answer. You can even ask the same person the same question at different times and get wildly different answers. All this noise is a bad thing. It adds volatility to the data, wastes our time by drawing attention away from the underlying drivers of value, and impairs the quality of our investment decisions.

One option is to ignore the noise, take an average of everyone’s different views, and call it a consensus. That’s what we do with consensus earnings estimates. Or GDP forecasts. Or Wall Street dinner surveys. The wisdom of the crowd works, and over time this is certainly better than relying on any one individual.

A better option is to reduce the noise, not just ignore it. New and exciting research findings by Nobel laureate Daniel Kahneman and others show us concrete steps we can take to identify and reduce noise.

Kahneman often describes noise by citing work he performed for a large insurance company. In the example, two different insurance adjusters were given the same insurance claim and came up with two substantially different payouts. The difference in those claims is noise. The question becomes, what level of subjectivity should there be in an insurance claim?

The problem is that humans are unreliable decision makers; their judgments are strongly influenced by irrelevant factors, such as their current mood, the time since their last meal, and the weather. We call the chance variability of judgments noise. It is an invisible tax on the bottom line of many companies.

A firm whose employees exercise judgment does not expect decisions to be entirely free of noise. But often noise is far above the level that executives would consider tolerable—and they are completely unaware of it.

Obviously, investors are different than insurance adjusters. They don’t get an insurance claim form with all relevant information. But noise has been studied and repeatedly demonstrated to impact decision making across fields. Kahneman further distinguishes noise from cognitive bias:

“To appreciate the distinction, think of your bathroom scale. We would say that the scale is biased if its readings are generally either too high or too low. If your weight appears to depend on where you happen to place your feet, the scale is noisy. A scale that consistently underestimates true weight by exactly four pounds is seriously biased but free of noise. A scale that gives two different readings when you step on it twice is noisy. Many errors of measurement arise from a combination of bias and noise. Most inexpensive bathroom scales are somewhat biased and quite noisy.” – “Noise: How to Overcome the High, Hidden Costs of Inconsistent Decision Making,” Kahneman, Rosenfield, Gandhi, Blaser (HBR, October 2016)

So how should an investor reduce noise? By increasing process. Kahneman’s “radical solution” is to go all the way and replace human judgment with algorithms. These models can be straightforward: select 6–8 key variables, assign equal weights, and devise the simplest possible rule to combine them. Where replacing human judgment completely with algorithms isn’t viable or even desirable, there are other solutions.

Alpha Theory’s work with clients shows how this process can work. In a matter of 30 minutes, a “simple model” can be created to generally codify the 5-10 most important attributes of a firm’s process. Backed up by experience and empirical results, these simple models can reduce noise and increase returns—and the improvement is immediate and lasting.

Good Judgment Inc, the commercial company built on the research findings by Professor Philip Tetlock and “Superforecasting,” helps clients boost forecast accuracy with a suite of tools to cut through the noise. At recent Superforecasting workshops, for instance, group accuracy improved by an average of at least 20% after just 15 minutes of structured discussion.

Noise is the enemy of good decision making. The separate but complementary work at Good Judgment Inc and Alpha Theory show there are steps you can take now that can reduce noise and boost your bottom line.

## August 1, 2019

### The Concentration Manifesto for Shorts

We were reading the great research that comes from our friends at Novus recently and saw a reference to a paper written by Della Corte, Kosowski, Rapanos (2019). This paper analyzes 1.7 million short positions from 585 managers that contributed to the European Union Short Disclosure Information dataset from 2012-2018. They found that highest quintile conviction shorts (P5 - as measured by position size) outperformed lowest quintile conviction shorts (P1). In fact, the highest conviction shorts were the only cohort that had a mean return that was negative on an absolute basis (positive contribution for shorts).

After applying a six-factor model, a strategy going long the low-conviction shorts and short the high-conviction shorts generated an alpha of 11%. Ideally, the results would show a gradual decline between P1 and P5, but P4 does not follow that trend. Nevertheless, there is demonstrable skill in short selection for the largest position sizes, which provides further support for the Concentration Manifesto.

## June 1, 2019

### Increasing the Probability of Success - Part 2

2. PROBABILITY BUCKETS

In many ways, this is the easiest of all the methods to implement. Predetermine as a firm how many price target scenarios you’re going to forecast per position. For example, let’s say you’re going to do three: Reward, Base, Risk. And for each position, the analyst can choose Low, Medium, or High Probability and you preset the probability distribution. For instance:

In this case, probabilities are constrained within a range but allow for some flexibility. What you should expect from your analysts is a normal distribution of probability ranges. Mostly Mediums with a few Lows and Highs. No analyst should have more Highs than anything else. The way to explain this to your team is that Low isn’t bad. It’s just less likely than the average (Medium) name. And vice versa for a High.

Probability Buckets are the most common probability recommendation we’ve made for clients. They are a good combination of flexibility and practicability, easy to explain to the team, and an effective audit to determine if analysts are being too aggressive.
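A minimal sketch of how preset buckets might be wired up; the specific probability presets below are illustrative, not Alpha Theory’s actual defaults:

```python
# Preset probability buckets for a three-scenario framework.
# The specific presets are illustrative, not Alpha Theory's defaults.
BUCKETS = {
    "Low":    {"Reward": 0.20, "Base": 0.55, "Risk": 0.25},
    "Medium": {"Reward": 0.30, "Base": 0.50, "Risk": 0.20},
    "High":   {"Reward": 0.40, "Base": 0.45, "Risk": 0.15},
}

def scenario_probabilities(conviction):
    """Return the preset Reward/Base/Risk probabilities for a position."""
    probs = BUCKETS[conviction]
    assert abs(sum(probs.values()) - 1.0) < 1e-9  # must always sum to 100%
    return probs

print(scenario_probabilities("Medium"))
```

Because the analyst only chooses Low, Medium, or High, probabilities stay constrained within firm-approved ranges while leaving some flexibility per name.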

3. FLEXIBLE PROBABILITIES

The book Superforecasting explains how individuals can improve their forecasting skill (if you are a forecasting practitioner you should read the book – see our blog post about it here). One primary trait of Superforecasters is micro-updates. Superforecasters change their probability estimates in small increments with new information. For example, changing their probability of Trump winning the 2016 election from 43% to 45% after getting the latest polling data. These small updates accrue to better forecasting accuracy.

Given that fact, you’d think my recommendation would be for Alpha Theory clients to make micro-updates and have lots of flexibility with price targets. My answer is, well, it depends. The incentive for Superforecasters was to maximize their accuracy. The incentive for most analysts is P&L, not accuracy, making them reward seekers. This encourages behavior that increases position size (i.e., inflated price targets and probabilities; case in point, the average hit rate for all Alpha Theory clients is 51%, yet analysts assume they’re going to make money 72% of the time).

The culture and workflow of the firm determine what makes the most sense. If analysts come up with price targets and probabilities with little to no input from the Portfolio Manager then use Fixed Probabilities and Probability Buckets to reduce the likelihood of them “gaming the system.” If the opposite is true and price targets and probabilities are a collaborative exercise with the Portfolio Manager, then Flexible Probabilities is more than likely ideal.

One last suggestion as it relates to setting probabilities: know your fund’s historical hit rate. What percentage of positions made money on an absolute basis? And on an alpha basis? What percentage hit their upside price target? What percentage hit their downside price target? Use these to set a baseline for the probabilities of the firm. For example, if the historical batting average of the firm is 51%, then the average probability of hitting the reward target should not be 72%. Keep your probabilities realistic and the portfolio you build will be a more accurate forecast of what you will receive (see the “Probability Inflation” blog post).
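That baseline check can be automated in a few lines (the position-level probabilities below are hypothetical):

```python
# Audit sketch: compare analysts' assumed reward probabilities against
# the fund's historical hit rate. All position data is hypothetical.
historical_hit_rate = 0.51  # share of positions that historically made money

assumed_reward_probs = [0.75, 0.70, 0.68, 0.80, 0.72]
avg_assumed = sum(assumed_reward_probs) / len(assumed_reward_probs)

print(f"assumed {avg_assumed:.0%} vs. historical {historical_hit_rate:.0%}")
if avg_assumed > historical_hit_rate + 0.05:
    print("Reward probabilities look inflated vs. the fund's track record.")
```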

There is no question that setting probabilities is one of the trickiest parts of scenario-based forecasting. We hope this overview gives you a starting point for how to implement an effective probability setting framework.

## May 1, 2019

### Increasing the Probability of Success - Part 1

I was discussing with a new client how analysts should approach probabilities. Probabilities are multiplied by the client’s scenario price forecasts to come up with a probability-weighted return.
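As a sketch, with hypothetical prices and probabilities, that calculation looks like this:

```python
# Probability-weighted return: each scenario's return times its probability.
# The price targets and probabilities below are hypothetical.
current_price = 20.0
scenarios = [
    (0.30, 30.0),  # (probability, price target) - reward case
    (0.50, 22.0),  # base case
    (0.20, 14.0),  # risk case
]

pwr = sum(p * (target / current_price - 1) for p, target in scenarios)
print(f"{pwr:.1%}")  # 14.0%
```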

The probability piece is the most subjective part of the probability-weighted return (see our “False Precision” blog post that discusses why it is important to set probabilities), so we came up with several approaches to see what fit best for their firm. I thought I’d share them with anyone who may be struggling with probabilities:

1. Fixed Probabilities (Distribution)

Analysts come up with price targets that match the part of the forecast distribution associated with the probabilities. In this example, all positions have a “fixed” 20%/60%/20% probability framework. The goal is to come up with price targets that match those buckets (i.e. what is the 20% risk price target?).

This method produces price targets that reflect the range of outcomes within each probability “bucket”. An analyst would iterate the assumptions in their financial model to estimate the extreme outcomes (the two 20% probability buckets at the ends) and the higher probability outcomes (the 60% probability bucket in the middle). The result is a price target that blends the possible outcomes in each bucket by their associated probability. Another way to think of this is as a cumulative probability distribution.

For example, the analyst may associate -5% sales growth and 10% EBITDA margins with the 20% cumulative probability outcome, 25% sales growth and 40% EBITDA margins with the 80% cumulative probability outcome, and 60% growth and 55% margins with the 99% cumulative probability. There would be many other points in between (represented by the green dots) where the analyst would apply different assumptions in their model.

The benefits of this method are that the probabilities are fixed and require no subjective assessment. This method also allows highly sensitive models with extreme outcomes to be reflected in the resultant probability-weighted return. The downside is that it is time-intensive and allows no flexibility in the probabilities.

## April 1, 2019

### Why Price Targets are Broken and an Easy Method to Fix Them

I used to carefully calculate a price target for every asset I invested in. I was, after all, a sell-side analyst for many years and the price target was a staple. But when I used price targets to actually deploy capital, I was less than satisfied. I always had a nagging feeling that something was missing. It took reading a book on poker theory to wake me up.

Great poker players calculate the equivalent of a price target by determining the amount of money in the pot. But they do not stop there. Great poker players determine the probability of winning the pot and combine that with pot size and the amount they have to bet (risk) to determine a probability-weighted return (i.e. if they were to play the exact same hand situation 1,000 times, the return they would expect). Why wouldn’t I do the same thing for every investment I make? Take my price target, combine it with my estimate of downside risk, and weight each by my best guess of the probability of each event occurring.
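The poker version of the calculation is simple expected value. The pot size, bet, and win probability below are hypothetical numbers chosen for illustration:

```python
# Hypothetical hand: $100 in the pot, $20 to call, 30% chance of winning.
pot = 100.0
bet = 20.0
p_win = 0.30

# Probability-weighted return of calling: win the pot p_win of the time,
# lose the bet the rest of the time.
expected_value = p_win * pot - (1 - p_win) * bet
print(f"Expected value of calling: ${expected_value:.2f}")
```

Here the call has a positive expected value of $16 even though the player loses the hand 70% of the time; the same blending of payoff, risk, and probability is what a probability-weighted return does for an investment.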

Yes, the probability of winning a hand of poker is different from the probability of a stock going from \$20 to \$40. Poker has aleatory probabilities, which are defined by observable statistics; investing has epistemic probabilities, which cannot be determined by historical observation (terms I learned from listening to Ronald Howard, the Stanford Business School professor who has studied decision making for the last 40 years). Investors describe the same aleatory/epistemic distinction with different words: a definable probability is called risk, and an indefinable probability is called uncertainty. Uncertainty does not mean we should not use probability, because we are already using our “confidence” to influence the investment decision anyway.

Gerd Gigerenzer describes it like this in his book “Calculated Risks”: “Degrees of belief are subjective probabilities and are the most liberal means to translate uncertainty into a probability. The point here is that investors can translate even one-time events into probabilities provided they satisfy the laws of probability – the exhaustive and exclusive set of alternatives adds up to 100%. Also, investors can frequently update probabilities based on degrees of belief when new, relevant information becomes available.”

Many firms have a spreadsheet with price targets for each stock in their portfolio. Their price target represents the value the stock should achieve assuming their thesis is correct. But what if their thesis is wrong? Their price target assumes a 100% probability that their thesis comes true. If that is not the case, then downside risk has to be part of the equation. And if the chance of upside or downside is not a coin-flip, then probability must be assessed. These are the metrics that an analyst should be trying to tease out of their fundamental research because they describe the true expected payoff from the investment. Price target does not give you a probability-weighted return! Anything less than a probability-weighted return requires you to rely on your mental calculator to combine profit, risk, and conviction level.

Price Target is the most common measurement used by fundamental money managers to evaluate asset quality. The Price Target represents an analyst’s best estimate of value and is a synthesis of their research. The Price Target is then compared to the asset’s current price to determine if there is a significant enough dislocation of value to provide the fund an opportunity to profit. The Price Target is dynamic because it can be adjusted as the analyst receives new fundamental data. It can also be used as a trading tool that notifies the fund when to enter and exit positions.

For all of its attributes, one inherent flaw has made Price Targets impractical for money managers. A Price Target only explains the most likely scenario and thus assumes a 100% probability of its outcome being true. For example, Company ABC is trading at \$20 and has just hired a new CEO who is known for cutting costs and improving gross margins. Company ABC has historically had margins and multiples below industry norms. So, in your Price Target, you give the company the benefit of industry margins and multiples and determine the company should be worth \$30. This is an implied return of 50% and sounds like a solid story. However, the company is a generation behind in product development, so it may be difficult to generate equivalent margins and the company may have to spend on R&D to catch up with the industry. Additionally, there is an even riskier scenario in which the industry continues to outpace Company ABC in product development and its competitive position deteriorates to a point where margins are severely impaired. Maybe these other outcomes are not as likely, but they must be accounted for in the measurement of asset quality.

This is where many portfolio managers will assess that the asset has great potential upside to \$30 from \$20, but there are substantial risks that prevent the fund from taking the exposure that generally would be given to an asset with 50% potential return. In this method, the portfolio manager was forced to use heuristics and mental calculation to adjust for risk. But why force yourself to be a human computer when you have all of the relevant information to make a more accurate decision? The first step is to appreciate that the firm’s thesis does not have a 100% probability of occurring. Once a firm internalizes that tenet, it is easy to see that all 50% return assets are not created equal.

Let’s use our Company ABC example to fully describe the asset using the analyst’s research. The thesis is that the new CEO will improve margins and the company will receive a multiple in line with the industry. The analyst believes there is a reasonable chance that this occurs, say 50%. The analyst also calculates that if the company was forced to spend more on research and development to catch up from a product standpoint, the stock would be worth \$22 and this has a lower chance of occurring, say 30%. Lastly, the chance that the industry advances their lead on Company ABC’s product, severely impacting margins, is about 20% and would probably take the stock down to \$10. We have now described the full breadth of our research and we can synthesize it without heuristics or mental calculation:
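The Company ABC scenario math can be checked directly. A minimal sketch, using the three scenarios and probabilities described above:

```python
# Company ABC scenarios from the research above: (probability, price target).
current_price = 20.0
scenarios = [
    (0.50, 30.0),  # thesis plays out: margins and multiple reach industry norms
    (0.30, 22.0),  # forced R&D spend to catch up on product
    (0.20, 10.0),  # industry extends its lead and margins are impaired
]

# Probability-weighted expected price, then the implied return from $20.
expected_price = sum(p * target for p, target in scenarios)
pwr = (expected_price - current_price) / current_price
print(f"Expected price: ${expected_price:.2f}, PWR: {pwr:.0%}")
```

The probability-weighted expected price is \$23.60, so the PWR is 18% — far from the 50% the \$30 price target alone implies.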

The 18% Probability-Weighted Return (PWR) explains the full breadth of your research, is an accurate measurement to make portfolio decisions and is an apples-to-apples comparison of all assets. But this is just the beginning of the benefits of PWR because assets are now being measured by how much return you gain for a given level of risk.

We will continue with our analysis of Company ABC. The analyst is in San Francisco and has just exited a one-on-one with Company ABC’s CEO. He finds out that product development is going ahead of plan at a lower cost and the product should be industry-competitive in the next few months. Let’s evaluate how Price Target and PWR would each deal with this new information. For Price Target, things are going according to plan so we wouldn’t raise our target above \$30. With PWR we can confidently assess that our probability of success has increased and our probability of overspending on R&D and falling behind the competition has decreased. We would quickly adjust our assumptions:
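The adjustment is the same calculation with shifted probabilities. The revised probabilities below are hypothetical illustrations (the original table is not reproduced here): suppose the thesis scenario rises from 50% to 65% while the R&D-overspend and impaired-margins scenarios fall to 20% and 15%.

```python
# Hypothetical revised probabilities after the CEO meeting; the price
# targets are unchanged because the thesis itself has not changed.
current_price = 20.0
updated = [
    (0.65, 30.0),  # thesis scenario, now more likely
    (0.20, 22.0),  # R&D-overspend scenario, now less likely
    (0.15, 10.0),  # impaired-margins scenario, now less likely
]

expected_price = sum(p * target for p, target in updated)
pwr = (expected_price - current_price) / current_price
print(f"Updated PWR: {pwr:.0%}")
```

Under these illustrative probabilities the PWR rises from 18% to 27%, even though the \$30 price target never moved.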

Using PWR we see exactly how much better the position is than it was before, given the new fundamental data. The portfolio manager can add to the position with confidence and with a sense of how much to add.

The process is so much better using Probability-Weighted Returns. Using Price Targets is certainly better than guessing, but it leaves out important information needed to make the right decision. If you’re using Price Targets today, take the time to reassess your approach. Start with downside targets and fixed probabilities (50%/50%) and then, over time, add differentiated probabilities to get the full impact of Probability-Weighted Returns on your portfolio.