SYSTEM REQUIREMENTS


Please note the following System Requirements. Further, please limit the number of open applications (particularly price streaming applications) while logged in to Alpha Theory™.


Recommended System Specifications
Processor: Dual Core or Quad-Core 2.4GHz or faster
RAM: 4GB+
Browser: Google Chrome 30+
Screen Resolution: 1280 x 1024 or greater
Internet Access: Business Class High-Speed


Minimum System Requirements
Processor: Intel Pentium-M 2.0GHz or equivalent
RAM: 2GB+
Browser: Google Chrome, Mozilla Firefox, Internet Explorer 9+ (without Compatibility View), Safari
Screen Resolution: 1024 x 768 or greater
Internet Access: High-Speed


Alpha Theory Blog - News and Insights

58 posts categorized "Risk Management"

November 1, 2019

Concentrating on Concentration: New Data on Portfolio Concentration

 

As most of our readers know, we’re proponents of more concentrated portfolios. In May of 2017, we released our Concentration Manifesto, an attempt to start a critical dialogue between managers and allocators and ultimately improve the active management process: a conversation that requires both sides to cast aside outdated thinking and embrace the notion that concentration is in their best interest.

 

And we’re seeing it in external data:

 

[Chart: Exhibit 19 (external data)]

 

And in our own managers:

 

[Chart: Average position size among our managers]

 

This conversation began well before our Concentration Manifesto. We recently found an April 2014 study by Cambridge Associates outlining the “Hallmarks of Successful Active Equity Managers.”

 

Cambridge Associates analyzed a selection of managers to isolate attributes that lead to success. In their findings, active share and concentration were major contributors. Their analysis¹ found that concentrated portfolios (fewer than 30 positions for US equity; fewer than 40 for US small-cap and EAFE equity) generated between 100bps and 170bps of additional performance over non-concentrated portfolios.

 

[Table 3: Results of Active Share Analysis]

 

The performance difference for concentrated managers held after fees and worked across various strategies. The fractal nature of the result (it still holds when you break the sample into different strategies) lends additional validation to concentration’s benefits.

 

In the Cambridge article, we found a reference to another concentration study.

 

Baks, Busse, and Green published “Fund Managers Who Take Big Bets: Skilled or Overconfident” in 2006. The abstract says it all:

 

We document a positive relation between mutual fund performance and managers' willingness to take big bets in a relatively small number of stocks. Focused managers outperform their more broadly diversified counterparts by approximately 30 basis points per month or roughly 4% annualized. The results hold for mimicking portfolios based on fund holdings as well as when returns are measured net of expenses. Concentrated managers outperform precisely because their big bets outperform the top holdings of more diversified funds. The evidence suggests that investors may enhance performance by diversifying across focused managers rather than by investing in highly diversified funds.

 

Their sample covers funds from 1979-2003, and the return advantage per month ranges between +1 and +67 basis points depending on the methodology for measuring fund concentration and how many deciles are included. That equates to a range between +0.12% and +8.34% on an annualized basis (compounding monthly) for concentrated managers.

 

[Chart: Fund performance vs. portfolio weight]

 

We continue to believe that there is demonstrable skill among equity managers, that this skill can be harnessed better than the average manager typically does, and that concentration is the simplest way to improve results for a manager with positive stock-picking skill.

 

¹ eVestment Alliance Database, September 2007 to June 2013: US large-cap core equity, US large-cap growth equity, US large-cap value equity, US small-cap core equity, US small-cap growth equity, US small-cap value equity, and all EAFE equity

 

October 4, 2019

The Difference between Intrinsic and Extrinsic Value – A Case Against WACC

 

In one of our blogs, we highlighted how our clients would have enhanced their returns by more closely following a position-sizing optimization based on probability-weighted return. We also noted how the quality of the probability-weighted returns impacted the improvement generated by the optimization (garbage in, garbage out). The return our clients calculate is the difference between the market’s valuation and the manager’s calculation of intrinsic value (probability-weighted return). Said another way, gaining a sense of intrinsic value is the core task of a portfolio manager.

 

For managers that use discounted cash flow analysis to determine intrinsic value, the discount rate is one of the most subjective, yet important, inputs. For managers that do scenario analysis, we pointed out a straightforward approach that dramatically reduces subjectivity (June ‘18 blog). However, for those that don’t do scenario analysis, determining the discount rate can be substantially more complicated.

 

We’ve recently spent some time with Ryan Guttridge and his colleague Corry Bedwell of the University of Maryland, and they have some interesting ideas on setting discount rates that I thought were worth sharing:

 

Let’s Think about What Intrinsic Actually Means

 

An intrinsic property is well defined in sciences such as chemistry: it is an essential or inherent property of a system. Said another way, an intrinsic property is internal to the system being evaluated, just as density is an intrinsic property of water. In contrast, an extrinsic property is not internally defined by the entity being evaluated. So, think about them this way:

 

Intrinsic – Totally independent of outside influence

Extrinsic – An influence outside the system

 

Why We Discount – The Marshmallow Test

 

Put a group of kids in a room with a plate of marshmallows (or any other tasty treat) and leave. What will happen? Chances are, by the time you come back the treats will be gone. Why? Well, something now is better than something later. OK, so change the rules and offer the kids a deal: if they don’t eat the treat right away, they will get another one at some later time, growing their supply of marshmallows. So the deal, from a kid’s perspective, is: I give up eating the marshmallow now and get two later. What makes it worth it? First, he has to trust that the marshmallows will be delivered as promised. Second, the time he must wait for the reward can’t be too long, given that he can always eat the marshmallow now. In other words, the reward has to be large enough to overcome the opportunity cost of eating the marshmallow now. This is the classic economic logic of discounting: forgo today’s marshmallow for two tomorrow.

 

Investing – Bringing two separate but intrinsic concepts together

 

Think about what goes through your head when you make an investment. You are going through the same steps described above. First, you estimate the series of cash flows you expect the asset to provide (when, and how often, the marshmallows are arriving). Second, you decide how much you will pay for those cash flows (how badly you want that marshmallow now). Each of these steps is independent but intrinsic.

However, according to the financial literature, determining an appropriate discount rate (i.e., the opportunity cost) isn’t straightforward. The efficient market hypothesis logically implies that the correct discount rate for our intrinsic valuation models is the company’s weighted average cost of capital (WACC), which is defined in the following way:

 

WACC = (E / V) × Re + (D / V) × Rd × (1 − Tc), where E is the market value of equity, D is the market value of debt, V = E + D, Re is the cost of equity, Rd is the cost of debt, and Tc is the corporate tax rate.

 

That is, WACC is the cost of equity weighted by the portion of capital that comes from equity, plus the after-tax cost of debt weighted by the portion that comes from debt.
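
As a quick numeric illustration of that formula (the capital structure, costs of capital, and tax rate below are purely hypothetical):

```python
# Hypothetical illustration of the WACC formula above; all inputs are made up.
equity_value = 800_000_000   # market value of equity (E)
debt_value = 200_000_000     # market value of debt (D)
cost_of_equity = 0.09        # Re
cost_of_debt = 0.05          # Rd
tax_rate = 0.21              # Tc

total_capital = equity_value + debt_value  # V = E + D
wacc = (equity_value / total_capital) * cost_of_equity \
     + (debt_value / total_capital) * cost_of_debt * (1 - tax_rate)

print(f"WACC = {wacc:.2%}")  # 0.8 * 9% + 0.2 * 5% * (1 - 0.21) = 7.99%
```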

 

WACC – A Tale of an Extrinsic Rate

 

So, let’s take this apart -- our recommended discount rate is a function of the capital “we” have from equity and the amount of capital “we” have from debt. Right? This has at least two fundamental problems.

 

First, if the market is efficient, there is no reason for active management. However, there is a growing body of literature showing that while large swaths of the market can be considered efficient, there are pockets of inefficiency. So, if these inefficiencies are going to be taken advantage of, logically a discount rate that assumes efficiency cannot be used.

 

Second, an active manager is required to have an independent sense of value. He is hired to find “mis-priced” assets. The only way to do that is to develop an independent view of the correct valuation (i.e., outside the market). So, what do we have that is independent of the market? Think back to the marshmallow test. The value of the deal is a function of the opportunity cost of your capital (compensation for not buying something else) and the intrinsic productivity (read: cash flows) of the asset.

 

What to do about Risk?

 

One way to deal with this problem is to use scenario analysis as mentioned above. For those who don’t use scenario analysis, you need to focus on these three issues when calculating a discount rate: 1) Opportunity cost of capital (possibly provided by your benchmark), 2) the average cash flow level, and 3) its variation. Not only are these intrinsic to the system, but your analysis offers an independent sense of value. This, in turn, allows for the calculation of “edge” (the difference between Intrinsic Value and Market Value) and proper optimization. In our next post, we will expand on the calculation of an intrinsic discount rate using this method. In the meantime, feel free to check out our paper on the topic: https://arxiv.org/abs/1903.09683

 

September 19, 2019

Superforecasting and Noise Reduction in Financial Analysis

 

Alpha Theory and Good Judgment Inc. hosted a Superforecasting workshop this week, with several Alpha Theory clients attending and learning about noise reduction techniques. Warren Hatch, President of GJI, led the discussion on how to reduce noise in forecasting. Warren began with an overview of the characteristics of Superforecasters and what leads to good individual forecasts. We then shifted to how we can use team dynamics to improve forecast accuracy.

 

Warren started with examples pulled from other noise reduction workshops and showed how the team methods reduced noise (measured by the standard deviation of estimates) and increased accuracy (measured by Brier score). We then did our own exercise, using Good Judgment Inc. software to ask our group questions that led to a valuation of NFLX:

How many subscribers will Netflix have at the end of 2020?

What will be Netflix's revenue per subscriber in 2020?

What will be Netflix's net margin in 2020?

What will be Netflix's PE multiple in 2020?   

 

We compiled the initial results and compared them to the current values. We then had a chance to review other contributors’ forecasts and rationales and vote on the ones we thought were best. Next, the “team” discussed the highest vote-getting rationales and quickly identified an expert in the room. Through the noise reduction exercises and discussion, we narrowed our forecast range (reduced noise) and, hopefully, improved accuracy. We’ll know in a year when we see whether NFLX is at $296.00.
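
For readers who want to compute the two measures mentioned above, here is a minimal sketch with made-up forecast data; the Brier score shown is the simple squared-error form for a single binary question:

```python
from statistics import mean, stdev

# Hypothetical probability forecasts from five participants for a single binary
# question (e.g. "Will the stock be at or above the forecast price in a year?").
forecasts = [0.55, 0.70, 0.40, 0.65, 0.60]
outcome = 1  # illustrative resolution: 1 = it happened, 0 = it did not

# Noise: dispersion of individual estimates around the group view.
noise = stdev(forecasts)

# Accuracy: squared-error (Brier-style) score of the group's average forecast;
# lower is better, 0 is a perfect forecast.
group_forecast = mean(forecasts)
brier_score = (group_forecast - outcome) ** 2

print(f"Noise (std dev of estimates): {noise:.3f}")
print(f"Group forecast: {group_forecast:.2f}, Brier score: {brier_score:.3f}")
```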

 


 

Thanks to Warren and team for putting on a great workshop for Alpha Theory clients. Please contact us with any questions.

info@alphatheory.com

 

September 5, 2019

Noise: The Maker of Markets and Madness

This article is written in collaboration with Warren Hatch, CFA (Good Judgment Inc.).

 

Reflect on your own investment process and ask this: “Given identical sets of information, would two different analysts come to the same conclusion? Would the same analyst come to the same conclusion on different days?” If not, there is noise. Where there is noise, there is room for improvement.

 

As investors, we accept that market participants have different opinions about the value of securities. That’s what makes a market. Even within investment teams, a diversity of views is a good thing.

 

The problem is how we combine those views. All too often, different people will use the same process and information to come up with the same (often wrong) answer. You can even ask the same person the same question at different times and get wildly different answers. All this noise is a bad thing. It adds volatility to the data, wastes our time by drawing attention away from the underlying drivers of value, and impairs the quality of our investment decisions. 

 

One option is to ignore the noise, take an average of everyone’s different views, and call it a consensus. That’s what we do with consensus earnings estimates. Or GDP forecasts. Or Wall Street dinner surveys. The wisdom of the crowd works, and over time this is certainly better than relying on any one individual.

 

A better option is to reduce the noise, not just ignore it. New and exciting research findings by Nobel laureate Daniel Kahneman and others show us concrete steps we can take to identify and reduce noise.

 

Kahneman often describes noise by citing work he performed for a large insurance company. In the example, two different insurance adjusters were given the same claim and came up with two substantially different payouts. The difference between those payouts is noise. The question becomes: how much subjectivity should there be in an insurance claim?

 

The problem is that humans are unreliable decision makers; their judgments are strongly influenced by irrelevant factors, such as their current mood, the time since their last meal, and the weather. We call the chance variability of judgments noise. It is an invisible tax on the bottom line of many companies.

 

A firm whose employees exercise judgment does not expect decisions to be entirely free of noise. But often noise is far above the level that executives would consider tolerable—and they are completely unaware of it.

 

Obviously, investors are different than insurance adjusters. They don’t get an insurance claim form with all relevant information. But noise has been studied and repeatedly demonstrated to impact decision making across fields. Kahneman further distinguishes noise from cognitive bias:


To appreciate the distinction, think of your bathroom scale. We would say that the scale is biased if its readings are generally either too high or too low. If your weight appears to depend on where you happen to place your feet, the scale is noisy. A scale that consistently underestimates true weight by exactly four pounds is seriously biased but free of noise. A scale that gives two different readings when you step on it twice is noisy. Many errors of measurement arise from a combination of bias and noise. Most inexpensive bathroom scales are somewhat biased and quite noisy. – “Noise: How to Overcome the High, Hidden Costs of Inconsistent Decision Making,” Kahneman, Rosenfield, Gandhi, and Blaser (HBR, October 2016)
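
To make the scale analogy concrete, here is a minimal sketch, using simulated readings, of how bias (systematic error) and noise (chance variability) can be separated:

```python
from statistics import mean, stdev

true_weight = 150.0  # the quantity being measured, in pounds

# Simulated readings from two hypothetical scales.
biased_but_quiet = [146.0, 146.1, 145.9, 146.0, 146.0]    # reads ~4 lbs low, very consistent
noisy_but_unbiased = [148.5, 152.0, 149.0, 151.5, 149.0]  # centered near truth, scattered

for name, readings in [("biased scale", biased_but_quiet), ("noisy scale", noisy_but_unbiased)]:
    errors = [r - true_weight for r in readings]
    bias = mean(errors)      # systematic error: readings generally too high or too low
    noise = stdev(readings)  # chance variability from one reading to the next
    print(f"{name}: bias = {bias:+.1f} lbs, noise (std dev) = {noise:.1f} lbs")
```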

 

So how should an investor reduce noise? By adding process. Kahneman’s “radical solution” is to go all the way and replace human judgment with algorithms. These models can be straightforward: select 6–8 key variables, assign equal weights, and devise the simplest possible rule to combine them. Where replacing human judgment completely with algorithms isn’t viable, or even desirable, there are other solutions.
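
As a rough illustration of that recipe (and not Alpha Theory’s actual model), here is a minimal sketch of an equal-weighted simple model; the attribute names and ratings are hypothetical:

```python
# A sketch of an equal-weighted "simple model": a handful of key variables,
# equal weights, and the simplest possible combination rule (an average).
# The attribute names and the 1-5 ratings are hypothetical.
ATTRIBUTES = ["management quality", "balance sheet", "valuation",
              "competitive position", "earnings momentum", "thesis clarity"]

def simple_score(ratings):
    """Equal-weighted average of 1-5 ratings across the predefined attributes."""
    return sum(ratings[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

idea = {"management quality": 4, "balance sheet": 3, "valuation": 5,
        "competitive position": 4, "earnings momentum": 2, "thesis clarity": 4}

print(f"Composite score: {simple_score(idea):.2f} out of 5")
```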

 

Alpha Theory’s work with clients shows how this can work in practice. In a matter of 30 minutes, a “simple model” can be created that codifies the 5-10 most important attributes of a firm’s process. Backed up by experience and empirical results, these simple models can reduce noise and increase returns, and the improvement is immediate and lasting.

 

Good Judgment Inc., the commercial company built on the research findings of Professor Philip Tetlock and the book “Superforecasting,” helps clients boost forecast accuracy with a suite of tools to cut through the noise. At recent Superforecasting workshops, for instance, group accuracy improved by an average of at least 20% after just 15 minutes of structured discussion.

 

Noise is the enemy of good decision making. The separate but complementary work at Good Judgment Inc. and Alpha Theory shows there are steps you can take now to reduce noise and boost your bottom line.

 

August 1, 2019

The Concentration Manifesto for Shorts

 

We were recently reading the great research that comes from our friends at Novus and saw a reference to a paper written by Della Corte, Kosowski, and Rapanos (2019). This paper analyzes 1.7 million short positions from 585 managers that contributed to the European Union Short Disclosure Information dataset from 2012-2018. They found that the highest-quintile conviction shorts (P5, as measured by position size) outperformed the lowest-quintile conviction shorts (P1). In fact, the highest-conviction shorts were the only cohort with a negative mean return on an absolute basis (a positive contribution for shorts).

 

[Table: Panel A, Equally-weighted Portfolios]

 

After applying a six-factor model, a strategy going long the low-conviction shorts and short the high-conviction shorts had an alpha of 11%. Ideally, the results would show a gradual decline from P1 to P5, but P4 does not follow that trend. Nevertheless, there is demonstrable skill in short selection for the largest position sizes, which provides further support for the Concentration Manifesto.

 

Download full version of The Concentration Manifesto

 

July 1, 2019

Brief Buys…Slow Sells

 

Sean Stannard-Stockton, who runs Ensemble Capital, sent us a link to a recent paper titled “Selling Fast and Buying Slow,” which examines the return contribution of buy and sell decisions across more than 4 million trades in 700+ portfolios. The result: the portfolio managers in this study were great buyers and lousy sellers. On top of that, they compounded the problem by buying slowly when they would have made more money buying quickly, and selling quickly when they would have lost less money selling slowly.

 

Alpha Theory does this by design: it starts (buys) positions quickly and sells them slowly. We’ve measured a 4% difference between our clients’ actual ROIC of 6% and the 10% ROIC they would have earned by trading according to Alpha Theory’s recommendations. We further break that down into initial position sizing and trading, and we find that about half, or 2%, comes from better trading. The “Selling Fast and Buying Slow” paper finds 170bps from better buying and selling decisions, which is close to the trading difference we’ve measured for our clients*.

 

Excerpt from “Selling Fast and Buying Slow” - Is there skill in buying and selling?

 

We examine this question using a unique data set containing the daily holdings and trades of sophisticated market experts—experienced institutional portfolio managers (PMs). Our data is comprised of 783 portfolios, with an average portfolio valued at approximately $573 million. More than 89 million fund-security-trading dates and 4.4 million high-stakes trades (2.0 and 2.4 million sells and buys, respectively) are observed between 2000 and 2016.

 

While the investors display clear skill in buying, their selling decisions underperform substantially. Positions added to the portfolio outperform both the benchmark and a strategy which randomly buys more shares of assets already held in the portfolio by over 100 basis points per year. In contrast, selling decisions not only fail to beat a no-skill random selling strategy, they consistently underperform it by substantial amounts. In our preferred specification, PMs forgo 70 basis points per year in raw returns.

 

Why would a majority of portfolio managers appear to exhibit skill in buying while at the same time underperforming substantially in selling? At face value, the fundamentals of buying and selling to optimize portfolio performance are similar: Both require incorporating information to forecast the distribution of future returns of an asset. Skill in both decisions requires the investor to look for relevant information and integrate it into the forecast. However, there is a reason to suspect that selling and buying decisions involve different psychological processes (Barber and Odean 2013). Recent work from the lab is consistent with this discrepancy: Buying decisions appear to be more forward-looking and belief-driven than selling decisions in an experimental asset market (Grosshans, Langnickel, and Zeisberger 2018). And indeed, anecdotal evidence from our sample points to PMs thinking differently about the two decisions; extensive interviews suggest that they appear to focus primarily on finding the next great idea to add to their portfolio and view selling largely as a way to raise cash for purchases.

 

We utilize a unique dataset and find evidence that financial market experts—institutional investors managing portfolios averaging $573 million—display costly, systematic biases. A striking finding emerges: While investors display skill in buying, their selling decisions underperform substantially—even relative to random sell strategies. We provide evidence that investors use heuristics when selling but not when buying, and that these heuristic strategies are empirically linked to the documented difference in performance.

 

As shown in Section 4, the comparison of trades on earnings announcement versus nonannouncement days suggests that PMs do not lack fundamental skills in selling; rather, results are consistent with PMs devoting more cognitive resources to buying than selling. When decision-relevant information is salient and readily available—as it is on announcement days—PMs’ selling performance improves substantially. We propose a mechanism through which overall underperformance in selling can be explained by a heuristic two-stage selling process, where PMs limit their consideration set to assets with salient characteristics (extreme prior returns) and sell those they are least attached to (low active share assets). A proxy for this heuristic strategy is associated with substantial losses relative to a no-skill random selling strategy.

 

The question remains of why professional PMs have not learned that their selling decisions are underperforming simple no-skill strategies. While we can only speculate, the environment in which fund managers make decisions offers several clues. As Hogarth (2001) notes, the development of expertise requires frequent and consistent feedback. While it is feasible to generate this type of feedback for both buy and sell decisions, anecdotal evidence from our interviews with PMs suggests that decisions are overwhelmingly focused on one domain over the other. In terms of time allocations, our understanding is that the vast majority of the investors’ research resources are devoted to finding the next winner to add to the portfolio. Moreover, standard reporting practices are well-suited for evaluating the performance of buying decisions: Purchased assets are tracked, providing salient and frequent feedback on the outcomes of buying decisions. This process appears successful in producing expertise—purchased assets consistently outperform the benchmark. In comparison, paltry resources are devoted to decisions of what to sell, and the relevant feedback is largely lacking: Assets sold are rarely if ever, tracked to quantify returns relative to potential alternatives such as our random sell counterfactual.

 

A recent paper by our friends at Essentia Analytics titled “The Alpha Lifecycle” confirms this conclusion with a different data set and a different approach.

 

Essentia is wrapping up a 5-month analysis of this phenomenon (Alpha Lifecycle), involving data from 42 portfolios over more than 10 years. The conclusions are clear: alpha has a beginning, a middle and an end. It tends to decay over time, reducing — or even reversing — the benefits it offered early on. Active managers who wish to deliver sustained alpha in their portfolios need to understand their own alpha lifecycles and adjust their investment decision-making processes accordingly.

 

[Chart: Dominant Lifecycle, from Essentia Analytics]

 

Alpha has a lifecycle and tends to decay over time — frequently causing managers who fall in love with their stocks to suffer. On average, managers we analyzed experienced a 400 basis point peak-to-trough decay in return on each position. 

 

These papers are great examples of the value of feedback for firms that want to improve. Capturing data, analyzing that data, and changing behavior based on empirical evidence is akin to Tiger Woods hitting golf balls while hooked up to a TrackMan and adjusting his swing to maximize the attributes that give him the greatest chance of success. The managers of the future are already adopting regimens that treat their process like that of an elite athlete. Those that don’t will get left behind.

 

* We recognize this is not a perfect apples-to-apples comparison, as our analysis measures the trading value as the difference between the Alpha Theory return and the return of keeping a consistent position size from beginning to end of the holding period.

June 1, 2019

Increasing the Probability of Success - Part 2

 

This article is a continuation of Increasing the Probability of Success - Part 1.

 

2. PROBABILITY BUCKETS

In many ways, this is the easiest of all the methods to implement. Predetermine, as a firm, how many price target scenarios you’re going to forecast per position. For example, let’s say you’re going to do three: Reward, Base, and Risk. For each position, the analyst chooses Low, Medium, or High probability, and you preset the probability distribution for each choice. For instance:

 

[Table: preset probability distributions for Low, Medium, and High conviction choices]

 

In this case, probabilities are constrained within a range but allow for some flexibility. What you should expect from your analysts is a normal distribution of probability choices: mostly Mediums, with a few Lows and Highs. No analyst should have more Highs than anything else. The way to explain this to your team is that Low isn’t bad; it’s just less likely than the average (Medium) name. And vice versa for a High.
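
A minimal sketch of how preset buckets might be wired up; the bucket probabilities and the example price targets below are hypothetical placeholders, not recommended values:

```python
# Hypothetical preset (reward, base, risk) probabilities for each bucket.
# The actual presets a firm chooses would differ; these are placeholders.
BUCKETS = {
    "Low":    (0.20, 0.50, 0.30),
    "Medium": (0.30, 0.50, 0.20),
    "High":   (0.40, 0.50, 0.10),
}

def probability_weighted_return(price, reward, base, risk, bucket):
    """Blend three scenario price targets using the bucket's preset probabilities."""
    p_reward, p_base, p_risk = BUCKETS[bucket]
    expected_price = p_reward * reward + p_base * base + p_risk * risk
    return expected_price / price - 1

# Example: a $20 stock with $30 reward, $22 base, and $14 risk targets, Medium conviction.
print(f"PWR: {probability_weighted_return(20, 30, 22, 14, 'Medium'):.1%}")  # 14.0%
```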

 

Probability Buckets are the most common probability recommendation we’ve made for clients. They are a good combination of flexibility and practicability, easy to explain to the team, and an effective audit to determine if analysts are being too aggressive.

 

3. FLEXIBLE PROBABILITIES

 

The book Superforecasting explains how individuals can improve their forecasting skill (if you are a forecasting practitioner you should read the book – see our blog post about it here). One primary trait of Superforecasters is micro-updates. Superforecasters change their probability estimates in small increments with new information. For example, changing their probability of Trump winning the 2016 election from 43% to 45% after getting the latest polling data. These small updates accrue to better forecasting accuracy.

 

Given that fact, you’d think my recommendation would be for Alpha Theory clients to make micro-updates and have lots of flexibility with price targets. My answer is: well, it depends. The incentive for Superforecasters was to maximize their accuracy. The incentive for most analysts is P&L, not accuracy, making them reward seekers. This encourages behavior that increases position size (i.e., inflated price targets and probabilities; case in point: the average hit rate for all Alpha Theory clients is 51%, yet analysts assume they’re going to make money 72% of the time).

 

The culture and workflow of the firm determine what makes the most sense. If analysts come up with price targets and probabilities with little to no input from the Portfolio Manager, then use Fixed Probabilities and Probability Buckets to reduce the likelihood of them “gaming the system.” If the opposite is true and price targets and probabilities are a collaborative exercise with the Portfolio Manager, then Flexible Probabilities is more than likely ideal.

 

One last suggestion as it relates to setting probabilities: know your fund’s historical hit rate. What percentage of positions made money on an absolute basis? On an alpha basis? What percentage hit their upside price target? What percentage hit their downside price target? Use these to set a baseline for the probabilities of the firm. For example, if the historical batting average of the firm is 51%, then the average probability of hitting the reward target should not be 72%. Keep your probabilities realistic, and the portfolio you build will be a more accurate forecast of what you will receive (see the “Probability Inflation” blog post).
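
One way to build that baseline, sketched below with entirely hypothetical closed-position records:

```python
# Hypothetical closed-position records:
# (position return, benchmark return over the same period, hit upside target?, hit downside target?)
closed_positions = [
    ( 0.12,  0.05, True,  False),
    (-0.08,  0.03, False, True),
    ( 0.04,  0.06, False, False),
    ( 0.20,  0.02, True,  False),
    (-0.15, -0.04, False, True),
]

n = len(closed_positions)
absolute_hit_rate = sum(r > 0 for r, _, _, _ in closed_positions) / n
alpha_hit_rate    = sum(r > b for r, b, _, _ in closed_positions) / n
upside_hit_rate   = sum(up for _, _, up, _ in closed_positions) / n
downside_hit_rate = sum(dn for _, _, _, dn in closed_positions) / n

print(f"Made money:          {absolute_hit_rate:.0%}")
print(f"Beat benchmark:      {alpha_hit_rate:.0%}")
print(f"Hit upside target:   {upside_hit_rate:.0%}")
print(f"Hit downside target: {downside_hit_rate:.0%}")
```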

 

There is no question that setting probabilities is one of the trickiest parts of scenario-based forecasting. We hope this overview gives you a starting point for how to implement an effective probability setting framework.

 

May 1, 2019

Increasing the Probability of Success - Part 1

 

I was discussing with a new client how analysts should approach probabilities. Probabilities are multiplied by the client’s scenario price forecasts to come up with a probability-weighted return.

 

The probability piece is the most subjective part of the probability-weighted return (see our “False Precision” blog post, which discusses why it is important to set probabilities), so we came up with several approaches to see which would fit their firm best. I thought I’d share them with anyone who may be struggling with probabilities:

 

1. Fixed Probabilities (Distribution)

Analysts come up with price targets that match the part of the forecast distribution associated with the probabilities. In this example, all positions have a “fixed” 20%/60%/20% probability framework. The goal is to come up with price targets that match those buckets (i.e. what is the 20% risk price target?).

[Chart: price target scenarios under a fixed 20%/60%/20% probability framework]

 

This method pulls price targets that reflect the probability-weighted outcomes within each probability “bucket.” An analyst would iterate the assumptions in their financial model to estimate the extreme outcomes (the two 20% probability buckets at the ends) and the higher-probability outcomes (the 60% probability bucket in the middle). The result is a price target that blends the possible outcomes in each bucket by their associated probability. Another way to think of this is as a cumulative probability distribution.

[Chart: normal cumulative distribution function]

 

For example, the analyst may associate -5% sales growth and 10% EBITDA margins with the 20% cumulative probability outcome, 25% sales growth and 40% EBITDA margins with the 80% cumulative probability outcome, and 60% growth and 55% margins with the 99% cumulative probability. There would be many other points in between (represented by the green dots) where the analyst would apply different assumptions in their model.
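
To make the mechanics concrete, here is a minimal sketch under the fixed 20%/60%/20% framework described above; the sampled price targets and current price are invented for illustration:

```python
# Hypothetical price targets sampled along the analyst's forecast distribution,
# from worst case to best case (the "green dots" described above).
sampled_targets = [8, 11, 14, 16, 18, 20, 22, 24, 27, 32]
current_price = 15  # also hypothetical

def average(values):
    return sum(values) / len(values)

# Fixed 20% / 60% / 20% buckets: the two tails and the middle of the distribution.
n = len(sampled_targets)
risk_bucket   = sampled_targets[: n // 5]            # bottom 20% of outcomes
base_bucket   = sampled_targets[n // 5 : -(n // 5)]  # middle 60%
reward_bucket = sampled_targets[-(n // 5):]          # top 20%

risk_target = average(risk_bucket)
base_target = average(base_bucket)
reward_target = average(reward_bucket)

expected_price = 0.20 * risk_target + 0.60 * base_target + 0.20 * reward_target
print(f"Risk/Base/Reward targets: {risk_target:.1f} / {base_target:.1f} / {reward_target:.1f}")
print(f"Probability-weighted return: {expected_price / current_price - 1:.1%}")  # 28.0%
```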

 

The benefit of this method is that the probabilities are fixed and require no subjective assessment. It also allows highly sensitive models with extreme outcomes to be reflected in the resulting probability-weighted return. The downside is that it is time-intensive and allows no flexibility in the probabilities.

 

April 1, 2019

Why Price Targets are Broken and an Easy Method to Fix Them

 

I used to carefully calculate a price target for every asset I invested in. I was, after all, a sell-side analyst for many years and the price target was a staple. But when I used price targets to actually deploy capital, I was less than satisfied. I always had a nagging feeling that something was missing. It took reading a book on poker theory to wake me up.

 

Great poker players calculate price targets by determining the amount of money in the pot. But they do not stop there. Great poker players determine the probability of winning the pot and combine it with the pot size and the amount they have to bet (risk) to determine a probability-weighted return (i.e., if they were to play the exact same hand 1,000 times, the return they would expect). Why wouldn’t I do the same thing for every investment I make? Take my price target, combine it with my estimate of downside risk, and multiply each by my best guess of the probability of each event occurring.

 

Yes, the probability of winning a hand of poker is different from determining the probability of a stock going from $20 to $40. Poker has aleatory probabilities, meaning they are defined by observable statistics, while investing has epistemic probabilities, meaning they cannot be determined by historical observation (these are terms learned from listening to Ronald Howard, the Stanford Business School professor who has studied decision making for the last 40 years). Investors describe the same aleatory and epistemic probabilities with different words: a definable probability is called risk, and an indefinable probability is called uncertainty. Uncertainty does not mean we should not use probability, because we are using our “confidence” to influence the investment decision anyway.

 

Gerd Gigerenzer describes it like this in his book “Calculated Risks”: “Degrees of belief are subjective probabilities and are the most liberal means to translate uncertainty into a probability. The point here is that investors can translate even one-time events into probabilities provided they satisfy the laws of probability – the exhaustive and exclusive set of alternatives adds up to 100%. Also, investors can frequently update probabilities based on degrees of belief when new, relevant information becomes available.”

 

Many firms have a spreadsheet with price targets for each stock in their portfolio. Their price target represents the value the stock should achieve assuming their thesis is correct. But what if their thesis is wrong? Their price target assumes a 100% probability that their thesis comes true. If that is not the case, then downside risk has to be part of the equation. And if the chance of upside or downside is not a coin-flip, then probability must be assessed. These are the metrics that an analyst should be trying to tease out of their fundamental research because they describe the true expected payoff from the investment. Price target does not give you a probability-weighted return! Anything less than a probability-weighted return requires you to rely on your mental calculator to combine profit, risk, and conviction level.

 

Price Target is the most common measurement used by fundamental money managers to evaluate asset quality. The Price Target represents an analyst’s best estimate of value and is a synthesis of their research. The Price Target is then compared to the asset’s current price to determine whether there is a significant enough dislocation of value to provide the fund an opportunity to profit. The Price Target is dynamic because it can be adjusted as the analyst receives new fundamental data. It can also be used as a trading tool that notifies the fund when to enter and exit positions.

 

For all of its attributes, one inherent flaw has made Price Targets impractical for money managers. A Price Target only explains the most likely scenario and thus assumes a 100% probability of its outcome being true. For example, Company ABC is trading at $20 and has just hired a new CEO who is known for cutting costs and improving gross margins. Company ABC has historically had margins and multiples below industry norms. So, in your Price Target, you give the company the benefit of industry margins and multiples and determine the company should be worth $30. This is an implied return of 50% and sounds like a solid story. However, the company is a generation behind in product development, so it may be difficult to generate equivalent margins, and the company may have to spend on R&D to catch up with the industry. Additionally, there is an even riskier scenario in which the industry continues to outpace Company ABC in product development and its competitive position deteriorates to the point where margins are severely impaired. Maybe these other outcomes are not as likely, but they must be accounted for in the measurement of asset quality.

 

This is where many portfolio managers will assess that the asset has great potential upside to $30 from $20, but that there are substantial risks preventing the fund from taking the exposure that would generally be given to an asset with a 50% potential return. In this method, the portfolio manager is forced to use heuristics and mental calculation to adjust for risk. But why force yourself to be a human computer when you have all of the relevant information to make a more accurate decision? The first step is to appreciate that the firm’s thesis does not have a 100% probability of occurring. Once a firm internalizes that tenet, it is easy to see that all 50% return assets are not created equal.

 

Let’s use our Company ABC example to fully describe the asset using the analyst’s research. The thesis is that the new CEO will improve margins and the company will receive a multiple in line with the industry. The analyst believes there is a reasonable chance this occurs, say 50%. The analyst also calculates that if the company were forced to spend more on research and development to catch up from a product standpoint, the stock would be worth $22, and this has a lower chance of occurring, say 30%. Lastly, the chance that the industry extends its lead on Company ABC’s product, severely impacting margins, is about 20% and would probably take the stock down to $10. We have now described the full breadth of our research, and we can synthesize it without heuristics or mental calculation:

 

Scenario                                        Price Target    Probability
Thesis: industry margins and multiple                $30             50%
Extra R&D spend to catch up on product               $22             30%
Competitors extend lead; margins impaired            $10             20%
Probability-weighted value: $23.60 on a $20 stock, an 18% Probability-Weighted Return

 

The 18% Probability-Weighted Return (PWR) reflects the full breadth of your research, is an accurate measurement for making portfolio decisions, and provides an apples-to-apples comparison across all assets. But this is just the beginning of the benefits of PWR, because assets are now being measured by how much return you gain for a given level of risk.
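
The table above reduces to a short expected-value calculation; here is a minimal sketch using the Company ABC figures from the text:

```python
# Company ABC scenarios from the text above: price targets and probabilities.
scenarios = [
    (30, 0.50),  # thesis plays out: industry margins and multiple
    (22, 0.30),  # extra R&D spend to catch up on product
    (10, 0.20),  # competitors extend their lead; margins impaired
]
current_price = 20

expected_price = sum(price * prob for price, prob in scenarios)
pwr = expected_price / current_price - 1
print(f"Expected price: ${expected_price:.2f}, PWR: {pwr:.0%}")  # $23.60, 18%
```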

 

We will continue with our analysis of Company ABC. The analyst is in San Francisco and has just exited a one-on-one with Company ABC’s CEO. He finds out that product development is running ahead of plan at a lower cost and that the product should be industry-competitive in the next few months. Let’s evaluate how a Price Target and PWR would each deal with this new information. For the Price Target, things are going according to plan, so we wouldn’t raise our target above $30. With PWR, we can confidently assess that our probability of success has increased and our probability of overspending on R&D and falling behind the competition has decreased. We would quickly adjust our assumptions:

 

[Table: revised scenario probabilities and price targets after the new information, showing a higher Probability-Weighted Return]

 

Using PWR, we see exactly how much better our idea is given the new fundamental data and how much better the position is than it was before. The portfolio manager can be confident in adding to the position and have a sense of how much.

 

The process is so much better using Probability-Weighted Returns. Using Price Targets is certainly better than guessing, but it leaves out important information needed to make the right decision. If you’re using Price Targets today, take the time to reassess your approach. Start with downside targets and fixed probabilities (50%/50%) and then, over time, add differentiated probabilities to get the full impact of Probability-Weighted Returns on your portfolio.

 

March 1, 2019

Your Position Size is Wrong: A Plea to Put Down the Mental Calculator

 

Hedge funds throw away half of their potential returns by not explicitly calculating probability-weighted return. After working for a fund and having numerous conversations with hedge and mutual fund managers over the past decade, it is obvious that an overwhelming majority of funds’ mistakes come from poor estimation of risk-reward. In fact, most funds have not explicitly defined an upside price target, downside risk target, and conviction level for each investment in their portfolio. This is because most fund managers trust that they can manage the portfolio in their head. They analyze and discuss the upside, downside, and conviction level for every investment, so they assume these factors’ influence is carefully measured into every decision. But I would posit that there is a distinct difference between factoring in upside, downside, and conviction level through mental calculation and measuring it with probability-weighted return. Why would you trust your mental calculator for such an important decision? Could you imagine a bungee jumper that knows the height of a bridge, tension of the bungee cord, and weight of the jumper but just estimates the correct length of the bungee cord? Absolutely not. For every jump, a calculation is performed to make sure that easily avoidable risk is eliminated. Investors all too often skip the “bungee cord” calculation of probability-weighted return and end up assuming undue risk.

 

Empirical research and common sense prove that probability-weighted return is the optimal method to measure an asset’s quality. But most firms do not use probability-weighted return because it questions the output of their mental calculators. Researchers in Behavioral Finance and Neuroeconomics have cautioned investors for over 30 years that their brain is poorly designed to make financial decisions. Armed with this knowledge, investors still do not adjust their process to eliminate known decision-making frailties. In most cases, these shortcomings can be eliminated by calculating a probability-weighted return for every investment.

 

A quick example from a meeting with a portfolio manager highlights the problem of not using probability-weighted return. I was working with a successful fund manager when I asked for the logic behind the largest position in his fund. He told me that it was a company he knows well, and he was sure they were going to beat earnings. I asked for his upside target if they beat earnings and the probability of it occurring. His best estimates were a profit of 10% and a probability of 90%. I then asked him to explain what would happen if they did not beat earnings. He described a dire scenario where the stock would be down at least 20% because the Street was expecting a beat. I quickly took his estimates and calculated a probability-weighted return of 7% (90% × 10% + 10% × -20% = 7%). This caused the manager to change his exposure to the relatively weak idea. He had all of the correct information in his head, but his mental calculator was being corrupted by his over-confidence in his thesis. If this happened with the largest position in the portfolio, you can guarantee that there are other inefficiently sized positions.

 

It is all too common that funds perform the intense research to drive the ball 99 yards down the field but do not “punch it in” by explicitly defining upside profit, downside risk, and probability. Probability-weighted return takes into account the full breadth of your fundamental research and creates a discipline that treats each position as if it were brand new every day, so performance-draining oversights are eliminated. As you can see from the example, calculating probability-weighted return is easy and only requires that the firm explicitly define upside, downside, and probability and then compare the probability-weighted sum to the current price.

 

Once you calculate a probability-weighted return for every investment in the portfolio, you will quickly spot position sizes that do not match your research. Probability-weighted return becomes the synthesis of your research, the common vernacular of investment discussions, and the anchor for decisions. Your portfolio generates greater returns because you are continuously improving it to give more exposure to the firm’s best ideas while constantly pruning the weakest. Return is only half the equation: a portfolio constructed with probability-weighted return also has considerably less risk. Every decision is now made in the context of downside potential. If the downside risk increases, the probability-weighted return falls, which in turn lowers the position size.

 

The benefits of probability-weighted return position sizing are profound, and because the process is based on common sense and sound math, it will become the de facto standard in the coming years. Some of the brightest fundamental managers in the world have been utilizing this discipline for years, and your firm can capture the benefits of probability-weighted return in as little as a few weeks. Make a commitment to put down the mental calculator, and you are guaranteed to make better decisions.