
SYSTEM REQUIREMENTS


Please note the following System Requirements. Further, please limit the number of open applications (particularly price streaming applications) while logged in to Alpha Theory™.


Recommended System Specifications
Processor: Dual-Core or Quad-Core 2.4 GHz or faster
RAM: 4GB+
Browser: Google Chrome 30+
Screen Resolution: 1280 x 1024 or greater
Internet Access: Business Class High-Speed


Minimum System Requirements
Processor: Intel Pentium-M 2.0 GHz or equivalent
RAM: 2GB+
Browser: Google Chrome, Mozilla Firefox, Internet Explorer 9+ (without Compatibility View), Safari
Screen Resolution: 1024 x 768 or greater
Internet Access: High-Speed

Alpha Theory Blog - News and Insights

December 15, 2017

Superforecasting for Investors: Part 2

Alpha Theory hosted a book club on December 6th with portfolio managers, analysts, and allocators coming together to discuss “Superforecasting” by Phil Tetlock. We were lucky enough to have a Superforecaster, Warren Hatch, moderate and perform forecasting exercises with the group. We spent 2 hours together and only scratched the surface on applying Superforecasting to investing.

 

Here are a few key takeaways:

1. COMMON ATTRIBUTES OF SUPERFORECASTERS:

INTELLIGENCE: Above average but genius isn’t required

QUANTITATIVE: Not only understand math but apply it to everyday life

FOXES, NOT HEDGEHOGS: Speak in terms of possibilities, not absolutes

INTELLECTUALLY HUMBLE: Understand the limits of their knowledge

SYSTEM 2 DRIVEN: Use the logic-driven instead of instinct-driven portion of their brain

DO NOT BELIEVE IN FATALISM: Life is not preordained

CONSTANTLY REFINE: Make frequent small updates to their forecast based on new information (but not afraid to make big changes when warranted)

COUNTERFACTUALS: Believe that history is one of many possible paths that could have occurred

OUTSIDE VIEW: Incorporate the internal and external views

GROWTH MINDSET: Constantly search for ways to improve their forecasting process

 

2. IDENTIFYING TALENT: There are identifiable attributes that can be used in hiring and that have a profound impact on forecasting skill.

 

Active Open Mindedness*


Fluid Intelligence*


 

* At a prior book club, we measured participants, and the results showed they had the attributes of Superforecasters: high Active Open-Mindedness (3.99 out of 5) and high Fluid Intelligence (8 out of 10 – the highest score the Good Judgment folks have seen).

Active Open-Mindedness and Fluid Intelligence are two measurable traits that managers can use to select talent. In the chart below, the improvement attributable to these definable attributes equates to about 40% of forecasting skill over standard forecasts.


3. DEVIL’S ADVOCATE: Firms should appoint a Devil’s Advocate for each investment to expand critical thinking (someone to ask the question, “I see your downside is $40. How is that if the 52-Week Low is $22 and the trough multiple would put it at $25?”)

 

4. OUTSIDE VIEW: Firms should require an Outside View for every investment idea (“While everyone I’ve spoken to says this deal will close, only 20% of deals with one party under SEC investigation close.”)

 

5. REFINEMENT: New information should always be incorporated in forecast (think Bayesian).

 

6. TEAMS MAKE BETTER FORECASTS: Team dialog generally improves forecasting accuracy.

 

7. FORECAST CULTURE: Firms should embrace “forecast” as part of their vernacular and conversations should revolve around how information impacts the forecast.

 

8. MEASURE TO BE BETTER: We all forecast, but we rarely measure. That fact needs to change if we really want to improve.
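Measurement is exactly what the Superforecasting research formalizes with the Brier score: the mean squared error between probability forecasts and what actually happened. A minimal sketch (the forecasts and outcomes here are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0 is perfect; an always-50% forecaster scores 0.25; lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasts scored against what actually happened (1 = event occurred):
score = brier_score([0.9, 0.8, 0.3], [1, 1, 0])
```

Tracking this score over many forecasts is what separates "we forecast" from "we measure our forecasting."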

 

9. CLUSTERING: Break complex topics into individual components that are easier to forecast, and combine the smaller forecasts to forecast the more complex topic. (e.g., "Will AAPL break $200?" is a complex forecast that can be broken down into "Will iPhone X ship more than 400m units?", "Will Samsung's technology outpace Apple's?", etc.)

 

10. INDEXING: Individual clustering questions can be weighted to come up with a forecast for the complex topic instead of using simple equal weighting.
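Clustering plus indexing can be sketched in a few lines: weight each sub-forecast by its judged importance and combine. The sub-questions, probabilities, and weights below are purely illustrative, not a recommended model:

```python
def weighted_cluster_forecast(probs, weights):
    """Weighted average of sub-forecast probabilities (weights need not sum to 1)."""
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

# Illustrative sub-questions behind "Will AAPL break $200?":
sub_probs = [0.55, 0.40, 0.65]   # iPhone shipments, Samsung threat, etc.
weights = [0.5, 0.2, 0.3]        # judged importance of each sub-question
estimate = weighted_cluster_forecast(sub_probs, weights)
```

Equal weights recover simple clustering; unequal weights are the "indexing" refinement.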

 

11. DIVERSITY OF FORECASTS MATTERS: Forecasts made from similar perspectives are less accurate than those made from multiple perspectives (see Boosting below).

 

12. BOOSTING: If you have three forecasters with different perspectives who all arrive at a 70% probability of an event occurring, then the actual probability is greater than 70%.
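This intuition is implemented in the Good Judgment literature as "extremizing" the aggregate probability. A hedged sketch (the exponent value is illustrative; tuning it is an empirical exercise):

```python
def extremize(p, a=2.0):
    """Push an aggregate probability toward the nearer extreme.
    With a > 1, agreement among diverse, independent forecasters is
    treated as stronger evidence than any single forecast."""
    return p ** a / (p ** a + (1 - p) ** a)

boosted = extremize(0.70)   # three diverse forecasters all say 70%
```

With a = 2, three independent 70% forecasts become roughly an 84% aggregate, while a 50% consensus stays at 50%.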

 

13. GISTING: We didn’t get to spend much time here, but the idea is that complex subjects, reports, presentations, etc. can be distilled down into gists that the team votes on and refines into a supergist. Full understanding is never just quantitative or qualitative. Superforecasting is quantitative. Supergisting attempts to provide the qualitative piece.

 

14. HYBRID FORECASTING COMPETITION: IARPA, the intelligence community’s research agency that sponsored the forecasting tournament that launched the Superforecasters (Good Judgment), is sponsoring a new Man + Machine Forecasting Tournament. For those interested in forecasting and machine learning, this is your spot: https://www.iarpa.gov/index.php/research-programs/hfc

 

December 01, 2017

Concentration During Periods of High Volatility

As we’ve gotten feedback about the Concentration Manifesto, one common question is, “Would the Monte Carlo simulation hold during periods of market turbulence?” To stress test the conclusion, we chose two periods of high volatility: the 12 months between June 2008 and June 2009 (high positive and negative volatility) and the 12 months ended February 2009 (the months leading up to the market nadir).

While the results showed lower overall alpha returns for both portfolios during the financial crisis, the concentrated portfolio still outperformed and was safer than the diversified portfolio, because it maintained its improved batting average during those periods.
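A toy Monte Carlo along these lines shows why a higher batting average lets a concentrated book win. All numbers below (position counts, hit rates, win/loss alpha) are invented for illustration and are not the study's actual inputs:

```python
import random

def average_alpha(n_positions, batting_avg, win=0.10, loss=-0.08,
                  trials=20000, seed=7):
    """Mean equal-weighted portfolio alpha across Monte Carlo trials.
    Each position wins `win` with probability `batting_avg`, else loses `loss`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        port = sum(win if rng.random() < batting_avg else loss
                   for _ in range(n_positions)) / n_positions
        total += port
    return total / trials

# Illustrative assumption: the concentrated book of best ideas hits more often.
concentrated = average_alpha(n_positions=20, batting_avg=0.56)
diversified = average_alpha(n_positions=100, batting_avg=0.52)
```

Even a few points of batting-average edge dominate the outcome; the diversification benefit of 80 extra names cannot offset it.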

 

June 2008 to June 2009


We reran this analysis for the 12 months leading up to 2/28/2009, and the relationship still holds. The difference in timeframes mainly impacts the alpha returns for the two portfolios. Again, the separation between the portfolios is driven by the batting average difference between the concentrated and diversified portfolios.

 

Trailing Twelve Months Ending February 2009


It’s worth noting that since we don’t have data prior to 2011, we assume the batting average between these two portfolios remains consistent over time. We did look at batting averages for the timeframes we have in our database, and they appear relatively consistent except for a dip in June 2015. Note that this batting average includes all portfolios and is on an alpha basis, but does not assume fees. The goal was to look for some consistency over time.


 

The Concentration Manifesto has become a part of the conversation about how to improve active management. Please keep the feedback and questions coming.

 

November 10, 2017

Predictably Insightful: Recap of the Behavioral Alpha Conference

 

This is a picture of me and Dan Ariely, author of “Predictably Irrational” and five other great books on the decision pitfalls we all fall into. Dan was the keynote speaker at Behavioral Alpha 2017, an event put on by our friends at Essentia Analytics that we were proud to help sponsor.

 


 

The day was packed with great speakers including:

- Dan Ariely: “Behavioral Finance in Practice”

- Denise Shull: “Your Senses, Feelings & Emotions Are the Ultimate Dataset”

- Clare Flynn Levy: “Applying Behavioral Finance to Your Own Investment Process”

- Fireside Chat with Mark Baumgartner: “Why Asset Allocators Care About Behavioral Analysis”

- Cameron Hight: “Mistakes Managers Make & How to Fix Them”

- Peer Idea Exchange with Paul Sonkin and Paul Johnson: “Pitching the Perfect Investment: Managing the Tensions Between Analysts and Managers”

- Dave Winsborough: “How the Collective Personality of Your Team Affects Performance”

 

Here’s a quick recap of some of the takeaways:

Dave Winsborough discussed ways that we can build better teams by understanding the goal we’re trying to accomplish, the needed components to accomplish that goal, and measuring the team participants to make sure that the team has all of the necessary components. It’s a relatively straightforward idea that should be applicable to almost any team.

Denise Shull discussed ways we can become better in tune with our feelings and emotions with the idea of learning when and how to leverage those feelings. Learning how to identify our own emotions is a powerful first step towards being able to mute the negative emotions and take advantage of the positive (signals).

Much of the conference was about emotion and bias and how they cause us to make poor decisions. I completely agree, but that’s not my expertise. I spent much of my time talking about the processes that help mitigate bias, primarily by making our assumptions and decision process explicit so that they can be judged and analyzed.

Dan Ariely gave several fascinating anecdotes: how casinos are the best at applying behavioral tools; how companies’ internal satisfaction surveys have predictive power for stock performance; how Intuit gives teams time and money to try bold new initiatives to help them get over the risk of projects that fail; how a scale that doesn’t show your weight (but tracks it over time) is a much better way to lose weight than one that gives immediate feedback subject to good-habit-breaking volatility; and how people in the second-lowest tax bracket are the most opposed to minimum wage hikes because a hike could push them into the lowest rung of society. His major takeaway was that bias and personality are tough to eliminate, so you have to create habits, rules, and routinized behaviors that help us do the things we say we want to do (very Alpha Theory😉).

Clare Flynn Levy showed how investors can make better decisions by capturing some basic information about themselves and their decisions. Taking the time to tie those data points together can help us better understand when we make good decisions and when we make poor decisions. By understanding these cues when they’re happening, we can take advantage of the positive and avoid the negative.

Mark Baumgartner discussed his time at the Ford Foundation and the Institute for Advanced Study and some of the things he’s seen in the managers he evaluates. He said that about 10% of the managers he meets have some form of structured process around behavioral science, decision making, portfolio management, position sizing, etc. He believes that the primary value of a manager isn’t based on these processes, but that there is a lot of easy-to-pick-up alpha from implementing process. He would like to see his managers embrace it more actively but says the industry moves glacially while the products that help improve the process are evolving very fast.

The room was full of managers and allocators. There was a self-selection bias, but the crowd truly embraced the concepts discussed during the day for how to be better using behavioral science. In fact, the crowd asked amazing questions, and one of my favorite parts of the day came from a member of the audience who expanded on his thoughts about the difficulties of capturing alpha. He said the number of investors has increased from 5,000 to 1 million over 50 years. How do you reverse that trend when it is one of the highest-paid professions, where you get to work with amazing people, research a broad range of interests, meet leaders in industry, academia, and government, and be exposed to an array of amazing ideas? If I’m ambitious and at the top of my class, why would I not pursue that profession?

Hmmmmm, maybe we can ask Dan Ariely if he has some creative way to change that behavior.

 

October 20, 2017

American Idols

I was lucky enough to be part of a small event, The Frontier of Forecasting Conference, hosted by Good Judgment Inc. Among the participants were Phil Tetlock, Barbara Mellers, and Daniel Kahneman. For those that don’t know, Kahneman is a Nobel Laureate and considered the father of Behavioral Economics. Tetlock and Mellers are the brains behind Superforecasting.

Several of you were interested in attending but unable to make the trip. The following is a summary of the presentations from the conference. 

 


Left to right: Phil Tetlock, Barbara Mellers, Daniel Kahneman, Lucky Man

 

Tetlock on Gisting

Good Judgment's CEO Terry Murray opened up the morning by introducing the founder of the company, Phil Tetlock. Phil talked about a new idea that he’s working on called Gisting. The goal is to improve understanding by taking a large amount of information and having multiple people create a Gist or a shorter explanation of the information. These Gists would then be graded by peers and the best ones would be picked and synthesized into a team Gist. This leads to a deeper understanding by the “gisters” and easier understanding by readers that only have time for the gist.

Gists are important because full understanding is never just quantitative or qualitative. Superforecasting is quantitative. Supergisting attempts to provide the qualitative piece. The challenge is that time is scarce and this is a new task that will meet resistance in most organizational cultures.

Gisting is a relatively new idea, and it will be interesting to watch how it develops as Phil, Barbara, and the Good Judgment group put more time into research. The next book: Supergisting?

Kahneman on Noise

Kahneman was next up and he spent his time talking about Noise. The concept is not new but he believes it should become a focus because it is easier to reduce than bias. He described an insurance company that he worked with to improve claims adjuster accuracy. He measured the efficacy of their claims process by having independent adjusters price the same claim. The average difference in claim value was 50%! That means that one adjuster might write a check for $1,000 and another for $1,500 for the same claim. He described how a simple algorithm would dramatically reduce noise and improve claim accuracy.
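A noise audit of this kind boils down to comparing independent judgments of the same case. One common way to quantify it (the exact metric Kahneman used isn't specified here) is the average pairwise gap scaled by each pair's mean; the claim values are from his example:

```python
from itertools import combinations

def noise_index(judgments):
    """Average absolute gap between every pair of judgments,
    scaled by each pair's mean (one common noise-audit metric)."""
    pairs = list(combinations(judgments, 2))
    return sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

# Two adjusters pricing the same claim at $1,000 and $1,500:
audit = noise_index([1000, 1500])   # a $500 gap on a mean of $1,250
```

Under this definition the two-adjuster example scores 0.4; scaling by the smaller judgment instead would score it at 0.5. Either way, identical judgments score zero, so the index isolates pure disagreement.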

The discussion took a slightly cynical tone when he described how few of his practical ideas were actually put into practice. For example, the insurance firm, after learning of these gross miscalculations, didn’t implement the systematic approach he suggested. He gave another example of how Steven Levitt, of “Freakonomics” fame, showed a simple system of fraud detection improvement to a credit card company that would have saved many millions a year, but wasn’t implemented.

Kahneman said, “change causes winners and losers. Losers are much louder than winners, which makes reform much less likely.” And that “leaders don’t want to see their mystique questioned by systems.” Dr. Mellers had a nice rejoinder that “things will change, one funeral at a time.” For all of us Superforecasting believers, we hope it happens faster than that.

I believe the success that Ray Dalio and Bridgewater have seen by being very systematic and process-oriented may shed some light and make leaders less resistant to change. Dalio’s “Principles” will be read by many leaders and will start a conversation about how we all can improve by being more disciplined.

Idea Exchange on Forecasting

The second half of the day was a “safe zone” event to permit a free-flowing exchange of ideas. This means that I’m not allowed to comment on the dialog, but I can give a high-level recap.

I was a panelist for “Improving Probabilistic Forecasting Within Organizations.” The goal was to give real world examples of people implementing forecasting tools to improve decision making. It was exciting to see many firms experimenting with forecasting systems. In my view, shared by Good Judgment's president Warren Hatch, who chaired the panel, the challenge that most faced was getting broad adoption and keeping momentum.

The critical component for solving this challenge is getting top-level buy-in. If senior leadership asks questions and uses the output to make decisions, then people will participate. Another strategy for increasing participation was active feedback. Providing scores, leaderboards, best/worst forecasts, stats, etc. have a demonstrable impact on usage.

Better Forecasting Through Better Models

The final discussion was “Bayesian Cluster Forecasting Models for Strategic Decision-Making,” led by Dr. Kathryn McNabb Cochran. She is part of Good Judgment Inc. and a leader in the field of better decision making through forecasting. The goal is to make better forecasts by creating better models: hybrids of pure forecasts and adjustments that lead to more accurate forecasts.

For anyone curious about how they can be better forecasters and apply that thinking to their organization, please contact the great folks at Good Judgment Inc.

Final Thought

Meeting several of my heroes in one day made me think how nice it would be if the GE ad campaign in which great scientists are treated like stars were reality. How cool would it be if my girls could grow up in a world where Kahneman and Tversky were admired as much as Brady and Gronk?

 

 

October 06, 2017

Poker: Art vs Science

 

“People describe poker as a game of art and science. Both intuition and science have merit, but the best players approach the game very quantitatively.” – Liv Boeree, Professional Poker Player

Our COO, Graham Stevens, and I met over a poker table. We’ve been playing together for many years, and he recently turned me on to an Oxford Lecture Series video by Liv Boeree.

Liv is a very successful poker player with a physics degree from the University of Manchester. She was discussing the use of Game Theory Optimal (GTO) play and the use of GTO Tables to break decisions down into ranges of hands based on different situations to aid poker players in knowing the optimal decision (Bet (big or small), Check, Fold).

She stated that the best players in the world all employ GTO. And even though all players assume their opponents are playing GTO, it is incredibly difficult to exploit those predicted decisions because they are optimal. In an interesting exchange in the video, Igor Kurganov, another very successful poker player who was in the audience, said that intuition (playing the player instead of playing the cards) factors into his decisions, but only to a small degree. He said that the best intuition can do is change a 50/50 bet to a 55/45.

The parallels to investing and Alpha Theory are clear. At Alpha Theory, we allow firms to build their own Game Theory Optimal system to figure out the “optimal” amount to bet on each position in their portfolio. And we find that firms that use intuition instead of their model lose to the hypothetical model performance about 75% of the time.
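The core calculation behind such a model is the probability-weighted return. A minimal sketch with illustrative numbers (this is the general expected-value arithmetic, not Alpha Theory's actual algorithm):

```python
def probability_weighted_return(price, scenarios):
    """Expected return implied by (target_price, probability) scenarios."""
    assert abs(sum(p for _, p in scenarios) - 1.0) < 1e-9, "probabilities must sum to 1"
    expected_price = sum(target * p for target, p in scenarios)
    return expected_price / price - 1

# Illustrative: a $50 stock with a 60% bull case at $70 and a 40% bear case at $40.
pwr = probability_weighted_return(50, [(70, 0.6), (40, 0.4)])  # about 16%
```

Position sizes can then be ranked on this number instead of on conviction alone, which is the discipline that intuition-driven overrides tend to erode.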

The reasons portfolio managers choose to vary from their model are numerous but have a common theme: there is an intuition that the model isn’t capturing. Granted, investing is not poker. Poker has a finite set of variables and permutations compared to the seemingly infinite number of variables to consider in investing. But even still, just like in poker, the world-class players are going to be the ones that follow the model and only make small tweaks for intuition.

**Do I practice what I preach? A note on my own poker play: I do not play GTO because I have not memorized the tables. I know some of the shortcut rules for when to bet and fold pre-flop, and I can do a rough calculation of pot odds post-flop, but that’s the extent of my skills. If my buddies would let me pull out my computer while I’m sitting at the table, I would follow GTO. If I were playing for a living, I would learn and follow the model.

 

September 14, 2017

Asset Manager Reliance on Human Judgement vs Machine

Asset management is the industry most reliant on human judgement, according to a recent PwC study on data analytics.

 


 

Asset managers rely on human judgement 3x more than the next industry. For an industry with some of the best and brightest, we seem to be far behind. There is no expectation that this will happen overnight, but at a bare minimum we need to be experimenting with ways to enhance our judgement with machines.

Alpha Theory has been doing just that for over 10 years and our clients have outperformed the average hedge fund by over 2x. Getting started is not hard. Adopting “machine” does not require a wholesale change as all of our clients operate with Man + Machine. What it does require is an acceptance that Man alone is generally inferior to Man + Machine and a cultural embrace of the “machine” as an enhancement to the daily judgements we all make.

The reliance on human judgement will fall over time for asset managers. Do not be the last to change.

 

 

August 18, 2017

Man Versus Model of Man: Lewis Goldberg

I recently read an article by Jason Zweig that referenced Lewis Goldberg’s “Man Versus Model of Man,” a paper on expert studies published in the 1970 Psychological Bulletin. There are hundreds of published studies with a similar theme: give an expert any and all available data they want and ask them to make a judgement germane to their field of expertise (examples include an oncologist – how long will a patient live; a parole board – who is most likely to recidivate; a wine expert – the price of wine at auction; etc.).

The experts tell the scientist which variables are most important in their decision and the scientist goes off and builds a model and compares the model’s results to the forecasts of the “experts.” Over the past 60 years, hundreds of expert studies have been performed and show that the model beats or ties the expert 94% of the time (1).

One of Goldberg’s quotes about the use of models versus clinical decision making made me laugh:

Such an enterprise, originally viewed with considerable disdain by clinical psychologists, has recently weathered a period of intense controversy (Gough, 1962; Meehl, 1954; Sawyer, 1966), and may soon become a reasonably well accepted procedure in psychology—if not in medicine, stock forecasting, and other professional endeavors.

Consequently, it now seems safe to assert rather dogmatically that when acceptable criterion information is available, the proper role of the human in the decision-making process is that of a scientist: (a) discovering or identifying new cues which will improve predictive accuracy, and (b) constructing new sorts of systematic procedures for combining predictors in increasingly more optimal ways.

This quote was written 46 years ago yet clinical judgement still dominates psychology, medicine, and stock forecasting. Given the evidence, it is hard to argue against model-based decision making or man + model, but expert judgement still dominates.

The experts that will dominate the future (and are already beginning to do so) are the ones that embrace models as an extension of their own expertise. Models do not replace human judgement. The parameters models are built upon are determined by experts, and experts are also required to intuit when exceptions to the model are necessary.

My belief is that Lewis Goldberg’s prediction will come true in the next decade as computing power, statistical techniques, software, and zeitgeist have grown to a point where Man + Machine will become the rule instead of the exception.

Here are a few other great quotes from Lewis Goldberg’s article:

- Mathematical representations of such clinical judges can often be constructed to capture critical aspects of their judgmental strategies.

- The results of these analyses indicate that for this diagnostic task models of the men are generally more valid than the men themselves. Moreover, the finding occurred even when the models were constructed on a small set of cases, and then man and model competed on a completely new set.

- Ten years of research on the clinical judgment process have demonstrated that for many types of common clinical decisions and for many sorts of clinical judges, a simple linear regression equation can be constructed which will predict the responses of a judge at approximately the level of his own reliability. For documentation of this assertion and for details of the methodology, see Hoffman (1960), Hammond, Hursch, and Todd (1964), Naylor and Wherry (1965), and Goldberg (1968). While such regression models have been utilized (probably somewhat inappropriately) to explain the manner in which clinicians combine cues in making their diagnostic and prognostic decisions (see Green, 1968; Hoffman, 1968), there is little controversy about their power as predictors of the clinical judgments.
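Goldberg's "model of the man" is, at its simplest, a regression fit to a judge's own past judgments, then applied in the judge's place. A minimal one-cue sketch with invented data (real studies used several cues and many cases):

```python
def fit_line(cues, judgments):
    """Least-squares slope and intercept for a single cue."""
    n = len(cues)
    mx = sum(cues) / n
    my = sum(judgments) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(cues, judgments))
             / sum((x - mx) ** 2 for x in cues))
    return slope, my - slope * mx

# Invented data: a judge's ratings against one cue he says drives his decisions.
cue = [1, 2, 3, 4, 5]
ratings = [2.1, 2.9, 4.2, 4.8, 6.1]   # noisy but roughly linear
slope, intercept = fit_line(cue, ratings)
model_of_judge = [slope * x + intercept for x in cue]
```

The model strips out the judge's inconsistency: it applies the same cue weights to every case, which is why it can out-predict the judge it was fit to.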

 

(1) “Comparative Efficiency of Informal and Formal Prediction Procedures” – William Grove and Paul Meehl, published in Psychology, Public Policy, and Law (1996)

July 28, 2017

Crist on Value

There is a paper, famous in value investing circles, called Crist on Value. It is a chapter from a book written by horse handicapper Steven Crist, who opines on the shortcomings of the average horse bettor. Its popularity amongst value investors is due to its sage advice that can readily be applied to investing. The article is of moderate length and a must-read for any fundamental investor. I’ve taken the liberty of highlighting a few quotes pertinent to our profession:

- “How often have you or a fellow track-goer opined that you're a pretty good handicapper but you really need to work on your betting strategies or your so-called money management? The problem with this line of thinking is that it suggests betting is some small component of the game, which is like pretending that putting is a minor part of championship golf.” INVESTOR COROLLARY: Investors who believe they are good stock pickers but just don’t get the position sizes right.

- “Even a horse with a very high likelihood of winning can be either a very good or a very bad bet, and the difference between the two is determined by only one thing: the odds. A horseplayer cannot remind himself of this simple truth too often, and it can be reduced to the following equation: Value = Probability x Price." INVESTOR COROLLARY: An investment decision requires three components: the profit from a win, the cost of a loss, and the probability of each.

- “Now ask yourself honestly: Do you really think this way when you're handicapping (in probability-weighted returns)? Or do you find horses you "like" and hope for the best on price? Most honest players will admit they follow the latter path. This is the way we all have been conditioned to think: Find the winner, then bet. Know your horses and the money will take care of itself.” INVESTOR COROLLARY: Every investment requires story and value. Without both, you don’t have an investment.

- “Sticking to your guns is easier said than done, but it is the only way to win in the long run. The horseplayer who wants to show a profit must adopt a cold-blooded and unsentimental approach to the game that is at variance with both the "sporting" impulse to be loyal to your favorite horses and the egotistical impulse to stick with your initial selection at any price. This approach requires the confidence and Zen-like temperament to endure watching victories at unacceptably low prices by such horses.” INVESTOR COROLLARY: If you’re human, you’re subject to bias and emotion. Define rules and procedures in advance that highlight discrepancies between your actions and your rules.

- “I cannot argue in good conscience that Two Item Limit had precisely a 60 percent chance of victory as opposed to 57 or 63 percent, and I doubt that such calibration is in fact achievable. It is, however, possible through experience to get close enough that if you demand sufficient value to cover the margin of error, you should outperform the competition-your fellow horseplayers.” INVESTOR COROLLARY: Coming up with probabilities for stock price outcomes is even more subjective, but that doesn’t mean you can skip the exercise. Play around with a range of outcomes and figure out what is “too conservative” and “too aggressive” to give you comfort for where the probability should be.

- “If every horseplayer but you were a certifiable idiot, betting at random on names and colors, you would win every day. Conversely, if the only people betting into the pool were the small number of professionals who make a living this way, your chances for long-term victory would be slim.” INVESTOR COROLLARY: How does your competition in the stock market stack up today?

- “Your opportunity for profit at the racetrack consists entirely of mistakes that your competition makes in assessing each horse's probability of winning.” INVESTOR COROLLARY: While we view the move to passive as a negative, more passive money should increase the number of opportunities for active investors.

- “There is no shame in passing a race because you just don't see any value in it. Nor should you force yourself to play a race in which you have no confidence in your own odds line.” INVESTOR COROLLARY: Good ideas are hard to find, but worth the wait.

- “Recognize the difference between picking horses and making wagers in which you have an edge. The only path to consistent profit is to exploit the discrepancy between the true likelihood of an outcome and the odds being offered.” INVESTOR COROLLARY: Probability-weighted return is the arbiter of all decisions.
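Crist's Value = Probability x Price can be made concrete in a few lines. The decimal-odds convention and the numbers below are illustrative:

```python
def bet_value(win_prob, decimal_odds):
    """Crist's Value = Probability x Price, as expected profit per $1 staked
    (decimal odds = total payout including the returned stake)."""
    return win_prob * decimal_odds - 1

good_bet = bet_value(0.60, 2.0)   # a 60% horse at even money: positive value
bad_bet = bet_value(0.60, 1.5)    # the same horse at short odds: negative value
```

The same 60% horse is a good bet at one price and a bad bet at another, which is the whole point: the probability alone never decides the wager.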

July 05, 2017

Jason Zweig on Being Your Own Quant

Until the day you are relieved of your cognitive biases, it is important to frequently remind yourself of your mental fallibility so that you can be on guard for their effects. Jason Zweig’s “How to Be Your Own Quant” is perfect for that task. Here are a few gems from the short article:

 - Take a hint from hedge-fund manager Magnetar Capital LLC, which is seeking to “take what was in our head and our database and make rules out of it” — measuring intuitions, testing them for reliability and then basing decisions on them. Human judgment is inconsistent. People are good at knowing what matters, but not very good at always looking at it the same way.

 - Presented with identical information under different circumstances, we come to different conclusions about it. The judgments of everyone from accountants to physicians and weather forecasters will vary depending on such factors as mood, time of day and how many other demands they have on their attention.

 - Decades ago, the psychologist Lewis Goldberg showed that if you determine which factors experts consider most important in coming to a conclusion, you can program a computer to size up a ​decision based on those — and only those — factors. The computer’s predictions using the experts’ criteria turned out to be more accurate than the experts’ own predictions, because the computer always interprets the same evidence the same way.

June 01, 2017

The Value of Price Targets

Abstract

We analyze the difference in ROIC (Return on Invested Capital) between the portion of client portfolios with price targets and the portion without. In examining exposures, we find a long bias in the portion of the portfolio with research versus the portion without research, making the total measurements less meaningful. To neutralize this distortion, we look at differences on a long and short basis. We find that the portfolios with price targets outperform by 16% on an annualized ROIC basis (the average of long and short improvement).

We also look at optimal position sizing suggested by the Alpha Theory algorithm and find that optimizing position sizes that have price targets adds an additional 5% for long positions.


Table 1

Data

Alpha Theory tracks investment results for clients on a daily basis. We segment the daily returns for portfolios into two categories, securities with price targets and securities without, and then calculate an average daily return across our client base for each category. All calculations are done on an ROIC basis for comparability; removing the exposure effect yields the returns of a portfolio that is 100% allocated to either category. For comparison, we also calculate returns based on optimal position sizing recommendations, as well as returns on the ACWI, an all-world index.

Research

Figure 1 shows the cumulative returns over the period of the analysis, where we break out client portfolios by the portion with and without price targets. We also look at how the optimal portfolio would have performed: a portfolio composed of securities with optimal position sizing output from Alpha Theory. Alpha Theory uses price target inputs to recommend position sizing based on the probability-weighted return calculated from those price targets. Alpha Theory does not recommend position sizing on securities without research, so the comparison is best made between optimal returns and the portion of the portfolio with price targets.

We find that the portion of the portfolio with no price targets significantly underperforms the price target portfolio, by 11.5% on an annualized basis. We also see that the price target portfolio, if sized optimally (where not already done so), would have increased performance even further, by 3.3%.

Decomposing the exposures on the price target and non-price target portfolios reveals that the average net exposure for the price target portfolio is 40%, while the average net exposure for the non-price target portfolio is -12%. This is an interesting divergence: it tells us that managers are more likely to initiate a short position without research than a long position without research. With the non-price target portion running such a low net exposure, we would expect average returns to be roughly zero, as longs and shorts balance each other over a large sample. A clearer picture comes from breaking out the long and short portions.
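For readers who want to replicate the annualization arithmetic, a minimal sketch of comparing two segmented daily return series (the series below are invented placeholders, not client data):

```python
def annualized(daily_returns, periods_per_year=252):
    """Geometrically annualize a series of daily returns."""
    growth = 1.0
    for r in daily_returns:
        growth *= 1 + r
    return growth ** (periods_per_year / len(daily_returns)) - 1

# Invented daily ROIC series for the two segments:
with_targets = [0.0006] * 252
without_targets = [0.0002] * 252
gap = annualized(with_targets) - annualized(without_targets)
```

In the actual analysis, each day's return for a segment is the average across client portfolios, but the annualized-gap comparison works the same way.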


Figure 1

Figure 2 shows the same data for the long portion of the portfolio. We find that the long price target portfolio outperforms the long non-price target portfolio by 3.4% on an annualized basis. Sizing optimally (where not already done so) adds an additional 1.0% to the annualized return.


Figure 2

We then look at the same data for the short portfolio in Figure 3. We find that the price target portfolio outperforms by 0.9%, on an annualized basis. Sizing optimally (where not already done so) would have added an additional 3.2%.


Figure 3

Conclusion

Our assumption for why securities with price targets outperform those without is that price targets inform investors of value, make explicit the logic around their decisions, allow optimal position sizing to be calculated, and reflect a higher level of research rigor.

Investing in assets without first calculating price targets is deleterious to returns. This result is intuitive to most managers, but hopefully this analysis provides empirical evidence that will prevent future positions from going into the portfolio without the critical step of defining risk-reward.