

Alpha Theory Blog - News and Insights

9 posts categorized "Superforecasting"

May 03, 2018

Positive Skew…Part 2 – Maybe It’s Not So Bad for Active Managers After All

In my last post, I discussed the negative impact of positive skew for active managers. Basically, more than 50% of all stocks in a given market underperform the average stock return, because some stocks go up more than 100% while no stock can go down more than 100%. This means that if you pick a random portfolio of stocks from the market, you have a greater than 50% chance of underperforming, because most portfolios will not hold the few stocks that went up more than 100%.
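A toy simulation makes the mechanics visible. The return distribution below is invented for illustration (it is not market data): most stocks drift modestly, a rare few are multi-baggers, and nothing can lose more than 100%.

```python
import random

random.seed(0)

# Hypothetical positively skewed market: mostly ordinary stocks,
# a rare few big winners, and no stock can lose more than 100%.
def market_returns(n_stocks=2000):
    returns = []
    for _ in range(n_stocks):
        if random.random() < 0.03:                     # rare big winner
            returns.append(random.uniform(1.0, 4.0))   # +100% to +400%
        else:
            returns.append(random.uniform(-0.6, 0.6))  # ordinary stock
    return returns

market = market_returns()
avg = sum(market) / len(market)

# Fraction of individual stocks underperforming the average stock return
below_avg = sum(r < avg for r in market) / len(market)

# Random 20-stock portfolios: most miss the big winners and lag the average
trials = 10_000
losing = sum(
    sum(random.sample(market, 20)) / 20 < avg
    for _ in range(trials)
)

print(f"{below_avg:.0%} of stocks underperform the average stock return")
print(f"{losing / trials:.0%} of random 20-stock portfolios underperform it")
```

Both printed figures come out above 50%: the average is pulled up by the outliers, so both most stocks and most random portfolios fall short of it.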

 

Because of the popularity of the last post and the TV appearance, we spent time digging further into the data to answer questions posed by readers and viewers. We noticed a tendency for the average stock return and the index return to differ.

 

And that is the problem with using the average stock return as the hurdle for funds. Investors are not measured against the average stock return; they're measured against the benchmark, typically the S&P 500. Most indexes are market-cap weighted, meaning that the index return and the average stock return are generally different.
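A two-stock toy example (made-up market caps and returns) shows how cap weighting drives a wedge between the index return and the average stock return:

```python
# Toy two-stock index: a giant that returns +10% and a small cap
# that returns +100%. Caps and returns are hypothetical.
caps    = {"GIANT": 900e9, "SMALL": 10e9}   # market caps
returns = {"GIANT": 0.10,  "SMALL": 1.00}

total_cap = sum(caps.values())
cap_weighted   = sum(caps[t] / total_cap * returns[t] for t in caps)
equal_weighted = sum(returns.values()) / len(returns)

print(f"cap-weighted index return:  {cap_weighted:.1%}")    # 11.0%
print(f"equal-weighted (avg stock): {equal_weighted:.1%}")  # 55.0%
```

The small cap's 100% return barely moves the cap-weighted index, while it dominates the equal-weighted average.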

 

In the example below, we've taken the current S&P 500 constituents, calculated their return since the beginning of 2012, and compared that to an average return (equal weighted) and the actual return of the S&P 500. The S&P 500 over that period was up 136% vs. 175% for the average stock (this isn't a perfect analysis because the index's constituents changed over that time, but it is a reasonable approximation).

 

[Chart: distribution of individual S&P 500 constituent returns, 2012-2018]

 

The graph above shows the distribution of individual stock returns over that period. You can see the outliers that pull the average stock return (red line) up to the point where 63% of individual securities underperform the 175% average. But the S&P 500 was up only 136% (green line) over that period, so just 51% of stocks underperformed the benchmark. Pretty much a coin flip.

 

We brought positive skew up with Andrew Wellington at Lyrical Asset Management. They have done some great analysis comparing the top 1000 stocks by market cap in the US to the S&P 500 each year going back to 1998.

 

[Chart: percentage of the top 1,000 US stocks beating the S&P 500 each year, 1998-2017]

Source: FactSet and Lyrical Asset Management

 

As you can see in the chart above, any individual stock beating the S&P 500 index is a coin flip. Over the past 20 years, the likelihood of any individual stock beating the S&P 500 in a given year has been 50.2%. If I build random portfolios using the top 1,000 stocks in the US, there is a high likelihood that the portfolio return will be close to the S&P 500 return.

 

Some years are clearly better than others. '98 and '99 were horrible stock-picking years: if you didn't own the few stocks that had meteoric rises, you had a high likelihood of underperforming the S&P 500. '01 and '02 were good stock-picking years, with over 60% of stocks beating the index.

 

What this means is that any given fund's batting average should be compared to the batting average of the universe of stocks against the benchmark. A 54% batting average in '98 is heroic; in '03, 54% is just in line. Take a look at 2017: by this metric, it was the 3rd-hardest stock-picking environment of the last 20 years.

 

But what about other indices? Thankfully, our friend Julien Messias from Quantology Capital Management has done the analysis (1999-2014) comparing the S&P 500 and Russell 2000. Below are thoughts from Julien on the topic:

 

The Russell 2000's component returns exhibit a much more leptokurtic (fat-tailed) distribution than the S&P 500's. A large share of the index's components suffer huge losses (or even bankruptcy), with an average of more than 60% of components underperforming the index and about 2% of components posting huge performance (more than 500% per year). The performance of the index is therefore pulled up by that latter 2%.

Assuming a stock-picker chooses investments at random within the index universe, his performance should be closer to the median performance of the components than to the index performance itself. Therefore, given that the median performance is almost always lower than the index performance (see chart below), an investor in Russell 2000 securities is very likely to underperform and very unlikely to outperform.

The S&P 500 distribution is much more mean-centered, with very shallow/thin tails, meaning that the average stock picker is much more likely to generate a performance close to the index performance (graph from Lyrical AM) and less likely to underperform.

 

[Chart: median component performance vs. index performance, S&P 500 and Russell 2000]

Source: Quantology CM

 

The Russell 2000 displays the impact of positive skew more clearly because its return is less driven by the contribution of a few very large companies. AAPL, MSFT, GOOG, and AMZN make up 12.2% of the S&P 500, while the Russell 2000's top 4 positions make up just 1.7% of that index. The result is that the average return of all stocks in the Russell 2000 is much closer to the Russell 2000 index return than the average of all stocks in the S&P 500 is to its index (recall the large gap in the 2012-2018 analysis, where the S&P 500 returned 136% vs. a 175% average across all stocks).

 

This means that the index chosen as the benchmark for your fund has a profound impact on your ability to beat it. More specifically, the probability of beating the S&P 500 with a random portfolio is about 50%; for the Russell 2000, it's 42%.

 

There has been quite a bit of press regarding positive skew. It's a great conversation, but for the average fund that is measured against the S&P 500, the impact is overblown. Almost every investor is compared against a benchmark. I recommend that you dig a layer into your benchmark: measure its positive skew, the likelihood of beating the average stock return, and the likelihood of beating the index return, and compare your hit rate against the benchmark's hit rate each year to know how difficult or easy any given year was.
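That "dig a layer into your benchmark" exercise can be sketched in a few lines. The constituent returns and index return below are hypothetical, chosen only to show the calculation:

```python
# Given one year of constituent returns and the index return,
# compute the hurdle statistics recommended above.
def benchmark_diagnostics(stock_returns, index_return):
    n = len(stock_returns)
    avg = sum(stock_returns) / n
    return {
        "avg_stock_return": avg,
        "pct_beating_average": sum(r > avg for r in stock_returns) / n,
        "pct_beating_index": sum(r > index_return for r in stock_returns) / n,
    }

# Hypothetical skewed year: two big winners drag the average up
year = [-0.10, -0.05, 0.00, 0.02, 0.04, 0.06, 0.08, 0.10, 0.60, 1.20]
stats = benchmark_diagnostics(year, index_return=0.08)
print(stats)
```

With these numbers the average stock return is 19.5%, only 20% of stocks beat the average, and 30% beat the 8% index: the index is the easier hurdle.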

 

Quantology Capital Management Russell 2000 and S&P 500 Analysis:

[Tables: Quantology CM analysis of Russell 2000 and S&P 500 component returns]

Does not include management fees.

Data is adjusted for index turnover, with updates every year.

April 06, 2018

Positive Skew is Negative for Active Managers

 

Let's play a game. In this game, there are 10 poker chips in a bag. Nine of these chips will give you a return between -8% and +8% on the money that you bet. The 10th chip will give you a 100% return. The distribution of returns for this game has a positive skew.

 

[Chart: distribution of chip returns in the game]
 

If offered the chance to put money down on this proposition, you would take it, because you would expect a 10% return if you could play the game over and over.

 

Now let's add a wrinkle: your goal isn't just to make a positive return, you have to beat the bag. The bag puts 10% of its money on each chip and pulls them all. Voila, a 10% return. One last wrinkle: you can only pick one chip at a time.

 

How many times out of 10 would you beat the bag? Only 1 in 10; 90% of the time you would lose to the bag. It doesn't matter if we expand the number of chips, as long as the bag maintains the same positive skew (we could increase the bag to 100 chips and let you pick 10, or 1,000 chips and let you pick 100, etc.).
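The game is easy to verify in a few lines. Here the nine ordinary chips are fixed at evenly spaced values between -8% and +8% (so they average 0%), which makes the bag's return exactly 10%:

```python
# The chip game: nine ordinary chips averaging 0%, plus one +100% chip.
chips = [-8, -6, -4, -2, 0, 2, 4, 6, 8, 100]   # returns in %

bag_return = sum(chips) / len(chips)           # bag bets evenly on all chips
beats_bag = sum(c > bag_return for c in chips) # chips beating the bag

print(f"bag return: {bag_return}%")                             # 10.0%
print(f"chips that beat the bag: {beats_bag} of {len(chips)}")  # 1 of 10
```

Only the 100% chip clears the bag's 10% return, so a one-chip picker beats the bag exactly 1 time in 10.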

 

By now, you've probably guessed that the bag is the market, the chips are stocks, and you are, well, you. This is the game we play when trying to beat an index. True, you can be better than the market at figuring out the good chips, but given that the initial conditions of a random game mean you lose 9 out of 10 times, it's really hard to beat the market. Add fees, and the likelihood of beating the market goes down even further.

 

Positive skewness has gotten a decent amount of press over the past year because of the championing of JB Heaton, who wrote a paper1 researching the impact of positive skew on manager underperformance. Heaton's paper is similar to 1998 research2 from Dr. Richard Shockley. See below for an article written by Bloomberg News on the topic.

 

[Chart: Bloomberg News illustration of lopsided stock returns]

Source: Bloomberg News (“Lopsided Stocks and the Math Explaining Active Manager Futility” by Oliver Renick)

 

Given that many of the conversations active managers have today revolve around active versus passive, “positive skew” should be top of mind. This is my push to increase awareness.

 

Given that active managers can't change market skew, what should we do? We could measure skill in a different way. Let's say I want to measure a manager's skill. I take all of the stocks in the markets they invest in and randomly build 100,000 portfolios with the same number of securities as the manager holds. I can then plot where the manager falls on that distribution and give them a Z-Score for how far from the norm they are. I could do the same thing for hedge funds by randomly buying and selling securities in the same universe as the investor.
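A sketch of that Monte Carlo Z-Score, with an invented universe and far fewer than 100,000 simulations for speed:

```python
import random
import statistics

random.seed(42)

def random_portfolio_zscore(universe, n_positions, manager_return,
                            n_sims=20_000):
    """Z-Score of a manager vs. random same-size portfolios.

    Build many random portfolios from the manager's universe and
    locate the manager's return on the resulting distribution.
    """
    sims = [statistics.fmean(random.sample(universe, n_positions))
            for _ in range(n_sims)]
    mu = statistics.fmean(sims)
    sigma = statistics.stdev(sims)
    return (manager_return - mu) / sigma

# Invented universe: 1,000 stocks, roughly 8% mean return, 25% volatility
universe = [random.gauss(0.08, 0.25) for _ in range(1000)]

# A hypothetical manager holding 25 names who returned 20%
z = random_portfolio_zscore(universe, n_positions=25, manager_return=0.20)
print(f"manager Z-Score: {z:.2f}")
```

A Z-Score near 0 means the manager looks like a random draw from their own universe; a score of 2 or more suggests something beyond luck.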

 

I'm not saying that this excuses active managers from underperforming passive strategies, but it should at least be a more realistic assessment of their skill. My hope is that positive skew becomes just as common an explanation as fees when discussing active manager underperformance. Only by knowing the causes will we be able to make changes that allow active managers to outperform.

 

1 Nicholas Polson and Jan Hendrik Witte; Hendrik Bessembinder of Arizona State University

2“Why Active Managers Underperform the S&P 500: The Impact of Size and Skewness,” published in the inaugural issue of the Journal of Private Portfolio Management. One of the original authors of the study is Richard Shockley.

 

March 12, 2018

Capital Allocators Podcast with Ted Seides: Moneyball for Managers

 

Learn how to enhance your investment results in this great podcast from Ted Seides and his guests, Clare Flynn Levy from Essentia Analytics and Cameron Hight from Alpha Theory.

This conversation covers the founding of these two respective businesses, the mistakes portfolio managers commonly make, the tools they employ to help managers improve, and the challenges they face in broader adoption of these modern tools. The good news is the clients of Essentia Analytics and Alpha Theory have demonstrated improvement in their results after employing these techniques. If you ask Clare and Cameron, you may develop a whole new appreciation about the potential for active management going forward.

 


 

By creating a disciplined, real-time process based on a decision algorithm with roots in actuarial science, physics, and poker, Alpha Theory takes the guessing out of position sizing and allows managers to focus on what they do best – picking stocks.

In this podcast, you will learn how Alpha Theory allows portfolio managers to convert their implicit assumptions into an explicit decision-making process.

 

To learn how this method could be applicable to your decision-making process:

 

LISTEN NOW

 


 

 

March 02, 2018

Size-Based Batting - A Different Perspective on Stock Selection

 

How do you determine if an investor is a good stock picker? One commonly used measure is to count the number of positions that make money (winners) divided by the total number of positions. This metric is commonly called a Batting Average, analogizing stock picking with baseball hit-rates.

The problem with Batting Average is that several inconsequential positions that lose money can really drag down the total. We saw this with our clients: they have historically outperformed other funds (every year for the past six), yet they have a batting average, adjusted for the move in the benchmark, of only 51%.

We decided to take a different approach and measure the total exposure of positions that made money versus the total gross exposure of the fund. For instance, if positions representing 60% of exposure made money on an alpha-adjusted basis and the fund was 120% gross exposed, then the fund had a Size-Based Batting Average of 50% (60/120).
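The two batting averages can be computed side by side. The positions below are hypothetical (position weight as a fraction of NAV, and whether it made money on an alpha-adjusted basis):

```python
# Size-Based Batting Average: winning exposure / gross exposure,
# vs. the traditional count-based Batting Average.
positions = [
    (0.40, True), (0.30, True), (0.10, False),
    (0.20, False), (0.15, True), (0.05, False),
]

gross = sum(w for w, _ in positions)                      # 120% gross
winning_exposure = sum(w for w, won in positions if won)  # 85% of NAV

batting_average = sum(won for _, won in positions) / len(positions)
size_based_batting = winning_exposure / gross

print(f"traditional batting average: {batting_average:.0%}")   # 50%
print(f"size-based batting average:  {size_based_batting:.0%}")
```

Here half the positions won (a 50% batting average), but because the winners were sized larger, the Size-Based Batting Average comes out around 71%: the fund sized well even though small losers dragged the count-based number down.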

Our clients had a Size-Based Batting Average of 54% versus a non-size-based average of 51%. That means our clients were good at selecting investments and at sizing them, but they were harming their overall returns with small losing investments.

Alpha-Adjusted Batting Average1

 

[Table: alpha-adjusted Batting Average vs. Size-Based Batting Average by year]

 

In the table above, Size-Based Batting, while not perfectly consistent, is generally better year-to-year for our clients (the exceptions being 2012 and 2015).

We've performed other analyses that support this point, specifically that our clients' positions under 1% dramatically underperform the rest of the portfolio, but Size-Based Batting presents a compelling way to highlight the "small position" issue (see the "Concentration Manifesto" for other issues with small positions).

In our profession, it is incredibly difficult to detangle skill from luck and, as cathartic as it would be to just rely on returns, returns are actually negatively correlated with next year's returns for most funds (i.e., funds that outperform in year N have a higher likelihood of underperforming in year N+1; there are multiple research sources that analyze mean reversion in funds, here is one).

Size-Based Batting is a nice addition to the allocator's tool bag for finding managers with stock-picking skill. In much the same way, managers should use Size-Based Batting to highlight their strengths and compare it to the traditional Batting Average to potentially point out weaknesses.

 

1 S&P 500 for US securities and MSCI WEI for non-US securities

2 Why is "All Time" so low compared to each year? Reason #1: there are many more observations in recent years, which skews the overall results toward the more recent years. Reason #2: many assets were losers over "All Time" while being winners in multiple years (a small win in 2015, a small win in 2016, and a big loss in 2017 = two winning periods vs. one losing, but a loser in the All-Time bucket).

 

 

February 07, 2018

Alpha Theory Case Study: Top Performing Funds of 2017

Alpha Theory’s clients have historically outperformed (see 2017 Year in Review from last month), but 2017 was special as our most active client was also the 2nd best performing equity fund. We have worked with them since their launch, and their focus on discipline and process is a testament to how to build a fund. If you would like to learn more about the client, their challenges, their solution, and the data supporting their process, check out the Case Study.

 

DOWNLOAD NOW

 

 

December 15, 2017

Superforecasting for Investors: Part 2

Alpha Theory hosted a book club on December 6th with portfolio managers, analysts, and allocators coming together to discuss “Superforecasting” by Phil Tetlock. We were lucky enough to have a Superforecaster, Warren Hatch, moderate and perform forecasting exercises with the group. We spent 2 hours together and only scratched the surface on applying Superforecasting to investing.

 

Here are a few key takeaways:

1. COMMON ATTRIBUTES OF SUPERFORECASTERS:

INTELLIGENCE: Above average but genius isn’t required

QUANTITATIVE: Not only understand math but apply it to everyday life

FOXES, NOT HEDGEHOGS: Speak in terms of possibilities, not absolutes

INTELLECTUALLY HUMBLE: Understand the limits of their knowledge

SYSTEM 2 DRIVEN: Use the logic-driven instead of instinct-driven portion of their brain

DO NOT BELIEVE IN FATALISM: Life is not preordained

CONSTANTLY REFINE: Make frequent small updates to their forecast based on new information (but not afraid to make big changes when warranted)

COUNTERFACTUALS: Believe that history is one of many possible paths that could have occurred

OUTSIDE VIEW: Incorporate the internal and external views

GROWTH MINDSET: Constantly search for ways to improve their forecasting process

 

2. IDENTIFYING TALENT: There are identifiable attributes that can be used in hiring and have a profound impact on forecasting skill

 

Active Open-Mindedness*

[Chart: Active Open-Mindedness scores]

Fluid Intelligence*

[Chart: Fluid Intelligence scores]

 

* At a prior book club, we measured participants, and the results showed they had the attributes of Superforecasters, with high Active Open-Mindedness (3.99 out of 5) and high Fluid Intelligence (8 out of 10, the highest score that the Good Judgment folks have seen).

Active Open-Mindedness and Fluid Intelligence are two measurable traits that managers can use to select talent. In the chart below, the improvement from these definable attributes equates to about 40% of forecasting skill over standard forecasts.

[Chart: contribution of identifiable attributes to forecasting skill]

3. DEVIL’S ADVOCATE: Firms should appoint a Devil’s Advocate for each investment to expand critical thinking (someone to ask the question, “I see your downside is $40. How is that if the 52-Week Low is $22 and the trough multiple would put it at $25?”)

 

4. OUTSIDE VIEW: Firms should require an Outside View for every investment idea (“While everyone I’ve spoken to says this deal will close, only 20% of deals with one party under SEC investigation close.”)

 

5. REFINEMENT: New information should always be incorporated into the forecast (think Bayesian).

 

6. TEAMS MAKE BETTER FORECASTS: Team dialog generally improves forecasting accuracy.

 

7. FORECAST CULTURE: Firms should embrace “forecast” as part of their vernacular and conversations should revolve around how information impacts the forecast.

 

8. MEASURE TO BE BETTER: We all forecast, but we rarely measure. That fact needs to change if we really want to improve.

 

9. CLUSTERING: Break complex topics into individual components that are easier to forecast, and use the combination of the smaller forecasts to forecast the more complex topic. (e.g., "Will AAPL break $200?" is a complex forecast that can be broken down into "Will the iPhone X ship more than 400M units?", "Will Samsung's technology outpace Apple's?", etc.)

 

10. INDEXING: Individual clustering questions can be weighted to come up with a forecast for the complex topic instead of using simple equal weighting.

 

11. DIVERSITY OF FORECASTS MATTER: Forecasts made from similar perspectives are less accurate than those made from multiple perspectives (see Boosting below).

 

12. BOOSTING: If you have three forecasters with different perspectives that all arrive at a 70% probability of an event occurring then the actual probability is greater than 70%.

 

13. GISTING: We didn't get to spend much time here, but the idea is that complex subjects, reports, presentations, etc. can be distilled down into gists that the team votes on and refines into a supergist. Full understanding is never just quantitative or qualitative. Superforecasting is quantitative. Supergisting attempts to provide the qualitative piece.

 

14. HYBRID FORECASTING COMPETITION: IARPA, the U.S. intelligence research agency that sponsored the forecasting tournament that launched the Superforecasters (Good Judgment), is sponsoring a new Man+Machine forecasting tournament. For those interested in forecasting and machine learning, this is your spot: https://www.iarpa.gov/index.php/research-programs/hfc
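The "boosting" claim in takeaway 12 can be made concrete with a common trick from the forecasting literature: average the forecasts in log-odds space, then extremize (push the aggregate away from 50%). The extremizing exponent used here (1.73) is an illustrative assumption, not a value from the book:

```python
import math

# Aggregate independent forecasts by averaging log-odds, then
# extremizing. Exponent a > 1 pushes the result away from 50%.
def extremized(probabilities, a=1.73):
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean_log_odds = sum(log_odds) / len(log_odds)
    boosted = a * mean_log_odds
    return 1 / (1 + math.exp(-boosted))       # back to a probability

# Three independent forecasters, each at 70%
combined = extremized([0.70, 0.70, 0.70])
print(f"{combined:.0%}")                       # above 70%
```

The intuition matches the takeaway: if three forecasters with genuinely different information each independently land on 70%, the combined evidence supports a probability higher than any one of them reported.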

 

September 12, 2016

The Day the Music Died

For the past few years, I have had a significant edge in my fantasy football leagues because of my pre-draft preparation. Each year, I would go online before the draft, pull data from multiple sources, and look around for novel analytical approaches to player selection that I could combine with the data I sourced. Unfortunately, it looks like the years of my fantasy football spreadsheet's domination have come to an end. I knew this day would come. Each year, the data and analysis I was sourcing seemed to get better and better. This year, I found 95% of my bespoke spreadsheet already prepared by someone else online at http://apps.fantasyfootballanalytics.net/. Undoubtedly, several of my competitors found it too.

Key attributes of my fantasy football sheet:

1. Improvement over Average (also called Value over Replacement) – this is by far the most important attribute. It states how much better any particular player is than the average player that will be drafted at the same position. Without IOA, the players with the highest potential points always pop to the top, which is misleading because you can't fill a team with just one position. For example, a QB may be forecast to score 350 points over the year and a WR 325. On that basis, the QB looks like the favorite. But the average drafted QB will score 280 points and the average drafted WR only 180. That means the WR will add 145 points (325-180) against other teams, while the QB will only add 70 (350-280).

2. Multiple Sources – I pull projections from multiple sites to create a “wisdom of crowds” improvement.

3. Forecast Risk – measures the standard deviation of projections across sites to see how likely a player is to hit the average used in my model.

4. Average Draft Position – pulls Average Draft Position from multiple sources to compare my ranking to what is likely to happen.

5. Over/Undervalued vs. Average Draft Position – lets me see whether a player my sheet really likes could be selected in a later round because folks are, on average, drafting him later.

6. Dropoff – quickly shows how big the difference is between a player and the next-best player at the same position.
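The Improvement over Average calculation from item 1, using the QB/WR numbers above:

```python
# Improvement over Average (value over replacement), per item 1.
# Projections and replacement levels are the article's example numbers.
projections = {"QB": 350, "WR": 325}   # forecast season points
replacement = {"QB": 280, "WR": 180}   # avg drafted player at the position

ioa = {pos: projections[pos] - replacement[pos] for pos in projections}
print(ioa)                       # {'QB': 70, 'WR': 145}

best_pick = max(ioa, key=ioa.get)
print(best_pick)                 # WR
```

Ranking by raw projected points would favor the QB; ranking by IOA correctly favors the WR.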

[Screenshot: the author's fantasy football spreadsheet]

Now let’s compare to FFAnalytics.net:

[Screenshot: the FFAnalytics.net projections tool]

The only thing my sheet has that FFAnalytics doesn't is a Strength of Schedule adjustment (#7), and I can just add that in after I download the data.

My reasons for writing this post are two-fold. One, I want everyone to feel sorry for me that my edge is gone. Two, we should all recognize that with each passing year, data becomes easier to access and manipulate. If there are things in your business you consider your "edge" (competitive advantages), do a careful assessment of the landscape and forecast the probability that they can be undermined by data ubiquity, data analytics tools, or machine learning (artificial intelligence). Old businesses are being disintermediated in a matter of a few years by the likes of Google, Alibaba, Amazon, Uber, Airbnb, etc. Asset managers would be well served to take a hard look at which parts of their spreadsheets, processes, and analytics are no longer novel and which are likely to persist as their true "edge". Without an edge in their processes or analytics, there is little likelihood they will see an "edge" in their returns.

July 18, 2016

Superforecasting – Alpha Theory Book Club

Alpha Theory hosted its first ever book club on July 12th with over 40 portfolio managers, analysts, and allocators coming together to discuss “Superforecasting” by Phil Tetlock. We were lucky enough to have two Superforecasters, Warren Hatch and Steve Roth, moderate and perform forecasting exercises with the group. We spent 2 ½ hours together and only scratched the surface on applying Superforecasting to investing.

Here are a few key takeaways:

1. RAW TALENT: On average, our group had the attributes of Superforecasters, with high Active Open-Mindedness (3.99 out of 5) and high Fluid Intelligence (8 out of 10, the highest score that the Good Judgment folks have seen).

Active Open-Mindedness

[Chart: Active Open-Mindedness scores]

Fluid Intelligence

[Chart: Fluid Intelligence scores]

2. IDENTIFYING TALENT: There are identifiable attributes that can be used in hiring and have a profound impact on forecasting skill (40% - see chart below).

[Chart: attributes' impact on forecasting skill]

3. DEVIL’S ADVOCATE: Firms should appoint a Devil’s Advocate for each investment to expand critical thinking (someone to ask the question, “I see your downside is $40. How is that if the 52-Week Low is $22 and the trough multiple would put it at $25?”)

4. OUTSIDE VIEW: Firms should require an Outside View for every investment idea (“While everyone I’ve spoken to says this deal will close, only 20% of deals with one party under SEC investigation close.”)

5. REFINEMENT: New information should always be incorporated into the forecast (think Bayesian).

6. POSTMORTEM: An Accuracy Score should be calculated for every investment and should frame the conversation of “what did we do well?” and “what did we do poorly?”.

7. TEAMS MAKE BETTER FORECASTS: Team dialog generally improves forecasting accuracy.

8. FORECAST CULTURE: Firms should embrace “forecast” as part of their vernacular and conversations should revolve around how information impacts the forecast.

9. MEASURE TO BE BETTER: We all forecast, but we rarely measure. That fact needs to change if we really want to improve.

10. BOOK CLUBS ARE COOL!!!

The October Alpha Theory Book Club topic will be “Success Equation: Untangling Skill and Luck” by Michael Mauboussin. Mr. Mauboussin will moderate and highlight the book’s application to investing. Contact your Alpha Theory representative if interested in attending.

 

June 21, 2016

How Good Are My Analysts? Building a Better Hedge Fund Through Moneyball & Superforecasting

Traditionally, measuring hedge fund analyst skill has been an opaque process mired in ambiguity and subjectivity. It is often misconstrued and tainted by portfolio manager influence in the form of sizing decisions, liquidity constraints, and other non-analyst determinants. But in the same way Moneyball revolutionized evaluating baseball player value by prioritizing on-base percentage over batting average, Alpha Theory has distilled the key indicator of predictive aptitude. Alpha Theory invented the Accuracy Score to introduce radical transparency into the rating of forecasting skill for hedge fund analysts.

P&L is Yesterday’s Batting Average

Using the Moneyball analogy, quantitative disruption of baseball player evaluation changed the way players are paid by isolating the player skill that contributes most to team wins. Using that data, managers now pay athletes in proportion to the amount of that winning skill they individually possess. As such, the key metric for baseball player value evolved from batting average to the more predictive on-base percentage, or OBP.

Specifically, OBP has a 92 percent correlation with runs scored compared to batting’s 81 percent, making it more predictive.  Also, OBP’s 44 percent correlation year-to-year is more persistent than the 32 percent correlation of batting.  The predictive reliability and performance consistency make OBP a superior metric to forecast wins for baseball teams.  OBP’s disruption of batting average is an apt metaphor for the way Alpha Theory’s Accuracy Score will transform analyst ranking and assessment today.      

In 2016, analysts are still primarily rated by the profits and losses their investments generate for the fund, or P&L.  But making money on an investment is a misleading measure of analyst skill.  Beyond its tendency to be distorted by portfolio manager discretion, P&L performance, both good and bad, often masks the integrity and quality of investment processes.  Thus, P&L often misleads portfolio managers into thinking lucky analysts are actually skilled and vice versa.

For example, take these two analysts:

[Table: two analysts with identical forecasts but different P&L]

Looking at the table above and using P&L to measure skill, Analyst #1 would be exceptional and Analyst #2 would be sub-par.  But Analyst #1 and #2 had the same forecasts, so their forecasting skill is actually identical.  P&L does not translate into forecast skill because analysts do not have ultimate control over position sizing; the portfolio manager does!

More Science, Less Art

Inspired by the ideas presented in the groundbreaking book Superforecasting: The Art and Science of Prediction, Alpha Theory's Accuracy Score delivers quantitative insight into a qualitative blind spot for portfolio managers. Authored by Wharton Professor Philip Tetlock and Dan Gardner in 2015, Superforecasting applies a Brier Score-inspired approach to quantifying predictive skill. The Brier Score was created by meteorological statistician Glenn Brier in 1950 and measures the accuracy of probabilistic forecasts. Superforecasting applies Brier's methodology only to binary, or yes/no, outcomes.
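A minimal Brier score for binary forecasts looks like this (a simplified sketch; Alpha Theory's actual Accuracy Score extends the idea to full price distributions and is not reproduced here):

```python
# Brier score for binary forecasts: mean squared error between the
# forecast probability and the 0/1 outcome. Lower is better;
# always guessing 50% scores 0.25.
def brier_score(forecasts):
    """forecasts: list of (probability_assigned, outcome_occurred)."""
    return sum((p - float(o)) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical analyst calls: (probability given, did it happen?)
calls = [(0.80, True), (0.70, True), (0.60, False), (0.90, True)]
print(round(brier_score(calls), 4))   # 0.125
```

This analyst scores 0.125, better than the 0.25 of a coin-flipper; a perfectly calibrated, perfectly confident forecaster would score 0.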

The New Standard

Alpha Theory’s Accuracy Score is an algorithmic solution that measures analysts’ predictive skill over a 0 - 100 percent range, where 100 is the best.  Scores are calculated on a per-forecast basis and then averaged per analyst.  The Accuracy Score algorithm transforms point estimate price targets and probability forecasts into an implied probability distribution, enabling each forecast to be independently scored.  By distributing multi-faceted outcomes across a range of probabilities, the Accuracy Score can measure forecasting skill for any price along the distribution.

The distribution of scores across our Alpha Theory clients is shown below.  The results follow a normal distribution, which further validates the Accuracy Score’s efficacy in rating analysts’ ability to forecast future price movements.

[Chart: distribution of Accuracy Scores across Alpha Theory clients]

Good forecasts are the most essential component of fund success and critical when portfolio managers are sizing positions.  Using a data-driven approach to determine which analysts make the best forecasts allows managers to apply those forecasts with greater confidence, leading to better position sizing and superior performance.

The Good Judgment Project

In 2011, the Intelligence Advanced Research Projects Activity, a U.S. government research organization, sponsored a geopolitical forecasting tournament that would span 4 years. The IARPA tournament enlisted tens of thousands of forecasters and solicited more than 1 million forecasts across nearly 500 questions related to U.S. national security.

A group called the Good Judgment Project entered the competition, engaged tens of thousands of ordinary people to make predictions, and won the tournament. The GJP's forecast accuracy was so persistent that IARPA closed the tournament early to focus exclusively on them. In fact, GJP was able to find a select group of "Superforecasters" whose forecasts were "30 percent better than intelligence officers with access to actual classified information."

Ways to Improve Forecasting Skill

The main findings of the GJP and the book that followed are especially relevant to investors. The research in Superforecasting indicates that predictive accuracy doesn’t require sophisticated algorithms or artificial intelligence.  Instead, forecast reliability is the result of process-oriented discipline.  

This process entails collecting evidence from a wide variety of sources, thinking probabilistically, working collaboratively, keeping score and being flexible in the face of error. According to the book, the 10 traits that most Superforecasters possess are: 

    1.  Intelligence - above average, but genius isn’t required

    2.  Quantitative - not only understand math but apply it to everyday life

    3.  Foxes, not hedgehogs - speak in terms of possibilities, not absolutes

    4.  Humility - understand the limits of their knowledge

    5.  System 2 Driven - use the logic-driven instead of instinct-driven portion of their brain

    6.  Refute fatalism - life is not preordained

    7.  Make frequent and small updates to their forecast based on new information

    8.  Believe that history is one of many possible paths that could have occurred

    9.  Incorporate internal and external views

    10. Constantly search for ways to improve their forecasting process

Accountability = Profitability

Organizations cannot improve without systematic and data-driven assessments of their personnel.  Take Bridgewater Associates, for example.  One of the primary factors driving the persistent outperformance of Ray Dalio’s storied fund has been the institutional commitment to radical transparency and accountability.  Similarly, Alpha Theory’s Accuracy Score illuminates blind spots and holds analysts accountable through the precise measurement of predictive skill. For funds that lack the time, inclination or internal resources to create their own probabilistic forecast-grading models, Alpha Theory’s Accuracy Score fills the void.

To this end, Alpha Theory is exploring areas of collaboration with the leadership of Good Judgment Inc. (a spin-off from the Good Judgment Project featured in Superforecasting). As the competitive landscape for investment capital tightens, discretionary managers must leverage probabilistic data to survive. Alpha Theory's Accuracy Score is a mission-critical asset that can help funds compete in the current investment landscape, reducing operating inefficiencies and better aligning analyst pay with their intrinsic value to the firm.