
SYSTEM REQUIREMENTS


Please note the following System Requirements. Further, please limit the number of open applications (particularly price streaming applications) while logged in to Alpha Theory™.


Recommended System Specifications
Processor: Dual Core or Quad-Core 2.4GHz or faster
RAM: 4GB+
Browser: Google Chrome 30+
Screen Resolution: 1280 x 1024 or greater
Internet Access: Business Class High-Speed


Minimum System Requirements
Processor: Intel Pentium-M 2.0GHz or equivalent
RAM: 2GB+
Browser: Google Chrome, Mozilla Firefox, Internet Explorer 9+ (without Compatibility View), Safari
Screen Resolution: 1024 x 768 or greater
Internet Access: High-Speed

Alpha Theory Blog - News and Insights

April 06, 2018

Positive Skew is Negative for Active Managers

 

Let’s play a game. In this game, there are 10 poker chips in a bag. Nine of these chips will give you a return between -8% and +8% on the money that you bet. The 10th chip will give you a 100% return. The distribution of returns for this game has a positive skew.

 

[Chart: distribution of returns for the chip game]
 

If you were offered the chance to put money down on this proposition, you would take it, because you would expect a 10% return if you could play the game over and over.

 

Now let’s add a wrinkle. Your goal isn’t just to make a positive return; you have to beat the bag. The bag puts 10% of its money on each chip and pulls them all. Voila, a 10% return. One last wrinkle: you can only pick one chip at a time.

 

How many times out of 10 would you beat the bag? Only 1 in 10. 90% of the time you would lose to the bag. It doesn’t matter if we expand the number of chips, as long as the bag maintains the same positive skew (we could increase the bag to 100 chips and let you pick 10, or to 1,000 chips and let you pick 100, etc.).
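To make the odds concrete, here is a minimal Python sketch of the game. It assumes the nine ordinary chips pay a return drawn uniformly from the -8% to +8% range (the post only gives the range, so the uniform draw is an assumption):

```python
import random

def beat_the_bag(n_chips=10, picks=1):
    # Nine "ordinary" chips drawn uniformly from -8%..+8%, plus one +100% chip (the positive skew).
    chips = [random.uniform(-0.08, 0.08) for _ in range(n_chips - 1)] + [1.00]
    bag_return = sum(chips) / n_chips                      # the bag holds every chip equally (~10%)
    your_return = sum(random.sample(chips, picks)) / picks
    return your_return > bag_return

rounds = 100_000
win_rate = sum(beat_the_bag() for _ in range(rounds)) / rounds
print(f"You beat the bag in roughly {win_rate:.0%} of rounds")  # ~10% when picking 1 chip of 10
```

Essentially the only way to win a round is to draw the +100% chip, which happens one time in ten.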

 

By now, you’ve probably guessed that the bag is the market, the chips are stocks, and you are, well, you. This is the game we play when trying to beat an index. True, you can be better than the market at figuring out the good chips, but given that the initial conditions of a random game mean you lose 9 out of 10 times, it’s really hard to beat the market. Add fees and the likelihood of beating the market goes down even further.

 

Positive skewness has gotten a decent amount of press over the past year because of the championing of JB Heaton, who wrote a paper[1] researching the impacts of positive skew on manager underperformance. Heaton’s paper is similar to research from Dr. Richard Shockley in 1998[2]. See below for an article written by Bloomberg News on the topic.

 

[Image: Bloomberg News article]

Source: Bloomberg News (“Lopsided Stocks and the Math Explaining Active Manager Futility” by Oliver Renick)

 

Given that many of the conversations active managers have today revolve around active versus passive, “positive skew” should be top of mind. This is my push to increase awareness.

 

Given that active managers can’t change market skew, what should we do? We could measure skill in a different way. Let’s say I want to measure a manager’s skill. I take all of the stocks in the markets they invest in and randomly build 100,000 portfolios with the same number of securities as the manager holds. I can then plot where that manager falls on the distribution of random-portfolio returns and give them a Z-Score for how far away from the norm they are. I could do the same thing for hedge funds by randomly buying and selling securities in the same universe as the investor.
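As a rough sketch of what that measurement could look like, the snippet below builds random equal-weight portfolios from a hypothetical universe and Z-scores a manager’s return against them. The universe, the manager’s return, and the equal weighting are all illustrative assumptions, not a description of any particular methodology:

```python
import numpy as np

def manager_skill_zscore(universe_returns, manager_return, n_positions, n_sims=100_000, seed=0):
    """Z-score a manager's return against randomly built portfolios from the same universe."""
    rng = np.random.default_rng(seed)
    # Each simulated portfolio: n_positions names drawn at random (with replacement, for
    # simplicity) from the universe, held at equal weight.
    idx = rng.choice(len(universe_returns), size=(n_sims, n_positions))
    random_returns = universe_returns[idx].mean(axis=1)
    return (manager_return - random_returns.mean()) / random_returns.std()

# Hypothetical inputs: a positively skewed universe of 3,000 stocks and a 12% return on 30 names.
universe = np.random.default_rng(1).lognormal(mean=-0.005, sigma=0.25, size=3000) - 1.0
print(f"Skill Z-score: {manager_skill_zscore(universe, 0.12, n_positions=30):.2f}")
```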

 

I’m not saying that this excuses active managers from underperforming passive strategies, but it should at least provide a more realistic assessment of their skill. My hope is that positive skew becomes just as common an explanation as fees when discussing active manager underperformance. Only by knowing the causes will we be able to make changes that allow active managers to outperform.

 

[1] Nicholas Polson and Jan Hendrik Witte; Hendrik Bessembinder of Arizona State University

[2] “Why Active Managers Underperform the S&P 500: The Impact of Size and Skewness,” published in the inaugural issue of the Journal of Private Portfolio Management. One of the original authors of the study is Richard Shockley.

 

March 12, 2018

Capital Allocators Podcast with Ted Seides: Moneyball for Managers

 

Learn how to enhance your investment results in this great podcast from Ted Seides and his guests, Clare Flynn Levy from Essentia Analytics and Cameron Hight from Alpha Theory.

This conversation covers the founding of these two respective businesses, the mistakes portfolio managers commonly make, the tools they employ to help managers improve, and the challenges they face in broader adoption of these modern tools. The good news is that the clients of Essentia Analytics and Alpha Theory have demonstrated improvement in their results after employing these techniques. If you ask Clare and Cameron, you may develop a whole new appreciation for the potential of active management going forward.

 

[Photo: Clare Flynn Levy and Cameron Hight]

 

By creating a disciplined, real-time process based on a decision algorithm with roots in actuarial science, physics, and poker, Alpha Theory takes the guessing out of position sizing and allows managers to focus on what they do best – picking stocks.

In this podcast, you will learn how Alpha Theory helps portfolio managers convert their implicit assumptions into an explicit decision-making process.

 

To learn how this method could be applicable to your decision-making process:

 

LISTEN NOW

 


 

 

March 02, 2018

Size-Based Batting - A Different Perspective on Stock Selection

 

How do you determine if an investor is a good stock picker? One commonly used measure is to count the number of positions that make money (winners) divided by the total number of positions. This metric is commonly called a Batting Average, analogizing stock picking with baseball hit-rates.

The problem with Batting Average is that several inconsequential positions that lose money can really bring down the total. We saw this with our clients. They have historically outperformed other funds (every year for the past six) but have a batting average, adjusted for the move in the benchmark, of only 51%.

We decided to take a different approach and measure the total exposure of positions that made money versus the total gross exposure of the fund. For instance, if 60% of a fund’s gross exposure made money on an alpha-adjusted basis and the fund was 120% gross exposed, then the fund had a Size-Based Batting Average of 50% (60/120).
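A quick sketch of the two calculations side by side, using a made-up book of positions (each position is just a win/lose flag plus its gross exposure):

```python
def batting_averages(positions):
    """positions: list of (is_winner_on_alpha_basis, gross_exposure) tuples."""
    total_exposure = sum(exposure for _, exposure in positions)
    winner_exposure = sum(exposure for is_winner, exposure in positions if is_winner)
    traditional = sum(1 for is_winner, _ in positions if is_winner) / len(positions)
    size_based = winner_exposure / total_exposure
    return traditional, size_based

# Hypothetical book: three well-sized winners, one mid-sized loser, three tiny losers.
book = [(True, 0.08), (True, 0.07), (True, 0.06), (False, 0.05),
        (False, 0.01), (False, 0.01), (False, 0.01)]
traditional, size_based = batting_averages(book)
print(f"Batting Average: {traditional:.0%} | Size-Based Batting Average: {size_based:.0%}")  # 43% | 72%
```

When the losers are small, the size-based number sits well above the count-based one, which is exactly the pattern described above.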

Our clients had a Size-Based Batting Average of 54% versus a non-size-based average of 51%. That means that our clients were good at selecting investments and at sizing them, but they were hurting their overall returns with small losing investments.

Alpha-Adjusted Batting Average[1]

 

[Table: Alpha-Adjusted Batting Average vs. Size-Based Batting Average by year]

 

In the table above, Size-Based Batting, while not perfectly consistent, is generally better from year-to-year for our clients (exceptions being 2012 and 2015).

We’ve performed other analyses that prove this point, specifically that our clients’ positions under 1% dramatically underperform the rest of the portfolio, but Size-Based Batting presents a compelling way to highlight the “small position” issue (see the “Concentration Manifesto” for other issues with small positions).

In our profession, it is incredibly difficult to disentangle skill from luck and, as cathartic as it would be to just rely on returns, returns are actually negatively correlated with next year’s returns for most funds (i.e., funds that outperform in year N have a higher likelihood of underperforming in year N+1 – there are multiple research sources that analyze mean reversion in funds; here is one).

Size-Based Batting is a nice addition to the allocator’s tool bag for finding managers with stock-picking skill. In much the same way, managers should use Size-Based Batting to highlight their strengths and compare it to the traditional Batting Average as a way to potentially point out weaknesses.

 

[1] S&P 500 for US securities and MSCI WEI for non-US securities

[2] Why is “All Time” so low compared to each year? Reason #1: There are many more observations in recent years, which skews the overall results toward the more recent years. Reason #2: There were many assets that were losers over “All Time” while being winners in multiple years (a small win in 2015, a small win in 2016, and a big loss in 2017 = 2 winning periods vs. 1 losing period, but a loser in the All-Time bucket).

 

 

February 07, 2018

Alpha Theory Case Study: Top Performing Funds of 2017

Alpha Theory’s clients have historically outperformed (see 2017 Year in Review from last month), but 2017 was special as our most active client was also the 2nd best performing equity fund. We have worked with them since their launch, and their focus on discipline and process is a testament to how to build a fund. If you would like to learn more about the client, their challenges, their solution, and the data supporting their process, check out the Case Study.

 

DOWNLOAD NOW

 

 

January 05, 2018

2017 Year in Review

 

Alpha Theory’s product helps investment managers reduce emotion and guesswork in position sizing. The result is reduced errors and improved returns. For six consecutive years, Alpha Theory clients have outperformed their peers (see table below – we use a Major Equity Hedge Index as the benchmark because 86% of Alpha Theory clients are hedge funds). Our clients have consistently outperformed their competitors, more than doubling their returns over the period.

 

[Chart: Alpha Theory client returns vs. Major Equity Hedge Index by year]

*Totals do not include 2017 data

In 2017, our average client generated 18.9% returns and, when the index data is released, I anticipate that we’ll have beaten the Hedge Index again. These results are consistent with other blog posts we’ve written highlighting our clients in 3rd-party rankings: Reuters / WSJ / Novus.

 

NEW 13-F ANALYSIS

This year, we expanded our analysis with a new 13-F dataset covering all publicly filing funds. The upside of using this dataset is that it enables us to compare results against every reporting fund in 2017. The downside is that it only includes US equity long positions. The results indicate that, once again, Alpha Theory clients outperform their peers.

The average Alpha Theory client performance in 2017 (13-F data) was 27.6% vs 19.9% for all others (3013 total funds with over 20 positions). That’s almost one full standard deviation higher (8.8% standard deviation) than the mean and has a Z-Score of 2.03 (statistically significant above the 95% confidence level).

Even more interesting were the individual performance results of our clients: one Alpha Theory client was the 2nd best performing fund in 2017 (this client thanked us more than once for our contribution to their success) and four clients landed in the top 40 performers. We also had six of the top 100 and 10 of the top 200. Statistically, we’d anticipate less than 1% in all categories because Alpha Theory clients are less than 1% of all funds. Instead, as in previous periods, there is a concentration of Alpha Theory clients amongst the top performers.

[Chart: Alpha Theory clients among the top-performing funds of 2017]

Simply put, Alpha Theory clients outperform their peers. The traits these firms share are discipline, intellectual honesty, and a focus on process. They gravitate to Alpha Theory because it is their tool kit to implement and measure that process.

 

PROCESS EQUALS PERFORMANCE

Alpha Theory clients use process to reduce the impact of emotion and guesswork as they make position sizing decisions. Alpha Theory highlights when good ideas coincide with the largest position sizes in the portfolio. This rules engine codifies a discipline that:

1. Centralizes price targets and archives them in a database

2. Provides notifications of price target updates and anomalies

3. Calculates probability-weighted returns (PWR) for assets and the portfolio as a whole (a simplified PWR calculation is sketched after this list)

4. Enhances returns

5. Mitigates portfolio risk

6. Saves time

7. Adds precision and rigor to the sizing process

8. Incorporates market and individual asset moves into sizing decisions in real time
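As a simple illustration of item 3, a probability-weighted return is just the scenario-weighted expected price compared against the current price. The bull/base/bear scenarios below are hypothetical placeholders, not output from Alpha Theory:

```python
def probability_weighted_return(current_price, scenarios):
    """scenarios: list of (probability, price_target) pairs; probabilities should sum to 1."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9, "scenario probabilities must sum to 1"
    expected_price = sum(p * target for p, target in scenarios)
    return expected_price / current_price - 1.0

# Hypothetical $50 stock with bull / base / bear price targets.
pwr = probability_weighted_return(50.0, [(0.25, 80.0), (0.50, 60.0), (0.25, 35.0)])
print(f"PWR: {pwr:.1%}")   # (0.25*80 + 0.50*60 + 0.25*35) / 50 - 1 = 17.5%
```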

DISCIPLINED USAGE REDUCES RESEARCH SLIPPAGE

Alpha Theory’s research suggests not only that adoption of the application by itself leads to improved performance, but that greater usage intensity further enhances results.

Usage intensity is determined by:

1. Percent of Positions with Research

2. Correlation with Optimal Position Size

3. Login Frequency

 

[Chart: annualized ROIC by usage-intensity quartile]

1. Measured as the annualized ROIC where data was available, for a sample of 48 clients, 12 for each quartile

 

OPTIMAL POSITION SIZING REDUCES RESEARCH SLIPPAGE

Comparing clients’ actual versus optimal returns shows:

HIGHER TOTAL RETURNS
ROIC is 4.5% higher.

IMPROVED BATTING AVERAGE
Batting Average is 8% higher. Explanation: many of the assets that don’t have price targets or have negative PWRs are held by the fund but recommended as 0% positions by AT. Those positions underperform and allow AT’s batting average to prevail.

[Chart: actual vs. optimal ROIC and batting average]

1. Measured as the average full-year return for clients where full-year data was available, adjusted for differences in exposure, net of trading costs

2. Before trading costs

 

ALPHA THEORY CLIENTS OUTPERFORM NON-CLIENTS
Alpha Theory clients have outperformed Major Equity Hedge Indices every year since Alpha Theory started collecting historical data. Our clients are a self-selecting cohort who believe in process and discipline, and that process orientation goes hand-in-hand with Alpha Theory software, which serves as a disciplining mechanism to align the best risk/reward ideas with their ranking in the portfolio.

[Chart: Alpha Theory clients vs. Major Equity Hedge Indices by year]

PRICE TARGETING REDUCES RESEARCH SLIPPAGE

Alpha Theory has further found that ROIC for assets with price targets is 5.6% higher than for those without price targets. Some investors chafe at price targets because they smack of “false precision.” These investors are missing the point, because the key to price targets is not their absolute validity but their explicit nature, which allows for objective conversation about the assumptions that went into them. Said another way, the requirements of calculating a price target and the questions that targets foster are central to any good process.

[Chart: ROIC for assets with vs. without price targets]
*Long-only, as many short positions are hedges and have no price targets

 

December 15, 2017

Superforecasting for Investors: Part 2

Alpha Theory hosted a book club on December 6th with portfolio managers, analysts, and allocators coming together to discuss “Superforecasting” by Phil Tetlock. We were lucky enough to have a Superforecaster, Warren Hatch, moderate and perform forecasting exercises with the group. We spent 2 hours together and only scratched the surface on applying Superforecasting to investing.

 

Here are a few key takeaways:

1. COMMON ATTRIBUTES OF SUPERFORECASTERS:

INTELLIGENCE: Above average but genius isn’t required

QUANTITATIVE: Not only understand math but apply it to everyday life

FOXES, NOT HEDGEHOGS: Speak in terms of possibilities, not absolutes

INTELLECTUALLY HUMBLE: Understand the limits of their knowledge

SYSTEM 2 DRIVEN: Use the logic-driven instead of instinct-driven portion of their brain

DO NOT BELIEVE IN FATALISM: Life is not preordained

CONSTANTLY REFINE: Make frequent small updates to their forecast based on new information (but not afraid to make big changes when warranted)

COUNTERFACTUALS: Believe that history is one of many possible paths that could have occurred

OUTSIDE VIEW: Incorporate the internal and external views

GROWTH MINDSET: Constantly search for ways to improve their forecasting process

 

2. IDENTIFYING TALENT: There are identifiable attributes that can be used in hiring and have a profound impact on forecasting skill

 

Active Open Mindedness*

[Chart: Active Open Mindedness scores]

Fluid Intelligence*

[Chart: Fluid Intelligence scores]

 

* At a prior book club, we measured participants and the results showed they had the attributes of Superforecasters with high Active Open-Mindedness (3.99 out of 5) and high Fluid Intelligence (8 out of 10 – this is the highest score that the Good Judgment folks have seen).

Active Open Mindedness (i) and Fluid Intelligence (a) are two measurable traits that managers can use to select talent. In the chart below, the improvement impact of the definable attributes equates to about 40% of their forecasting skill over standard forecasts.

[Chart: forecasting skill improvement attributable to identifiable attributes]

3. DEVIL’S ADVOCATE: Firms should appoint a Devil’s Advocate for each investment to expand critical thinking (someone to ask the question, “I see your downside is $40. How is that possible if the 52-week low is $22 and the trough multiple would put it at $25?”)

 

4. OUTSIDE VIEW: Firms should require an Outside View for every investment idea (“While everyone I’ve spoken to says this deal will close, only 20% of deals with one party under SEC investigation close.”)

 

5. REFINEMENT: New information should always be incorporated into the forecast (think Bayesian).

 

6. TEAMS MAKE BETTER FORECASTS: Team dialog generally improves forecasting accuracy.

 

7. FORECAST CULTURE: Firms should embrace “forecast” as part of their vernacular and conversations should revolve around how information impacts the forecast.

 

8. MEASURE TO BE BETTER: We all forecast, but we rarely measure. That fact needs to change if we really want to improve.

 

9. CLUSTERING: Break complex topics into individual components that are easier to forecast, and use the combination of the smaller forecasts to forecast the more complex topic (e.g., “Will AAPL break $200?” is a complex forecast that can be broken down into “Will iPhone X ship more than 400m units?”, “Will Samsung’s technology outpace Apple’s?”, etc.).

 

10. INDEXING: Individual clustering questions can be weighted to come up with a forecast for the complex topic instead of using simple equal weighting (see the sketch below).
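As a minimal sketch of items 9 and 10, the snippet below combines hypothetical cluster forecasts with unequal weights into a single probability for the complex question (the weights and sub-forecasts are invented for illustration):

```python
def indexed_forecast(components):
    """components: list of (weight, probability) pairs, one per clustering question."""
    total_weight = sum(weight for weight, _ in components)
    return sum(weight * prob for weight, prob in components) / total_weight

# Hypothetical sub-forecasts feeding "Will AAPL break $200?"
components = [(0.5, 0.60),   # unit-shipment question
              (0.3, 0.45),   # competitive-technology question
              (0.2, 0.70)]   # valuation / multiple question
print(f"Indexed forecast: {indexed_forecast(components):.0%}")
```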

 

11. DIVERSITY OF FORECASTS MATTERS: Forecasts made from similar perspectives are less accurate than those made from multiple perspectives (see Boosting below).

 

12. BOOSTING: If you have three forecasters with different perspectives who all arrive at a 70% probability of an event occurring, then the actual probability is greater than 70%.
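One standard way to capture this effect is to average the forecasts in log-odds space and then “extremize” the result, pushing the aggregate away from 50% when independent forecasters agree. The sketch below uses that approach with an illustrative extremizing exponent; it is a common technique from the forecast-aggregation literature rather than a formula taken from the book:

```python
import math

def extremized_aggregate(probabilities, a=1.5):
    """Average forecasts in log-odds space, then extremize by exponent a (a=1 means no boost)."""
    mean_log_odds = sum(math.log(p / (1 - p)) for p in probabilities) / len(probabilities)
    return 1 / (1 + math.exp(-a * mean_log_odds))

print(f"{extremized_aggregate([0.70, 0.70, 0.70]):.0%}")   # three independent 70% views -> ~78%
```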

 

13. GISTING: We didn’t get to spend much time here, but the idea is that complex subjects, reports, presentations, etc. can be distilled down into gists that the team votes on and refines into a supergist. Full understanding is never just quantitative or qualitative. Superforecasting is quantitative. Supergisting attempts to provide the qualitative piece.

 

14. HYBRID FORECASTING COMPETITION: IARPA, the intelligence-community research agency that sponsored the forecasting tournament that launched the Superforecasters (Good Judgment), is sponsoring a new Man+Machine Forecasting Tournament. For those interested in Forecasting and Machine Learning, this is your spot: https://www.iarpa.gov/index.php/research-programs/hfc

 

December 01, 2017

Concentration During Periods of High Volatility

As we’ve gotten feedback about the Concentration Manifesto, one common question is, “would the Monte Carlo simulation hold during periods of market turbulence?” To stress test the conclusion, we chose two periods of high volatility: the 12 months between June 2008 and June 2009 (high positive and negative volatility) and the 12 months ended February 2009 (the months leading up to the market nadir).

While the results showed lower overall alpha returns for both portfolios during the financial crisis, the concentrated portfolio still outperformed and was safer than the diversified portfolio. This is due to the concentrated portfolio maintaining its improved batting average during those periods.
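For readers who want to see the mechanics, here is a stripped-down Monte Carlo in the spirit of the comparison. The position counts, batting averages, and win/loss sizes are illustrative placeholders rather than the Manifesto’s actual inputs; the point is simply that the batting-average edge is what separates the two portfolios:

```python
import numpy as np

def simulated_alpha(n_positions, batting_avg, win=0.12, loss=-0.10, n_sims=100_000, seed=0):
    """Alpha of equal-weight portfolios whose positions beat the benchmark with p = batting_avg."""
    rng = np.random.default_rng(seed)
    outcomes = np.where(rng.random((n_sims, n_positions)) < batting_avg, win, loss)
    return outcomes.mean(axis=1)                  # one portfolio-level alpha per simulation

concentrated = simulated_alpha(n_positions=15, batting_avg=0.55)   # fewer names, higher hit rate
diversified = simulated_alpha(n_positions=50, batting_avg=0.51)    # more names, lower hit rate
print(f"Concentrated mean alpha: {concentrated.mean():.1%} | "
      f"Diversified mean alpha: {diversified.mean():.1%}")
```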

 

June 2008 to June 2009

[Chart: concentrated vs. diversified portfolio results, June 2008 to June 2009]

We reran this analysis for the 12 months leading up to 2/28/2009 and the relationship still holds. The difference in timeframes mainly impacts the alpha return for the two portfolios. Again, the separation between the portfolios is driven by the batting average difference between the concentrated and diversified portfolios.

 

Trailing Twelve Months Ending February 2009

[Chart: concentrated vs. diversified portfolio results, trailing twelve months ending February 2009]

It’s worth noting that since we don’t have data prior to 2011, the assumption is that the batting average difference between these two portfolios remains consistent over time. We did look at batting averages for the timeframes we have in our database, and they appear to be relatively consistent except for a dip in June 2015. Note that this batting average includes all portfolios and is on an alpha basis, but does not assume fees. The goal was to look for some consistency over time.

[Chart: client batting average over time]

 

The Concentration Manifesto has become a part of the conversation about how to improve active management. Please keep the feedback and questions coming.

 

November 10, 2017

Predictably Insightful: Recap of the Behavioral Alpha Conference

 

This is a picture of me and Dan Ariely, author of “Predictably Irrational” and five other great books on the decision pitfalls we all fall into. Dan was the keynote speaker at Behavioral Alpha 2017, an event put on by our friends at Essentia Analytics, which we were proud to help sponsor.

 

[Photo: Cameron Hight and Dan Ariely at Behavioral Alpha 2017]

 

The day was packed with great speakers including:

- Dan Ariely: “Behavioral Finance in Practice”

- Denise Shull: “Your Senses, Feelings & Emotions are the Ultimate Dataset”

- Clare Flynn Levy: “Applying Behavioral Finance to Your Own Investment Process”

- Fireside Chat with Mark Baumgartner: “Why Asset Allocators Care About Behavioral Analysis”

- Cameron Hight: “Mistakes Managers Make & How to Fix Them”

- Peer Idea Exchange with Paul Sonkin and Paul Johnson: “Pitching the Perfect Investment: Managing the Tensions Between Analysts and Managers”

- Dave Winsborough: “How the Collective Personality of Your Team Affects Performance”

 

Here’s a quick recap of some of the takeaways:

Dave Winsborough discussed ways that we can build better teams by understanding the goal we’re trying to accomplish, the needed components to accomplish that goal, and measuring the team participants to make sure that the team has all of the necessary components. It’s a relatively straightforward idea that should be applicable to almost any team.

Denise Shull discussed ways we can become more in tune with our feelings and emotions, with the idea of learning when and how to leverage those feelings. Learning how to identify our own emotions is a powerful first step toward being able to mute the negative emotions and take advantage of the positive (signals).

Much of the conference was about emotion and bias and how they cause us to make poor decisions. I completely agree, but that’s not my expertise. I spent much of my time talking about the processes that help mitigate bias. This primarily involves making our assumptions and decision process explicit so that they can be judged and analyzed.

Dan Ariely gave several fascinating anecdotes: how casinos are the best at applying behavioral tools; how company internal satisfaction surveys have predictive power for stock performance; how Intuit is giving teams time and money to try bold new initiatives to help them get over the risk of projects that fail; how a weight scale that doesn’t show your weight (but tracks it over time) is a much better way to lose weight than one that gives immediate feedback subject to good-habit-breaking volatility; and how people in the next-to-lowest tax bracket are the ones most opposed to minimum wage hikes because the hikes could push them into the lowest rung of society. His major takeaway was that bias and personality are tough to eliminate, so you have to create habits, rules, and routinized behaviors that help us do the things we say we want to do (very Alpha Theory😉).

Clare Flynn Levy showed how investors can make better decisions by capturing some basic information about themselves and their decisions. Taking the time to tie those data points together can help us better understand when we make good decisions and when we make poor decisions. By understanding these cues as they’re happening, we can take advantage of the positive and avoid the negative.

Mark Baumgartner discussed his time at the Ford Foundation and the Institute for Advanced Study and some of the things he’s seen in the managers he evaluates. He said that about 10% of the managers he meets have some form of structured process around behavioral science, decision making, portfolio management, position sizing, etc. He believes that the primary value of a manager isn’t based on these processes, but that there is a lot of easy-to-pick-up alpha from implementing process. He would like to see his managers embrace it more actively but says the industry moves glacially while the products that help improve the process are evolving very fast.

The room was full of managers and allocators. There was a self-selection bias, but the crowd truly embraced the concepts discussed during the day for how to get better using behavioral science. In fact, the crowd asked amazing questions, and one of my favorite parts of the day came from a member of the audience who was expanding on his thoughts about the difficulties of capturing alpha. He said the number of investors has increased from 5,000 to 1 million over 50 years. How do you reverse that trend when it is one of the highest-paid professions, where you get to work with amazing people, research a broad range of interests, meet leaders in industry, academia, and government, and be exposed to an array of amazing ideas? If I’m ambitious and at the top of my class, why would I not pursue that profession?

Hmmmmm, maybe we can ask Dan Ariely if he has some creative way to change that behavior.

 

October 20, 2017

American Idols

I was lucky enough to be part of a small event, The Frontier of Forecasting Conference, hosted by Good Judgment Inc. Among the participants were Phil Tetlock, Barbara Mellers, and Daniel Kahneman. For those who don’t know, Kahneman is a Nobel Laureate and considered the father of Behavioral Economics. Tetlock and Mellers are the brains behind Superforecasting.

Several of you were interested in attending but unable to make the trip. The following is a summary of the presentations from the conference. 

 

[Photo]

Left to right: Phil Tetlock, Barbara Mellers, Daniel Kahneman, Lucky Man

 

Tetlock on Gisting

Good Judgment's CEO Terry Murray opened up the morning by introducing the founder of the company, Phil Tetlock. Phil talked about a new idea that he’s working on called Gisting. The goal is to improve understanding by taking a large amount of information and having multiple people create a Gist, or a shorter explanation of the information. These Gists would then be graded by peers and the best ones would be picked and synthesized into a team Gist. This leads to a deeper understanding by the “gisters” and easier understanding by readers who only have time for the gist.

Gists are important because full understanding is never just quantitative or qualitative. Superforecasting is quantitative. Supergisting attempts to provide the qualitative piece. The challenge is that time is scarce, and this is a new task that will meet resistance in most organizational cultures.

Gisting is a relatively new idea, and it will be interesting to watch how it develops as Phil, Barbara, and the Good Judgment group put more time into researching it. The next book: Supergisting?

Kahneman on Noise

Kahneman was next up and he spent his time talking about Noise. The concept is not new but he believes it should become a focus because it is easier to reduce than bias. He described an insurance company that he worked with to improve claims adjuster accuracy. He measured the efficacy of their claims process by having independent adjusters price the same claim. The average difference in claim value was 50%! That means that one adjuster might write a check for $1,000 and another for $1,500 for the same claim. He described how a simple algorithm would dramatically reduce noise and improve claim accuracy.

The discussion took a slightly cynical tone when he described how few of his practical ideas were actually put into practice. For example, the insurance firm, after learning of these gross miscalculations, didn’t implement the systematic approach he suggested. He gave another example of how Steven Levitt, of “Freakonomics” fame, showed a simple system of fraud detection improvement to a credit card company that would have saved many millions a year, but wasn’t implemented.

Kahneman said, “change causes winners and losers. Losers are much louder than winners, which makes reform much less likely.” And that “leaders don’t want to see their mystique questioned by systems.” Dr. Mellers had a nice rejoinder that “things will change, one funeral at a time.” For all of us Superforecasting believers, we hope it happens faster than that.

I believe the success that Ray Dalio and Bridgewater have seen by being very systematic and process-oriented may shed some light and make leaders less resistant to change. Dalio’s “Principles” will be read by many leaders and should get a conversation started about how we all can improve by being more disciplined.

Idea Exchange on Forecasting

The second half of the day was a “safe zone” event to permit a free-flowing exchange of ideas. This means that I’m not allowed to comment on the dialog, but I can give a high-level recap.

I was a panelist for “Improving Probabilistic Forecasting Within Organizations.” The goal was to give real world examples of people implementing forecasting tools to improve decision making. It was exciting to see many firms experimenting with forecasting systems. In my view, shared by Good Judgment's president Warren Hatch, who chaired the panel, the challenge that most faced was getting broad adoption and keeping momentum.

The critical component for solving this challenge is getting top-level buy-in. If senior leadership asks questions and uses the output to make decisions, then people will participate. Another strategy for increasing participation was active feedback. Providing scores, leaderboards, best/worst forecasts, stats, etc. has a demonstrable impact on usage.

Better Forecasting Through Better Models

The final discussion was “Bayesian Cluster Forecasting Models for Strategic Decision-Making,” led by Dr. Kathryn McNabb Cochran. She is part of Good Judgment Inc. and is a leader in the field of better decision making through forecasting. The goal is to make better forecasts by creating better models. The models are a hybrid of pure forecasts and adjustments that lead to more accurate forecasts.

For anyone curious about how they can be better forecasters and apply that thinking to their organization, please contact the great folks at Good Judgment Inc.

Final Thought

Meeting several of my heroes in one day made me think how nice it would be if the GE ad campaign in which great scientists are treated like stars were reality. How cool would it be if my girls could grow up in a world where Kahneman and Tversky were admired as much as Brady and Gronk?

 

 

October 06, 2017

Poker: Art vs Science

 

“People describe poker as a game of art and science. Both intuition and science have merit, but the best players approach the game very quantitatively.” – Liv Boeree, Professional Poker Player

Our COO, Graham Stevens, and I met over a poker table. We’ve been playing together for many years, and he recently turned me on to an Oxford Lecture Series video by Liv Boeree.

Liv is a very successful poker player with a physics degree from the University of Manchester. She was discussing Game Theory Optimal (GTO) play and the use of GTO tables that break decisions down into ranges of hands for different situations, helping players identify the optimal decision (bet big or small, check, or fold).

She stated that the best players in the world all employ GTO. And even though all players assume their opponents are playing GTO, it is incredibly difficult to exploit those predicted decisions because they are optimal. In an interesting exchange in the video, Igor Kurganov, another very successful poker player who was in the audience, said that intuition (playing the player instead of playing the cards) factors into his decisions, but only to a small degree. He said that the best intuition can do is change a 50/50 bet to a 55/45.

The parallels to investing and Alpha Theory are clear. At Alpha Theory, we allow firms to build their own Game Theory Optimal system to figure out the “optimal” amount to bet on each position in their portfolio. And we find that firms that use intuition instead of their model lose to the hypothetical model performance about 75% of the time.

The reasons portfolio managers choose to vary from their model are numerous, but they have a common theme: there is an intuition that the model isn’t capturing. Granted, investing is not poker. Poker has a finite set of variables and permutations compared to the seemingly infinite number of variables to consider in investing. But even still, just like in poker, the world-class players are going to be the ones following the model and only making small tweaks for intuition.

Do I practice what I preach? A note on my own poker play: I do not play GTO because I have not memorized the tables. I know some of the shortcut rules for when to bet and fold pre-flop, and I can do a rough calculation of pot odds post-flop, but that’s the extent of my skills. If my buddies would let me pull out my computer while I’m sitting at the table, I would follow GTO. If I were playing for a living, I would learn and follow the model.