SYSTEM REQUIREMENTS


Please note the following System Requirements. Further, please limit the number of open applications (particularly price streaming applications) while logged in to Alpha Theory™.


Recommended System Specifications
Processor: Dual Core or Quad-Core 2.4GHz or faster
RAM: 4GB+
Browser: Google Chrome 30+
Screen Resolution: 1280 x 1024 or greater
Internet Access: Business Class High-Speed


Minimum System Requirements
Processor: Intel Pentium-M 2.0GHz or equivalent
RAM: 2GB+
Browser: Google Chrome, Mozilla Firefox, Internet Explorer 9+ (without Compatibility View), Safari
Screen Resolution: 1024 x 768 or greater
Internet Access: High-Speed

Alpha Theory Blog - News and Insights

July 28, 2022

What does it mean to be a good stock picker?

 

Bottom Line Up Front: A batting average above 38% demonstrates stock picking skill for an active manager. 

 

Defining skill in investing is challenging, so it is no surprise that measuring skill is even harder. While we strongly believe many “stock pickers” have inherent skill, how do we measure that belief? What if we compare them to the “blindfolded monkey throwing darts?” (Sidenote: unless monkeys can read, I think the blindfold is superfluous). Said another way, can the manager beat a randomly constructed portfolio?

 

In this experiment, we will assess skill using two metrics: batting average and slugging percentage (Batting = number of positions that make money / total number of positions | Slugging = average percent gain of winning positions / average percent loss of losing positions).
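
For concreteness, here is a minimal sketch of both metrics in Python, using a hypothetical list of position returns (the function names and sample numbers are ours, not Alpha Theory's):

```python
# Minimal sketch of the two metrics described above, given per-position returns.

def batting_average(returns):
    """Share of positions that made money."""
    winners = [r for r in returns if r > 0]
    return len(winners) / len(returns)

def slugging(returns):
    """Average % gain of winners divided by average % loss of losers."""
    winners = [r for r in returns if r > 0]
    losers = [r for r in returns if r <= 0]
    avg_gain = sum(winners) / len(winners)
    avg_loss = abs(sum(losers) / len(losers))
    return avg_gain / avg_loss

returns = [0.25, -0.10, 0.05, -0.30, 0.40]   # hypothetical position returns
print(f"Batting: {batting_average(returns):.0%}, Slugging: {slugging(returns):.2f}")
```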

 

Important Note: Demonstrating skill does not mean beating the market. It also does not mean making money. I know, weird. You can be better than the monkey throwing darts and still lose to the market because indices do not equal-weight the stocks they contain.

 

Batting Average 

 

ABSOLUTE BATTING AVERAGE. Let us start by building the comparison portfolio from the ACWI Index (All-Country World Index: ~3,000 global stocks, ~70% US). To demonstrate skill, a manager should be able to “beat” random. Over the past five years, the ACWI has had a 56.4% batting average (if you randomly picked 100 stocks from the ACWI, roughly 56 of them had a return greater than 0%).

 

INDEX ADJUSTED BATTING AVERAGE. You could say that during the past five years, a manager should have made money on 56% or more of their long positions. The problem with looking at it this way is that some five-year periods are more positive than others. A potentially better approach is to measure the number of stocks that beat the ACWI. The ACWI was up +40.9% during this period. Once we subtract that return, only 27.1% of stocks beat the ACWI. To assess skill, the manager would simply measure the percentage of their positions that were up more than the ACWI during the holding period; if that percentage was higher than 27.1%, they were demonstrating skill.

 

EQUAL WEIGHTED BATTING AVERAGE. The problem is that over the last five years, the largest returns have come from the biggest index weights (large caps outperforming small caps). To get a random selection of the ACWI (like a monkey throwing darts), we should equal-weight the ACWI. Equal-weighted, the ACWI is up 21.7%. This more reasonable version results in a batting average of 37.8%.
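
A minimal sketch of the benchmark-adjusted check, using the 21.7% equal-weighted ACWI return and 37.8% hurdle quoted above and a hypothetical set of position returns:

```python
# Minimal sketch: batting average measured against the equal-weighted ACWI hurdle.

EQUAL_WEIGHTED_ACWI_RETURN = 0.217   # equal-weighted ACWI return over the period
SKILL_HURDLE = 0.378                 # share of ACWI stocks that beat that return

def adjusted_batting(position_returns, benchmark_return):
    """Share of positions that beat the benchmark return over the holding period."""
    beats = [r for r in position_returns if r > benchmark_return]
    return len(beats) / len(position_returns)

portfolio = [0.35, 0.10, -0.05, 0.60, 0.22, -0.15]   # hypothetical position returns
batting = adjusted_batting(portfolio, EQUAL_WEIGHTED_ACWI_RETURN)
verdict = "demonstrates skill" if batting > SKILL_HURDLE else "does not clear the hurdle"
print(f"Adjusted batting average: {batting:.1%} -> {verdict}")
```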

 

From this, we would argue that a batting average above 37.8% demonstrates skill. The further above that level, the more skill demonstrated. Of course, there is luck involved, and untangling skill and luck is difficult, but over time, we can look for persistence of skill in picking winners and losers with this metric*.

 

So far, the only skill measured is that of picking stocks that go up. In the next post, we will measure the skill of picking the stocks that go up the most and avoiding those that go down the most (Slugging – position sizing skill). 

June 24, 2022

8 Data Science Resources for Investment Management Professionals

 

Investment managers are increasingly interested in using data science and artificial intelligence to improve their processes and outcomes. Recently, a few clients have asked our team how they can sharpen their skills in the subject.  

 

The problem is that most entry-level data science material is not very useful for finance, and the material useful for finance is not entry-level by any means. Not to worry, our team has shared their top books and articles for investment professionals eager to learn about solving problems with data. Read on for a list of our top eight picks. 

 

Algorithms to Live By: The Computer Science of Human Decisions

 

Authors Brian Christian and Tom Griffiths show how algorithms developed for computers also untangle very human questions by explaining, in layman’s terms, algorithmic solutions for real-world decision making. If you like problem-solving and decision theory, you’ll love this book. 

 

Recommended by Cameron Hight, Alpha Theory CEO 

 

Big Data: A Revolution That Will Transform How We Live, Work, and Think 

 

Viktor Mayer-Schönberger and Kenneth Cukier, two leading experts in data science, wrote this non-technical book that discusses what big data is, how it will change our lives, and what we can do to protect ourselves from its hazards. It’s a great place to start for those who wish to get into a data-oriented mindset, but do not have direct experience. 

 

Recommended by Aaron Hirsch, Data Scientist at Alpha Theory 

 

CRISP-DM – a Standard Methodology to Ensure a Good Outcome 

 

CRISP-DM is a framework for applying data science to business problems. This article gives a solid explanation of how to approach a project before getting started. For those starting to practice data science, it will save time by helping them avoid rabbit holes.

 

Recommended by Billy Armfield, Data Scientist at Alpha Theory 

 

The 7 Steps of Machine Learning 

 

This article, written by a Google developer, outlines in broad strokes the steps in a typical machine learning problem. It walks through a basic example to describe the process of getting answers from data using machine learning. Readers will gain a foundational framework to think through the problem and the language to articulate each step.

 

Recommended by Billy Armfield, Data Scientist at Alpha Theory 

 

Machine Learning: An Applied Mathematics Introduction by Paul Wilmott 

 

This slim book by uber-quant Paul Wilmott gives clear and detailed explanations of the machine learning models most used in quant finance, along with pointers to further reading. While the book assumes basic calculus and linear algebra skills, it is an approachable resource for those who desire a deeper understanding of machine learning models without dense textbook reading. 

 

Recommended by Ross Fabricant, Director of Data Science at CenterBook Partners 

 

Statistical Methods for Machine Learning: Learn How to Transform Data into Knowledge with Python

 

Machine learning specialist Jason Brownlee provides a thorough hands-on introduction to statistics and hypothesis testing with step-by-step instructions through Python-based projects. The book builds a solid foundation for future discovery and assumes little prior knowledge of statistics and coding. 

 

Recommended by Chris White, Head of Portfolio Implementation & Risk at CenterBook Partners 

 

Machine Learning Mastery with Python: Understand Your Data, Create Accurate Models, and Work Projects End-to-End

 

Also by Jason Brownlee, this step-by-step guide helps the reader master foundational techniques in machine learning, using Python with scikit-learn, pandas, TensorFlow, and other helpful libraries. It is written in an engaging and accessible style, without assuming much prior knowledge.

 

Recommended by Chris White, Asia CEO & Head of Risk & Portfolio Implementation at CenterBook Partners 

 

An Introduction to the Bootstrap 

 

Bradley Efron and Robert J. Tibshirani arm scientists and engineers with computational techniques to analyze and understand complicated data sets, without relying on an understanding of advanced mathematics. But be warned: this dense academic textbook is no-nonsense. Fancy charts and descriptions of tooling are few and far between.

 

Recommended by Ake Kullenberg, Head of Execution Trading at CenterBook Partners 

 

Are there any books that have been helpful to you as you began learning about data science? We’d love to know.

 

Get in Touch with Alpha Theory 

 

If you have questions about the resources mentioned above, our in-house data science team, or our leading portfolio construction platform and services for investment managers, please do not hesitate to reach out.  

 

Any communication regarding CenterBook Partners LP (“CenterBook”) is for information purposes only and should not be regarded as advice, a solicitation of an offer to buy or sell any CenterBook financial product, or as an official statement of CenterBook. 

May 26, 2022

Turtle Creek: The Discipline Behind 21% Compound Returns

We recently spoke with a portfolio manager who runs a concentrated Canadian long/short fund. He said that they aspire to be like Turtle Creek, a firm that embodies the Alpha Theory philosophy of refining the investment process to generate better outcomes. Turtle Creek has a remarkable return stream, having compounded capital at 21% per year for almost 25 years.

After reading a few of their letters, it’s obvious that Turtle Creek has been running the Alpha Theory strategy for 10 years longer than Alpha Theory has existed. Their performance is no surprise given our findings that the most process-oriented managers outperform. 

In Turtle Creek’s Third Quarter Investor Letter, they discuss their portfolio construction and continuous portfolio optimization. I’m going to include a rather long excerpt (slightly edited for brevity) because it is a great explanation of how and why this is the best way to manage a fundamental portfolio. 

Our investment approach comprises four steps: 1. Finding the right kind of companies; 2. Valuation; 3. Portfolio Construction; and, 4. Continuous Portfolio Optimization.  

Our final two steps – the initial sizing of a position (Portfolio Construction) and then adding to or trimming the position in reaction to changing share prices (Continuous Portfolio Optimization) – can be understood as one step. But years ago, we decided to break them into two separate steps in order to better explain our process. By doing so, we can focus people on our initial sizing without the distraction of talking about future fluctuating share prices and how we continuously revise the size of a position. Then, once the logic of how we size a new holding is understood, we can introduce step four: Continuous Portfolio Optimization.  

In explaining Portfolio Construction, we often start with a simple example. Suppose that we only know two companies and that we have built a balanced, long term financial forecast for each. The present value of our forecasted cash flows represents our view of Business Value. If both companies are trading at the same discount to our Business Value estimate, we could buy their shares and, if our forecasts turn out to be correct, we would earn an annual return that is somewhat better than our discount rate – say 15% [this is akin to Alpha Theory Probability Weighted Return]. Next, assume that we know both companies equally well and assess management quality, business risk, etc. to be the same [this is akin to the Alpha Theory Confidence Checklist]. In this simple example, we would invest half of the fund in one company and half in the other, since we expect both to earn a 15% return.  

But of course, no two companies are the same. Some trade at bigger discounts to our view of Business Value than others. Discount to Business Value is one of the largest drivers of our portfolio weightings since tilting towards cheaper companies both minimizes risk and, almost always, enhances expected returns. Away from the discount, we also consider other factors when determining portfolio weightings. For example, every company has a range of future outcomes – some fall within a relatively tight band, while others have a much broader range. In developing our financial forecast, we think about the probability of different future outcomes and consider risk to be the potential downside scenarios from our expected outcome, not the chance that the share price will fluctuate in the short term. So, in our simple example, we might decide that, while the long term expected return is 15% per annum for both companies, by the time we account for other factors – dispersion of future outcomes, relative strength of management, how long we have known the company, etc. – we might initially size one holding at 60% and the other at 40%, rather than 50% for each. Think of it as one company having a better risk-adjusted 15% expected return than the other.  

Of course, we don’t own just two companies; we own 25 to 30 in each of our funds. Things become a lot more complicated, but the ideas behind our Portfolio Construction described above still drive our portfolio weightings.  

And once we have sized a position, we don’t just sit back and simply wait for some stock price target to be reached. The sizing exercise we undertake when adding a position essentially assumes a static portfolio. But portfolios are far from static – stock prices are constantly moving around and so, in a way, we must continuously re-construct the portfolio. This ongoing re-construction process is something we call Continuous Portfolio Optimization (“CPO”). Typically, this process entails selling small amounts of positions that have seen share price appreciation and, in turn, buying small amounts of positions that have seen their share prices fall.  

Let’s return to our two-stock portfolio example. After initially constructing the portfolio, let’s then assume that the share price of one company declines by 10% while the share price of the other rises by 10% (a pretty common occurrence in the stock market, as you know), with no change to our long term view of Business Value for either company. Obviously, the portfolio is imbalanced because now the company with the lower long term expected return (because its share price has risen) is a larger weighting and the company with the higher long term expected return (because its share price has declined) is a smaller weighting. To us, it makes no sense to do nothing and so we would sell some of the lower prospective return position (the one which saw its share price rise 10%) and invest the proceeds in the higher prospective return position (the one which saw its share price fall 10%).  

In our previous commentaries we highlighted some of the differences in our approach to company identification and valuation. But in these two final steps – Portfolio Construction and CPO – the entire approach is different. We are often asked to identify the source of our out-performance. It’s a complex question that proves difficult to answer. Turtle Creek Equity Fund is comprised of only 30 names, so clearly identifying great companies, and avoiding not so great ones, coupled with a well thought out view of value is an important contributor to our returns. Yet our approach of overweighting the most attractive positions, on a continuous basis, is also an important contributor. The thing is, these steps are all inter-connected and heavily dependent on each other. Without a view of value that we have confidence in, we would be unable to initially size a position, nor react to changing prices. And arriving at a well thought out view of value is certainly made easier when you are dealing with highly intelligent, shareholder focused organizations.  

The impact of our CPO shows up whether you look at individual holdings or the portfolio overall. On an individual holding basis, for virtually all of them, the return we have generated exceeds that of a ‘buy and hold’ approach and, over time, the difference becomes larger and larger. This is also the case for the overall portfolio.

While CPO has generated incremental positive returns, we don’t do it to boost performance. We engage in CPO to constantly de-risk the portfolio – to maintain one that has the lowest risk or highest margin of safety. But, of course, this also has the inverse impact of constantly fine tuning the portfolio to have the highest long term expected return. For those of you who are interested in reading a more expansive discussion on risk, we would direct you to our Tao of the Turtle, Risk, A Further Discussion.  

We are often asked “why don’t others do CPO?” The answer is complicated. First off, one must do the first three steps really well: finding the right companies, doing fundamental work to have a confident view of the true intrinsic value of each company and having a logical means of initially sizing individual holdings. That is the foundation that enables us to buy more of a holding at lower prices and, equally, have the comfort to trim the position at higher prices. Then there are factors such as temperament that make CPO difficult to put into practice. Recently, in a meeting with a large U.S. family office, they commented that they have some good investment managers but every time those managers try to ‘trade around’ their positions they find that they actually detract from a buy and hold return. And then they observed that, clearly, we have added value. We explained that we are not ‘trading around’ our holdings; instead, we are simply reacting to other people ‘trading around’ and the share price changes that result.  

Looking forward, as we survey the companies in our portfolio today, we would be very surprised if each share price wasn’t higher (frankly, a lot higher) in five to ten years. Think of us as having high confidence in each share price far into the future. But we have very low confidence as to where the share prices will go in the shorter term. If we are lucky, the path to those higher long term share prices will be uneven with lots of ups and downs so that we will be able to apply Continuous Portfolio Optimization to the benefit of our investors.  

And in their 2013 piece, “The Tao of the Turtle,” Turtle Creek discusses how portfolio construction requires valuation:

This brings us to a key point about portfolio construction: without our Edge 2 – Valuation, it would be impossible to construct an optimal portfolio, or even any logical sort of portfolio. You have to understand the value of each of your investments (how much free cash each investment will generate over time) – and most importantly, the relative value among your investments – before you can go about the process of portfolio construction. It is worth reiterating that we think about valuation probabilistically.  In our recent Tao (on valuation), we provided the visual of a bell curve to show how we think about our intrinsic value estimates – while we use the ‘best estimate’ midpoint for each valuation we recognize that, in an uncertain world, value exists over a range. 

The concept of continuous portfolio optimization is simple; maintaining the discipline to follow it is not. Our clients should be the most likely to follow continuous portfolio optimization because they use our platform, which is designed to drive refinement of the investment process. Still, even they leave over five percent of returns on the table by not more closely following the optimal position sizes generated from their research. Our hope is that seeing Turtle Creek’s 25-year track record, made possible by continuous portfolio optimization, makes sticking to the discipline a little bit easier.
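
To make the two-stock example from Turtle Creek’s letter concrete, here is a minimal sketch with hypothetical numbers; the decision rule (trade toward the higher expected return) is an illustration of the excerpt, not Turtle Creek’s or Alpha Theory’s actual model:

```python
# Minimal sketch of the two-stock Continuous Portfolio Optimization example.

def expected_return(price, business_value):
    """Long-term expected return if the price converges to Business Value."""
    return business_value / price - 1

business_value = 115.0              # present value of forecasted cash flows (same for both)
prices = {"A": 90.0, "B": 110.0}    # after initial 50/50 sizing, A falls 10%, B rises 10%

ers = {name: expected_return(p, business_value) for name, p in prices.items()}
print(ers)                          # A now offers ~27.8% expected return, B only ~4.5%

# CPO reacts to the imbalance: trim the lower-expected-return position (B, which rose)
# and add to the higher-expected-return position (A, which fell).
add_to = max(ers, key=ers.get)
trim = min(ers, key=ers.get)
print(f"Trim {trim}, add to {add_to}")
```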

April 28, 2022

The Alternative Data Podcast: The CenterBook Episode, with Ross Fabricant

 

Our colleague, Ross Fabricant, was recently on “The Alternative Data Podcast” hosted by Mark Fleming-Williams. In this episode, he discusses his data science pedigree, his experience using Alpha Theory at Third Point, and what led him to CenterBook Partners. 

 

Ross describes the benefits of Alpha Theory from a quant’s perspective, why the Alpha Theory data set is unique, and how CenterBook Partners collaborates with funds to help them improve their investment process. 

 

Listen to the episode here

 

Disclaimer: Any communication regarding CenterBook Partners LP (“CenterBook”) is for information purposes only and should not be regarded as advice, a solicitation of an offer to buy or sell any CenterBook financial product, or as an official statement of CenterBook.

 

 

 

March 15, 2022

March Madness Math 2.0

 

I love college basketball and have been a Tarheels fan my whole life. I’m a graduate of UNC-Chapel Hill (home of Michael Jordan) and grew up thirty minutes from campus. So, if you’re anything like me, you’ll spend too much time this week putting together your bracket.

 

In Alpha Theory fashion, I built a systematic approach to filling out my NCAA brackets. I wrote about it 10 years ago, but I’ve since scrapped it because FiveThirtyEight.com has built a better version for me.

 

I was showing some of my colleagues the old NCAA tournament spreadsheet and realized that KenPom.com no longer reports the Pyth statistic. If you’ve read the previous post on my method for creating a bracket, you will see that I used Pyth along with Log5 to calculate win percentage. Apparently, the change happened in 2017, but I’ve been taking the shortcut through FiveThirtyEight, so I hadn’t noticed (see my reasons below).

 

But if you’re curious about how to use KenPom’s new statistic, AdjEM, to calculate win percentage, the rabbit hole awaits below.

 

How to Calculate Win Percentage using KenPom

 

For those curious to understand an updated method to calculate Win Percentage for KenPom, here is an example with formulas:

 

Boise State (8) plays Memphis (9) in the first round. First, calculate the expected point differential (I pulled the formula from Reddit). AdjEM (Adjusted Efficiency Margin) is the number of points you would expect a team to beat the average Division 1 team by over 100 possessions. AdjT (Adjusted Tempo) accounts for tempo, or how many possessions a team actually plays.

 

PointDiff = (AdjEM_A - AdjEM_B)*(AdjT_A + AdjT_B)/200 

PointDiff = (A – B) * (C + D) / 200 – see below

PointDiff = (16.41 – 16.22) * (64.6 + 70.3) / 200 = 0.13 Boise St. point advantage

 

March Madness Update Table 1

 

Then, in Excel, use the =Norm.Dist(x, mean, standard_dev, cumulative) function. In this case, it would be:

 

=Norm.Dist(0.13,0,11,true) = 50.5% probability that Boise St. is going to win. Makes sense that it is a coin flip given that Boise St. is only favored by 0.13 points. Why 11 for the StDev? That’s what KenPom suggests here as the average StDev for NCAA basketball.
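
For those who prefer Python to Excel, here is a minimal sketch of the same calculation using scipy, with the figures from the example above:

```python
# Minimal sketch: win probability from the KenPom-based point differential.
from scipy.stats import norm

def win_probability(adjem_a, adjem_b, adjt_a, adjt_b, stdev=11.0):
    """Probability that team A beats team B, per the formula described above."""
    point_diff = (adjem_a - adjem_b) * (adjt_a + adjt_b) / 200
    return norm.cdf(point_diff, loc=0, scale=stdev)

# Boise State vs. Memphis, numbers from the example
print(f"{win_probability(16.41, 16.22, 64.6, 70.3):.1%}")   # ~50.5%

# A Vegas line of Memphis -2.5 implies ~59% for Memphis with the same 11-point StDev
print(f"{norm.cdf(2.5, 0, 11):.1%}")
```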

 

Why I Use FiveThirtyEight

 

The main reason I use FiveThirtyEight is that it uses multiple sources, and, well, KenPom is just one source. For comparison, Vegas odds have Memphis winning by 2.5 points vs. losing by 0.13 points on KenPom. That’s a 59% probability of Memphis winning using the formula above. FiveThirtyEight has a 54% probability of Memphis winning. I believe that the best answer comes from the wisdom of the crowd.

 

The nice folks at FiveThirtyEight have taken the time to crowdsource many great thinkers, including KenPom and themselves (using their Elo method), into one forecast (see here or the schema below).

 

March Madness Update Picture 2

 

Also, they have this nifty little Excel downloader which makes life so much easier. So, while it was a fun intellectual challenge to build forecasts using Monte Carlo simulations based on KenPom win probabilities, I can’t compete with FiveThirtyEight.

 

We all should strive to constantly improve our process. KenPom improved their process by switching from Pyth to AdjEM. FiveThirtyEight changes their weightings, sources, and adjustments. I realized that I was better served by using an external source versus my internal solution.

 

And in the same way that Ken Pomeroy’s AdjEM is updated after each game played and FiveThirtyEight updates its probabilities, fresh price targets and probabilities in Alpha Theory lead to better sizing recommendations because they reflect your most up-to-date beliefs about the future performance of your investments. Luck is not a strategy. Unlike a Vegas casino, you don’t have a built-in edge. Your edge is your process. Build it. Refine it. And then win.

 

Contact us if you are interested in how we partner with fund managers to define and implement disciplined, repeatable processes that ultimately lead to confident decision making, driving portfolio optimization.

 

February 25, 2022

The 10-Year Streak: Alpha Theory 2021 Year in Review

This article was co-written by Billy Armfield, Data Scientist at Alpha Theory, and Cameron Hight, CEO of Alpha Theory.

As data enthusiasts, we love to dig into numbers and uncover patterns. From examining ten years of Alpha Theory All-Manager data, we agree with Obi-Wan Kenobi: "In my experience, there is no such thing as luck."

 

The best investors know the key to outperforming year-over-year is discipline, process, and consistency. Sure, an investor can get lucky over the short term. But the question that should be asked is whether that investor can continue to outperform over the long term.

 

One thing that is for certain is that Alpha Theory clients continue to outperform! Over the past ten years, Alpha Theory clients have outperformed other hedge funds in seven of those years, by an average of two percent per year. Over the same period, Alpha Theory’s optimal position sizing outperformed clients’ actual return EVERY year, by an average of five percent. That’s ten years in a row!

 

HFRI v Actual v Optimal

 

What does this mean? Our clients are self-selecting, better-than-average managers that could be even better if they more closely followed the optimal models they built in Alpha Theory. In fact, over the past decade, the optimal compound return is twice their actual performance, at 269% vs. 133%, and almost three times that of the average hedge fund, 269%ⁱ vs. 89%. (Sidenote: five percent of additional return per year for ten years roughly doubles the cumulative return. Isn’t compounding amazing?)
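
As a quick sanity check on that sidenote, using the cumulative figures quoted above:

```python
# Quick arithmetic check of the compounding sidenote.
actual_cum, optimal_cum = 1.33, 2.69               # +133% and +269% over ten years
actual_ann = (1 + actual_cum) ** (1 / 10) - 1      # ~8.8% per year
optimal_ann = (1 + optimal_cum) ** (1 / 10) - 1    # ~13.9% per year
print(f"Annualized gap: {optimal_ann - actual_ann:.1%}")     # ~5% per year
print(f"Cumulative: {optimal_cum:.0%} vs {actual_cum:.0%}")  # roughly double
```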

 

What does this look like from an annual return perspective? The graph below shows Equity Hedge Index gross returns (no fees and full leverage), Alpha Theory client returns, and the returns if Alpha Theory clients’ portfolios had been managed systematically according to optimal position sizing. 2021 was the fourth year of the past ten in which clients underperformed the primary Equity Hedge Index on a gross return basis, which does not account for their substantially lower leverage. The return difference between the Equity Hedge Index and Alpha Theory optimal was roughly in line for the year, again with lower leverage.

 

Industry vs Actual vs Optimal Performance (Total Gross Returns)

 

 

Once we normalize for leverage, the differences become more obvious. On average, Alpha Theory clients operate with ~125% leverage vs. the industry average of ~175%, per a 10-year study from Morgan Stanleyⁱⁱ. The chart below is based on Return on Invested Capital (ROIC), with 100% gross exposure per manager, and is an apples-to-apples comparison. On a ROIC basis, Alpha Theory clients outperformed the industry in seven of the past ten years. Alpha Theory’s optimal position sizing outperformed both Alpha Theory clients and the industry in all ten years.

 

Industry vs Actual vs Optimal Performance (No Leverage - Gross ROIC)

 

HOW OFTEN DOES IT WORK?  

 

On average, returns from optimal position sizing have topped returns from actual position sizing every year. But it doesn’t win for every client and every position. If we randomly select a client in a given year, optimal sizing is better 69% of the time. If we randomly select a position in a given year, optimal sizing wins 58% of the time. What we see in the results is the benefit of consistently applying process. The more time spent applying process, the more likely the process is to winⁱⁱⁱ.

  

PROCESS ENHANCES PERFORMANCE  

 

Alpha Theory clients use our platform to reduce the impact of emotion and eliminate the guesswork as they make position sizing decisions. Alpha Theory gives a true ranking of ideas in the portfolio so managers can size them accordingly. It does this with a rules engine that: 

 

    1. Centralizes price targets and archives them in a database

    2. Provides notifications of price target updates and anomalies

    3. Calculates Probability-Weighted Returns (PWR) for assets and the portfolio as a whole

    4. Enhances returns

    5. Mitigates portfolio risk

    6. Saves time

    7. Adds precision and rigor to the sizing process

    8. Incorporates real-time market data (price, liquidity, etc.) into sizing decisions

 

Our clients are a self-selecting cohort who believe in process and discipline; process orientation goes together with the Alpha Theory platform, which serves as a disciplining mechanism to align the best risk/reward ideas with rankings in the portfolio.  

 

There is no question that a sound investment process requires discipline. Below are some of the best lessons for turning process into performance.  

 

START WITH PRICE TARGETS  

 

Alpha Theory research shows that ROIC for assets with price targets is 5.1% higher than for those without price targets. Some investors chafe at price targets because they smack of “false precision.” These investors miss the point: the key to price targets is not their absolute validity but their explicit nature, which allows for objective conversation about the assumptions that went into them. In other words, the requirements of calculating a price target and the questions that price targets foster are central to any good process.

 

Price Targets vs No Price Targets by Annualized ROIC

 

KEEP THE PRICE TARGETS FRESH  

 

Once you establish price targets, keeping them fresh adds 4.9% of ROIC. See below for a chart comparing Fresh vs. Stale Price Targets (stale is defined as older than 90 days).

 

Fresh vs Stale Price Targets by Annualized ROIC 2012-2021

 

In a world of shrinking alpha opportunities, the best way to consistently outperform is by applying a systematic approach to research and sizing. The founding principle of Alpha Theory is that managers can benefit from a systematic sizing process based on their research. The more frequently that research is updated, the higher the probability of making good investment decisions. This has held true historically and was especially important in 2021. Below, we show the annualized ROIC on fresh and stale positions for 2021. Positions with fresh research outperformed positions with stale research by a similar margin, more than two times.

 

Fresh vs Stale Price Targets by Annualized ROIC 2021 Only

 

FINALLY, CREATE A SYSTEMATIC APPROACH TO SIZING POSITIONS  

 

Once you create a research process based on fresh price targets, the next step is to create a systematic process to highlight when positions are out of line with the research. That’s what Alpha Theory does in the form of optimal position sizing. As you can see below, there is a marked improvement in almost every metric with systematic position sizing. Again, this is based on ten years of data across 100+ managers. We can confidently say that the managers using Alpha Theory are great price target forecasters. Still, they could do even better if they more closely followed the system they built in Alpha Theory.  

 

ROIC:Batting:Slugging Actual vs Optimal

 

In the future, finding alpha will not become easier. It is imperative that the funds of the 21st century develop plans to evolve to new realities. Data and process are critical to that evolution. Why? Process and systems can be observed and measured, creating a positive feedback loop in which managers adopt more of the practices associated with high returns and drop those that aren’t. Let Alpha Theory help you and your team grow to meet the challenges of tomorrow.

________________________

ⁱ The optimal trading strategy will only make a trade when the change in OPS is greater than 50 bps from the previous day, and assumes trading costs of the lesser of 1 bps or three cents per trade.

ⁱⁱ Morgan Stanley Hedge Fund Report, Dec 2019.

ⁱⁱⁱ Tennis match simulator from Michael Mauboussin showing the benefit of compounding small edges.

January 27, 2022

The January Effect: Cooling Investor Returns

 

The January Effect is the belief that equities tend to perform well in January, possibly because of tax-loss harvesting in December and the subsequent repurchasing of positions in January. In light of the noticeable absence of any desirable January Effect in 2022, we decided to investigate to what extent the January Effect is visible within the All-Manager Alpha Theory dataset, which dates back to January 1st, 2012. Is this most recent January an outlier?

 

To test, we analyzed average monthly returns for long positions for Alpha Theory (AT) managers against the S&P 500 Index (SPY). For the purpose of this exercise, we put aside the fact that if any excess return existed by sizing up in January, it likely would have long since ceased to be a profitable strategy.
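
A minimal sketch of that comparison in pandas, assuming a hypothetical table of manager-month long returns (the column names are illustrative, not the actual Alpha Theory schema):

```python
# Minimal sketch: average return by calendar month across a panel of monthly returns.
import pandas as pd

def average_return_by_calendar_month(monthly_returns: pd.DataFrame) -> pd.Series:
    """monthly_returns has hypothetical columns ["date", "return"], one row per
    manager-month; returns the mean return for each calendar month (1-12)."""
    months = pd.to_datetime(monthly_returns["date"]).dt.month
    return monthly_returns.groupby(months)["return"].mean().sort_values()

# Usage: run the same calculation on AT manager long returns and on SPY monthly
# returns, then compare where January ranks in each series.
```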

 

We can see from ten years of data that January is not the best-performing month for AT managers. In fact, it is not even in the top half.

 

Average Monthly Return

 

In the table below, January is the 3rd worst-performing month for Alpha Theory managers and the SPY over the past decade.




January performance returns across AT Actual, AT Optimal, and SPY are roughly flat. But we can’t give up yet: is this most recent January perhaps an outlier?

 

Jan Return by Year

 

Indeed, returns for this January, though not completely tallied, are the lowest on record since the beginning of the All-Manager Alpha Theory dataset.

 

The conclusion? January is one of the worst-performing months for Alpha Theory managers and the SPY over the last ten years, and this year, AT managers are seeing the lowest January returns since 2016. Received wisdom, like the January Effect, must always be taken with a grain of salt.

 

Do you have any hypotheses you would like us to investigate? Comment below or reach out to us at Support@AlphaTheory.com. Alpha Theory is committed to helping fund managers think more clearly about the challenges associated with forecasting and asset allocation, including analysis of assumed dogma, like the January Effect.

 

December 29, 2021

A Lack of Freshness Spoils Returns

 

Alpha Theory can’t tell you how to do your research, but it can tell you when. Using insights from the Alpha Theory All-Manager dataset, we can provide guidance on some of the basics in managing your research process. 

 

Managers understand intuitively that producing investment research and updating that research regularly (i.e. freshness) is important. But how frequently? Should I update my research every 60 days? Every two weeks? Do I need to produce scenarios for all my positions?

 

Key conclusions: 

1. Assign price targets and probabilities to every investment

2. Update them once a month

 

To determine the impact of freshness and coverage on returns, we measured the one-year forward return for the optimal long portfolio for each fund in the Alpha Theory All-Manager dataset on a quarterly basis¹. We then put each fund into four buckets based on its average freshness (days since the last update, or DSLU) and coverage (percentage of positions with price targets). Next, we calculated the return of each quartile bucket to see if returns correlated with freshness and coverage.
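
A minimal sketch of the bucketing step, assuming one row per fund-quarter with hypothetical columns (not the actual dataset schema):

```python
# Minimal sketch: average forward alpha by freshness quartile.
import pandas as pd

def alpha_by_freshness_quartile(df: pd.DataFrame) -> pd.Series:
    """df has hypothetical columns ["dslu", "coverage", "fwd_alpha_1y"].
    Lower days-since-last-update means fresher research, so q1 = freshest quartile."""
    quartile = pd.qcut(df["dslu"], 4, labels=["q1", "q2", "q3", "q4"])
    return df.groupby(quartile)["fwd_alpha_1y"].mean()

# The same grouping applied to "coverage" (with labels reversed, since higher
# coverage is better) produces the coverage buckets described above.
```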

 

We found that funds diligent enough to place in the top quartile produced more than four times as much alpha as the bottom quartile, with alpha increasing monotonically from bottom to top. The median update interval for the top quartile was 25 days (once-a-month updates), meaning the top funds updated more than 10x as often as managers in the bottom quartile. Additionally, managers in the top quartile had research on all active positions.

 

A Lack of Freshness Spoils Returns

 

As a fundamental manager, you may argue that very rarely does something meaningful happen every 30 days that warrants a forecast update. We would counter that price is an important signal. For example, let’s say you initiated coverage on a position at $100 with a 70% chance of going to $150 and a 30% chance of going to $50. If the price moves from $100 to $125, wouldn’t you say the probability of reaching your bull target has changed? While $150 may still be the price suggested by your model, updating the probabilities of your scenarios to more accurately reflect likely outcomes allows the OPS model to make better sizing recommendations.
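
Here is the arithmetic behind that example: with the original probabilities held fixed, the probability-weighted return flips from +20% at $100 to roughly -4% at $125, which is exactly when the probabilities (or targets) deserve a fresh look. A minimal sketch:

```python
# Worked example: probability-weighted return (PWR) before and after the price move,
# holding the original scenario probabilities fixed.

def pwr(price, scenarios):
    """scenarios: list of (target_price, probability) pairs."""
    expected_value = sum(target * prob for target, prob in scenarios)
    return expected_value / price - 1

scenarios = [(150, 0.70), (50, 0.30)]                 # bull and bear cases from the example
print(f"At $100: PWR = {pwr(100, scenarios):+.0%}")   # +20%
print(f"At $125: PWR = {pwr(125, scenarios):+.0%}")   # -4% if probabilities never update
```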

 

In addition, Daniel Kahneman’s new book “Noise” describes how the same expert can take the same information and come to different conclusions at different times, and how the best answer is the average of those forecasts. This means that an analyst may come to a different conclusion about price targets and probabilities on a different day, and that constant refinement (updating once a month) is healthy and leads to more accurate forecasts.

 

Finally, research from our friends at Good Judgement Inc. shows that over the past six years, their top forecasters (orange) update roughly 4x as often (11 updates vs. 3 updates per question) as non-Superforecasters. Update frequency has a high correlation with outperformance, and incorporating even small additional bits of information (Superforecaster updates were roughly half the size of non-Superforecasters’) that either support or detract from the probability of a given outcome leads to better results over time.

 

A Lack of Freshness Spoils Returns Chart 2

 

We are always interested in learning more about your research process and where Alpha Theory can help. Alpha Theory is a process enhancement tool, creating a space that systematizes how you conduct and use research for allocation decisions. Please reach out to us with any questions so we can better optimize your workflow to generate more alpha.

 

¹ To normalize for different benchmarks, we calculated alpha on an idio+sector basis using the Axioma World-Wide Equity Factor Risk model, which removes performance derived from all their tracked factors, excluding sector.

 

November 29, 2021

Getting Comfortable with Many, Micro Updates

 

For years we’ve worked closely with the folks at Good Judgement Inc. of “Superforecasting” fame. One of our friends there, Chris Karvetski, recently published a white paper called “Superforecasters: A Decade of Stochastic Dominance” on Superforecasters’ attributes and skills. For the analysis, Chris studied 108 forecast questions with 167,000 forecasts to compare the accuracy and approach of Superforecasters versus everyone else.

 

From an accuracy perspective, Superforecasters dominate with accuracy that is 36% better (0.166 error for Superforecasters versus 0.259 for general forecasters).
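
A note on the arithmetic and the metric: the 36% figure is simply the relative reduction in error, and the error values are consistent with a Brier-style score, though the white paper defines the exact measure. A minimal sketch, with hypothetical forecasts of our own:

```python
# The 36% figure is the relative reduction in error.
print(1 - 0.166 / 0.259)   # ~0.359

# Assuming the error metric is a Brier-style score (an assumption here),
# it is computed like this for binary questions:
def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))   # hypothetical forecasts -> ~0.047
```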

 


 

Alpha Theory clients forecast stock price movement. As such, the question we should ask is “how can we be more like Superforecasters?” Well, Chris broke down the frequency and magnitude of updates and I believe the answer is clear.

 


 

Superforecasters update their forecasts ~4x more often, which leads to adjustments that are about half the size. Imagine steering a boat towards a lighthouse. You can choose to make 3 major adjustments or 11 minor adjustments. Which method is going to get you closer to the lighthouse?

 

As analysts, to gain better forecast accuracy, we should frequently update our price targets and probability forecasts. Obviously, new information warrants updates, but we should still make updates even when there is no new information. As The Verve says, “we’re a million different people from one day to the next.” We all have what Daniel Kahneman calls Occasion Noise, which basically means that we change our opinions without the facts changing. Our mood impacts our forecasts. To get a truer sense of our own opinions, we should ask ourselves the same question at different times.

 

Let’s be like Superforecasters and get comfortable with many, micro updates. In our next blog post, we’ll explore the impact that update frequency has on returns.

 

October 27, 2021

KISS, or How to Make Money by Following Your Research

 

“It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.” - Charlie Munger

 

Successful traders and investors encourage entrants to the field to find an “edge”, ideally a strategy that has not already been widely adopted by other market participants. This has led to the proliferation of esoteric strategies, especially in the quantitative arena. In order to generate alpha in the increasingly competitive asset management industry, you need an army of PhDs, complex strategies, and troves of data, right? Well, not necessarily.

 


 

Analysis of the Alpha Theory dataset shows that if managers simply exited all positions where the probability-weighted return was zero or negative, the average manager’s CAGR would improve by 3%!

 

Alpha Theory managers create a probability-weighted value for each position based on price targets and probabilities for the various scenarios which may play out in the market. In an ideal long scenario, the current market price of a security will increase towards the probability-weighted value. As price and expected value converge, the probability-weighted return drops to zero, and the analyst should either revise price targets upward, trim, or exit the position altogether. If the expected return is zero, Optimal Position Size will recommend exiting the position, as there are other investments with greater expected return.

 

Sometimes, however, managers are slow to update price targets or to reallocate the portfolio to higher expected return investments. We compared the return on invested capital (ROIC, or total return/gross exposure) of managers’ actual portfolios to what ROIC would have been if managers had only been invested in positions with a favorable probability-weighted return. This means a long position would only be in the portfolio if its probability-weighted return was positive, and a short position only if its probability-weighted return was negative.
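
A minimal sketch of that screen, with a hypothetical position structure (not Alpha Theory's actual engine):

```python
# Minimal sketch: keep longs only when their probability-weighted return (PWR)
# is positive and shorts only when it is negative.
from dataclasses import dataclass

@dataclass
class Position:
    ticker: str
    side: str    # "long" or "short"
    pwr: float   # probability-weighted return at the current price

def directionally_justified(p: Position) -> bool:
    return p.pwr > 0 if p.side == "long" else p.pwr < 0

book = [Position("AAA", "long", 0.18), Position("BBB", "long", -0.02),
        Position("CCC", "short", -0.12), Position("DDD", "short", 0.04)]
keep = [p.ticker for p in book if directionally_justified(p)]
print(keep)   # ['AAA', 'CCC'] -- BBB and DDD would be zeroed out
```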

 

The data below shows the improvement in ROIC over actual, first for simply removing positions with negative probability-weighted returns (blue column) and then for Alpha Theory’s Optimal Position Size (gray column), which layers additional sizing logic on top of zeroing out positions with zero or negative probability-weighted return. The sample includes all Alpha Theory clients from January 1st, 2014 to June 30th, 2021.

 

Improvement Over Actual ROIC

Manager portfolios restricted to positions with a directionally favorable probability-weighted return had a 3% higher CAGR, and Optimal Position Size, which uses manager research as well as other portfolio constraints, improved CAGR by 6.7% over actual ROIC.

 

Highly intelligent, sophisticated investors look for ways to improve by default, and the temptation to distinguish oneself with new strategies is intense. But our research suggests that it is more important to focus on the fundamentals. John Wooden’s insight that free throws contribute to national championships also applies to portfolio management. Having high research coverage, updating price targets, and allocating only to positions with favorable expected returns are simple rules which contribute to outperformance, but which are often ignored at the expense of alpha.