
Alpha Theory Blog - News and Insights


September 23, 2022

The Impact of Large Trades: Helping CenterBook Partner Funds Profit from Transaction Cost Analysis

 

This article was co-written by Billy Armfield, Data Scientist at Alpha Theory and Director of Partner Fund Services at CenterBook Partners, Aaron Hirsch, Data Scientist at Alpha Theory and Partner Fund Analytics Manager at CenterBook Partners, and Chris White, Head of Risk and Portfolio Implementation at CenterBook Partners. 

 

Introduction 

 
“You not only are hunted by others, you unknowingly hunt yourself.” 

― Dejan Stojanovic, The Sun Watches the Sun 

 

Few things seem more perplexing to the fundamental equity manager than the hidden costs of execution. They appear hard to predict and arcane in construction, yet they are invariably a frustrating and persistent drag on performance. While managers may follow heuristics for the interplay of trade size, impact, and expected alpha, those heuristics are rarely formalized or systematized. For quantitative managers, by contrast, Transaction Cost Analysis (TCA) lies at the heart of both strategy construction and the execution process and is a well-accepted, broadly appreciated component.

 

At CenterBook Partners LP1 (“CenterBook”), we have partnered with a select group of Alpha Theory clients (called Partner Funds2) to harvest insights from their Alpha Theory data and build systematic investment strategies. We have been working with our Partner Funds to bridge the gap between the quantitative and qualitative worlds by providing insights and tools that help them both understand and profit from TCA.

 

The Alpha Horizon 

 

“A stock operator has to fight a lot of expensive enemies within himself.” 

― Edwin Lefèvre, Reminiscences of a Stock Operator 

 

A concept heavily used in quantitative strategies is that of the “alpha horizon” of a signal and the linked concept of alpha decay. In simple terms, this refers to the tendency of a position to produce positive alpha for a period of time, reach a zenith in terms of performance, and then decay or begin to lose money. 

 

Applying this to the fundamental world, we refer to the “alpha horizon” of a manager’s trades as being the timeline over which the average position generates positive return before peaking and subsequently flatlining or (more commonly) starting to decline. 

 

Defining the alpha horizon for a fundamental manager is a complicated exercise. We must decide whether we look only at new positions (and what constitutes “new”3) or only at the individual trades which make up a position. And how do we measure value creation? For example, do we look at total return, return relative to a benchmark, idiosyncratic return from a risk model, or something else? The underlying data needs to be meticulously cleaned for outliers, corporate actions, dividend payments, and other potential issues.

 

Once this exercise is completed, we can examine the typical value path of a position or a trade for the manager. The shape, steepness, and longevity of this curve are key to assessing the optimum execution strategy, as we will see below. 

 

Trade Impact 

 

“Client doesn’t want to move the price, so stay below 30% of volume.” 

  • Overheard (frequently) on the trading desk in the 90s. 

 

Most fundamental managers understand that there is a cost to executing in the market (over and above commissions and other fees) and that this cost generally scales with size and speed. The more one wishes to buy or sell, and the greater the urgency, the more likely it is that other market participants will identify your intent and attempt to front-run you, and the deeper into the order book you must go to get your fill. This situation has become increasingly acute over the past decade or more with the rise of High Frequency Trading (HFT) strategies. It is generally accepted that a trade greater than 5% of the volume in a security is likely to experience substantial adverse impact from predatory market participants.

 

TCA models vary by provider,4 but they generally incorporate three factors known to correlate with ex-post measures of impact: the volatility of the stock, the bid-ask spread, and the percent of daily volume being executed. The greater any of these three elements, the greater the impact is likely to be.5
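As a rough illustration only, a stylized pre-trade cost model in the spirit of the square-root family can combine these three inputs. The functional form, coefficient, and numbers below are assumptions for exposition, not any provider's actual model:

```python
import math

def estimated_impact_bp(daily_vol_bp: float, spread_bp: float,
                        pct_adv: float, k: float = 1.0) -> float:
    """Stylized pre-trade impact estimate in basis points.

    daily_vol_bp: daily volatility of the stock, in bp
    spread_bp:    quoted bid-ask spread, in bp
    pct_adv:      order size as a fraction of average daily volume
    k:            scaling constant (assumed; calibrated in practice)
    """
    # Half the spread is paid crossing it; impact grows with vol and size.
    return spread_bp / 2 + k * daily_vol_bp * math.sqrt(pct_adv)

# 150bp daily vol, 10bp spread, trading 5% of ADV
print(f"{estimated_impact_bp(150, 10, 0.05):.1f}bp")  # ~38.5bp
```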

 

Given this unavoidable circumstance, what is the optimal way for a manager to execute? At one end of the spectrum, executing in minuscule increments will minimize impact, but at the expense of potentially missing the alpha. At the other end, maximizing the speed and aggression of the trade ensures the position is present for as much of the alpha as possible, but at the very real risk of uneconomically high impact costs. How should we determine the trade-off?

 

Optimal Execution 

 

“Fast is fine, but accuracy is final. You need to take your time in a hurry.” 

  • Wyatt Earp 

 

A robust understanding of both the manager’s alpha curve and the expected impact costs at different speeds and sizes is key to solving this dilemma. If we define the total dollar size of the position which the manager wishes to achieve as X, then we can use our TCA data in combination with the manager’s alpha curve to determine the optimal number of days over which that position should be implemented. For example:

 

  • Manager A typically generates 2bp of alpha every day for the first 10 days of a new position.
  • She wishes to achieve a $10MM position in a stock.
  • Buying over 1 day will create 25bp of negative trade impact, resulting in an estimated net loss on the position of 5bp after 10 days.
  • Buying over 2 days will generate only 20bp of negative trade impact. However, she will only have a $5MM position on day 1 and so will capture only half the expected alpha that day, resulting in a net loss of 1bp after 10 days.
  • Spreading over 5 days will generate only 10bp of trade impact, but at the expense of missing some of the initial alpha (day 1 will have a position of $2MM, day 2 $4MM, etc.), resulting in a gain of 6bp after 10 days.
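A minimal sketch reproducing the arithmetic of this hypothetical (the alpha and impact figures are taken from the example above, not from a real TCA model):

```python
# Reproducing the worked example; numbers come from the text, not a model.
DAILY_ALPHA_BP = 2.0             # alpha per day on a full position
HORIZON_DAYS = 10                # alpha horizon from the example
IMPACT_BP = {1: 25.0, 2: 20.0, 5: 10.0}   # assumed total impact by horizon

def net_alpha_bp(n_days: int) -> float:
    """Net alpha (bp) after 10 days when building the position over n_days."""
    alpha = sum(DAILY_ALPHA_BP * min(d / n_days, 1.0)   # fraction of position held
                for d in range(1, HORIZON_DAYS + 1))
    return alpha - IMPACT_BP[n_days]

for n in (1, 2, 5):
    print(f"{n}-day execution: {net_alpha_bp(n):+.0f}bp")
# -> 1-day: -5bp, 2-day: -1bp, 5-day: +6bp
```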

 

We can define this quantitatively. If: 

  • - X is the dollar size of the required position 
  • - n is the number of days over which we intend to execute X 
  • - A is the vector of daily expected alphas over some horizon 
  • - and M is the estimated impact for a trade of size X/n 

 

Then the vector of position sizes each day d is, 

$$S_d = \begin{cases} \dfrac{d}{n}\,X & \text{for } d < n \\ X & \text{otherwise} \end{cases}$$

 

 Then we are seeking to maximize: 

$$\sum_{d=1}^{n} A_d \cdot S_d \;-\; \sum_{d=1}^{n} M_d$$
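In code, a simple grid search over n is enough to locate the maximizing horizon. The impact function below is a placeholder assumption standing in for actual TCA estimates:

```python
import numpy as np

def optimal_horizon(X, A, impact):
    """Grid-search the execution horizon n that maximizes
    sum_d A_d * S_d - sum_d M_d, with S_d = (d/n) * X for d < n, else X.

    X:      target position size in dollars
    A:      daily expected alphas over the horizon, as fractions
    impact: assumed cost function, impact(clip) -> fractional cost of
            executing $clip in one day (a stand-in for real TCA data)
    """
    H = len(A)
    best_n, best_pnl = 1, -np.inf
    for n in range(1, H + 1):
        S = np.minimum(np.arange(1, H + 1) / n, 1.0) * X  # position path S_d
        pnl = float(np.dot(A, S)) - n * impact(X / n) * (X / n)
        if pnl > best_pnl:
            best_n, best_pnl = n, pnl
    return best_n, best_pnl

# 2bp/day alpha for 10 days; toy square-root cost in the daily clip size
print(optimal_horizon(10e6, [0.0002] * 10,
                      lambda clip: 1e-4 * np.sqrt(clip / 1e6)))
```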

 

Conclusion 

 

Typically, our analysis suggests that most managers would benefit from spreading their execution—especially in large percent-of-volume trades—over more days than they currently do. On average, even the highest conviction/size trades have insufficient alpha in the first few days of their lives to overcome the associated impact. 

  

We reach these insights through our dedicated team of data scientists and execution experts, who apply their decades of experience to analyze, enhance, refine, and implement execution strategies. The conclusions CenterBook Partners reaches as it operates its own business are continually reflected back to the Partner Fund team as we collaborate to refine and enhance all that we do together. We consider ourselves an extension of each Partner Fund’s business, helping to bring quantitative and systematic detail to fundamental investment teams.

  

We look forward to sharing additional research with our Partner Funds in the near future to help them Be Better.

 

Footnotes: 

1 CenterBook is a multi-manager asset management platform based out of Greenwich, CT. Any communication regarding CenterBook is for information purposes only and should not be regarded as advice, a solicitation of an offer to buy or sell any CenterBook financial product, or as an official statement of CenterBook.  

2 Alpha Theory clients who want to learn more are encouraged to reach out to their CX representative. 

3 For example, if a large position is reduced to a few basis points in size as a “holding position” and then scaled back up, is this a new position? 

4 At CenterBook, we leverage external TCA providers as well as our own internal models. 

5 Our work also suggests that liquidity is a correlate. Ceteris paribus, execution in lower liquidity stocks generates a higher impact than execution in higher liquidity ones. We speculate that this is due to the greater presence of hedgers and other natural flows in more liquid securities. 

August 11, 2022

What does it mean to be a good stock picker? Part 2

 

BOTTOM LINE UP FRONT: A slugging percentage above 1.65x demonstrates stock picking skill for an active manager.

 

In the previous post, we explained that comparing an active manager to a randomized version of themselves is the best way to measure stock selection skill. We discussed why constructing our random portfolio using absolute returns or benchmark-adjusted returns was flawed, which led us to an equal-weighted benchmark (equivalent to a monkey throwing darts in a more normalized time period*). We started with Batting Average and will continue our analysis with Slugging Percentage.

 

Slugging Percentage

 

Slugging percentage is the average gain on winning positions divided by the average loss on losing positions. If I have a batting average of 50% and a slugging of 1.0x, then my fund will generate a 0.0% return (50% of stocks make +20% and 50% make -20%). Anything greater on either of those metrics and the returns turn positive. If you can find winners that go up twice as much as your losers go down, you need only a 33.3% batting average to break even at 0.0%.
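The breakeven batting average falls out of a one-line identity: with slugging s and average loss L, expected return is zero when b·s·L = (1−b)·L, i.e. b = 1/(1+s). A quick sketch:

```python
def breakeven_batting(slugging: float) -> float:
    """Batting average at which expected return is zero:
    b * slugging * L = (1 - b) * L  =>  b = 1 / (1 + slugging)."""
    return 1.0 / (1.0 + slugging)

print(breakeven_batting(1.0))   # 0.50  (the 50% / 1.0x case above)
print(breakeven_batting(2.0))   # 0.333 (winners 2x losers -> 33.3%)
print(breakeven_batting(1.65))  # 0.377 (~the randomized 37.8% batting)
```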

 

To demonstrate skill, a manager needs a slugging percentage* above 1.65x. This may seem high, but the market demonstrates persistent positive skew (a straightforward way to think of positive skew is that more stocks go up more than 100% than go down more than 100%). This positive skew is also one of the major reasons that the batting average of the randomized portfolio we calculated in the previous post, 37.8%, seems so low.

  

If a manager has a 40% batting average (better than 37.8%) and 1.8x slugging percentage (better than 1.65x), they are demonstrating skill in two categories: both in picking winners and picking winners that go up much more than their losers. This is a good start, but how good is 40% and 1.8x?  

 

The next step would be to Monte Carlo random portfolios and determine the probability of getting a 40% batting and 1.8x slugging portfolio. In fact, the ideal method (sketched in code after this list) would be to: 

  • Match the manager's time period and industry; 
  • Randomly select longs at the ratio of manager long positions; 
  • Randomly select shorts at the ratio of manager short positions; 
  • Randomly assign weights to Monte-Carlo’d portfolios using position size ranges; 
  • Assess the gross exposure of the individual manager; and 
  • Build a distribution of portfolios with those standardized characteristics to determine the probability of achieving a similar portfolio.
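A stripped-down version of that procedure, ignoring the weight and exposure matching for brevity (the universe, position counts, and thresholds below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_portfolio_stats(universe_returns, n_long, n_short, n_sims=10_000):
    """Distribution of batting/slugging for random portfolios drawn from a
    universe of per-position index-adjusted returns. Inputs are hypothetical;
    a full version would also match weights and gross exposure."""
    universe = np.asarray(universe_returns)
    batting, slugging = np.empty(n_sims), np.empty(n_sims)
    for i in range(n_sims):
        longs = rng.choice(universe, n_long, replace=False)
        shorts = -rng.choice(universe, n_short, replace=False)  # short P&L flips sign
        pnl = np.concatenate([longs, shorts])
        wins, losses = pnl[pnl > 0], pnl[pnl < 0]  # assumes both are non-empty
        batting[i] = wins.size / pnl.size
        slugging[i] = wins.mean() / abs(losses.mean())
    return batting, slugging

# e.g., the share of random portfolios with batting >= a manager's 40%:
# batting, _ = random_portfolio_stats(universe_returns, 50, 30)
# print((batting >= 0.40).mean())
```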
     

With that, you can perform significance tests and take samples from different periods to determine the persistence of skill.

  

We may explore this concept in the future, but in the interim, the simple approach above is a shortcut method to help elucidate the idea of comparing managers to random as a way of measuring manager skill. 

 

 *SLUGGING PERCENTAGE FORMULA: the average of (return of stock − return of the average stock in the index) across winning positions, divided by the average of (return of stock − return of the average stock in the index) across losing positions. 

 

THE RANDOMNESS EQUATION: Equal-Weighted Batting Average of ACWI × Slugging Percentage of ACWI + (1 − Equal-Weighted Batting Average of ACWI) × Denominator of Slugging Percentage of ACWI = 0 

37.8% × 165% + 62.2% × −100% = 0.0% Return – The Randomness Equation 

 

July 28, 2022

What does it mean to be a good stock picker?

 

Bottom Line Up Front: A batting average above 38% demonstrates stock picking skill for an active manager. 

 

Defining skill in investing is challenging, so it is no surprise that measuring skill is even harder. While we strongly believe there is an inherent skill for many “stock pickers,” how do we measure that belief? What if we compare them to the “blindfolded monkey throwing darts?” (Sidenote: unless monkeys can read, I think the blindfold is superfluous). Said another way, can the manager beat a randomly constructed portfolio? 

 

In this experiment, we will measure skill using two metrics: batting average and slugging percentage (Batting = # of positions that make money / total # of positions | Slugging = average percent gain of winning positions / average percent loss of losing positions).
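For concreteness, a minimal sketch computing both metrics from a list of per-position returns (the sample numbers are made up):

```python
import numpy as np

def batting_and_slugging(returns):
    """Batting average and slugging percentage from per-position returns
    (sample inputs below are made up; returns may be index-adjusted)."""
    r = np.asarray(returns)
    wins, losses = r[r > 0], r[r < 0]
    return len(wins) / len(r), wins.mean() / abs(losses.mean())

print(batting_and_slugging([0.20, 0.35, 0.05, -0.10, -0.15]))  # (0.6, 1.6)
```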

 

Important Note: Demonstrating skill does not mean beating the market. It also does not mean making money. I know, weird. You can be better than the monkey throwing darts and still lose to the market because indices do not equal-weight the stocks they contain.

 

Batting Average 

 

ABSOLUTE BATTING AVERAGE. Let us start by building the comparison portfolio from the ACWI (All-Country World Index: ~3,000 global stocks, ~70% US). To demonstrate skill, a manager should be able to “beat” random. Over the past five years, the ACWI has had a 56.4% batting average (if you randomly picked 100 stocks from the ACWI, about 56 of them had a return greater than 0%).

 

INDEX-ADJUSTED BATTING AVERAGE. You could say that during the past five years, a manager should have made money on 56% or more of their long positions. The problem with looking at it this way is that some five-year periods are more positive than others. A potentially better approach is to measure the number of stocks that beat the ACWI. The ACWI was up +40.9% during this period. Once we subtract that return, only 27.1% of stocks beat the ACWI. To assess a manager, we would simply measure the percentage of their stocks up more than the ACWI during the holding period. If that figure was higher than 27.1%, they were demonstrating skill.

 

EQUAL-WEIGHTED BATTING AVERAGE. The problem is that over the last five years the largest returns have come from the biggest index weights (large caps outperforming small caps). To get a random selection of the ACWI (like a monkey throwing darts), we should equal-weight the ACWI. Equal-weighted, the ACWI is up 21.7%. This more reasonable version results in a batting average of 37.8%.

 

From this, we would argue that a batting average above 37.8% demonstrates skill; the further above, the more skill demonstrated. Of course, there is luck involved, and untangling skill from luck is difficult, but over time we can look for persistence of skill in picking winners and losers with this metric*.

 

So far, the only skill measured is that of picking stocks that go up. In the next post, we will measure the skill of picking the stocks that go up the most and avoiding those that go down the most (Slugging – position sizing skill). 

June 24, 2022

8 Data Science Resources for Investment Management Professionals

 

Investment managers are increasingly interested in using data science and artificial intelligence to improve their processes and outcomes. Recently, a few clients have asked our team how they can sharpen their skills in the subject.  

 

The problem is that most entry-level data science material is not very useful for finance, and the material useful for finance is not entry-level by any means. Not to worry, our team has shared their top books and articles for investment professionals eager to learn about solving problems with data. Read on for a list of our top eight picks. 

 

Algorithms to Live By: The Computer Science of Human Decisions 

 

Authors Brian Christian and Tom Griffiths show how algorithms developed for computers also untangle very human questions by explaining, in layman’s terms, algorithmic solutions for real-world decision making. If you like problem-solving and decision theory, you’ll love this book. 

 

Recommended by Cameron Hight, Alpha Theory CEO 

 

Big Data: A Revolution That Will Transform How We Live, Work, and Think 

 

Viktor Mayer-Schönberger and Kenneth Cukier, two leading experts in data science, wrote this non-technical book that discusses what big data is, how it will change our lives, and what we can do to protect ourselves from its hazards. It’s a great place to start for those who wish to get into a data-oriented mindset but do not have direct experience.

 

Recommended by Aaron Hirsch, Data Scientist at Alpha Theory 

 

CRISP-DM – a Standard Methodology to Ensure a Good Outcome 

 

CRISP-DM is a framework for applying data science to business problems. This article gives a solid explanation of how to approach a project before getting started. For those beginning to practice data science, it will save time by helping them avoid rabbit holes.

 

Recommended by Billy Armfield, Data Scientist at Alpha Theory 

 

The 7 Steps of Machine Learning 

 

This article, written by a Google developer, outlines in broad strokes the steps in a typical machine learning problem. It walks through a basic example to describe the process of getting answers from data using machine learning. Readers will gain a foundational framework for thinking through the problem and the language to articulate each step.

 

Recommended by Billy Armfield, Data Scientist at Alpha Theory 

 

Machine Learning: An Applied Mathematics Introduction by Paul Wilmott 

 

This slim book by uber-quant Paul Wilmott gives clear and detailed explanations of the machine learning models most used in quant finance, along with pointers to further reading. While the book assumes basic calculus and linear algebra skills, it is an approachable resource for those who desire a deeper understanding of machine learning models without dense textbook reading. 

 

Recommended by Ross Fabricant, Director of Data Science at CenterBook Partners 

 

Statistical Methods for Machine Learning: Learn How to Transform Data into Knowledge with Python 

 

Machine learning specialist Jason Brownlee provides a thorough hands-on introduction to statistics and hypothesis testing with step-by-step instructions through Python-based projects. The book builds a solid foundation for future discovery and assumes little prior knowledge of statistics and coding. 

 

Recommended by Chris White, Head of Portfolio Implementation & Risk at CenterBook Partners 

 

Machine Learning Mastery with Python: Understand Your Data, Create Accurate Models, and Work Projects End-to-End 

 

Also by Jason Brownlee, this step-by-step guide helps the reader master foundational techniques in machine learning, using Python with scikit-learn, pandas, TensorFlow, and other helpful libraries. It is written in an engaging and accessible style, without assuming much prior knowledge.

 

Recommended by Chris White, Asia CEO & Head of Risk & Portfolio Implementation at CenterBook Partners 

 

An Introduction to the Bootstrap 

 

Bradley Efron and Robert J. Tibshirani arm scientists and engineers with computational techniques to analyze and understand complicated data sets, without relying on an understanding of advanced mathematics. But be warned: this dense academic textbook is no-nonsense. Fancy charts and descriptions of tooling are few and far between.

 

Recommended by Ake Kullenberg, Head of Execution Trading at CenterBook Partners 

 

Are there any books that have been helpful to you as you’ve begun learning about data science? We’d love to know.

 

Get in Touch with Alpha Theory 

 

If you have questions about the resources mentioned above, our in-house data science team, or our leading portfolio construction platform and services for investment managers, please do not hesitate to reach out.  

 

Any communication regarding CenterBook Partners LP (“CenterBook”) is for information purposes only and should not be regarded as advice, a solicitation of an offer to buy or sell any CenterBook financial product, or as an official statement of CenterBook. 

April 28, 2022

The Alternative Data Podcast: The CenterBook Episode, with Ross Fabricant

 

Our colleague, Ross Fabricant, was recently on “The Alternative Data Podcast” hosted by Mark Fleming-Williams. In this episode, he discusses his data science pedigree, his experience using Alpha Theory at Third Point, and what led him to CenterBook Partners. 

 

Ross describes the benefits of Alpha Theory from a quant’s perspective, why the Alpha Theory data set is unique, and how CenterBook Partners collaborates with funds to help them improve their investment process. 

 

Listen to the episode here.

 

Disclaimer: Any communication regarding CenterBook Partners LP (“CenterBook”) is for information purposes only and should not be regarded as advice, a solicitation of an offer to buy or sell any CenterBook financial product, or as an official statement of CenterBook.

 

 

 

March 15, 2022

March Madness Math 2.0

 

I love college basketball and have been a Tar Heels fan my whole life. I’m a graduate of UNC-Chapel Hill (home of Michael Jordan) and grew up thirty minutes from campus. So, if you’re anything like me, you’ll spend too much time this week putting together your bracket.

 

In Alpha Theory fashion, I built a systematic approach to filling out my NCAA bracket. I wrote about it 10 years ago, but I’ve since scrapped it because FiveThirtyEight.com has built a better version for me.

 

I was showing some of my colleagues the old NCAA tournament spreadsheet and realized that KenPom.com no longer reports the Pyth statistic. If you’ve read the previous post on my method for creating a bracket, you will see that I used Pyth along with Log5 to calculate win percentage. Apparently, the change happened in 2017, but I’ve been taking the shortcut through FiveThirtyEight, so I hadn’t noticed (see my reasons below).

 

But if you’re curious about the rabbit hole I went down to use KenPom’s new statistic, AdjEM, to calculate Win Percentage, the rabbit hole awaits below.

 

How to Calculate Win Percentage using KenPom

 

For those curious to understand an updated method to calculate Win Percentage for KenPom, here is an example with formulas:

 

Boise State (8) plays Memphis (9) in the first round. First, calculate the expected point differential (I pulled the formula from Reddit). AdjEM (Adjusted Efficiency Margin) is the number of points you would expect a team to beat the average Division 1 team by over 100 possessions. AdjT (Adjusted Tempo) accounts for tempo, or how many possessions a team actually plays.

 

PointDiff = (AdjEM_A - AdjEM_B)*(AdjT_A + AdjT_B)/200 

PointDiff = (A – B) * (C + D) / 200 – see below

PointDiff = (16.41 – 16.22) * (64.6 + 70.3) / 200 = 0.13 Boise St. point advantage

 

[Table: March Madness Update Table 1]

 

Then, in Excel, use the =NORM.DIST(x, mean, st_dev, cumulative) function. In this case, it would be:

 

=NORM.DIST(0.13, 0, 11, TRUE) = 50.5% probability that Boise St. wins. A coin flip makes sense given that Boise St. is favored by only 0.13 points. Why 11 for the StDev? That’s what KenPom suggests here as the average standard deviation for NCAA basketball games.
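The same calculation outside Excel, using scipy and the inputs from the example above:

```python
from scipy.stats import norm

def win_probability(adj_em_a, adj_em_b, adj_t_a, adj_t_b, stdev=11.0):
    """KenPom-style win probability for team A (stdev=11 per KenPom)."""
    point_diff = (adj_em_a - adj_em_b) * (adj_t_a + adj_t_b) / 200
    return norm.cdf(point_diff, loc=0, scale=stdev)

print(win_probability(16.41, 16.22, 64.6, 70.3))  # ~0.505 for Boise St.
```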

 

Why I Use FiveThirtyEight

 

The main reason I use FiveThirtyEight is that it draws on multiple sources, and KenPom is just one source. For comparison, Vegas odds have Memphis winning by 2.5 points versus losing by 0.13 points on KenPom. That’s a 59% probability of Memphis winning using the formula above, while FiveThirtyEight has a 54% probability of Memphis winning. I believe the best answer comes from the wisdom of the crowd.

 

The nice folks at FiveThirtyEight have taken the time to crowdsource many great thinkers, including KenPom and themselves (using their Elo method), into one forecast (see here or the schema below).

 

[Image: March Madness Update Picture 2]

 

Also, they have this nifty little Excel downloader, which makes life so much easier. So, while it was a fun intellectual challenge to build forecasts using Monte Carlo simulations based on KenPom win probabilities, I can’t compete with FiveThirtyEight.

 

We all should strive to constantly improve our process. KenPom improved their process by switching from Pyth to AdjEM. FiveThirtyEight changes their weightings, sources, and adjustments. I realized that I was better served by using an external source versus my internal solution.

 

And, in the same way Ken Pomeroy’s AdjEM is updated after each game played, or FiveThirtyEight updates probabilities, fresh price targets and probabilities in Alpha Theory lead to better sizing recommendations because they reflect your most up-to-date beliefs about the future performance of your investments. Luck is not a strategy. Unlike a Vegas casino, you don’t have a built-in edge. Your edge is your process. Build it. Refine it. And then win.

 

Contact us if you are interested in how we partner with fund managers to define and implement disciplined, repeatable processes that ultimately lead to confident decision making, driving portfolio optimization.

 

December 29, 2021

A Lack of Freshness Spoils Returns

 

Alpha Theory can’t tell you how to do your research, but it can tell you when. Using insights from the Alpha Theory All-Manager dataset, we can provide guidance on some of the basics in managing your research process. 

 

Managers understand intuitively that producing investment research and updating that research regularly (i.e. freshness) is important. But how frequently? Should I update my research every 60 days? Every two weeks? Do I need to produce scenarios for all my positions?

 

Key conclusions: 

1. Assign price targets and probabilities to every investment

2. Update them once a month

 

To determine the impact of freshness and coverage on returns, we measured the one-year forward return of the optimal long portfolio for each fund in the Alpha Theory All-Manager dataset on a quarterly basis1. We then put each fund into four buckets based on its average freshness (days since last update, or DSLU) and coverage (percentage of positions with price targets). Next, we calculated the return of each quartile bucket to see whether returns correlated with freshness and coverage.
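In pandas terms, the bucketing looks roughly like the sketch below; the column names are illustrative, not the actual dataset schema:

```python
import pandas as pd

def alpha_by_quartile(df: pd.DataFrame, metric: str) -> pd.Series:
    """Average 1-year forward alpha by quartile of `metric`.

    Assumes one row per fund-quarter with columns like 'dslu',
    'coverage', and 'fwd_alpha_1y' (illustrative names, not the
    actual schema). For 'dslu', lower is better, so Q1 is freshest.
    """
    quartile = pd.qcut(df[metric], 4, labels=["Q1", "Q2", "Q3", "Q4"])
    return df.groupby(quartile, observed=True)["fwd_alpha_1y"].mean()

# alpha_by_quartile(funds, "dslu"); alpha_by_quartile(funds, "coverage")
```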

 

We found that funds diligent enough to place in the top quartile produced more than four times as much alpha as those in the bottom quartile, with alpha increasing monotonically from bottom to top. The median update frequency for the top quartile was 25 days (once-a-month updates), meaning the top funds updated more than 10x as often as managers in the bottom quartile. Additionally, managers in the top quartile had research on all active positions.

 

[Chart: A Lack of Freshness Spoils Returns]

 

As a fundamental manager, you may argue that something meaningful rarely happens every 30 days that warrants a forecast update. We would counter that price itself is an important signal. For example, let’s say you initiated coverage on a position at $100 with a 70% chance of going to $150 and a 30% chance of going to $50. If the price moves from $100 to $125, wouldn’t you say the probability of reaching your bull target has changed? While $150 may still be the price suggested by your model, updating the probabilities of your scenarios to more accurately reflect likely outcomes allows the OPS model to make better sizing recommendations.
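Concretely, the probability-weighted value of that example position moves as the scenario probabilities are refreshed (the updated 85/15 split below is an assumed illustration, not a rule):

```python
def probability_weighted_value(scenarios):
    """scenarios: list of (probability, price_target) pairs."""
    return sum(p * target for p, target in scenarios)

# At initiation ($100 stock): 70% bull to $150, 30% bear to $50
print(probability_weighted_value([(0.70, 150), (0.30, 50)]))  # 120.0
# After the move to $125, the bull case is arguably more likely
print(probability_weighted_value([(0.85, 150), (0.15, 50)]))  # 135.0
```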

 

In addition, Daniel Kahneman’s new book “Noise” describes how the same expert can take the same information and come to different conclusions at different times, and how the best answer is the average of those forecasts. This means that an analyst may come to a different price target and probability on a different day, and that constant refinement (updating once a month) is healthy and leads to more accurate forecasts.

 

Finally, research from our friends at Good Judgement Inc. shows that over the past six years, their top forecasters (orange) updated roughly 4x as often (11 updates vs. 3 updates per question) as non-Superforecasters. Update frequency has a high correlation with outperformance, and incorporating even small additional bits of information (Superforecaster updates were roughly half the size of non-Superforecasters’) that support or detract from the probability of a given outcome leads to better results over time.

 

[Chart: A Lack of Freshness Spoils Returns Chart 2]

 

We are always interested in learning more about your research process and where Alpha Theory can help. Alpha Theory is a process enhancement tool, creating a space that systematizes how you conduct and use research for allocation decisions. Please reach out to us with any questions so we can better optimize your workflow to generate more alpha.

 

1To normalize for different benchmarks, we calculated alpha on an idio+sector basis using the Axioma World-Wide Equity Factor Risk model, which removes performance derived from all their tracked factors, excluding sector. 

 

November 29, 2021

Getting Comfortable with Many, Micro Updates

 

For years we’ve worked closely with the folks at Good Judgement Inc. of “Superforecasting” fame. One of our friends there, Chris Karvetski, recently published a white paper on Superforecasters’ attributes and skills called “Superforecasters: A Decade of Stochastic Dominance.” For the analysis, Chris studied 108 forecast questions with 167,000 forecasts to compare the accuracy and approach of Superforecasters against everyone else.

 

From an accuracy perspective, Superforecasters dominate, with accuracy that is 36% better (0.166 error for Superforecasters versus 0.259 for general forecasters).

 

[Chart: Superforecaster vs. general forecaster accuracy]

 

Alpha Theory clients forecast stock price movement. As such, the question we should ask is “how can we be more like Superforecasters?” Well, Chris broke down the frequency and magnitude of updates and I believe the answer is clear.

 

[Charts: frequency and magnitude of forecast updates]

 

Superforecasters update their forecasts ~4x more often, which leads to adjustments that are about half the size. Imagine steering a boat toward a lighthouse. You can choose to make 3 major adjustments or 11 minor adjustments. Which method is going to get you closer to the lighthouse?

 

As analysts, to gain better forecast accuracy, we should frequently update our price targets and probability forecasts. Obviously, new information warrants updates, but we should make updates even when there is no new information. As The Verve says, “we’re a million different people from one day to the next.” We all have what Daniel Kahneman calls Occasion Noise, which basically means that we change our opinions without the facts changing. Our mood impacts our forecasts. To get a truer sense of our own opinions, we should ask ourselves the same question at different times.

 

Let’s be like Superforecasters and get comfortable with many, micro updates. In our next blog post, we’ll explore the impact that update frequency has on returns.

 

October 27, 2021

KISS, or How to Make Money by Following Your Research

 

“It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.” - Charlie Munger

 

Successful traders and investors encourage entrants to the field to find an “edge”, ideally a strategy that has not already been widely adopted by other market participants. This has led to a proliferation of esoteric strategies, especially in the quantitative arena. To generate alpha in the increasingly competitive asset management industry, you need an army of PhDs, complex strategies, and troves of data, right? Well, not necessarily.

 

[Chart: KISS_Sep2020]

 

Analysis of the Alpha Theory dataset shows that if managers simply exit all positions where probability-weighted return is zero or negative, the average manager’s CAGR would improve by 3%!

 

Alpha Theory managers create a probability-weighted value for each position based on price targets and probabilities for the various scenarios that may play out in the market. In an ideal long scenario, the current market price of a security will increase toward the probability-weighted value. As price and expected value converge, probability-weighted return drops to zero, and the analyst should either revise price targets upward, trim, or exit the position altogether. If expected return is zero, Optimal Position Size will recommend exiting the position, as there are other investments with greater expected return.

 

Sometimes, however, managers are slow to update price targets, or to reallocate the portfolio to higher expected return investments. We compared the return on invested capital (ROIC or total return/gross exposure) of the manager’s actual portfolios to what ROIC would have been if managers were only invested in positive probability-weighted return positions. This means a long position would only be in the portfolio if the probability-weighted return was positive, and a short position only if the probability-weighted return was negative.
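The screening rule itself is one line of logic; a hedged sketch with hypothetical positions:

```python
# The rule described above: hold a long only if its probability-weighted
# return (PWR) is positive, a short only if its PWR is negative.
positions = [  # (ticker, direction, PWR) - hypothetical examples
    ("AAA", "long",  +0.15),
    ("BBB", "long",  -0.02),  # exited under the rule
    ("CCC", "short", -0.10),
    ("DDD", "short", +0.04),  # covered under the rule
]

kept = [name for name, side, pwr in positions
        if (side == "long" and pwr > 0) or (side == "short" and pwr < 0)]
print(kept)  # ['AAA', 'CCC']
```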

 

The data below shows the improvement in ROIC over actual for simply removing positions with negative probability-weighted returns (blue column) and for Alpha Theory’s Optimal Position Size (gray column), which layers additional sizing logic on top of zeroing out positions with no positive probability-weighted return. The sample includes all Alpha Theory clients from January 1st, 2014 to June 30th, 2021.

 

[Chart: Improvement Over Actual ROIC]

Manager portfolios holding only the positions whose probability-weighted return matched their direction had a 3% higher CAGR, and Optimal Position Size, which uses manager research as well as other portfolio constraints, improved CAGR by 6.7% over actual ROIC.

 

Highly intelligent, sophisticated investors look for ways to improve by default, and the temptation to distinguish oneself with new strategies is intense. But our research suggests that it is more important to focus on the fundamentals. John Wooden’s insight that free throws contribute to national championships also applies to portfolio management. Having high research coverage, updating price targets, and being allocated to positive returns are simple rules which contribute to outperformance, but which are often ignored at the expense of alpha.

 

September 29, 2021

The Cost of Volatility – The Path Dependency of Returns

 

In a recent analysis, we were comparing the volatility of a return stream on a daily and monthly basis. We all know that if a portfolio goes down by 10%, it must be up more than 10% to get back to even (11.11% to be exact). This path dependency can make the differences in return and volatility stark. To understand where the differences arise, we compared the outcomes of a perfectly stable return stream (0% volatility) to those of increasing levels of volatility.

 

[Chart: Cost of volatility]

 

The starting point is an ideal 20% return (blue line) over the course of a year (252 days). The most efficient way to create that return is to generate 0.072% of return per day. Any additional volatility, with the same average of 0.072% return, negatively impacts total return. To show the impact, we simulated six return streams that all had an average of 0.072% daily returns but with varying daily volatilities between 0% and 5% (5% daily volatility is 79% annualized volatility).

 

For example, with 1% daily volatility (orange line), the daily return flips between 1.071% and -0.926% (average of 0.072%). For 5% volatility (green line), the daily returns flip between 5.066% and -4.922% (also a 0.072% average).
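A short simulation reproduces the effect. The alternating-return construction below approximates the flip-flop streams described above (exact daily values differ slightly from the figures in the text):

```python
import numpy as np

def terminal_return(mean_daily=0.00072, daily_vol=0.01, days=252):
    """Compound a stream alternating mean+vol, mean-vol each day,
    so the arithmetic mean stays ~0.072% while volatility varies."""
    daily = np.full(days, mean_daily)
    daily[0::2] += daily_vol
    daily[1::2] -= daily_vol
    return np.prod(1.0 + daily) - 1.0

for vol in (0.0, 0.01, 0.05):
    print(f"{vol:.0%} daily vol -> {terminal_return(daily_vol=vol):+.1%}")
# 0% vol -> ~+19.9%; 1% -> ~+18.4%; 5% -> ~-12.5% (volatility drag)
```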

 

Volatility is a cost that requires a higher average return to get to the same goal. Ex-post (after the goal is reached), if you achieve a 20% return from two assets, you do not care about the path (volatility) that led to the 20% return. On the other hand, ex-ante (before the bet is made), it is harder to get to a 20% return with a 5% vol than a 1% vol.

 

The compounding impact of volatility is difficult to conceptualize but is important to understand when making investment decisions. This graphic is hopefully a helpful tool to remember how volatility causes us to raise the return bar for an asset.

 

As a life-long fundamental investor, I find acknowledging volatility as a risk a tough pill to swallow. In my career as an analyst, I made forecasts over long periods of time, and I did not care about the path, just the outcome. However, before the bet is made, if I have two assets with the same probability-weighted return of 20% and one has a volatility of 10% and the other 30%, should I be indifferent, or should I factor that into how I size my bet? And if so, how? These are interesting questions that we’ll continue to explore in future posts.