
SYSTEM REQUIREMENTS


Please note the following system requirements. In addition, please limit the number of open applications (particularly price-streaming applications) while logged in to Alpha Theory™.


Recommended System Specifications
Processor: Dual-core or quad-core, 2.4 GHz or faster
RAM: 4GB+
Browser: Google Chrome 30+
Screen Resolution: 1280 x 1024 or greater
Internet Access: Business Class High-Speed


Minimum System Requirements
Processor: Intel Pentium M 2.0 GHz or equivalent
RAM: 2GB+
Browser: Google Chrome, Mozilla Firefox, Internet Explorer 9+ (without Compatibility View), Safari
Screen Resolution: 1024 x 768 or greater
Internet Access: High-Speed


Alpha Theory Blog - News and Insights

101 posts categorized "Portfolio Optimization"

April 28, 2022

The Alternative Data Podcast: The CenterBook Episode, with Ross Fabricant

 

Our colleague, Ross Fabricant, was recently on “The Alternative Data Podcast” hosted by Mark Fleming-Williams. In this episode, he discusses his data science pedigree, his experience using Alpha Theory at Third Point, and what led him to CenterBook Partners. 

 

Ross describes the benefits of Alpha Theory from a quant’s perspective, why the Alpha Theory data set is unique, and how CenterBook Partners collaborates with funds to help them improve their investment process. 

 

Listen to the episode here

 

Disclaimer: Any communication regarding CenterBook Partners LP (“CenterBook”) is for information purposes only and should not be regarded as advice, a solicitation of an offer to buy or sell any CenterBook financial product, or as an official statement of CenterBook.

 

 

 

December 29, 2021

A Lack of Freshness Spoils Returns

 

Alpha Theory can’t tell you how to do your research, but it can tell you when. Using insights from the Alpha Theory All-Manager dataset, we can provide guidance on some of the basics in managing your research process. 

 

Managers understand intuitively that producing investment research and updating that research regularly (i.e. freshness) is important. But how frequently? Should I update my research every 60 days? Every two weeks? Do I need to produce scenarios for all my positions?

 

Key conclusions: 

1. Assign price targets and probabilities to every investment

2. Update them once a month

 

To determine the impact of freshness and coverage on returns, we measured the one-year forward return of the optimal long portfolio for each fund in the Alpha Theory All-Manager dataset on a quarterly basis.¹ We then put each fund into four buckets based on its average freshness (days since the last update, or DSLU) and coverage (percentage of positions with price targets). Next, we calculated the return of each quartiled bucket to see if returns correlated with freshness and coverage.
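
For the mechanically inclined, the bucketing boils down to a few lines of pandas. This is a simplified sketch, not our production code, and the column names (`fund`, `dslu`, `coverage`, `fwd_alpha`) are hypothetical:

```python
import pandas as pd

# Hypothetical fund-quarter panel: one row per fund per quarter with its
# average days-since-last-update (dslu), price-target coverage, and the
# one-year forward alpha of its optimal long portfolio.
df = pd.read_csv("fund_quarters.csv")  # columns: fund, quarter, dslu, coverage, fwd_alpha

# Lower DSLU means fresher research, so negate it before cutting into quartiles.
df["freshness_q"] = pd.qcut(-df["dslu"], 4, labels=[1, 2, 3, 4])    # 4 = freshest
df["coverage_q"] = pd.qcut(df["coverage"], 4, labels=[1, 2, 3, 4])  # 4 = best coverage

# Average forward alpha per bucket: if freshness and coverage matter,
# returns should increase monotonically from bucket 1 to bucket 4.
print(df.groupby("freshness_q")["fwd_alpha"].mean())
print(df.groupby("coverage_q")["fwd_alpha"].mean())
```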

 

We found that funds diligent enough to place in the top quartile produced more than four times as much alpha as those in the bottom quartile, with alpha increasing monotonically from bottom to top. The median update frequency for the top quartile was 25 days (once-a-month updates), meaning the top funds updated more than 10x as often as managers in the bottom quartile. Additionally, managers in the top quartile had research on all active positions.

 

[Chart 1: A Lack of Freshness Spoils Returns]

 

As a fundamental manager, you may argue that something meaningful rarely happens every 30 days to warrant a forecast update. We would counter that price itself is an important signal. For example, let’s say you initiated coverage on a position at $100 with a 70% chance of going to $150 and a 30% chance of going to $50. If the price moves from $100 to $125, wouldn’t you say the probability of reaching your bull target has changed? While $150 may still be the price suggested by your model, updating the probabilities of your scenarios to more accurately reflect likely outcomes allows the Optimal Position Size (OPS) model to make better sizing recommendations.
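
For concreteness, here is the arithmetic of that example as a small Python sketch (illustrative only):

```python
# Scenario-based probability-weighted value for the example above.
def prob_weighted_value(scenarios):
    """scenarios: list of (price_target, probability) pairs."""
    return sum(target * prob for target, prob in scenarios)

scenarios = [(150, 0.70), (50, 0.30)]   # bull and bear cases at initiation
price = 100
pwv = prob_weighted_value(scenarios)    # 0.7*150 + 0.3*50 = 120
print(f"PW value: {pwv}, PW return at $100: {pwv / price - 1:.1%}")  # 20.0%

# If the stock rallies to $125 and the scenarios are left untouched, the
# implied PW return turns negative -- a signal the probabilities are stale.
price = 125
print(f"Stale PW return at $125: {pwv / price - 1:.1%}")             # -4.0%
```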

 

In addition, Daniel Kahneman’s new book “Noise” describes how the same expert can take the same information and come to different conclusions at different times, and how the best answer is the average of those forecasts. This means an analyst may arrive at a different price target and probability on a different day, and that constant refinement (updating once a month) is healthy and leads to more accurate forecasts.

 

Finally, research from our friends at Good Judgement Inc. shows that over the past six years, their top forecasters updated roughly 4x as often as non-Superforecasters (11 updates versus 3 per question). Update frequency is highly correlated with outperformance, and incorporating even small additional bits of information (Superforecaster updates were roughly half the size of non-Superforecasters’) that either support or detract from the probability of a given outcome leads to better results over time.

 

[Chart 2: A Lack of Freshness Spoils Returns]

 

We are always interested in learning more about your research process and where Alpha Theory can help. Alpha Theory is a process-enhancement tool that systematizes how you conduct and use research for allocation decisions. Please reach out to us with any questions so we can better optimize your workflow to generate more alpha.

 

¹ To normalize for different benchmarks, we calculated alpha on an idio+sector basis using the Axioma World-Wide Equity Factor Risk model, which removes performance derived from all their tracked factors, excluding sector.

 

November 29, 2021

Getting Comfortable with Many, Micro Updates

 

For years we’ve worked closely with the folks at Good Judgement Inc. of “Superforecasting” fame. One of our friends there, Chris Karvetski, recently published a white paper called “Superforecasters: A Decade of Stochastic Dominance” on Superforecasters’ attributes and skills. For the analysis, Chris studied 108 forecast questions with 167,000 forecasts to compare the accuracy and approach of Superforecasters against everyone else.

 

From an accuracy perspective, Superforecasters dominate, with 36% lower forecast error (0.166 for Superforecasters versus 0.259 for general forecasters).
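
For readers unfamiliar with forecast scoring: Good Judgement’s accuracy numbers are Brier-style errors (mean squared error of probabilistic forecasts against outcomes), and the 36% figure is the relative error reduction. A quick illustrative sketch; the exact scoring rule is our assumption, not something spelled out in the white paper:

```python
# Brier score: mean squared error between probabilistic forecasts (0-1)
# and realized outcomes (0 or 1); lower is better.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier([0.9, 0.2, 0.7], [1, 0, 1]))   # ~0.047 on this toy example

# The 36% figure is simply the relative error reduction:
print((0.259 - 0.166) / 0.259)             # ~0.359, i.e. ~36% lower error
```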

 


 

Alpha Theory clients forecast stock price movement. As such, the question we should ask is “how can we be more like Superforecasters?” Well, Chris broke down the frequency and magnitude of updates, and I believe the answer is clear.

 


 

Superforecasters update their forecasts ~4x more often, which leads to adjustments that are about half the size. Imagine steering a boat toward a lighthouse. You can choose to make 3 major adjustments or 11 minor adjustments. Which method is going to get you closer to the lighthouse?

 

As analysts, to gain better forecast accuracy, we should frequently update our price targets and probability forecasts. Obviously, new information warrants updates, but we should make updates even when there is no new information. As The Verve says, “we’re a million different people from one day to the next.” We all exhibit what Daniel Kahneman calls occasion noise, which basically means that we change our opinions without the facts changing. Our mood impacts our forecasts. To get a truer sense of our own opinions, we should ask ourselves the same question at different times.

 

Let’s be like Superforecasters and get comfortable with many, micro updates. In our next blog post, we’ll explore the impact that update frequency has on returns.

 

October 27, 2021

KISS, or How to Make Money by Following Your Research

 

“It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.” - Charlie Munger

 

Successful traders and investors encourage entrants to the field to find an “edge,” ideally a strategy that has not already been widely adopted by other market participants. This has led to the proliferation of esoteric strategies, especially in the quantitative arena. In order to generate alpha in the increasingly competitive asset management industry, you need an army of PhDs, complex strategies, and troves of data, right? Well, not necessarily.

 


 

Analysis of the Alpha Theory dataset shows that if managers simply exited all positions where the probability-weighted return was zero or negative, the average manager’s CAGR would improve by 3%!

 

Alpha Theory managers create a probability-weighted value for each position based on price targets and probabilities for the various scenarios which may play out in the market. In an ideal long scenario, the current market price of a security rises toward the probability-weighted value. As price and expected value converge, the probability-weighted return drops to zero, and the analyst should either revise price targets upward, trim, or exit the position altogether. If the expected return is zero, Optimal Position Size will recommend exiting the position, as there are other investments with greater expected return.

 

Sometimes, however, managers are slow to update price targets or to reallocate the portfolio to higher expected return investments. We compared the return on invested capital (ROIC, or total return/gross exposure) of managers’ actual portfolios to what ROIC would have been had managers only held positions with directionally correct probability-weighted returns: a long position stays in the portfolio only if its probability-weighted return is positive, and a short position only if its probability-weighted return is negative.
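
In code, the screen is almost trivial. A sketch with hypothetical position data, not the actual backtest:

```python
# Keep a long only if its probability-weighted return (PWR) is positive,
# and a short only if its PWR is negative; everything else is exited.
positions = [
    {"ticker": "AAA", "side": "long",  "pwr":  0.18},
    {"ticker": "BBB", "side": "long",  "pwr": -0.03},   # stale long -> exit
    {"ticker": "CCC", "side": "short", "pwr": -0.12},
    {"ticker": "DDD", "side": "short", "pwr":  0.02},   # stale short -> exit
]

def directionally_correct(p):
    return p["pwr"] > 0 if p["side"] == "long" else p["pwr"] < 0

kept = [p["ticker"] for p in positions if directionally_correct(p)]
print(kept)   # ['AAA', 'CCC']
```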

 

The data below shows the improvement in ROIC over actual, first for simply removing positions with directionally wrong probability-weighted returns (blue column) and then for Alpha Theory’s Optimal Position Size (gray column), which layers additional sizing logic on top of zeroing out those positions. The sample includes all Alpha Theory clients from January 1st, 2014 to June 30th, 2021.

 

[Chart: Improvement Over Actual ROIC]

Restricting manager portfolios to positions with directionally correct probability-weighted returns improved CAGR by 3%, and Optimal Position Size, which uses manager research along with other portfolio constraints, improved CAGR by 6.7% over actual ROIC.

 

Highly intelligent, sophisticated investors naturally look for ways to improve, and the temptation to distinguish oneself with new strategies is intense. But our research suggests that it is more important to focus on the fundamentals. John Wooden’s insight that free throws win national championships applies to portfolio management as well. Maintaining high research coverage, updating price targets, and staying allocated to positive expected returns are simple rules that contribute to outperformance but are often ignored at the expense of alpha.

 

September 29, 2021

The Cost of Volatility – The Path Dependency of Returns

 

In a recent analysis, we compared the volatility of a return stream on a daily and monthly basis. We all know that if a portfolio goes down by 10%, it must be up more than 10% to get back to even (11.11%, to be exact). This path dependency can cause stark differences between return streams with the same average return. To understand where the differences arise, we compared the outcomes of a perfectly stable return stream (0% volatility) to those of increasing levels of volatility.

 

[Chart: The cost of volatility]

 

The starting point is an ideal 20% return (blue line) over the course of a year (252 days). The most efficient way to create that return is to generate 0.072% of return per day. Any additional volatility, with the same average of 0.072% return, negatively impacts total return. To show the impact, we simulated six return streams that all had an average of 0.072% daily returns but with varying daily volatilities between 0% and 5% (5% daily volatility is 79% annualized volatility).

 

For example, with 1% daily volatility (orange line), the daily return flips between 1.071% and -0.926% (average of 0.072%). For 5% volatility (green line), the daily returns flip between 5.066% and -4.922% (also a 0.072% average).
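
You can reproduce the effect with a few lines of Python. This sketch alternates daily returns of mean ± vol, a close approximation of the return streams behind the chart (the chart’s exact flip values differ slightly):

```python
# Compound a year (252 days) of daily returns that alternate symmetrically
# around a fixed arithmetic mean of 0.072% per day, at rising volatility.
# Same average daily return every time; the compounded result falls as
# volatility rises -- the "cost" of volatility.
mean = 0.00072
for vol in [0.0, 0.01, 0.02, 0.05]:
    total = 1.0
    for day in range(252):
        r = mean + vol if day % 2 == 0 else mean - vol
        total *= 1 + r
    print(f"daily vol {vol:.0%}: annual return {total - 1:+.1%}")
```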

 

Volatility is a cost that requires a higher average return to get to the same goal. Ex-post (after the goal is reached), if you achieve a 20% return from two assets, you do not care about the path (volatility) that led to the 20% return. On the other hand, ex-ante (before the bet is made), it is harder to get to a 20% return with a 5% vol than a 1% vol.

 

The compounding impact of volatility is difficult to conceptualize but is important to understand when making investment decisions. This graphic is hopefully a helpful tool to remember how volatility causes us to raise the return bar for an asset.

 

As a life-long fundamental investor, acknowledging volatility as a risk is a tough pill to swallow. In my career as an analyst, I made forecasts over long periods of time, and I did not care about the path, just the outcome. However, before the bet is made, if I have two assets with the same probability-weighted return of 20% and one has a volatility of 10% and the other 30%, should I be indifferent, or should I factor that into how I size my bet? And if so, how? These are interesting questions that we’ll continue to explore in future posts.

 

August 31, 2021

Caveats in Compounding

 

“Compound interest is the eighth wonder of the world. He who understands it, earns it; he who doesn't, pays it.” – Albert Einstein

 

“Compounding is the most powerful force in the universe” – Albert Einstein

 

“My wealth has come from a combination of living in America, some lucky genes, and compound interest.” – Warren Buffett

 

Compounding really is the 8th Wonder of the World. In a recent analysis, we were comparing the CAGR (Compound Annual Growth Rate) of two portfolios and noticed two unique qualities of compounding that are important to remember when using CAGR:

 

1. Small differences in CAGR can compound to large differences over time

2. A 1% difference in two small CAGRs is not the same as a 1% difference in two large CAGRs

 

Small differences in CAGR can compound to large differences over time

 

Imagine you have two portfolios: one generating 8% per year and another 9%. The 1% difference seems trivial. Because of compounding, it is not.

 

[Table: 8% vs. 9% CAGR compounded over 10 years]

 

The 10-year performance difference is a non-trivial 20.8%. Investments that seem only marginally different on the scale of a single year can produce profoundly different outcomes over time.

 

One interesting fact was that the total difference over ten years was 20.8%, which is not the same as the 1% difference compounded over 10 years, which is 10.5%. This leads to the second unique quality…
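
Both numbers are easy to verify with a quick worked check:

```python
# 10-year growth of $1 at 8% vs. 9% CAGR.
g8, g9 = 1.08 ** 10, 1.09 ** 10            # ~2.159 and ~2.367
print(f"total-return gap: {g9 - g8:.1%}")  # ~20.8%

# ...which is larger than simply compounding the 1% gap on its own:
print(f"1% compounded over 10y: {1.01 ** 10 - 1:.1%}")  # ~10.5%
```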

 

A 1% difference in two small CAGRs is not the same as a 1% difference in two large CAGRs

 

If we bump the performance slightly up but keep the difference 1%, the total difference grows from 20.8% to 22.6%.

 

[Table: higher CAGRs, still 1% apart, compounded over 10 years]

 

That 1% difference compounds on a bigger base and thus results in a larger total return difference. This is counterintuitive. An investor may be indifferent between a 23% and a 24% return while being sensitive to a 2% versus a 3% return. The latter seems much more meaningful because the relative difference is 50%.

 

In the graph below, the difference between a 2% and 3% return is $12.5M (12.5% on $100M fund) over 10 years. The difference between a 23% and 24% return is $62.1M! They are both 1% differences, but they are not created equal.

 

[Chart: dollar impact of a 1% CAGR difference on a $100M fund over 10 years]

 

Compounding is amazing but can be amazingly difficult to conceptualize. As an investor, your job is to be a professional compounder. Keep your tools sharp by remembering that CAGRs don’t tell the whole story. To get a better sense of a return stream, compare the CAGR to the total return over the period and then perform some basic sensitivity analysis. This puts the compounding impact on returns into perspective and helps you make better decisions.

 

July 31, 2021

Gaining Confidence in Your Confidence

 

Alpha Theory helps managers streamline the capital allocation process by combining all of their investment-process inputs into a model that calculates an optimal position size (OPS) for each position. While the primary inputs are quantitative, including price targets and probabilities, there is also a qualitative perspective that is just as important to capture.

 

Alpha Theory helps managers create a Confidence Checklist which contains the more subjective aspects of each manager’s investment process. The individual Checklist items are combined into a Checklist Confidence Score for each security. Formalizing these mental rules and tracking their performance over time creates a feedback loop through which our clients can learn which questions are most important for generating an excess return.

 

We wanted to investigate whether the Checklist Confidence Score was a predictive signal of forward returns. After rigorous analysis of 500,000+ checklist scores, we found a statistically significant signal at the 99% confidence level: maintaining a confidence checklist is associated with positive forward returns. This demonstrates why it is important to explicitly capture and formalize checklists into an investment process.

 

The Confidence Checklist is a combination of the qualitative, statistical, and fundamental metrics normally kept in a manager’s mental model. We think of this mental model as everything that is not clearly captured by price targets and probabilities. There are infinite possibilities for checklist items, and after more than a decade of helping managers make optimal decisions, we are able to work with managers to build meaningful, impactful checklists that find more alpha in their qualitative ideas.

 

80% of Alpha Theory clients have checklists built with customized inputs to fit their process, and each of these inputs can take on several values. For example, Management Team could have a drop-down with selections such as Strong, Neutral, and Weak, each contributing to the overall confidence score according to the weight assigned to that selection.

 

Each checklist item has a selection, and the weighted selections combine into an overall confidence score; for example, a final score could be 85%. The Checklist Confidence Score then adjusts the base optimal position size before any other factors are applied.
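
To make the mechanics concrete, here is a minimal sketch of how such a weighted checklist score might be computed and applied. The item names, weights, and the simple multiplicative adjustment are hypothetical illustrations, not Alpha Theory’s actual model:

```python
# Hypothetical checklist: each item maps a selection to a score in [0, 1],
# and items carry weights that sum to 1.
checklist = {
    "management_team": ({"Strong": 1.0, "Neutral": 0.5, "Weak": 0.0}, 0.4),
    "balance_sheet":   ({"Strong": 1.0, "Neutral": 0.5, "Weak": 0.0}, 0.3),
    "moat":            ({"Wide": 1.0, "Narrow": 0.5, "None": 0.0},    0.3),
}

selections = {"management_team": "Strong", "balance_sheet": "Neutral", "moat": "Wide"}

# Weighted sum of the selected scores: 1.0*0.4 + 0.5*0.3 + 1.0*0.3 = 0.85.
confidence = sum(opts[selections[item]] * weight
                 for item, (opts, weight) in checklist.items())
print(f"confidence: {confidence:.0%}")   # 85%

base_optimal_size = 0.06                 # 6% of NAV before the checklist
print(f"adjusted size: {base_optimal_size * confidence:.2%}")   # 5.10%
```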

 

We can see that having a confidence checklist for each position is an important factor in investing. When thinking about how to improve your fund’s performance, think about how your own qualitative checklist contributes to the decision-making process. Is scoring consistent across names? Do you have a way to measure the importance of a checklist item? While you can’t quantify everything, these results prove that adding a little science to the art of investing can improve future returns.

 

June 30, 2021

Joe Knows: Why Active Managers Should Codify Their Process

 

A colleague of mine forwarded an article titled “All Active Managers Should Run Systematic Replicas of Their Portfolios” by Joe Wiggins, who is head of portfolio management at Aberdeen Standard. Well Joe, the folks at Alpha Theory agree. I have highlighted a brief section, but the whole article is worth a read:

 

In its most basic form, all that is required is a set of portfolio construction rules (number of positions, position sizes, concentration) and criteria about when to buy or sell securities. This can be as simple or complex as is desired, provided it can be managed and maintained by a computer with minimal human involvement.

There are three key reasons why such an approach should be valuable to active fund managers:

 

Idea Generation: Although not its primary purpose, it can function as a buy and sell idea generation tool that is more sophisticated than a screen or filter. If you continue to hold a stock that the systematic version of our strategy has sold, you should be able to justify why.

 

Noise Cancelling: The most impactful feature of the approach is the ability to observe investment decisions being made absent much of the noise that influences human judgement. There are a multitude of factors that lead us to make inconsistent and erratic choices. Running a systematic version of a fund removes this issue by focusing solely on the rules prescribed.  How much of the potential loss in rigour and detail is compensated for by the removal of noise?

 

Identifying Value-Add: Active fund managers often struggle to convey what their true value-add or edge is. Too often it is overly generic (‘growth at reasonable price’) or suitably vague (some kind of ‘secret sauce’ or ‘art’). This is a problem. If fund managers are attempting to sell a skill at a high price, it would be helpful to know what it is. Running a systematic version of a fund can be incredibly beneficial in this regard. 
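
To make Joe’s point concrete, here is a toy version of such a rule set. It is our own hypothetical illustration of the idea, not Joe’s model or Alpha Theory’s optimization:

```python
# A deliberately simple systematic replica: hold up to N highest-expected-
# return longs, size proportional to expected return, cap concentration.
MAX_POSITIONS = 20
MAX_WEIGHT = 0.08   # 8% single-name cap

def replicate(ideas):
    """ideas: list of (ticker, expected_return) pairs -> {ticker: weight}."""
    buys = sorted([i for i in ideas if i[1] > 0], key=lambda i: -i[1])[:MAX_POSITIONS]
    total = sum(er for _, er in buys)
    # Proportional sizing with a hard cap; any capped excess stays in cash.
    return {t: min(er / total, MAX_WEIGHT) for t, er in buys}

# With only a few names the cap binds on all of them; with a full book it
# binds only on the highest-conviction positions.
print(replicate([("AAA", 0.30), ("BBB", 0.15), ("CCC", 0.05), ("DDD", -0.10)]))
```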

 

May 28, 2021

We’re Getting Better All The Time

 

When we do our year-end review with clients, the bulk of the conversation is about the performance of the systematic portfolio built by Alpha Theory versus the client’s actual returns. The conversations are always informative but, as you might imagine, the systematic portfolio doesn’t always outperform. We recently ran some analysis to understand how frequently the systematic portfolio outperforms.

 

We started at the position level, taking all positions across all managers and all time, and compared the lifetime return of each security in the actual portfolio versus the systematic portfolio. In this case, the systematic position outperformed 55% of the time. That’s a pretty good batting average.

 

We then rolled that up to the analyst level. For analysts with at least 10 positions over time, the hit rate rose to 66%. Said another way, the analyst would have been better off 66% of the time had they sized all of their positions using the systematic method versus what actually occurred.

 

The next step was to roll up to the client level and simply compare the returns of the actual and systematic portfolios. The first cut was by year. In any given year, a fund has a 67% chance that the systematic portfolio will outperform its actual portfolio.

 

Then, we rolled it up for each client on an all-time basis. For example, for a six-year-old client, we would compare their six-year actual and systematic portfolio returns. In this case, the systematic portfolio outperformed 76% of the time. 

 

Finally, we rolled all clients together into a single actual and systematic portfolio over our clients’ nine years of historical data. In 100% of years, the systematic portfolio outperformed.

 

All Funds All Time – 100%

All Funds By Year – 100%

By Fund All Time – 76%

By Fund By Year – 67%

By Analyst All Time – 66%

By Ticker All Time – 55%
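
In pandas terms, each rollup is just a groupby at a coarser level. A sketch with hypothetical columns, using equal-weight averages as a stand-in for actual portfolio construction:

```python
import pandas as pd

# Hypothetical position-level panel with actual vs. systematic lifetime returns.
df = pd.read_csv("positions.csv")  # columns: fund, analyst, ticker, actual_ret, syst_ret

# By ticker, all time: share of positions where systematic sizing won.
print((df["syst_ret"] > df["actual_ret"]).mean())

# Rolled up per analyst (equal-weight proxy for each analyst's book),
# then the share of analysts whose systematic book beat their actual book.
by_analyst = df.groupby("analyst")[["actual_ret", "syst_ret"]].mean()
print((by_analyst["syst_ret"] > by_analyst["actual_ret"]).mean())

# Same idea one level up, per fund.
by_fund = df.groupby("fund")[["actual_ret", "syst_ret"]].mean()
print((by_fund["syst_ret"] > by_fund["actual_ret"]).mean())
```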

 

This data is starting to resonate with clients, as the correlation between actual and systematic portfolios has increased over time. In 2015, the average correlation was 35%; today it is 57%. The gap between 57% correlation and 100% is why actual and systematic returns still differ. The challenge is that the systematic portfolio requires diligence and higher trading activity than is typical for our uber-fundamental managers. The best solution may be a hybrid where the fundamental manager does what they do best, fundamental research, and a systematic overlay is applied to construct the portfolio, manage risk, and ensure best execution. And so, we’ve decided to partner with our clients and build that strategy through CenterBook Partners. We look forward to sharing more about it as it develops.

 

March 30, 2021

Capital Allocators Book release by Ted Seides

Our friend Ted Seides recently released a great book titled “Capital Allocators” and WE’RE IN IT! The book distills the learnings and best practices from his 180+ podcast episodes and is a treasure trove of great insights. Four things make the book special:

1. Ted gets amazing people.

2. Many of these people don’t publish their thoughts, and this is our only access to them.

3. Ted has distilled the best of these learnings into a “toolkit” you can apply to your own investing.

4. ALPHA THEORY IS INCLUDED!

 

See below for the section on Alpha Theory (italicized paragraph is edited to focus on Alpha Theory):

 

Cameron Hight was a frustrated former hedge fund manager at a smaller shop who felt he did not have the requisite tools to improve his own skills. He set aside managing money to create a software company that would help portfolio managers.

 

Cameron Hight had an insight that has helped hedge fund managers big and small optimize portfolio construction. He believed markets move so quickly that a portfolio manager cannot consider all the variables to optimize position sizing in real time. His business, Alpha Theory, strives to make the implicit explicit by putting numbers and probabilities on position sizing decisions.

 

Alpha Theory uses the investment team's research to calculate risk and reward in real time. A thorough analyst already has models and probability scenarios for the potential path a stock might take. Absent new information, each movement in the stock price changes the attractiveness of risk and reward. Alpha Theory models conviction-weighted sizing based on the investment team's research and compares the result to the actual portfolio position size. Over 15 years of operation, Cameron has reams of data showing that his seemingly simple tool has added substantial returns for clients who employ it in their practice.

 

His data also revealed an important conclusion about many fundamental managers: good active managers perform far better in their larger positions than they do in smaller names. Alpha Theory wrote “The Concentration Manifesto,” preaching that managers and allocators would both be better served if managers focused on more concentrated portfolios of their best ideas.

 

Summary

 

Data analysis almost never gives an allocator the answer, but the tools employed are useful in measuring risk and return at the portfolio and manager level, and in making informed judgements about manager selection. The availability of data and the entrepreneurs at the forefront of assessing it enable CIOs to be more informed. Asking the right questions may reveal managers who eschew modern technology and are a step behind the pack.