
Alpha Theory Blog - News and Insights


September 19, 2019

Superforecasting and Noise Reduction in Financial Analysis

 

Alpha Theory and Good Judgment Inc. hosted a Superforecasting workshop this week with several Alpha Theory clients attending and learning about noise reduction techniques. Warren Hatch, President of GJI, led the discussion on how to reduce noise in forecasting. Warren began with an overview of the characteristics of Superforecasters and what leads to good individual forecasts. We then shifted to how we can use team dynamics to improve forecast accuracy.

 

Warren started with examples pulled from other noise reduction workshops and showed how the team methods reduced noise (measured by the standard deviation of estimates) and increased accuracy (measured by Brier score). We then ran our own exercise, using Good Judgment Inc. software to ask our group questions that led to a valuation of NFLX (a sketch of how the answers combine follows the list):

How many subscribers will Netflix have at the end of 2020?

What will be Netflix's revenue per subscriber in 2020?

What will be Netflix's net margin in 2020?

What will be Netflix's PE multiple in 2020?   
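
Those four answers chain together into an implied share price: subscribers times revenue per subscriber gives revenue, revenue times net margin gives net income, and net income times the PE multiple (divided by share count) gives a price. A minimal sketch of that arithmetic, where every input, including the share count, is a hypothetical placeholder rather than the group's actual forecast:

```python
# Hypothetical inputs standing in for the group's answers to the four questions.
subscribers_2020 = 180e6        # Q1: year-end 2020 subscribers
revenue_per_sub = 130.0         # Q2: annual revenue per subscriber ($)
net_margin = 0.10               # Q3: net margin
pe_multiple = 60.0              # Q4: price-to-earnings multiple
shares_outstanding = 440e6      # assumption needed to reach a per-share value

revenue = subscribers_2020 * revenue_per_sub
net_income = revenue * net_margin
implied_price = net_income * pe_multiple / shares_outstanding

print(f"Implied NFLX price: ${implied_price:,.2f}")
```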

 

We compiled the initial results and compared them to current market values. We then had a chance to review other contributors’ forecasts and rationales and vote on the ones we thought were best. Next, the “team” discussed the highest vote-getting rationales and quickly identified an expert in the room. Through the noise reduction exercises and discussion, we narrowed our forecast range (reduced noise) and, hopefully, improved accuracy. We’ll know in a year when we see if NFLX is at $296.00.

 


 

Thanks to Warren and team for putting on a great workshop for Alpha Theory clients. Please contact us with any questions.

info@alphatheory.com

 

September 5, 2019

Noise: The Maker of Markets and Madness

This article is written in collaboration with Warren Hatch, CFA (Good Judgment Inc.).

 

Reflect on your own investment process and ask this question: “Given identical sets of information, would two different analysts come to the same conclusion? Would an analyst come to the same conclusion on different days?” If not, there is noise. Where there is noise, there is room for improvement.

 

As investors, we accept that market participants have different opinions about the value of securities. That’s what makes a market. Even within investment teams, a diversity of views is a good thing.

 

The problem is how we combine those views. All too often, different people will use the same process and information to come up with different (often wrong) answers. You can even ask the same person the same question at different times and get wildly different answers. All this noise is a bad thing. It adds volatility to the data, wastes our time by drawing attention away from the underlying drivers of value, and impairs the quality of our investment decisions.

 

One option is to ignore the noise, take an average of everyone’s different views, and call it a consensus. That’s what we do with consensus earnings estimates. Or GDP forecasts. Or Wall Street dinner surveys. The wisdom of the crowd works, and over time this is certainly better than relying on any one individual.

 

A better option is to reduce the noise, not just ignore it. New and exciting research findings by Nobel laureate Daniel Kahneman and others show us concrete steps we can take to identify and reduce noise.

 

Kahneman often describes noise by citing work he performed for a large insurance company. In the example, two different insurance adjusters were given the same insurance claim and came up with two substantially different payouts. The difference between those claims is noise. The question becomes: how much subjectivity should there be in an insurance claim?

 

The problem is that humans are unreliable decision makers; their judgments are strongly influenced by irrelevant factors, such as their current mood, the time since their last meal, and the weather. We call the chance variability of judgments noise. It is an invisible tax on the bottom line of many companies.

 

A firm whose employees exercise judgment does not expect decisions to be entirely free of noise. But often noise is far above the level that executives would consider tolerable—and they are completely unaware of it.

 

Obviously, investors are different than insurance adjusters. They don’t get an insurance claim form with all relevant information. But noise has been studied and repeatedly demonstrated to impact decision making across fields. Kahneman further distinguishes noise from cognitive bias:


To appreciate the distinction, think of your bathroom scale. We would say that the scale is biased if its readings are generally either too high or too low. If your weight appears to depend on where you happen to place your feet, the scale is noisy. A scale that consistently underestimates true weight by exactly four pounds is seriously biased but free of noise. A scale that gives two different readings when you step on it twice is noisy. Many errors of measurement arise from a combination of bias and noise. Most inexpensive bathroom scales are somewhat biased and quite noisy. – “Noise: How to Overcome the High, Hidden Costs of Inconsistent Decision Making,” Kahneman, Rosenfield, Gandhi, Blaser (HBR, October 2016)

 

So how should an investor reduce noise? By increasing process. Kahneman’s “radical solution” is to go all the way and replace human judgment with algorithms. These models can be straightforward: select 6–8 key variables, assign equal weights, and devise the simplest possible rule to combine them. Where replacing human judgment completely with algorithms isn’t viable or even desirable, there are other solutions.
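
As an illustration of that kind of "simple model," here is a minimal sketch, with hypothetical input variables, that standardizes a few inputs and combines them with equal weights, in the spirit of the rule Kahneman describes:

```python
import statistics

def equal_weight_score(candidates: list[dict], variables: list[str]) -> list[float]:
    """Score each candidate as the equal-weighted mean of its z-scored variables."""
    scores = [0.0] * len(candidates)
    for var in variables:
        values = [c[var] for c in candidates]
        mean, stdev = statistics.mean(values), statistics.stdev(values)
        for i, v in enumerate(values):
            scores[i] += (v - mean) / stdev / len(variables)  # equal weight per variable
    return scores

# Hypothetical inputs: three stocks scored on three illustrative variables.
stocks = [
    {"earnings_yield": 0.08, "revenue_growth": 0.05, "margin_trend": 0.01},
    {"earnings_yield": 0.03, "revenue_growth": 0.25, "margin_trend": 0.04},
    {"earnings_yield": 0.06, "revenue_growth": 0.10, "margin_trend": -0.02},
]
print(equal_weight_score(stocks, ["earnings_yield", "revenue_growth", "margin_trend"]))
```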

 

Alpha Theory’s work with clients shows how this process can work. In a matter of 30 minutes, a “simple model” can be created to generally codify the 5-10 most important attributes of a firm’s process. Backed up by experience and empirical results, these simple models can reduce noise and increase returns—and the improvement is immediate and lasting.

 

Good Judgment Inc, the commercial company built on the research of Professor Philip Tetlock described in “Superforecasting,” helps clients boost forecast accuracy with a suite of tools to cut through the noise. At recent Superforecasting workshops, for instance, group accuracy improved by an average of at least 20% after just 15 minutes of structured discussion.

 

Noise is the enemy of good decision making. The separate but complementary work at Good Judgment Inc and Alpha Theory shows there are steps you can take now to reduce noise and boost your bottom line.

 

August 1, 2019

The Concentration Manifesto for Shorts

 

We were recently reading the great research that comes from our friends at Novus and saw a reference to a paper by Della Corte, Kosowski, and Rapanos (2019). The paper analyzes 1.7 million short positions from 585 managers contributed to the European Union short disclosure dataset from 2012 to 2018. They found that the highest-quintile conviction shorts (P5, as measured by position size) outperformed the lowest-quintile conviction shorts (P1). In fact, the highest-conviction shorts were the only cohort with a mean return that was negative on an absolute basis (a positive contribution for shorts).

 

Panel A - Equally-weighted Portfolios

 

After applying a six-factor model, a strategy going long the low-conviction shorts and short the high-conviction shorts had an alpha of 11%. Ideally, the results would show a gradual decline from P1 to P5, but P4 does not follow that trend. Nevertheless, there is demonstrable skill in short selection for the largest position sizes, which provides further support for the Concentration Manifesto.
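
For readers who want to try this style of analysis on their own book, here is a rough sketch of a conviction-quintile sort, assuming a hypothetical DataFrame of short positions with columns 'position_size' (size as a percent of portfolio) and 'fwd_return' (subsequent return of the shorted stock):

```python
import pandas as pd

def conviction_quintiles(shorts: pd.DataFrame) -> pd.Series:
    """Mean forward return of shorted stocks, bucketed P1 (smallest) to P5 (largest)."""
    labels = ["P1", "P2", "P3", "P4", "P5"]
    buckets = pd.qcut(shorts["position_size"], q=5, labels=labels)
    return shorts.groupby(buckets)["fwd_return"].mean()

# Hypothetical example: a negative mean in P5 would mean the high-conviction
# shorts fell the most, i.e. contributed positively as shorts.
shorts = pd.DataFrame({
    "position_size": [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0],
    "fwd_return":    [0.02, 0.01, 0.00, -0.01, 0.01, -0.02, 0.00, -0.03, -0.01, -0.04],
})
print(conviction_quintiles(shorts))
```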

 

Download full version of The Concentration Manifesto

 

July 1, 2019

Brief Buys…Slow Sells

 

Sean Stannard-Stockton, who runs Ensemble Capital, sent us a link to a recent paper titled “Selling Fast and Buying Slow,” which studies the return contribution of buy and sell decisions across more than 4 million trades in 700+ portfolios. The result: the portfolio managers in this study were great buyers and lousy sellers. On top of that, they compounded the problem by buying slowly when they would have made more money buying quickly, and selling quickly when they would have lost less money selling slowly.

 

Alpha Theory does this by design: it starts (buys) positions quickly and sells them slowly. We’ve measured a 4% difference between our clients’ actual ROIC of 6% and the 10% ROIC they would have earned had they traded according to Alpha Theory’s recommendations. We further break that down into initial position sizing and trading, and find that about half, or 2%, comes from better trading. The “Selling Fast and Buying Slow” paper finds 170bps from better buying and selling decisions, which is close to the trading difference we’ve measured for our clients*.
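
A simplified sketch of how that trading value can be measured (see the footnote at the end of this post): compare the P&L of a dynamically sized position against the P&L of holding the initial size unchanged for the whole period. The daily weights and returns below are hypothetical.

```python
import numpy as np

def trading_value(daily_weights: np.ndarray, daily_returns: np.ndarray) -> float:
    """Trading contribution: dynamic-sizing P&L minus constant-initial-size P&L."""
    actual = np.sum(daily_weights * daily_returns)        # P&L with actual sizing path
    constant = np.sum(daily_weights[0] * daily_returns)   # P&L holding initial size
    return actual - constant

weights = np.array([0.02, 0.03, 0.04, 0.04, 0.03])    # position size each day
returns = np.array([0.01, -0.005, 0.02, 0.01, -0.01])  # stock's daily returns
print(f"Trading contribution: {trading_value(weights, returns):+.4%}")
```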

 

Excerpt from “Selling Fast and Buying Slow” - Is there skill in buying and selling?

 

We examine this question using a unique data set containing the daily holdings and trades of sophisticated market experts—experienced institutional portfolio managers (PMs). Our data is comprised of 783 portfolios, with an average portfolio valued at approximately $573 million. More than 89 million fund-security-trading dates and 4.4 million high-stakes trades (2.0 and 2.4 million sells and buys, respectively) are observed between 2000 and 2016.

 

While the investors display clear skill in buying, their selling decisions underperform substantially. Positions added to the portfolio outperform both the benchmark and a strategy which randomly buys more shares of assets already held in the portfolio by over 100 basis points per year. In contrast, selling decisions not only fail to beat a no-skill random selling strategy, they consistently underperform it by substantial amounts. In our preferred specification, PMs forgo 70 basis points per year in raw returns.

 

Why would a majority of portfolio managers appear to exhibit skill in buying while at the same time underperforming substantially in selling? At face value, the fundamentals of buying and selling to optimize portfolio performance are similar: Both require incorporating information to forecast the distribution of future returns of an asset. Skill in both decisions requires the investor to look for relevant information and integrate it into the forecast. However, there is a reason to suspect that selling and buying decisions involve different psychological processes (Barber and Odean 2013). Recent work from the lab is consistent with this discrepancy: Buying decisions appear to be more forward-looking and belief-driven than selling decisions in an experimental asset market (Grosshans, Langnickel, and Zeisberger 2018). And indeed, anecdotal evidence from our sample points to PMs thinking differently about the two decisions; extensive interviews suggest that they appear to focus primarily on finding the next great idea to add to their portfolio and view selling largely as a way to raise cash for purchases.

 

We utilize a unique dataset and find evidence that financial market experts—institutional investors managing portfolios averaging $573 million—display costly, systematic biases. A striking finding emerges: While investors display skill in buying, their selling decisions underperform substantially—even relative to random sell strategies. We provide evidence that investors use heuristics when selling but not when buying, and that these heuristic strategies are empirically linked to the documented difference in performance.

 

As shown in Section 4, the comparison of trades on earnings announcement versus nonannouncement days suggests that PMs do not lack fundamental skills in selling; rather, results are consistent with PMs devoting more cognitive resources to buying than selling. When decision-relevant information is salient and readily available—as it is on announcement days—PMs’ selling performance improves substantially. We propose a mechanism through which overall underperformance in selling can be explained by a heuristic two-stage selling process, where PMs limit their consideration set to assets with salient characteristics (extreme prior returns) and sell those they are least attached to (low active share assets). A proxy for this heuristic strategy is associated with substantial losses relative to a no-skill random selling strategy.

 

The question remains of why professional PMs have not learned that their selling decisions are underperforming simple no-skill strategies. While we can only speculate, the environment in which fund managers make decisions offers several clues. As Hogarth (2001) notes, the development of expertise requires frequent and consistent feedback. While it is feasible to generate this type of feedback for both buy and sell decisions, anecdotal evidence from our interviews with PMs suggests that decisions are overwhelmingly focused on one domain over the other. In terms of time allocations, our understanding is that the vast majority of the investors’ research resources are devoted to finding the next winner to add to the portfolio. Moreover, standard reporting practices are well-suited for evaluating the performance of buying decisions: Purchased assets are tracked, providing salient and frequent feedback on the outcomes of buying decisions. This process appears successful in producing expertise—purchased assets consistently outperform the benchmark. In comparison, paltry resources are devoted to decisions of what to sell, and the relevant feedback is largely lacking: Assets sold are rarely, if ever, tracked to quantify returns relative to potential alternatives such as our random sell counterfactual.

 

A recent paper by our friends at Essentia Analytics titled “The Alpha Lifecycle” confirms this conclusion with a different data set and a different approach.

 

Essentia is wrapping up a 5-month analysis of this phenomenon (Alpha Lifecycle), involving data from 42 portfolios over more than 10 years. The conclusions are clear: alpha has a beginning, a middle and an end. It tends to decay over time, reducing — or even reversing — the benefits it offered early on. Active managers who wish to deliver sustained alpha in their portfolios need to understand their own alpha lifecycles and adjust their investment decision-making processes accordingly.

 

Dominant Lifecycle from Essentia Analytics


 

Alpha has a lifecycle and tends to decay over time — frequently causing managers who fall in love with their stocks to suffer. On average, managers we analyzed experienced a 400 basis point peak-to-trough decay in return on each position. 

 

These papers are great examples of the value of feedback for firms that want to improve. Capturing data, analyzing that data, and changing behavior based on empirical evidence is akin to Tiger Woods hitting golf balls while hooked up to a TrackMan and adjusting his swing to maximize the attributes that give him the greatest chance of success. The managers of the future are already adopting regimens that treat their process like that of an elite athlete. Those that don’t will get left behind.

 

* We recognize this is not a perfect apples-to-apples comparison, as our analysis measures the trading value as the difference between the Alpha Theory return and the return of keeping a consistent position size from beginning to end of the holding period.

May 1, 2019

Increasing the Probability of Success - Part 1

 

I was discussing with a new client how analysts should approach probabilities. Probabilities are multiplied by the client’s scenario price forecasts to come up with a probability-weighted return.

 

The probability piece is the most subjective part of the probability-weighted return (see our “False Precision” blog post, which discusses why it is important to set probabilities), so we came up with several approaches to see which fit best for their firm. I thought I’d share them with anyone who may be struggling with probabilities:

 

1. Fixed Probabilities (Distribution)

Analysts come up with price targets that match the part of the forecast distribution associated with the probabilities. In this example, all positions have a “fixed” 20%/60%/20% probability framework. The goal is to come up with price targets that match those buckets (i.e. what is the 20% risk price target?).


 

This method pulls price targets that reflect a broad range of outcomes grouped into different probability “buckets”. An analyst would iterate the assumptions in their financial model to estimate the extreme outcomes (the two 20% buckets at the ends) and the higher-probability outcomes (the 60% bucket in the middle). The result is a price target that blends the possible outcomes in each bucket by their associated probability. Another way to think of this is as a cumulative probability distribution.

[Figure: normal cumulative distribution function with modeled outcome points (green dots)]

 

For example, the analyst may associate -5% sales growth and 10% EBITDA margins with the 20% cumulative probability outcome, 25% sales growth and 40% EBITDA margins with the 80% cumulative probability outcome, and 60% growth and 55% margins with the 99% cumulative probability. There would be many other points in between (represented by the green dots) where the analyst would apply different assumptions in their model.
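
A minimal sketch of the fixed 20%/60%/20% blend, using hypothetical (cumulative probability, modeled price) points as stand-ins for the green dots:

```python
# Hypothetical model outputs: (cumulative probability, modeled price target).
cdf_points = [(0.05, 8.0), (0.20, 12.0), (0.35, 16.0), (0.50, 20.0),
              (0.65, 24.0), (0.80, 28.0), (0.99, 35.0)]

buckets = {"downside": (0.00, 0.20), "base": (0.20, 0.80), "upside": (0.80, 1.00)}
weights = {"downside": 0.20, "base": 0.60, "upside": 0.20}   # fixed probabilities

blended = 0.0
for name, (lo, hi) in buckets.items():
    prices = [p for cp, p in cdf_points if lo < cp <= hi]    # points in this bucket
    blended += weights[name] * (sum(prices) / len(prices))   # bucket average x weight

print(f"Blended price target: ${blended:.2f}")
```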

 

The benefits of this method are that the probabilities are fixed and require no subjective assessment. It also allows highly sensitive models with extreme outcomes to be reflected in the resulting probability-weighted return. The downside is that it is time-intensive and allows no flexibility in the probabilities.

 

April 1, 2019

Why Price Targets are Broken and an Easy Method to Fix Them

 

I used to carefully calculate a price target for every asset I invested in. I was, after all, a sell-side analyst for many years and the price target was a staple. But when I used price targets to actually deploy capital, I was less than satisfied. I always had a nagging feeling that something was missing. It took reading a book on poker theory to wake me up.

 

Great poker players calculate price targets by determining the amount of money in the pot. But they do not stop there. Great poker players determine the probability of winning the pot and combine that with the pot size and the amount they have to bet (risk) to determine a probability-weighted return (i.e., if they were to play the exact same hand 1,000 times, the return they would expect). Why wouldn’t I do the same thing for every investment I make? Take my price target, combine it with my estimate of downside risk, and multiply each by my best guess of the probability of each event occurring.
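
The poker arithmetic is worth making explicit. A minimal sketch, with hypothetical numbers:

```python
def expected_value(pot: float, bet: float, p_win: float) -> float:
    """Expected profit of a call: win the pot with p_win, lose the bet otherwise."""
    return p_win * pot - (1 - p_win) * bet

# A $100 pot won 30% of the time against a $25 bet averages out to
# $12.50 of profit per hand over many repetitions.
print(expected_value(pot=100, bet=25, p_win=0.30))  # 0.3*100 - 0.7*25 = 12.5
```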

 

Yes, the probability of winning a hand of poker is different from the probability of a stock going from $20 to $40. Poker has aleatory probabilities, which are defined by observable statistics; investing has epistemic probabilities, which cannot be determined by historical observation (these are terms I learned from listening to Ronald Howard, the Stanford Business School professor who has studied decision making for the last 40 years). Investors describe the same distinction with different words: definable probability is called risk, and indefinable probability is called uncertainty. Uncertainty does not mean we should not use probability, because we are already using our “confidence” to influence the investment decision anyway.

 

Gerd Gigerenzer describes it like this in his book “Calculated Risks”: “Degrees of belief are subjective probabilities and are the most liberal means to translate uncertainty into a probability. The point here is that investors can translate even one-time events into probabilities provided they satisfy the laws of probability – the exhaustive and exclusive set of alternatives adds up to 100%. Also, investors can frequently update probabilities based on degrees of belief when new, relevant information becomes available.”

 

Many firms have a spreadsheet with price targets for each stock in their portfolio. Their price target represents the value the stock should achieve assuming their thesis is correct. But what if their thesis is wrong? Their price target assumes a 100% probability that their thesis comes true. If that is not the case, then downside risk has to be part of the equation. And if the chance of upside or downside is not a coin-flip, then probability must be assessed. These are the metrics that an analyst should be trying to tease out of their fundamental research because they describe the true expected payoff from the investment. Price target does not give you a probability-weighted return! Anything less than a probability-weighted return requires you to rely on your mental calculator to combine profit, risk, and conviction level.

 

Price Target is the most common measurement used by fundamental money managers to evaluate asset quality. The Price Target represents an analyst’s best estimate of value and is a synthesis of their research. The Price Target is then compared to the asset’s current price to determine whether there is a significant enough dislocation of value to give the fund an opportunity to profit. The Price Target is dynamic because it can be adjusted as the analyst receives new fundamental data. It can also be used as a trading tool that notifies the fund when to enter and exit positions.

 

For all of its attributes, one inherent flaw has made Price Targets impractical for money managers: a Price Target only describes the most likely scenario and thus assumes a 100% probability of that outcome being true. For example, Company ABC is trading at $20 and has just hired a new CEO who is known for cutting costs and improving gross margins. Company ABC has historically had margins and multiples below industry norms. So, in your Price Target, you give the company the benefit of industry margins and multiples and determine the company should be worth $30. This is an implied return of 50% and sounds like a solid story. However, the company is a generation behind in product development, so it may be difficult to generate equivalent margins, and the company may have to spend on R&D to catch up with the industry. Additionally, there is an even riskier scenario in which the industry continues to outpace Company ABC in product development and its competitive position deteriorates to a point where margins are severely impaired. Maybe these other outcomes are not as likely, but they must be accounted for in the measurement of asset quality.

 

This is where many portfolio managers will assess that the asset has great potential upside to $30 from $20, but that there are substantial risks that prevent the fund from taking the exposure that would generally be given to an asset with a 50% potential return. In this method, the portfolio manager was forced to use heuristics and mental calculation to adjust for risk. But why force yourself to be a human computer when you have all of the relevant information to make a more accurate decision? The first step is to appreciate that the firm’s thesis does not have a 100% probability of occurring. Once a firm internalizes that tenet, it is easy to see that all 50% return assets are not created equal.

 

Let’s use our Company ABC example to fully describe the asset using the analyst’s research. The thesis is that the new CEO will improve margins and the company will receive a multiple in line with the industry. The analyst believes there is a reasonable chance that this occurs, say 50%. The analyst also calculates that if the company was forced to spend more on research and development to catch up from a product standpoint, the stock would be worth $22 and this has a lower chance of occurring, say 30%. Lastly, the chance that the industry advances their lead on Company ABC’s product, severely impacting margins, is about 20% and would probably take the stock down to $10. We have now described the full breadth of our research and we can synthesize it without heuristics or mental calculation:

 

Scenario                                     Price Target   Probability
New CEO improves margins and multiple        $30            50%
Higher R&D spend to catch up                 $22            30%
Industry extends lead; margins impaired      $10            20%

Probability-weighted price: (0.50 x $30) + (0.30 x $22) + (0.20 x $10) = $23.60, an 18% return versus the $20 current price.
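
The same arithmetic in a few lines of code, reproducing the 18% figure from the scenario values above:

```python
current_price = 20.0
scenarios = [
    (30.0, 0.50),  # thesis: new CEO lifts margins and multiple
    (22.0, 0.30),  # extra R&D spend needed to catch up
    (10.0, 0.20),  # industry extends its lead, margins impaired
]
assert abs(sum(p for _, p in scenarios) - 1.0) < 1e-9  # probabilities must sum to 100%

weighted_price = sum(price * prob for price, prob in scenarios)
pwr = weighted_price / current_price - 1
print(f"Probability-weighted price: ${weighted_price:.2f}, PWR: {pwr:.0%}")  # $23.60, 18%
```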

 

The 18% Probability-Weighted Return (PWR) reflects the full breadth of your research, is an accurate measurement for making portfolio decisions, and is an apples-to-apples comparison across all assets. But this is just the beginning of the benefits of PWR, because assets are now being measured by how much return you gain for a given level of risk.

 

We will continue with our analysis of Company ABC. The analyst is in San Francisco and has just exited a one-on-one with Company ABC’s CEO. He learns that product development is running ahead of plan at a lower cost and the product should be industry-competitive in the next few months. Let’s evaluate how Price Target and PWR would each deal with this new information. For Price Target, things are going according to plan, so we wouldn’t raise our target above $30. With PWR, we can confidently assess that our probability of success has increased and our probability of overspending on R&D and falling behind the competition has decreased. We would quickly adjust our assumptions:

 

[Table: updated scenario probabilities and the resulting higher PWR]

 

Using PWR, we see exactly how much better the idea is given the new fundamental data and how much better the position is than it was before. The portfolio manager can be confident in adding to the position and have a sense for how much.

 

The process is so much better using Probability-Weighted Returns. Using Price Targets is certainly better than guessing, but it leaves out important information needed to make the right decision. If you’re using Price Targets today, take the time to reassess your approach. Start with downside targets and fixed probabilities (50%/50%) and then, over time, add differentiated probabilities to get the full impact of Probability-Weighted Returns on your portfolio.

 

March 1, 2019

Your Position Size is Wrong: A Plea to Put Down the Mental Calculator

 

Hedge funds throw away half of their potential returns by not explicitly calculating probability-weighted return. After working for a fund and having numerous conversations with hedge and mutual fund managers over the past decade, I find it obvious that an overwhelming majority of funds’ mistakes come from poor estimation of risk-reward. In fact, most funds have not explicitly defined an upside price target, downside risk target, and conviction level for each investment in their portfolio. This is because most fund managers trust that they can manage the portfolio in their heads. They analyze and discuss the upside, downside, and conviction level for every investment, so they assume these factors’ influence is carefully measured into every decision. But I would posit that there is a distinct difference between factoring in upside, downside, and conviction level through mental calculation and measuring them with probability-weighted return. Why would you trust your mental calculator for such an important decision? Could you imagine a bungee jumper who knows the height of the bridge, the tension of the bungee cord, and the weight of the jumper but merely estimates the correct length of the cord? Absolutely not. Before every jump, a calculation is performed to make sure that easily avoidable risk is eliminated. Investors all too often skip the “bungee cord” calculation of probability-weighted return and end up assuming undue risk.

 

Empirical research and common sense prove that probability-weighted return is the optimal method to measure an asset’s quality. But most firms do not use probability-weighted return because it questions the output of their mental calculators. Researchers in Behavioral Finance and Neuroeconomics have cautioned investors for over 30 years that their brain is poorly designed to make financial decisions. Armed with this knowledge, investors still do not adjust their process to eliminate known decision-making frailties. In most cases, these shortcomings can be eliminated by calculating a probability-weighted return for every investment.

 

A quick example from a meeting with a portfolio manager highlights the problem of not using probability-weighted return. I was working with a successful fund manager when I asked for the logic behind the largest position in his fund. He told me that it was a company that he knows well, and he is sure they are going to beat earnings. I asked for his upside target if they beat earnings and the probability of it occurring. His best estimates were a profit of 10% and a probability of 90%. I then asked him to explain what would happen if they did not beat earnings. He described a dire scenario where the stock would be down at least 20% because the Street was expecting a beat. I quickly took his estimates and calculated a probability-weighted return of 7%. This caused the manager to change his exposure to the relatively weak idea. He had all of the correct information in his head, but his mental calculator was being corrupted by his over-confidence in his thesis. If this happened with the largest position in the portfolio, you can guarantee that there are other inefficiently sized positions. 

 

It is all too common that funds perform the intense research to drive the ball 99 yards down the field but do not “punch it in” by explicitly defining upside profit, downside risk, and probability. Probability-weighted return takes into account the full breadth of your fundamental research and creates a discipline that treats each position as if it were brand new every day, eliminating performance-draining oversights. As the example shows, calculating probability-weighted return is easy: the firm explicitly defines upside, downside, and probability, then compares the probability-weighted sum to the current price.

 

Once you calculate a probability-weighted return for every investment in the portfolio, you will quickly spot position sizes that do not match your research. Probability-weighted return becomes the synthesis of your research, the common vernacular of investment discussions, and the anchor for decisions. Your portfolio generates greater returns because you are continuously giving more exposure to the firm’s best ideas while constantly pruning the weakest. Return is only half the equation: a portfolio constructed with probability-weighted return also has considerably less risk. Every decision is now made in the context of downside potential. If the downside risk increases, the probability-weighted return falls, which in turn lowers the position size.

 

The benefits of probability-weighted return position sizing are profound, and because the process is based on common sense and sound math, it will become the de facto standard in the coming years. Some of the brightest fundamental managers in the world have been using this discipline for years, and your firm can capture the benefits of probability-weighted return in as little as a few weeks. Make a commitment to put down the mental calculator and you are guaranteed to make better decisions.

 

February 8, 2019

Alpha Theory 2018 Year in Review

 

THE STREAK CONTINUES! For the seventh consecutive year, Alpha Theory clients have outperformed their peers, more than doubling the returns of the industry average over the same period. This year, our clients beat the primary Equity Hedge index by 3.9%, even though they missed out on another 0.9% of return they would have captured by following the model they built in Alpha Theory more closely.

 


 

From a global perspective, Alpha Theory clients and optimal sizing outperformed major indices.


 

It was a difficult year for equity funds, and our clients averaged a decline of 3.0%; even so, they outperformed their peers, who experienced an average decline of 6.9%.


 

Clients would have done even better had they more closely followed the model they built in Alpha Theory.


 

It was a particularly satisfying year to post these results. The streak of momentum-fueled markets gave way to much higher volatility in 2018, and it was great to see that our tool is effective at driving alpha in both types of market conditions.

 

2018 INDUSTRY TRENDS

Some new trends have started to gain traction across the industry in 2018.  While some larger funds have been headed in this direction for years, the change we saw in 2018 was more widespread adoption—even in smaller funds.  As we talked to hundreds of prospective clients and allocators in 2018, we noticed three major trends in how the most successful PMs are changing their investment strategies:

    1. Leveraging more alternative data sources in their research.

    2. Acute focus on repeatable processes around research, risk, and position sizing.

    3. Emphasis on capturing data that can be leveraged by statistical analysis and machine learning.

 

At first, these three trends seemed unrelated. It was only recently that we realized they are deeply connected by one dominant trend: the reduction in available alpha due to the ubiquity of research data, the increased number of analysts, the decreased number of publicly available securities, and the rapid rise in computers’ ability to find market inefficiencies faster than humans. This is making it virtually impossible to gain a sustainable edge through traditional “stock picking.” Put simply, the largest traditional source of alpha has almost completely dried up.

 

We are seeing this trend in our batting average data as our clients’ average converges toward 50%. The good news is that there is still alpha out there to be harvested, and our data bears that out. Supporting point 2 above, our most process-driven clients (represented by our most active clients based on usage) strongly outperform our most passive users. We also have several clients making deep dives into their historical forecasting data to determine which of their analysts have the best forecasting track records and to tease out the strengths and weaknesses of the poor performers so they can target specific areas of improvement.

 

Our clients are a self-selecting cohort who believe in process and discipline; that process orientation goes hand in hand with Alpha Theory software, which serves as a disciplining mechanism to align the best risk/reward ideas with their rankings in the portfolio. The most active users, as measured by frequency of update, research coverage, and correlation with the model, have the highest ROIC.


 

PROCESS ENHANCES PERFORMANCE

Alpha Theory clients use a process to reduce the impact of emotion and guesswork on position sizing decisions. Alpha Theory highlights when good ideas coincide with the largest position sizes in the portfolio. This rules engine codifies a discipline that:

    1. Centralizes price targets and archives them in a database

    2. Provides notifications of price target updates and anomalies

    3. Calculates probability-weighted returns (PWR) for assets and the portfolio as a whole

    4. Enhances returns

    5. Mitigates portfolio risk

    6. Saves time

    7. Adds precision and rigor to the sizing process

    8. Enables real-time incorporation of market and individual asset moves into sizing decisions

 

DISCIPLINED USAGE REDUCES RESEARCH SLIPPAGE

Alpha Theory’s research suggests not only that adoption of the application by itself leads to improved performance, but that usage intensity further enhances results.

Usage intensity is determined by three factors (a hypothetical composite score is sketched after the list):

    1. Percent of Positions with Research

    2. Correlation with Optimal Position Size

    3. Login Frequency
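
As a toy illustration only (Alpha Theory does not publish an exact weighting; equal weights are assumed here purely for demonstration), the three inputs might be blended like this:

```python
def usage_intensity(pct_with_research: float,
                    corr_with_optimal: float,
                    login_frequency: float) -> float:
    """Equal-weighted blend of three [0, 1] usage measures (hypothetical weighting)."""
    return (pct_with_research + corr_with_optimal + login_frequency) / 3

# e.g. 90% of positions covered by research, 0.8 correlation with optimal
# position size, logins on 70% of trading days
print(f"Usage intensity: {usage_intensity(0.90, 0.80, 0.70):.2f}")  # 0.80
```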

 

OPTIMAL POSITION SIZING REDUCES RESEARCH SLIPPAGE

Comparing clients’ actual versus optimal returns shows:

 

HIGHER TOTAL RETURNS
ROIC is 4% higher.

 

IMPROVED BATTING AVERAGE
Batting Average is 9% higher. Explanation: many of the assets that don’t have price targets or have negative PWRs are held by the fund but recommended as 0% positions by Alpha Theory. Those positions underperform and allow Alpha Theory’s batting average to prevail.


1. Measured as the average full-year return for clients where full-year data was available, adjusted for differences in exposure, net of trading costs

2. Before trading costs

 

PRICE TARGETS REDUCE RESEARCH SLIPPAGE

Alpha Theory has further found that ROIC for assets with price targets is 4.8% higher than for those without price targets. Some investors chafe at price targets because they smack of “false precision.” These investors are missing the point: the key to price targets is not their absolute validity but their explicit nature, which allows for objective discussion of the assumptions that went into them. Said another way, the requirements of calculating a price target, and the questions that targets foster, are central to any good process.


 

Finding alpha will not become easier. It is imperative that the funds of the 21st century develop plans to evolve to meet new realities. Data and process are critical to that evolution. Let Alpha Theory help you and your team grow to meet the challenges of tomorrow.

 

January 5, 2019

Valuing Momentum: Part 2

 

I’ll highlight one major article written by Cliff Asness and his team at AQR, published in May 2014 (it’s also worth checking out “What Works on Wall Street” by O’Shaughnessy and their fund strategies, which combine value and momentum and have solid long-term track records). The AQR piece, titled Fact, Fiction and Momentum Investing, evaluates some of the most prominent myths regarding momentum and uses empirical research to refute them. In doing so, it gives a compelling account of why value and momentum are potent partners. Here are a few excerpts to give a sense of their conclusions:

 

As we’ll show in this essay, value and momentum work better when used as complements, and it is the combination of the two we stress and most-strongly recommend. We are fans of both momentum and value but bigger fans of their combination (and not fans of myths at all).

 

Evidence for Momentum

The (momentum) return premium is evident in 212 years (yes, this is not a typo: two hundred and twelve years of data from 1801 to 2012) of U.S. equity data, dating back to the Victorian age in U.K. equity data, in over 20 years of out-of-sample evidence from its original discovery, in 40 other countries, and in more than a dozen other asset classes.

 


 

88% of returns positive for momentum and 89% for value.

 


 

Israel and Moskowitz (2013) show that the long and short side of momentum is equally profitable using 86 years of U.S. data as well as 40 years of international equity data, and another 40 years of data from five other asset classes outside of equities. Everywhere they looked and in every way, they could not find any evidence that the short side profits were systematically larger or more important than the long side.

 

Benefits of Momentum and Value Combined

Sharpe ratio and percent of years with positive returns increase with a 60% value / 40% momentum strategy.

 


 

Suppose, despite all of the evidence to the contrary and our strong belief it’s positive, momentum had a zero expected return going forward. Would it still be a valuable investment tool? The answer is clearly, though perhaps surprisingly, yes. The reason is because of momentum’s tremendous diversification benefits when combined with value.

 

The diversification benefits are so great that even a zero expected return would be valuable to your portfolio! The logic is simple. Since value is a good strategy and momentum is -0.4 correlated with it, one should expect momentum to lose money based only on that information. Yet, the fact that it does not lose but in this assumed case breaks even makes it a valuable hedge. (We note that using the definition of value in Asness and Frazzini (2013) dramatically increases the magnitude of this negative correlation (to -0.7) and the power of combining value and momentum. Following their methodology, the results of this section would be far stronger.)
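
To make the quoted logic concrete, here is a quick check under assumed numbers (all of them ours, not AQR's): give value a 0.5 Sharpe ratio, give momentum zero expected return, set both volatilities to 10%, use the -0.4 correlation, and combine them 60/40 as in the paper's example. The combination's Sharpe ratio still improves:

```python
import math

mu_v, sigma_v = 0.05, 0.10   # value: 5% excess return, 10% vol -> Sharpe 0.50
mu_m, sigma_m = 0.00, 0.10   # momentum: zero expected return by assumption
rho, w_v, w_m = -0.4, 0.6, 0.4

mu_p = w_v * mu_v + w_m * mu_m
var_p = (w_v * sigma_v) ** 2 + (w_m * sigma_m) ** 2 \
        + 2 * w_v * w_m * rho * sigma_v * sigma_m   # two-asset portfolio variance
sharpe_p = mu_p / math.sqrt(var_p)

# The negative correlation cuts portfolio volatility enough that the 60/40
# combo beats value alone even though momentum contributes no return.
print(f"Value alone: {mu_v / sigma_v:.3f}, 60/40 combo: {sharpe_p:.3f}")
```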

 

But, there’s an even simpler and equally effective way to mitigate these crashes, as we mention repeatedly: combining momentum with value. This combination has effectively eliminated these crashes in our long-term sample evidence — and not just those for momentum but also the crashes that can occur for value investing. In other words, the diversification benefits of combining momentum with value don’t just appear during normal times, but also during these extreme times, which makes their combination even more valuable. For example, Asness and Frazzini (2013) show that the combination of value and momentum did not suffer as badly in 2009. Going the other way, in 1999 momentum helped ameliorate value’s pain. Both factors have worked well over the long-term, but neither has a Sharpe ratio of 10, meaning that both will have hard times occasionally, but when combined together they will have fewer hard times. Using Kenneth French’s data, we can show similarly that these very poor episodes for momentum and value are ameliorated. The diversification benefits between momentum and value are evident, even during these extreme times. For example, the worst drawdown over the full sample is -43% for value, -77% for momentum, but only -30% for a 60/40 combination of value and momentum.

 

By the way, we fully recognize and acknowledge that the past ten years have not been great for momentum, with the 10-year return for UMD (Momentum) falling in the 7th percentile of rolling 10-year returns (going back to 1927). At the same time, the past ten years have not been great for value, either, with the 10-year return for HML (Value) falling in the 5th percentile of rolling 10-year returns. That, of course, makes the prior 10-year return of the 60/40 combination of the two low (2nd percentile), but still positive (12%). You know a strategy has a pretty great history when the 2nd percentile return is still positive.

 

Summing up the points from the AQR paper:

 - Momentum works better with value (the two are negatively correlated)

 - The better the value mechanism, the better the whole portfolio performs (see the note on Asness and Frazzini (2013) in the excerpt above)

 

This is where our clients shine. They are great value estimators and their research is not easily systematized. What should be systematized is the translation of that research into a portfolio and a new push for Alpha Theory will be to give our clients tools to incorporate momentum. 

 

Active managers’ search for alpha is more difficult today than it has ever been. There is an existential requirement for active managers to leverage the tools and evidence around them and maximize the return they get from their research. To that end, over the coming months, you will see Alpha Theory develop new functionality to better account for momentum in position sizing. We welcome your input as we embark upon this journey.

 

December 21, 2018

Valuing Momentum: A Fundamentalist Coming to Terms with Momentum - Part 1

“When events change, I change my mind. What do you do?”

 – Paul Samuelson, Nobel Laureate Economist

 

Alpha Theory clients are value investors. Value investors, in general, like to keep things simple. They like to buy quality, underpriced securities and sell the opposite. The idea of momentum as a positive factor is anathema to value-investor thinking.

 

I’m a value investor. I originally wanted to keep Alpha Theory pure of subjective influences like “conviction level” but changed my mind after reading “Zen and the Art of Motorcycle Maintenance” and “The Checklist Manifesto”. Now we have checklists which combine subjective and objective elements into a Confidence Score that impacts position size. It’s a major improvement to Alpha Theory.
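
As a purely hypothetical illustration of the idea (the actual checklist items and weights are configurable and firm-specific), a Confidence Score might scale position size like this:

```python
# Hypothetical checklist: each item carries a score in [0, 1] and a weight.
checklist = {
    "management_quality": (0.8, 0.25),   # subjective assessment
    "balance_sheet":      (0.6, 0.25),   # objective: leverage, liquidity checks
    "thesis_tested":      (1.0, 0.25),   # objective: has the model been stress-tested?
    "crowding":           (0.4, 0.25),   # subjective: how crowded is the trade?
}

confidence = sum(score * weight for score, weight in checklist.values())
base_position = 0.05                      # 5% position before the adjustment
adjusted_position = base_position * confidence

print(f"Confidence: {confidence:.2f}, adjusted size: {adjusted_position:.2%}")
```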

 

Momentum is the next step in Alpha Theory’s evolution. After being asked enough times by clients to investigate how we could incorporate momentum into Alpha Theory’s model, I started doing due diligence to determine if this was a good thing to add. It is.

 

We first analyzed historical Alpha Theory client data. We analyzed the performance of positions after they went down by 10%, 20%, 30%, etc.

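A rough sketch of that bucket analysis, assuming a hypothetical DataFrame where each row is a position at the moment its market-relative drawdown first crossed a threshold, with its subsequent market-relative return:

```python
import pandas as pd

def forward_returns_by_drawdown(positions: pd.DataFrame) -> pd.Series:
    """Mean forward relative return, bucketed by the drawdown already suffered."""
    thresholds = [-0.50, -0.40, -0.30, -0.20, -0.10]
    buckets = pd.cut(positions["relative_drawdown"],
                     bins=thresholds + [0.0],          # e.g. (-0.20, -0.10] bucket
                     labels=["-50%", "-40%", "-30%", "-20%", "-10%"])
    return positions.groupby(buckets)["fwd_relative_return"].mean()
```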

Returns were negative no matter how we cut it after a stock had taken at least a 10% loss. A position that underperforms the market by 10% loses another 3% to the market over the next 3 months and over 7% over the next 3 years. A stock that goes down 50% more than the market underperforms the market by 5% over the next 3 months and over 17% over the next 3 years! I didn’t believe the data. So, we ran it against stocks in our clients’ portfolios that went up 10%, 20%, 30%, etc.


The same phenomenon appears in the opposite direction. Our clients would have been better off selling losers and adding to winners. That’s contrary to Alpha Theory’s model which is mean-reverting and has created positive alpha over time for clients trading around positions. Something didn’t foot.

 

Looking into the data, the mean-reverting trading suggested by Alpha Theory was positive because stocks didn’t travel in straight paths. As they oscillated, positive alpha was created. In addition, clients reassess their valuation work after a stock goes up or down, which would allow the position to adjust with new information.

 

After examining our internal data, I searched for external research to confirm or refute our findings. Momentum is a well-researched phenomenon, and the conclusions largely support the case that it is a positive, sustainable factor in stock returns (there is also research showing momentum’s positive influence in other asset classes).

 

In our next post, we’ll discuss Cliff Asness’ paper on momentum and what it means for Alpha Theory and value investing.