
Alpha Theory Blog - News and Insights

40 posts categorized "External Articles"

January 31, 2013

Kelly Criterion in Practice Part 1

A friend of mine recently forwarded an article by Kyle Mowery of GrizzlyRock Capital in which he discusses the Kelly Criterion and how his fund implements it for position sizing. First off, I'll say kudos to GrizzlyRock Capital for a thoughtful approach to position sizing. I've written numerous times about the benefits and deficiencies of the Kelly Criterion, and Mr. Mowery's article does a good job of laying out the implementation and some of the benefits and detriments of Kelly. I'd like to use Mr. Mowery's article as an opportunity to discuss some of the benefits of a disciplined approach to position sizing, along with some of the limitations of Kelly.

Mr. Mowery correctly highlights that his fund's use of Kelly helps increase potential portfolio returns and reduce behavioral bias. The latter, reducing behavioral bias, is the benefit I find to be the most important attribute of adopting a process for sizing positions. Basic questions like "how much can we make, what is the downside risk, and what are the probabilities of each?" must be answered before any asset is placed in the portfolio. These questions are imperative to the fund's success and can be overlooked or poorly accounted for if they are not required as inputs to a model. An optimal position size highlights potentially flawed sizes derived from instinct and heuristics. Granted, there may be legitimate reasons to hold a position size other than the suggested optimal, but at least with a model the difference is highlighted and justified.

Equal weighting is a model that many firms employ to counter the effects of behavioral bias. Mr. Mowery discusses the pros and cons:

Some allocators elect to equal-weight investments given uncertainty regarding which investments will perform best. This strategy creates a basket of attractive investments that should profit regardless of which investments in the basket succeed. This method benefits from simplicity and recognizes the future is inherently uncertain. Drawbacks of the strategy include underweighting exceptional investments and overweighting marginal ideas.

I would add that equal weighting also suffers from the cliff effect and from static rebalancing. The cliff effect is simply that the best idea and the 20th-best idea each get 5% exposure, while the 21st-best gets 0%. That drop off the cliff is clearly suboptimal. Second, equal weighting is static in that it either rebalances positions back to equal weight as prices change or it lets them ride. Either way, the impact of falling risk/reward as prices rise is not accounted for until the position goes from equal weight to 0%. Trading around positions is a huge benefit of a position sizing model and can add large amounts of alpha. Equal weighting simply misses much of that trading benefit.

Mr. Mowery goes on to discuss allocating capital to ideas with the most potential:

Another strategy is to allocate large amounts of capital to the investment ideas with the most potential. This methodology suggests investors should invest proportionally according to their ex-ante return expectations. The advantage of this methodology is matching prospective return to investment size. However, this strategy breaks down when allocators are incorrect about future investment return or risk prospects.

I'm not sure whether Mr. Mowery is talking about the return in the upside case or an expected return that is probability-weighted and includes the downside. Either way, the argument against this method, that "this strategy breaks down when allocators are incorrect about future investment return or risk prospects," isn't a successful counterpoint for why Kelly is better, because Kelly will also be wrong if the inputs are wrong.

Kelly Formula Based Position Sizing. The Kelly Formula is great, but I believe it is sub-optimal to expected return-based sizing for portfolio management because it assumes that 100% of the bankroll can be bet on any one investment and it requires bimodal inputs (upside and downside only). Kelly's base assumption that 100% of capital can be allocated to a single bet means the formula is naturally cautious when sizing a position that has potential loss. In my view, expected return-based position sizing (controlled for distribution width) is superior to Kelly.
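For reference, here is the classic two-outcome (bimodal) Kelly formula referred to above, as a minimal Python sketch. The probability, upside, and downside figures are hypothetical and purely illustrative.

```python
# A minimal sketch of the classic two-outcome Kelly formula referenced above.
# All inputs (probability, upside, downside) are hypothetical, illustrative numbers.

def kelly_fraction(p_win: float, upside: float, downside: float) -> float:
    """Kelly bet size for a bimodal bet: win `upside` with probability p_win,
    lose `downside` (a positive fraction) otherwise."""
    b = upside / downside              # payoff odds: gain per unit of potential loss
    q = 1.0 - p_win
    return (b * p_win - q) / b         # negative result means "do not bet" (or short)

# Example: 60% chance of a +30% move, 40% chance of a -15% move
print(f"{kelly_fraction(0.60, 0.30, 0.15):.0%} of bankroll")   # prints 40% of bankroll
```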

I recently ran a Monte Carlo simulation comparing the Alpha Theory position sizing technique to a number of common position sizing methodologies, including the Kelly Criterion (Optimal F), the Up / Down Ratio, and Equal Weighting (and, by proxy, 14 Markowitz Mean-Variance Modern Portfolio Theory systems; two studies of Markowitz Mean-Variance systems, DeMiguel et al (2006) and Jobson-Korkie, show that mean-variance maximization does not beat Equal Weighting). Success was measured as the amount of Portfolio Expected Return added per 1% of portfolio exposure. Alpha Theory beat the closest methodology, the Kelly Criterion, by 18%, the Up / Down Ratio by 52%, and Equal Weighting by 48%.

The Kelly Criterion is the superior method for generating the maximum long-term geometric expected return when the whole portfolio can be wagered on a single investment. However, portfolios are made up of multiple investments, so the Kelly Criterion under-bets good expected returns, because it is trying to protect against a complete loss of capital, and over-bets poor expected returns that have a very high probability of success. Because portfolio investing has inherent capital protection in the form of maximum position sizes, the Kelly Criterion breaks down.

-To prove this out, I performed a Monte Carlo simulation that randomly created 10,000 portfolios of 50 stocks

-Randomly assumed that analysts' upside, downside, and probability estimates were up to 50% inaccurate

-Random variables included: assets, scenarios, success/failure of analysis, and position size and expected return parameters

-Alpha Theory (Expected Return adjusted for distribution width) created the optimal portfolio 7,074 times out of 10,000 (71%)

-Alpha Theory was 53% better than the next best method, the Kelly Criterion (a simplified sketch of this kind of simulation follows below)
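To make the setup above concrete, here is a hypothetical, much-simplified sketch of how such a Monte Carlo comparison could be wired up. It is not the study described above: the distributions, the error model, and the two sizing rules (probability-weighted expected return versus a per-position Kelly fraction, each scaled to 100% gross, long-only) are all illustrative assumptions.

```python
# A hypothetical, much-simplified sketch of this kind of Monte Carlo comparison.
# NOT the study described above: the distributions, the 50% error model, and the
# two sizing rules are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_PORTFOLIOS, N_STOCKS, ERROR = 1_000, 50, 0.50   # toy scale; the post used 10,000 portfolios

def kelly(p, up, down):
    b = up / down
    return np.clip((b * p - (1 - p)) / b, 0, None)

def noisy(x):
    # Analyst estimates are up to ERROR (50%) away from the true values
    return x * rng.uniform(1 - ERROR, 1 + ERROR, N_STOCKS)

wins = 0
for _ in range(N_PORTFOLIOS):
    # "True" two-scenario parameters for each stock
    up_true = rng.uniform(0.10, 0.60, N_STOCKS)      # gain if the thesis works
    down_true = rng.uniform(0.05, 0.40, N_STOCKS)    # loss if it fails
    p_true = rng.uniform(0.40, 0.70, N_STOCKS)       # probability the thesis works

    up_est, down_est = noisy(up_true), noisy(down_true)
    p_est = np.clip(noisy(p_true), 0.01, 0.99)

    # Rule 1: size by probability-weighted expected return
    er = np.clip(p_est * up_est - (1 - p_est) * down_est, 0, None)
    w_er = er / er.sum() if er.sum() > 0 else np.full(N_STOCKS, 1 / N_STOCKS)

    # Rule 2: size by per-position Kelly fraction
    k = kelly(p_est, up_est, down_est)
    w_k = k / k.sum() if k.sum() > 0 else np.full(N_STOCKS, 1 / N_STOCKS)

    # Realized outcomes are drawn from the true parameters
    outcome = np.where(rng.random(N_STOCKS) < p_true, up_true, -down_true)
    wins += (w_er @ outcome) > (w_k @ outcome)

print(f"Expected-return sizing beat per-position Kelly in {wins / N_PORTFOLIOS:.0%} of portfolios")
```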

Kelly Maximization of Long-Term Geometric Expected Return. I have seen several workarounds that use the Kelly Formula to construct a portfolio, but most focus too heavily on the bet size of each individual investment. If John Kelly were alive today, I imagine he would tell us that the formula is a shortcut and that the more important concept is finding the portfolio that maximizes long-term geometric expected return. That was the assumption I made when I constructed my own Kelly calculator. The first step was scrapping the Kelly Formula and coming up with a way to account for investments with multiple scenarios and losses of less than 100%. I could not figure out a way to produce a closed-form solution, which is one of the best attributes of the Kelly Formula, so I had to create an open-form calculator that solves iteratively using the Solver function in Excel (there is a similar calculator at http://www.albionresearch.com/kelly/). With my new calculator I could create any investment with various economic outcomes and probabilities and derive the bet size that would give me the maximum expected return over the long term (geometric). I made the assumption that I could not bet more than 100% (-100% for shorts). In reality, a fund could leverage investments and receive higher returns, but for this portfolio example I assumed no leverage.
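A minimal sketch of that kind of open-form calculation, assuming a hypothetical three-scenario investment: rather than the closed-form Kelly Formula, it simply searches for the bet size that maximizes the expected log (i.e., long-term geometric) growth rate, capped at 100% per the no-leverage assumption.

```python
# A minimal sketch of the "open-form" calculation described above, for a single
# hypothetical investment with three scenarios and a loss smaller than 100%.
import numpy as np

returns = np.array([0.60, 0.10, -0.40])   # hypothetical bull / base / bear outcomes
probs = np.array([0.30, 0.45, 0.25])      # scenario probabilities (sum to 1)

def expected_log_growth(f: float) -> float:
    """Long-run geometric growth rate when betting fraction f of capital."""
    return float(probs @ np.log1p(f * returns))

fractions = np.linspace(0.0, 1.0, 10_001)             # candidate bet sizes, 0% to 100%
growth = [expected_log_growth(f) for f in fractions]
best = fractions[int(np.argmax(growth))]
print(f"Growth-optimal bet size: {best:.1%}, implied long-run CAGR: {np.expm1(max(growth)):.1%}")
```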

Trial #1. I plotted the Kelly bet for a number of random investments and noticed that the Kelly bet did not match up with the position size I would have expected in the portfolio. This is because the Kelly bet did not consider the rest of the portfolio. However, I did find that expected return was a good predictor of portfolio position size (example below).

We have 3 potential investments with which to build our portfolio. If I look simply at the Kelly Bet, I would max out Investments #1 and #2 because they are 100% versus 80% for Investment #3. But the Expected Return for Investment #3 is higher than for #1 and #2. This is the point where I hypothesized that I could compute the Maximum CAGR (Compound Annual Growth Rate) by investing the Kelly Bet of each investment, calculating the CAGR (14%, 15%, 16%), and then using the CAGR to determine the correct position size. This certainly seemed to point in the right direction, but it still did not feel right to have them so closely sized. I decided that the ultimate method would be to skip the calculation of individual bets and instead calculate which bets would maximize the expected return of the portfolio (the Uncorrelated Portfolio bet size in the chart). As you can see, to maximize the portfolio's return, the best allocation was to bet 43% on Investment #1, 6% on Investment #2, and 51% on Investment #3. This array of bets is how I came to the conclusion that the original Expected Return was a great predictor of portfolio position size.

Correlation in Trial #1. But then I thought about the Central Limit Theorem and realized that diversification makes a difference when assets are uncorrelated. But what if they are correlated? The benefit surely must be reduced. I subsequently built a string of payoffs where the gains and losses of Investments #1 and #3 occur in the same period (#2 doesn't matter because it always goes up 15%). When I recalculated the Correlated Portfolio position sizes, I got 0%, 48%, 52%: no exposure to Investment #1 in the Correlated Portfolio when the Uncorrelated Portfolio suggested a 43% position size.
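Here is a hypothetical sketch of that portfolio-level optimization over joint scenario arrays. The three investments, their payoffs, and the scenario probabilities are made up for illustration and will not reproduce the 43%/6%/51% or 0%/48%/52% figures from the chart; the point is only that the same optimizer allocates very differently once the payoff arrays of two investments are forced to move together.

```python
# A hypothetical sketch of the portfolio-level optimization in Trial #1: choose weights
# that maximize expected log wealth over a joint scenario array, long-only, no leverage.
# Investments, payoffs, and probabilities are made up for illustration.
import numpy as np
from scipy.optimize import minimize

def optimal_weights(scenarios: np.ndarray, probs: np.ndarray) -> np.ndarray:
    """Weights (each 0-100%, summing to at most 100%) maximizing expected log wealth."""
    n = scenarios.shape[1]

    def neg_growth(w):
        return -probs @ np.log1p(scenarios @ w)

    no_leverage = {"type": "ineq", "fun": lambda w: 1.0 - w.sum()}
    res = minimize(neg_growth, np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints=[no_leverage], method="SLSQP")
    return res.x

probs = np.array([0.25, 0.25, 0.25, 0.25])

# Uncorrelated: #1 and #3 win/lose in different periods; #2 always gains 15%.
uncorrelated = np.array([[ 0.50, 0.15,  0.45],
                         [ 0.50, 0.15, -0.25],
                         [-0.20, 0.15,  0.45],
                         [-0.20, 0.15, -0.25]])

# Correlated: #1 and #3 win and lose in the same periods.
correlated = np.array([[ 0.50, 0.15,  0.45],
                       [ 0.50, 0.15,  0.45],
                       [-0.20, 0.15, -0.25],
                       [-0.20, 0.15, -0.25]])

print("Uncorrelated weights:", np.round(optimal_weights(uncorrelated, probs), 2))
print("Correlated weights:  ", np.round(optimal_weights(correlated, probs), 2))
```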

This tells me that the correlation inside the array of outcomes has a large bearing on position size. What I need to do is ensure that each array properly matches the inter-correlation among the assets in the portfolio. At this point, I'm still working on that issue, but perhaps a starting point is the historical correlation and beta of each asset to the portfolio and to the other assets. Next, build thousands of hypothetical arrays of returns for each asset based on the scenario analysis. Finally, pick the set of hypothetical arrays that most closely aligns with the inter-correlation of the assets. From there we can iterate position sizes or use an optimization function that finds the portfolio with the maximum CAGR.

Speaking of maximum CAGR, notice how both portfolios have a higher Portfolio CAGR (24.1% and 21.6%) than any of the individual investments (14%, 15%, 16%)? This is the benefit of portfolio construction, which in this case is worth roughly 5 to 10 percentage points of return.

Check back tomorrow for the 2nd Trial and conclusion.

November 29, 2012

Moneyball for Money Managers

Baseball is the birthplace of “Moneyball”. Other sports soon followed once the concept of Moneyball was proven and made public; general managers from basketball to football to soccer to hockey now employ statisticians in their front offices. Drug and energy exploration companies had been playing their own form of Moneyball for years before baseball caught on. Now politics is in the game (see full LA Times article here). The article gives a glimpse of how the Obama campaign used its own brand of Moneyball to help win the election, an approach that will change campaign strategy forevermore.

“… the goal was to rank individual voters in the swing states based on their likelihood of voting for the president or of being persuaded to vote for him, to volunteer for his campaign and to vote early. The Obama campaign developed separate models for each.”

What strikes me about this article isn’t that politics is using Decision Theory to win elections; it’s that money management, in general, still does not use it. In baseball, Moneyball showed how picking players with the highest on-base percentage improved team success. In money management, a fund must ensure its best ideas are its largest positions to improve success. But most firms don’t effectively measure idea quality. They don’t “rank individual voters.” They don’t compare the “on-base percentage.” If they did, they would have a spreadsheet with every investment idea ranked by Expected Return and scored by other qualitative and quantitative factors. Instead, most firms just use instinct to manage the portfolio. In fact, most firms don’t have a systematic way to size positions. I can hear the drug and energy geeks now: “and they get paid the big bucks.”

Alpha Theory is “Moneyball” for asset managers. Alpha Theory’s software captures a firm’s price targets and probabilities, then highlights the position sizes that are over- or under-weighted based on those targets. It factors in liquidity, volatility, time horizon, sector exposure, etc. to give the manager a repeatable process for sizing positions. This saves the portfolio manager time, reduces emotional decision making, and helps managers stay on top of what their analysts are thinking. Just ask a simple question: What is your 6th largest position? Is it your 6th best idea? What is your upside reward and downside risk? If you don’t know, a little Moneyball could go a long way.
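As a toy illustration of the “rank every idea by Expected Return” discipline (not Alpha Theory’s software), here is a sketch that turns hypothetical price targets and probabilities into probability-weighted expected returns and ranks them; all names and figures are invented.

```python
# A toy illustration of ranking ideas by probability-weighted expected return,
# not Alpha Theory's software. Names, price targets, and probabilities are invented.
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    price: float      # current price
    upside: float     # price target if the thesis works
    downside: float   # price target if the thesis fails
    p_win: float      # analyst's probability that the thesis works

    @property
    def expected_return(self) -> float:
        up = self.upside / self.price - 1
        down = self.downside / self.price - 1
        return self.p_win * up + (1 - self.p_win) * down

ideas = [Idea("Idea A", 50.0, 70.0, 40.0, 0.60),
         Idea("Idea B", 20.0, 24.0, 18.0, 0.70),
         Idea("Idea C", 100.0, 150.0, 60.0, 0.50)]

# The ranking to check position sizes against ("is your 6th largest position your 6th best idea?")
for idea in sorted(ideas, key=lambda i: i.expected_return, reverse=True):
    print(f"{idea.name}: probability-weighted expected return {idea.expected_return:+.1%}")
```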

September 30, 2012

The End of the World as We Know It?

I happened to read two articles in the same week talking about the end of investing as we know it. The articles, “Why It is No Longer a Good Idea to Be in The Investment Industry” and “What Business Is Wall Street In?” were written by two authors, Nassim Taleb and Mark Cuban respectively, whom most would call “love him or hate him” kind of guys. While they are known for stirring up controversy, some of their points seem reasonable.

In “Why It is No Longer a Good Idea to Be in The Investment Industry,” Nassim Taleb explains that as the number of participants in investing has increased, the number of “lucky fools” has increased as well, making it harder for “skilled investors” to stand out from the sheer number of “lucky fools” because there are only so many possible allocations. Additionally, he shows empirically (although a bit over my mathematical head) that in fat-tailed distributions (like the financial markets) the phenomenon is amplified, making it even more difficult for “skilled investors” to outperform the “lucky fools.” There are assumptions that may nullify the proof in real life, like allocators’ decision-making processes and the assumption that most investors are random returners rather than negative returners (worse than random), which opens the door wider for skilled investors. But no matter how you cut it, his overall point stands: as the number of investors increases, the “lucky fools” will grow in number and make it harder for allocators to see past them to the truly skilled players.

The “What Business Is Wall Street In?” article by Cuban describes some of the reasons why investing has moved away from its roots of capital creation and into a market of quick-reflex participants (computers) looking for patterns and temporary arbitrage in the transfer of capital during the capital creation process. This story isn’t new and I’ve heard it repeated by almost every manager I work with, but I do like Cuban’s analogy of Traders and Hackers. The similarities are striking, especially their rationalization of why they do what they do. Also, Cuban has a prescription for getting parts of the market back under control. It sounds interesting, but policy certainly isn’t my forte.

So I come away from both articles agreeing that there are distinct challenges for skilled investors from “lucky fools” and “hackers.” But I’m interested in knowing your thoughts. Does the sheer number of investors make it impossible for the skilled investor to outperform the lucky fools and subsequently get funding? Is there a benefit to the system from high frequency trading? If there is no benefit, is it ok to regulate or curtail it? If so, is a tax on high frequency trading the right call, or is there some other prescription? This is a healthy conversation for those who like to think of themselves as “skilled investors.” It seems like something has to give, and I really don’t want it to be a sacrifice of those that care about cash flow for the sake of those that care about quote flow.

July 19, 2012

Investing on Empty

How do we make portfolio decisions? How do we ultimately make the decision to buy IBM and not Dell? To make Google a 4% position and Microsoft a 2% position? To add to Apple and trim some Oracle? Could we write down or explain to someone exactly how we made these decisions? Most likely not, because they required a good dose of experience-based intuition. Well, a good article titled ‘Tired Investing’ by our friends at Cabot Research walks through how our decisions are influenced by our state of mind.

Excerpt from ‘Tired Investing’ by Cabot Research:

    While your intuition and judgment may be spectacular they rely on a limited and easily depleted reservoir of psychic energy and this presents serious risk that most managers do not factor into their decision making. Baumeister brings this point home with the following: "The ease with which we have been able to produce ego depletion using small laboratory manipulations suggests that the extent of the resource is quite limited, which implies that it would be seriously inadequate for directing all of a person's behavior, so conscious, free choice must remain at best restricted to a very small proportion of human behavior."

    For portfolio managers the lesson is clear: Your best may be terrific but you can't count on being your best at every junction without help. And the support top managers employ is their investment process. Knowing when to slow down, think twice, benchmark to outside views and seek independent input are some of the process elements that guard against ego depletion, as well as a host of other emotional and cognitive biases.

    Investment decisions can pop up at any time throughout the day. What precedes them, we now better understand, can dramatically alter the choices made and their impact on performance.

    Experience enables managers to develop judgment and self awareness that can help counterbalance the effects of ego depletion. Adherence to a well calibrated investment process, however, can guide frenetic inter-day decisions towards choices that more often reflect intention, consistency and quality. The alternative may lead you towards tired performance.

Interestingly, the best method to counteract the poor decisions that come from mental exhaustion is to create a process for decision making. This is Alpha Theory’s bread-and-butter. “Why did I decide to buy IBM and not Dell?” – well, it is right there in Alpha Theory. “Why is Google a 4% position and Microsoft a 2%?” – that’s also right there in Alpha Theory. Process isn’t a panacea for making the right decision, but it sure can help avoid the wrong decision.

November 04, 2011

The Cup is Half Full – Kahneman Style

After suggesting a solution and getting no negative feedback - “Gentlemen, I take it we are all in complete agreement on the decision here…Then I propose we postpone further discussion of this matter until our next meeting to give ourselves time to develop disagreement and perhaps gain some understanding of what the decision is all about.” – Alfred P. Sloan, former Chairman of General Motors

 

Alpha Theory revolves around questioning our own decision making. Much of the proof that decision making is fraught with foibles comes from the Nobel Prize-winning work of Daniel Kahneman and the late Amos Tversky. Kahneman has an article out on Bloomberg.com about the optimistic bias called “Bias, Blindness and How We Truly Think” that is an important read for anyone managing money. Actually, it’s a good read for anyone who ever makes important decisions.

One of my favorite parts of the article is actually an idea from an “adversarial collaborator” of Kahneman’s named Gary Klein. His idea is the premortem:

When the organization has almost come to an important decision but hasn’t committed itself, it should gather a group of people knowledgeable about the decision to listen to a brief speech: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome has been a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”

As a team converges on a decision, public doubts about the wisdom of the planned move are gradually suppressed and eventually come to be treated as evidence of flawed loyalty. The suppression of doubt contributes to overconfidence in a group where only supporters of the decision have a voice. The main virtue of the premortem is that it legitimizes doubts.

Furthermore, it encourages even supporters of the decision to search for possible threats not considered earlier. The premortem isn’t a panacea and doesn’t provide complete protection against nasty surprises, but it goes some way toward reducing the damage of plans that are subject to the biases of uncritical optimism.

This concept is very similar to the risk target required in each Alpha Theory analysis. The evaluation of downside is critical and it is important to create a process to make sure that an optimistic bias does not overwhelm logical decision making.

August 18, 2011

SuperFreakonomics on the Macro Economy

“Freakonomics” is a New York Times best-seller written by Steven Levitt, a professor of economics at Chicago, and Stephen Dubner, a New York Times writer, that discusses how our common assumptions about what is true are often wrong. The book was a smash hit and was followed up by “SuperFreakonomics.” In “SuperFreakonomics,” Levitt, the economist of the pair, explains why they do not discuss macroeconomics in the book:

“Mainly because the macroeconomy and its multitude of complex, moving parts is simply not our domain. After recent events, one might wonder if the macroeconomy is the domain of any economist. Most economists the public encounters are presented as oracles who can tell you, with alluring certainty, where the stock market or inflation or interest rates are heading. But as we’ve seen lately, such predictions are generally worthless. Economists have a hard enough time explaining the past, much less predicting the future. (They are still arguing over whether Franklin Delano Roosevelt’s policy moves quelled the Great Depression or exacerbated it!) They are not alone, of course. It seems to be part of the human condition to believe in our own predictive abilities-and, just as well, to quickly forget how bad our predictions turned out to be.”

This is a good reminder for investors to focus on their core competencies. For a stock picker, macro forecasting is probably not a productive use of energy. It seems more manageable to properly gauge the prospects of an individual company than those of the overall economy. As Bill Ackman of Pershing Square says:

"We spend little time trying to outguess market prognosticators about the short-term future of the markets or the economy for the purpose of deciding whether or not to invest.  Since we believe that short-term market and economic prognostication is largely a fool's errand, we invest according to a strategy that makes the need to rely on short-term market or economic assessments largely irrelevant."

Of course the macroeconomy is critical to the success of any individual company. It’s just that the market and economic direction are multi-variable equations with thousands of inputs.  You can find two Nobel Laureate economists with well-defended theses for divergent directions of the US economy.  If they cannot figure it out, why should you try?  Mental capacity is a precious commodity and should be directed to what is knowable. This does not mean we ignore where we currently stand economically. Just be mindful to redirect your energy if you slip towards macro forecasting.

July 19, 2011

Modern Portfolio Theory Stacked the Deck

I have often made the case to clients that diversification and volatility are portfolio management distractions. Not because they are uniformly irrelevant, but because industry dogma gives them a status well above their merit. Our industry uses diversification and volatility as yardsticks of comparison, so funds are naturally incentivized to alter their behavior to maximize their performance on these measures. If a potential investor gauges a fund’s performance using return per unit of volatility, Value-at-Risk, Beta, tracking error, and diversification – guess what happens? You get lots of fund managers building portfolios with too many positions and avoiding volatility. It is not surprising, then, that our industry has been increasingly dominated by high-diversity / low-volatility funds since the advent of Modern Portfolio Theory (the average fund has 140 positions, per a study by Pollet and Wilson).

Scott Vincent of Green River Asset Management recently published an article titled “Is Portfolio Theory Harming Your Portfolio?” In it, he describes how Modern Portfolio Theory (the Efficient Frontier – Markowitz, CAPM – Sharpe, and the Efficient Market Hypothesis – Fama) has changed the shape of the investment industry from stock-picking funds to super-diversified quantitative or quasi-quantitative funds. Volatility gained acceptance as the standard measure of risk for one primary reason: it was measurable (see answers to questions 2 and 10 in the “Great Investor Mentality Quiz”). But being measurable doesn’t make it right. In “Is Portfolio Theory Harming Your Portfolio?”, Vincent explains:

Amazingly enough, there’s not much empirical “proof” as to why we should use variance as a measure of risk, yet it plays a critical role in almost all large financial transactions. It seems that academicians needed a way to quantify risk to fit mathematical models and they grabbed variance, not because it described risk very well, but because it was the best quantitative option available. But just because it is convenient, and it carries a certain intuitive appeal, doesn’t make it right.

If volatility is not a very good proxy for risk, then are our historical judgments of active managers wrong? Yes. Do we need to change the way that we judge managers? Yes. In fact, there are half a dozen “risks” that are more important than volatility. I’m often surprised by investors who care more about volatility than leverage. I certainly believe the latter is more indicative of potential risk (i.e. the Asian, Mexican, and Russian financial crises, S&L, Junk Bond, and Sub-Prime Mortgage, et al. – see the article comparing Sub-prime and Junk Bond). Volatility can be tough to stomach, but potential downside loss is true risk. As Vincent says (the concept is described in the “Eight Mistakes Money Managers Make” presentation):

Risk is often in the eye of the beholder. While “quants” (who rely heavily on MPT) might view a stock that has fallen in value by 50 percent over a short period of time as quite risky (i.e. it has a high beta), others might view the investment as extremely safe, offering an almost guaranteed return. Perhaps the stock trades well below the cash on its books and the company is likely to generate cash going forward. This latter group of investors might even view volatility as a positive; not something that they need to be paid more to accept.

Recognize that there is more than one measure of risk and that volatility is not a synonym for risk. Risk is a combination of downside potential, liquidity, time horizon, sector exposure, leverage, market correlation, and volatility (and probably several more). Just like a pilot cannot look at one gauge to fly the plane, a portfolio manager cannot look at one measure of risk to manage a portfolio.

Another major point of “Is Portfolio Theory Harming Your Portfolio?” is that diversification is not only overrated, it becomes corrosive at a certain point:

The appeal to diversification, according to quantitative finance, is the idea that it allows us to enjoy the average of all the returns from the assets in a portfolio, while lowering our risk to a level below the average of the combined volatilities. But since we can’t call volatility risk and we can’t reliably predict volatilities or correlations, then how can we compile diversified portfolios and claim they are on some sort of efficient frontier? These super-diversified portfolios may be inefficient -- it may be possible to earn higher rates of return with less risk. It may be that by combining a group of securities hand-selected for their limited downside and high potential return, the skilled active manager with a relatively concentrated portfolio has greater potential to offer lower risk and higher returns than a fully diversified portfolio.

Even if we were to make volatility reduction paramount, the case for extreme diversification does not hold true. A study by Fisher and Lorie concludes that, “Roughly 40 percent of achievable reduction is obtained by holding two stocks; 80 percent, by holding eight stocks; 90 percent by holding 16 stocks.” Other studies by authors such as William F. Sharpe, Henry A. Latane' and Donald L. Tuttle make similar statements.* Needless to say, it is hard to argue that 100 positions are necessary for volatility reduction.
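A back-of-the-envelope sketch of why that curve flattens so quickly, using the textbook formula for the volatility of an equal-weight portfolio of identically volatile stocks with a single common pairwise correlation. The 30% stock volatility and 0.25 correlation are assumptions chosen purely for illustration, not the parameters of the studies cited above.

```python
# A back-of-the-envelope sketch of why the diversification curve flattens so quickly.
# Assumes equal-weight positions, identical stock volatility, and one common pairwise
# correlation; the 30% volatility and 0.25 correlation are illustrative assumptions.
import numpy as np

sigma, rho = 0.30, 0.25
sigma_floor = sigma * np.sqrt(rho)        # volatility that diversification can never remove

def portfolio_vol(n: int) -> float:
    """Volatility of an equal-weight portfolio of n such stocks."""
    return sigma * np.sqrt(1.0 / n + (1.0 - 1.0 / n) * rho)

for n in (1, 2, 8, 16, 50, 100):
    captured = (sigma - portfolio_vol(n)) / (sigma - sigma_floor)
    print(f"{n:>3} stocks: portfolio vol {portfolio_vol(n):5.1%}, "
          f"{captured:4.0%} of achievable volatility reduction captured")
```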

But honestly, the more damning case against super-diversification is time:

A fund manager’s job is to identify assets that are priced “inefficiently,” where the market has ostensibly made an error and a stock is available at a level that allows for relatively little risk versus expected return. But finding inefficiencies and maintaining a portfolio is difficult work and requires resources (a manager’s time and brain power, among the most important of these). Resources are not unlimited (most importantly a manager’s time). Therefore, the amount of resources devoted to each specific investment varies inversely with the amount of investments owned in the portfolio. The more positions added to the portfolio, the less likely a manager is to capture these difficult-to-find inefficiencies because he/she has less time and other resources available to do so.

I have used the concept of “mental capital” for years with clients. I ask the client how many hours a month it takes an analyst to cover an investment. For example, let’s say 10 hours. Then we’ll also assume that for each existing investment the analyst spends another 5 hours working on new ideas being considered for the portfolio. That works out to 15 hours for each portfolio position. If we assume each analyst works about 150 hours a month (excluding time staring at the P&L and filling out March Madness pools), that means each analyst can cover about 10 names, with 10 more on the watchlist. So a fund with a team of four can reasonably cover 40 names. But a majority of funds end up with 80 positions, meaning that something is being sacrificed for the sake of diversification. More than likely, the portfolio ends up with a mix of insignificant positions that take just as much time as the “core” positions but have very little impact on the portfolio’s returns. Very rarely will the 50bps position have a large impact on portfolio returns. If it does not matter, get rid of it, because it is a drain on mental capital.

All of these facts lead to the question: how do low-diversity / high-volatility portfolios perform? Fairly well, in fact, granted that we do not have a good way to “risk adjust” portfolio returns given that we are no longer using volatility. However, Vincent highlights, “Multiple studies indicate that funds which are more actively managed, or more concentrated, outperform indexes and do so with persistence (Kacperczyk, Sialm and Zheng (2005), Cohen, Polk, Silli (2010), Bakks, Busse, and Greene (2006), Wermers (2003), and Brands, Brown, Gallagher (2003), Cremers and Petajisto (2007)). While we need to acknowledge that because we can’t measure risk, these studies, like any empirical work, need to be taken with a grain of salt. It is nonetheless interesting that if we compare the studies that focus on teasing apart the influence of more active, concentrated management, to the broad all-inclusive studies, there’s a large change in the signal received.”

Funds with the highest Active Share [most active management] outperform their benchmarks both before and after expenses, while funds with the lowest Active Share underperform after expenses …. The best performers are concentrated stock pickers ….We also find strong evidence for performance persistence for the funds with the highest Active Share, even after controlling for momentum. From an investor’s point of view, funds with the highest Active Share, smallest assets, and best one-year performance seem very attractive, outperforming their benchmarks by 6.5% per year net of fees and expenses. – Cremers and Petajisto (2007)

Basically, volatility is a distraction, diversification is a drag, and active concentrated management is a superior method of investing. That is music to the ears of the Graham & Dodd’ers out there. In a world where the dogma is against you, hold fast to the knowledge that the truth (i.e. common sense) is on your side.

Finally, I have saved my favorite quote of Mr. Vincent’s for last because it describes Alpha Theory perfectly, “The degree of concentration in a fund should reflect the confidence a manager has in the inefficiencies found, and the weight of those investments should reflect the probability of success as well as the level of asymmetry present in the prospective return profiles of the assets.” Right on Mr. Vincent, write on.

 

*If volatility reduction were the game, then holding 8 positions would get you almost home. But that would mean the average position size would be 12.5%. I believe diversification can be approached from another angle that involves downside tolerance. Start by asking: what is the maximum position size I am willing to take? Let’s say it is 6% of fund value. If the minimum position size is 1% and position sizes are scaled linearly, the average position size is the midpoint of 1% and 6%, or 3.5%, so a 100% gross exposure fund would hold roughly 29 positions (100% / 3.5% ≈ 29).
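The same arithmetic as a quick sketch, under the footnote's assumptions (position sizes scaled linearly between a 1% minimum and a 6% maximum, 100% gross exposure):

```python
# The footnote's arithmetic as a quick sketch, under its stated assumptions
# (positions scaled linearly between a 1% minimum and a 6% maximum, 100% gross exposure).

max_size, min_size, gross_exposure = 0.06, 0.01, 1.00

avg_size = (max_size + min_size) / 2        # midpoint of a linear scale: 3.5%
positions = gross_exposure / avg_size       # roughly 29 positions

print(f"Average position size: {avg_size:.1%}")
print(f"Approximate number of positions: {positions:.0f}")
```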

June 22, 2011

Debunking Dividend Dogma

As anyone who reads my blog knows, I believe there is a general misunderstanding of dividends in our industry (Institutional Investor Article, Article with Dr. Laffer, Mauboussin Article). My basic point is that you cannot create value by paying a dividend. At best, dividends are a zero-sum equation; once you include taxes, dividends are actually a net drag on investors. Based on this, I believe that companies with excess cash should repurchase shares instead of paying dividends.

Two recent articles, “Buyback or Dividends?” by Stephen Taub in Institutional Investor and “Understanding Compounding: Berkshire’s Not-So-Hidden Dividend Contrarian Secret” by my friend Arthur Clarke, give additional evidence supporting buybacks over dividends. “Buyback or Dividends?” summarizes a recent S&P article by Todd Rosenbluth, stating that companies with a disciplined buyback program outperformed dividend-paying companies over the past three years. I don’t put much stock in these results because of the three-year time horizon and the assumption that a cash distribution policy is the primary driver of returns when, in reality, returns are much more complicated (see “A Mathematician Reads the Newspaper” or “How to Lie with Statistics” for a myriad of reasons why this generalization of the study is inaccurate). That being said, I believe there are some salient points, including “What counts is the amount of a company’s cashflow distribution, not whether it is paid out in dividends or buybacks” and the conclusion that the net effect on share value is zero. Matter cannot be created or destroyed, and there is no reason to believe that dividends and buybacks are excluded from this physical tenet.

The more profound article is “Understanding Compounding” by Arthur Clarke. Some of the logic will be familiar to frequent readers of the Alpha Theory blog because it is similar to a piece by Michael Mauboussin, but Mr. Clarke brings two very interesting analogies to bear. The first compares dividends to the cash flows from a bond. This allows for easy compartmentalization of the dividend stream to calculate a yield to maturity, which shows the deleterious impact of taxes and of poor reinvestment of cash flows. The second analogy equates a dividend to a zero-cost-basis sale of shares. This is a great concept that brings home the impact of taxes on dividends. Both analogies contradict the fallacy that dividends are a good use of company cash.

For those of you still on the fence about dividends, I understand. Dividend dogma is powerful. Just ask the old baseball managers who roundly disregarded Billy Beane when he showed them a better way to pick players (Moneyball article). If you want to be enlightened, take some time and read these articles. Or better yet, analyze a dividend-paying company using Enterprise Value and I believe you will agree that a transfer of cash from one pocket to another does not create value.
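A stylized worked example of that Enterprise Value argument, under simplifying assumptions (no signaling effects, no frictions, and a hypothetical 15% dividend tax; all figures are invented): both the dividend and the buyback leave enterprise value unchanged, and only the taxed dividend leaves shareholders worse off in aggregate.

```python
# A stylized worked example of the Enterprise Value point above. Simplifying assumptions:
# no signaling effects, no frictions, a hypothetical 15% dividend tax, invented figures.

market_cap, cash, debt, shares = 1_000.0, 200.0, 0.0, 100.0
dividend_tax = 0.15
payout = 100.0

ev_before = market_cap + debt - cash
print(f"Before:   EV = {ev_before:.0f}, price/share = {market_cap / shares:.2f}")

# Case 1: pay out 100 of cash as a dividend
mc_div = market_cap - payout                          # equity value drops by the cash paid out
ev_div = mc_div + debt - (cash - payout)              # enterprise value is unchanged
holder_total = mc_div + payout * (1 - dividend_tax)   # remaining shares + after-tax cash
print(f"Dividend: EV = {ev_div:.0f}, total shareholder value = {holder_total:.0f}")

# Case 2: spend the same 100 repurchasing shares at the current price
price = market_cap / shares
mc_buy = market_cap - payout
shares_left = shares - payout / price
ev_buy = mc_buy + debt - (cash - payout)              # enterprise value is unchanged here too
print(f"Buyback:  EV = {ev_buy:.0f}, price/share = {mc_buy / shares_left:.2f} (no tax event for ongoing holders)")
```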

April 12, 2011

Understanding Dividends - Michael Mauboussin Edition

As many who read my articles know, I am a big fan of Michael Mauboussin and not a big fan of dividends (more specifically, the industry dogma surrounding dividends). So it was nice to read an article by Mr. Mauboussin that coherently makes the case that our industry looks at dividends through distorted lenses. From Mr. Mauboussin's recent article, "The Real Role of Dividends in Building Wealth":

"If you listen to the press or read missives from investment firms, you might conclude that dividends play a prime role in capital accumulation. In fact, well-known strategists have pointed out that dividends have accounted for 90 percent of equity returns over the past century. This statistic is potentially very misleading and warrants further examination. Here's the ending without the plot: price appreciation is the only source of investment returns that increases accumulated capital over time.

The cause of the confusion is that analysts do not distinguish between the equity rate of return and the capital accumulation rate. Depending on the choices of the shareholder, the rates can be very different. Understanding the distinction is essential for assessing past results and for thinking about satisfying future financial obligations."

My contention has always been simple: "dividends do not create value." It was my belief that for non-taxable accounts dividends are fairly neutral, but Mr. Mauboussin makes a compelling case that dividends may even be a drag in non-taxable accounts because most of us do not actually reinvest the full dividend back into the equity. His analysis, "The Real Role of Dividends in Building Wealth," is thought-provoking and definitely worth a read, as are most of his writings, which can be found on the Legg Mason website.

If you are looking for some of my other articles on dividends, you can find them at:

  1. Institutional Investor
  2. Dr. Laffer Article

March 15, 2011

Will We Ever Learn?: The Scary Similarities of the Subprime Mortgage and Junk Bond Crises

I have never claimed to be a market historian, but the obvious similarities between the Subprime and Junk Bond crises are staggering even to the casual observer. Maybe it is the confluence of my recent reading of Sorkin’s “Too Big to Fail”, Lewis’s “The Big Short”, and Klarman’s “Margin of Safety” that brings the parallels into clear focus, but I am floored by our ability to have two nearly identical crises in the course of two decades.

While reading chapter 4 of “Margin of Safety” I turned to my disinterested wife after each page and proclaimed, “this junk bond thing is almost identical to the subprime crisis.” The narrative reads like plagiarism of the same financial horror story (see the table below). In the search for higher yields, investors relaxed standards and issued debt to people (subprime) / companies (junk bond) that had no ability to pay back their obligations. Diversification and low correlation amongst low-quality borrowers were used as justification for reducing the risk inherent in individual risky loans. Due to the demand for higher-yielding assets, investment banks concentrated human and financial capital at staggering rates into packaging and selling subprime loans / junk bonds.

Capital available to finance these shaky deals increased with the ability to resell structured products like mortgage-backed securities and collateralized debt obligations (subprime) / collateralized bond obligations (junk bond). Retail banks and institutional investors (subprime) / thrifts and savings and loans (junk bond) created ready capital sources to soak up the buy side of any high-yielding deal. CDO-focused funds (subprime) and high yield mutual funds (junk bond) added further fuel to the growing pool of capital being handed to undeserving borrowers.

An escalation of creative financing was needed to allow ever-lower quality standards, including Pick-a-Pay, Alt-A, No Doc, and Interest-Only loans (subprime) / zero-coupon and pay-in-kind bonds (junk bond). The ratings agencies had to play dumb, or be dumb, to allow packaged subprime mortgages and packaged junk bonds to be magically rated investment grade. To justify these ratings they used historical models that assumed house prices could not fall (subprime) or assumed historical junk bond default rates and no refinancing issues (junk bond). Finally, all of these lending machines were picking up speed even as the empirical evidence flew in the face of anyone willing to look: subprime defaults grew from 2005 onward while MBS, CDO, and CDS prices stayed stable (subprime), and junk defaults rose in the late 1980s even as the pace of new junk deals accelerated (junk bond).

Is our memory so short that we cannot remember the financial chaos created by the junk bond market in the 1980s? Some remembered: there were many smart investors who made the connection and placed substantial bets on how the subprime story was going to end. Many issues coalesced to allow both bubbles to form, and I certainly do not have the prescription to prevent it from happening again, but the first place I would focus my attention is the flawed incentive structure that paid the participants in the junk bond market to make foolish bets. The incentive to take outsized risk for short-term gain has not changed substantially in the past 20 years and has probably become even more acute with the increase in financial engineering and the repeal of Glass-Steagall in 1999. As my friend Dr. Art Laffer says, “Incentives are the key to understanding economic behavior.” Maybe we should stop paying bonuses on this year’s returns and instead pay a three- or five-year rolling percentage of returns. That could discourage some of the short-termism that fuels financial crises.

We are just fortunate the Credit Default Swap market was nascent in the 80s or the Junk Bond crisis would have been compounded like the Subprime Crisis of this decade. Will we ever learn?

Similarities of the Subprime and Junk Bond Crises

Junk Bond Market (1980s) | Subprime Market (2000s)
Search for higher yielding assets | Search for higher yielding assets
Issuance of debt to companies that did not have the cash flow to pay back obligations | Issuance of debt to people that did not have the cash flow to pay back obligations
Thrifts and savings and loans willing to invest in junk bond backed obligations (over 1000 banks failed [1]) | Retail banks and institutional investors willing to invest in subprime backed obligations (230 banks have failed to date [2])
Investment banks deploy substantial human and financial capital towards junk bond market | Investment banks deploy substantial human and financial capital towards subprime market
Hypothetical diversification and low correlation of underlying loans created perception of lower risk securities | Hypothetical diversification and low correlation of underlying loans created perception of lower risk securities
Structured products like collateralized bond obligations (CBO) to finance further investment | Structured products like mortgage-backed securities (MBS) and collateralized debt obligations (CDO) to finance further investment
High yield mutual funds created additional liquidity | MBS and CDO focused funds created additional liquidity
Disregard for empirical evidence: junk default rates increasing while new issuance of junk bonds accelerating | Disregard for empirical evidence: sub-prime default rates increasing but MBS, CDO, and CDS prices remaining stable
Creative financing: Zero Coupon and Pay-in-Kind | Creative financing: Pick-a-Pay, Alt-A, No Doc, Interest Only, Option ARMs
Issuers paying ratings agencies for ratings on new issues | Issuers paying ratings agencies for ratings on new issues
Ratings agencies allowing junk bonds to be packaged together to create investment grade securities | Ratings agencies allowing subprime loans to be packaged together to create investment grade securities
Ratings agencies used historical models without a scenario for weak economy, no refinancing, and default rates higher than historical levels | Ratings agencies used historical models without a scenario of declining house prices

[1] Many of the bank failures of the 80s and 90s were due to bad commercial and real estate loans, not just junk bonds

[2] The full impact of commercial loan losses has not been realized at this point