


Alpha Theory Blog - News and Insights

31 posts categorized "Risk-Adjusted Return"

June 21, 2016

How Good Are My Analysts? Building a Better Hedge Fund Through Moneyball & Superforecasting

Traditionally, measuring hedge fund analyst skill has been an opaque process mired in ambiguity and subjectivity. It is often misconstrued and tainted by portfolio manager influence in the form of sizing decisions, liquidity constraints and other non-analyst determinants. But, in the same way Moneyball revolutionized evaluating baseball player value by prioritizing on-base percentage over batting average, Alpha Theory has distilled the key indicator of predictive aptitude: the Alpha Theory Accuracy Score, invented to introduce radical transparency into the rating of forecasting skill for hedge fund analysts.

P&L is Yesterday’s Batting Average

To extend the Moneyball analogy: the quantitative disruption of baseball player evaluation changed the way players are paid by isolating the skill that contributes most to team wins. Using that data, managers now pay athletes in proportion to the amount of that winning skill they individually possess. As such, the key metric for baseball player value evolved from batting average to the more predictive on-base percentage, or OBP.

Specifically, OBP has a 92 percent correlation with runs scored compared to batting average's 81 percent, making it more predictive. OBP's 44 percent year-to-year correlation is also more persistent than batting average's 32 percent. That predictive reliability and performance consistency make OBP a superior metric for forecasting wins for baseball teams. OBP's disruption of batting average is an apt metaphor for the way Alpha Theory's Accuracy Score will transform analyst ranking and assessment today.

In 2016, analysts are still primarily rated by the profits and losses their investments generate for the fund, or P&L.  But making money on an investment is a misleading measure of analyst skill.  Beyond its tendency to be distorted by portfolio manager discretion, P&L performance, both good and bad, often masks the integrity and quality of investment processes.  Thus, P&L often misleads portfolio managers into thinking lucky analysts are actually skilled and vice versa.

For example, take these two analysts:

[Table: two analysts with identical forecasts but very different P&L]

Looking at the table above and using P&L to measure skill, Analyst #1 would be exceptional and Analyst #2 would be sub-par.  But Analyst #1 and #2 had the same forecasts, so their forecasting skill is actually identical.  P&L does not translate into forecast skill because analysts do not have ultimate control over position sizing; the portfolio manager does!
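
To make the arithmetic behind this point concrete, here is a minimal sketch (the numbers are hypothetical, not the figures from the original table): two analysts with identical forecasts and identical realized returns end up with very different P&L purely because of PM-assigned position sizes.

```python
# Minimal sketch: identical forecasts, identical realized returns,
# different PM-assigned sizing -> very different P&L. Numbers are hypothetical.

positions = [
    # (ticker, realized_return)
    ("A", 0.20),   # both analysts forecast and realize +20%
    ("B", -0.10),  # both analysts forecast and realize -10%
]

# Position sizes (as % of fund) are set by the portfolio manager, not the analyst.
sizes_analyst_1 = {"A": 0.08, "B": 0.01}   # PM sized the winner large
sizes_analyst_2 = {"A": 0.01, "B": 0.08}   # PM sized the loser large

def pnl(sizes):
    """P&L contribution in % of fund: sum of size * realized return."""
    return sum(sizes[t] * r for t, r in positions)

print(f"Analyst #1 P&L contribution: {pnl(sizes_analyst_1):+.2%}")  # +1.50%
print(f"Analyst #2 P&L contribution: {pnl(sizes_analyst_2):+.2%}")  # -0.60%
# Same forecasts, same outcomes -- the P&L gap is entirely a sizing (PM) effect.
```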

More Science, Less Art

Inspired by the ideas presented in the groundbreaking book Superforecasting: The Art and Science of Prediction, Alpha Theory's Accuracy Score delivers quantitative insight into a qualitative blind spot for portfolio managers. Authored by Wharton professor Philip Tetlock and Dan Gardner in 2015, Superforecasting applies a Brier Score-inspired approach to quantifying predictive skill. The Brier Score, created by the meteorological statistician Glenn Brier in 1950, measures the accuracy of probabilistic forecasts. Superforecasting applies Brier's methodology only to binary, or yes/no, outcomes.
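
For reference, the Brier Score the book builds on is simply the mean squared error between forecast probabilities and realized outcomes (0 is perfect; higher is worse). A quick sketch with made-up forecasts follows; note this is the generic Brier Score, not Alpha Theory's proprietary Accuracy Score.

```python
# Brier Score for binary (yes/no) outcomes: mean squared error between the
# forecast probability and the realized outcome (1 = happened, 0 = did not).
# 0.0 is a perfect score; 0.25 is what always forecasting 50% earns.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical analyst: four probability forecasts and what actually happened.
forecasts = [0.80, 0.60, 0.30, 0.90]
outcomes  = [1,    0,    0,    1]

print(f"Brier Score: {brier_score(forecasts, outcomes):.3f}")  # 0.125
```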

The New Standard

Alpha Theory’s Accuracy Score is an algorithmic solution that measures analysts’ predictive skill over a 0 - 100 percent range, where 100 is the best.  Scores are calculated on a per-forecast basis and then averaged per analyst.  The Accuracy Score algorithm transforms point estimate price targets and probability forecasts into an implied probability distribution, enabling each forecast to be independently scored.  By distributing multi-faceted outcomes across a range of probabilities, the Accuracy Score can measure forecasting skill for any price along the distribution.

The distribution of scores across our Alpha Theory clients is shown below.  The results follow a normal distribution, which further validates the Accuracy Score’s efficacy in rating analysts’ ability to forecast future price movements.

[Chart: distribution of Accuracy Scores across Alpha Theory clients]

Good forecasts are the most essential component of fund success and critical when portfolio managers are sizing positions.  Using a data-driven approach to determine which analysts make the best forecasts allows managers to apply those forecasts with greater confidence, leading to better position sizing and superior performance.

The Good Judgment Project

In 2011, the Intelligence Advanced Research Projects Activity, a U.S. government research organization, sponsored a geopolitical forecasting tournament that would span 4 years. The IARPA tournament enlisted tens of thousands of forecasters and solicited more than 1 million forecasts across nearly 500 questions related to U.S. national security.

A group called the Good Judgment Project entered the competition, engaged tens of thousands of ordinary people to make predictions, and won the tournament. The GJP's forecast accuracy was so persistent that IARPA closed the tournament early to focus exclusively on them. In fact, GJP was able to find a select group of "Superforecasters" who generated forecasts that were "30 percent better than intelligence officers with access to actual classified information."

Ways to Improve Forecasting Skill

The main findings of the GJP and the book that followed are especially relevant to investors. The research in Superforecasting indicates that predictive accuracy doesn’t require sophisticated algorithms or artificial intelligence.  Instead, forecast reliability is the result of process-oriented discipline.  

This process entails collecting evidence from a wide variety of sources, thinking probabilistically, working collaboratively, keeping score and being flexible in the face of error. According to the book, the 10 traits that most Superforecasters possess are: 

    1.  Intelligence - above average, but genius isn’t required

    2.  Quantitative - not only understand math but apply it to everyday life

    3.  Foxes, not hedgehogs - speak in terms of possibilities, not absolutes

    4.  Humility - understand the limits of their knowledge

    5.  System 2 Driven - use the logic-driven instead of instinct-driven portion of their brain

    6.  Refute fatalism - life is not preordained

    7.  Make frequent and small updates to their forecast based on new information

    8.  Believe that history is one of many possible paths that could have occurred

    9.  Incorporate internal and external views

    10. Constantly search for ways to improve their forecasting process

Accountability = Profitability

Organizations cannot improve without systematic and data-driven assessments of their personnel.  Take Bridgewater Associates, for example.  One of the primary factors driving the persistent outperformance of Ray Dalio’s storied fund has been the institutional commitment to radical transparency and accountability.  Similarly, Alpha Theory’s Accuracy Score illuminates blind spots and holds analysts accountable through the precise measurement of predictive skill. For funds that lack the time, inclination or internal resources to create their own probabilistic forecast-grading models, Alpha Theory’s Accuracy Score fills the void.

To this end, Alpha Theory is exploring areas of collaboration with the leadership of Good Judgment Inc. (a spin-off from the Good Judgment Project featured in Superforecasting). As the competitive landscape for investment capital tightens, discretionary managers must leverage probabilistic data to survive. Alpha Theory's Accuracy Score is a mission-critical asset that can help funds compete in the current investment landscape, reducing operating inefficiencies and better aligning analyst pay with their intrinsic value to the firm.

May 31, 2016

All in a Day’s Work – Mental Capital Allocation

A Portfolio Manager acts as the final arbiter of what is a good idea and what is not. The Portfolio Manager’s acumen is what investors are ultimately paying for. They need to know each position almost as well as the analyst to be able to properly position it against the rest of the positions in the portfolio. The problem is that most PMs don’t have the time to carry that responsibility because they’re invested in too many positions.

We can prove that PMs don't know their positions well enough by breaking down a PM's time over the course of a year. Let's assume that the average portfolio manager has 2,500 work hours a year (50 weeks times 50 hours). The time available for stock analysis is what remains after subtracting the time spent raising capital, communicating with existing investors, running the business, and staring at the P&L.

If the typical portfolio consists of 100 positions and has 50% turnover, then the PM had to understand 150 positions over the course of the year. Now, let’s assume that a PM evaluates two ideas for each position they put in the portfolio. That doubles the number of positions analyzed to 300.

[Table: annual hours available per portfolio position]

The result is 6 hours (basically a day) per year for each position in the portfolio. A PM is ultimately in charge of answering a simple question, “Do I believe my analyst’s assumptions?” 6 hours is hardly enough time for due diligence of that sort.
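
As a rough sketch of the arithmetic (the split between analysis and non-analysis hours below is an assumption chosen to reconcile the post's 6-hour figure, since the original table did not survive formatting):

```python
# Back-of-the-envelope: how many hours per position does a PM actually have?
# The ~28% carve-out for non-analysis work is an assumption chosen to match
# the post's "6 hours per position" result; adjust it for your own situation.

total_hours        = 50 * 50        # 50 weeks * 50 hours = 2,500 work hours/year
non_analysis_share = 0.28           # raising capital, investor calls, running the business, P&L-watching
analysis_hours     = total_hours * (1 - non_analysis_share)   # ~1,800 hours

portfolio_positions = 100
turnover            = 0.50          # 50% turnover -> 50 new names a year
ideas_per_position  = 2             # two ideas evaluated per position added

positions_analyzed = (portfolio_positions + portfolio_positions * turnover) * ideas_per_position  # 300

print(f"Hours available per position per year: {analysis_hours / positions_analyzed:.1f}")  # ~6.0
```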

Don’t take my word for it. Do the math for yourself:

    1. HOURS: Measure your yearly hours using the steps above

    2. ACTUAL POSITIONS: Measure historical average positions per year

    3. ANALYSIS TIME: Measure the time it takes to properly analyze a company (take an example of work you felt really good about)

    4. THEORETICAL MAX NUMBER OF POSITIONS: Divide #1 by #3

    5. RESULT: If Actual Positions is greater than Theoretical Max Number of Positions, then you have too many.

Self-reflection is a key to success because it shows us the proper steps to improve. Reflect on how you allocate your Mental Capital and take the time to figure out if you have the time.

April 25, 2016

The Misperception of False Precision for Price Targets and Probabilities

“Objectivity is gained by making assumptions explicit so that they may be examined and challenged, not by vain efforts to eliminate them from analysis.” – Richards Heuer, Psychology of Intelligence Analysis.

Some investors chafe at price targets because they smack of “false precision.”  These investors are missing the point because the key to price targets is not their absolute validity, but their explicit nature which allows for objective conversation about the assumptions that went into them.  Said another way, price targets improve the investment process because they foster great questions and force the team to be able to defend the methodology behind their calculations.

If “false precision” is the concern, then probabilities are doubly damned. They not only require precise estimation, but are considerably more subjective than price targets. The problem with this argument is that probability is an assumption in the process even if you don’t make it explicit. No investor believes they’re 100% right, so there is always a chance of success and failure. In an implicit process, probabilities are expressed with words like “pretty confident” or “high likelihood”, but they are still an estimate of probability. One that is easily misunderstood and difficult to judge. For the same reason that you should require explicit price targets, you should require explicit probabilities.

Not only do explicit price targets and probabilities help investors make better decisions in the short term, they also allow firms to use historical analytics to measure where they are making mistakes and find ways to avoid them. Great PMs should be able to leverage information like: my analysts assume they'll be right 74% of the time but are actually only right 51% of the time, so I will ask them to adjust their assumptions accordingly.

To make the differences more concrete, I’ve created an example conversation. Problems are better framed when price targets and probabilities are explicitly defined. Compare these conversations and choose the one that you’d prefer as a portfolio manager (I’ve bolded the differences in the two pitches):

Implicit: I think we should add Baidu to the portfolio because overall earnings are being depressed because their non-search business lines are losing money. The current search business is solid, as Baidu holds a natural monopoly in China like Google does in the U.S.A. and their prospects are even better because their users are rapidly moving from 3G to 4G mobile, which will dramatically increase search demand. Their two money-losing lines are independently worth $10+ billion, but are currently not assigned value. The risk is that they can't monetize the two non-search businesses and they become big money losers. I think there is a high likelihood the stock could double based on the sum-of-the-parts over the next year or so and has limited downside based on net cash and conservative earnings and multiple. The Risk-Reward is compelling.

Explicit: I think we should add Baidu to the portfolio because overall earnings are being depressed because their non-search business lines are losing money. The current search business is solid, as Baidu holds a natural monopoly in China like Google does in the U.S.A. and their prospects are even better because their users are rapidly moving from 3G to 4G mobile, which will dramatically increase search demand. Their two money-losing lines are independently worth $10+ billion but are currently not assigned value. The risk is that they can’t monetize the two non-search businesses and they become big money losers. I think there is a 70% chance the stock could hit $350 based on sum-of-the-parts over the next year and has downside to $150 based on net cash and conservative earnings and multiple. The Probability-Weighted Return is 50%.
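
For illustration, here is how the probability-weighted return in the explicit pitch pencils out. The current share price is not stated in the post; a price near $193 is assumed here because it reproduces the quoted 50%.

```python
# Probability-weighted return for the explicit Baidu pitch.
# Targets and probabilities come from the post; the ~$193 current price is an
# assumption chosen so the result matches the quoted 50%.

current_price = 193.0   # assumed, not stated in the post
upside_target, p_up   = 350.0, 0.70
downside_target, p_dn = 150.0, 0.30

expected_price = p_up * upside_target + p_dn * downside_target     # 290.0
pwr = expected_price / current_price - 1

print(f"Probability-weighted return: {pwr:.0%}")   # ~50%
```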

Your discretion as a portfolio manager is not lessened by explicit assumptions. In fact, explicit assumptions can be examined and challenged in a way that the vagaries of implicit assumptions do not allow. What "high likelihood" means to you may be different from what it means to your analyst. Your interpretation of "limited downside" may be different. Explicit assumptions aren't designed to make you guess at the unknowable. They are designed to make sure the investment case can withstand the stress of being explicitly defined. If it can't, you are investing at your own peril.

This post is an extension of the 2014 post – Explicit Lyrics – Why Implicit Assumptions Are Dangerous.

March 21, 2016

Ruminations on Risk

We present this month an interview with Alpha Theory Advisors head Benn Dunn.  Alpha Theory Advisors is the consulting arm of Alpha Theory, providing investment process engineering and thought leadership, outsourced risk management, research leadership and tactical portfolio management guidance to numerous alternative investment firms currently managing approximately $6Bn in AUM across all asset classes. Prior to joining Alpha Theory, Benn served as the Head of Risk Management at the CR Intrinsic Investors unit at S.A.C. Capital Advisors and Chief Risk Officer at Weiss Multi-Strategy.

  1. What is making this market environment so different from others where mid- to high-single digit pullbacks have occurred? Index performance does not appear outright disastrous but we have heard of many hedge funds seeing drawdowns disproportionate and unexpected relative to their exposure levels and typically conservative stance vis-à-vis directional bets.

Things have been changing rapidly versus last year, in that there are many hedge fund-specific issues, even outside of macro and market dynamics, starting to occur.  We are seeing a perfect storm where multi-manager platforms that tend to run (by mandate) market-neutral had levered up in some cases very substantially, and then saw their risk models come apart given an increasingly damaging – and self-perpetuating – unwind in momentum but also other segments.

Crowded growth names like LinkedIn, Tableau and others during earnings season saw declines of as much as 50% in a day – unprecedented in their stock histories – after reporting ugly results.  A widely circulated note from Cornerstone in January essentially fit this increasingly talked-about narrative (that momentum was set to unwind), and that only contributed to the perfect storm.  The short of it is that many risk models, which had until last week been predicated on being market-neutral, factor-neutral, sector-neutral, etc., no longer held up as such.

When this kind of thing starts to happen, the primary and pervasive response is to cut gross exposure.  A big part of the stealth correction and behind-the-scenes damage in the early part of this year can be attributed to reductions in gross, and the result was that crowded names with “small exits” became very dangerous.

  2. What advice, as a result, are you giving clients?

While the contrarian in me suggested some buying of beaten-up but quality assets, I also suggested clients be somewhat defensive, and indeed keep gross exposure in check, for a few reasons.  There is still a very unclear macro environment at the moment, whether it’s the Fed, China, energy company debt problems, Mid-East geopolitics, etc.  The list of reasons the market could go down remains much longer than the list of reasons the markets should go up.

In addition, asset allocators, who tend to avoid over-staying their welcomes and sometimes redeem first and ask questions later, seem in some cases to be pulling out of the multi-manager platforms that were supposed to behave more neutrally. That has added to what has seemed, for too much of this year, like a self-perpetuating risk environment, and to the reduction in gross exposure that can have knock-on effects. One note I saw out of Morgan Stanley in February highlighted that gross exposure started the year at the highest it had been since 2008. And yet we have been hearing that some very well-known and substantial market-neutral platforms have experienced drawdowns that are very atypical for their style and approach.

  3. What can stop the bleeding or draw a line in the sand?

The downside is sustained until CROs (Chief Risk Officers) get their funds’ risk levels back in line and some of these multi-managers are done cutting gross exposure.  Some funds will eliminate individual sector groups for violating drawdown limits, so capital exposed comes down to risk levels that are within appropriate limits.  And finally, some stocks may go to equity values that are below any rational level associated with even liquidation values.

  4. What might happen at the macro level to help the dust settle?

One silver lining here of late is the weakening dollar.  If the narrative was China being forced to weaken its currency and emerging markets hurting due to the strengthening dollar, then these problems start to get alleviated as the dollar backs off its recent strength.  Oil prices go up, along with other commodities.  Of course while all this helps, there are some big market-neutral platforms that saw drawdowns of as much as 5%-20% for January, and this on top of at least one immediately prior weak year.  So many allocators will pull back from these platforms and try to high-grade their books even as some of the dust is settling.

  5. If the contrarian in you has you tempted to do something, what is it?

The contrarian in me wants to back up the truck and buy equities of companies without near-term debt maturities or significant debt at all, where there is a sustainable business model, positive free cash flow, and no need for access to capital markets. These are stocks that represent babies being thrown out with the bath water. The next trick is whether a fund has something closer to permanent capital – or at least locked-up capital – because that allows a fund to wait for and survive the bottom and benefit from the inevitable reversion to the mean.

  6. If a fund does not have permanent capital and is somewhat levered and net long exposed, what is the most appropriate advice?

One has to de-gross or take down overall exposure. Of course, this exacerbates the downside among crowded names, where everyone is selling the same things at once. But the problem with a levered fund is that risk becomes existential: there is actual business risk for the fund, and my job as a CRO is to prevent that above all.

  7. Should funds reduce net long exposure or should they focus on gross?

Over the past couple of months gross represents risk.  When the multi-strategy funds are unwinding or their longs have sold off well more than their shorts, they can't just cover their shorts and let themselves get off-balance on that score.  So they are just de-grossing; they cannot organically take one side of the book down without doing so on the other side.  What this means is that even with an up tape, funds can still get hurt badly if they own some crowded names.

  8. Are there some over-arching themes to be aware of in addition to all this?

It’s been talked about for some time now that liquidity has become extremely limited among some stocks in the market, especially as you go down the market cap ladder.  This causes exits to be very narrow and even if only one fund has to unwind its book, there are simply not enough incremental buyers to take on the stock for sale if the selling has to be fast and sizeable in nature.

There can be secondary and even tertiary effects where the fund exiting a set of names has overlap with names of a second fund that cause the second fund to become stressed, which in turn could impact a third fund holding only a couple of the second fund’s assets.

For a fund with longer-dated capital, this kind of forced or artificial liquidation can represent an opportunity.  Going through the big holdings of what are known to be highly stressed funds and correlating to market volumes and trader scuttlebutt to build a sense for completions to unwinds, name-by-name, is not an uncommon practice.

  9. For funds that lack a CRO (or the means to pay for one), what advice would you give?

There are available tools that can help a fund measure its exposure to any number of factors; even Bloomberg can be of help.  One should be aware of some common mistakes and pitfalls from a risk standpoint: beta, thematic or industry mismatch.  Being long software and short semis – even if beta matched – can prove to be a sector mismatch; one can be too heavily long growth and short cyclicals without being completely aware of such nuances and potential consequences.  There are things that can go wrong on the earnings front that can shift how the fund appears to a risk model, even overnight.

  10. What other risk variables are commonly overlooked?

The Barra model now counts so many factors (I think there are something like 64 industry groups alone). There is country exposure, basis risk, and even temporal variables to be aware of. For instance, the short energy/long consumer trades people put on may appear appropriate at first, but the consumer names only see the up move resulting from low energy prices with an extreme lag (up to 6 months to a year). A fund can have balance sheet mismatches, where one is long companies that need access to capital and short some less levered ones. Or, a fund could be long a few European banks and short some U.S. financials, where the market's treatment of those different categories causes downside from a geographic mismatch.

  11. What aspects of Alpha Theory can a PM make use of to be more risk-aware and mindful of exposures?

The Confidence Checklist is certainly one element of the application that can allow users to score or grade for risk factors of any kind.  Re-underwriting one’s price targets – even to incorporate recessionary earnings scenarios or market multiples – can also be a potential help.  Just focusing on individual company outlooks and prospects for their own sake and ignoring some of the noise can be a constructive exercise that Alpha Theory forces.  Analysts should be making these company-by-company assessments regularly.

September 28, 2015

How I used Alpha Theory to help my girlfriend land her dream job

We welcome our very own VP of Business Development, Jason Cooper, for this guest post:

As the Head of Sales for Alpha Theory, I get to spend my days acting as a diagnostician/thought partner to wickedly intelligent people (Hedge Fund PMs and analysts), helping them unpack, assess and formalize their investment process to achieve greater discipline, speed their path-to-action and improve outcome quality and consistency. In short, I get to help smart decision-makers make even smarter decisions. We’re really good at what we do and it’s a ton of fun!

In my personal life, however, I’m less accomplished in this area. Like many (a/k/a all?) in committed relationships, I attempt, regularly and unsuccessfully, to provide objective and impactful counsel to my better-half on life decisions, large and small. For low-stakes decisions, my opinions are solicited and just-as-quickly dismissed. No (little) fuss. No (little) muss. As the stakes increase, however, so does the likelihood that I’ll receive “negative feedback” that in some cases can be quite severe (interestingly enough and somewhat counterintuitively, exercising the “no opinion” option actually can yield the highest severity negative feedback of all). Anyway, I’d pretty much resigned myself to a lifetime of the observed dynamics until I had a breakthrough – a “hack” if you will – worth sharing. The following is a true story. Identities have been withheld to insulate myself from any potential regulatory/legal liability or potential verbal/physical retaliation:

Recently my better half sought my advice on a high-stakes decision. As a mid-level employee for the online division of a large, listed retailer, she had become increasingly unhappy in her role, so she began looking for new opportunities, quickly catching the attention of a hiring manager for a new economy darling who was eyeing her for a more senior position with significantly higher pay. Successfully (and discreetly) progressing through a grueling, multi-round interview process that has become something of an urban legend in Silicon Valley circles, she was notified that she had been selected as one of the final candidates for a full-day, on-site, round-robin format selection interview. She got this notice on Friday afternoon and the interview was set for the following Monday. The problem? She had been preparing all quarter for a presentation to senior management scheduled on the same day as the interview, a presentation that had been on the calendar for months and was understood to significantly inform her annual performance review and, by extension, influence her career trajectory.

What to do? Option #1: Bag the presentation – at significant professional risk - for the chance at her dream job. Option #2: Miss the job interview, play it safe, nail her presentation and continue her unhappy existence until the next promotion?

The decision, emotionally charged, was long on variables and short on complete information. Rational thinking had been abandoned. I did my best to listen and suggest ways to frame the decision. Sparing the details, in sum, it didn't go well. Finally, when all else seemed lost, I gasped "let Alpha Theory tell you what to do!" To my amazement, she immediately calmed, processed this unusual suggestion and agreed. We spent the next 30 minutes defining and structuring the scenarios, associated payoffs, probabilities, time horizons, other key assumptions (like what annualization methodology we'd use, how we'd account for reinvestment rate and the width of the potential distribution of returns, etc.) and how we'd judge the results and act (i.e. like a poker player).

  • Positive expected return: Bet (prepare a colleague for the presentation and go to the interview).
  • Negative expected return: Fold (miss the interview and take your chances with the existing job).
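
In spirit, the framework reduces to a simple expected-value comparison. A toy sketch follows; every number in it is hypothetical (the real inputs were altered in the post anyway), and the point is the structure rather than the figures.

```python
# Toy expected-value comparison for the "interview vs. presentation" decision.
# Every payoff (incremental annual compensation, in dollars) and probability
# below is hypothetical -- the point is the structure, not the numbers.

# Option 1: hand off the presentation and go to the interview ("bet")
p_offer         = 0.40
offer_payoff    = 60_000    # senior role, significantly higher pay
p_career_hit    = 0.50      # chance that skipping the presentation hurts her review
career_hit_cost = -10_000   # smaller raise / slower promotion at the current job
ev_interview    = p_offer * offer_payoff + p_career_hit * career_hit_cost   # +19,000

# Option 2: skip the interview and give the presentation ("fold")
p_strong_review = 0.70
review_payoff   = 8_000     # better review -> better raise at the current job
ev_presentation = p_strong_review * review_payoff                           # +5,600

incremental_ev = ev_interview - ev_presentation
decision = "Bet (go to the interview)" if incremental_ev > 0 else "Fold (give the presentation)"
print(f"EV(interview) = {ev_interview:,.0f}, EV(presentation) = {ev_presentation:,.0f}")
print(f"Incremental expected value of interviewing: {incremental_ev:,.0f} -> {decision}")
```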

I plugged everything into Alpha Theory. Here were our results (salary details have been altered):

[Screenshot: Alpha Theory expected return output for the two options (salary details altered)]

Clear. Unambiguous. Unemotional. Actionable. Fast. Alpha Theory said “go to the interview”. I emailed her the screen shot above and, to my amazement, she instantly and calmly replied, “Okay. I’ll do it”, which was followed shortly thereafter by an enthusiastic one-line email reading, “I need Alpha Theory for my life!”

Of course you know how this story ends: she pushed off the presentation to a colleague and went to the interview, nailed it and got the offer two days later (which she accepted). This story illustrates the importance of frameworks and using them explicitly (vs. implicitly) to help drive a decision process that yields higher quality outcomes (with lower stress). This was a straightforward, point-in-time binary decision (go vs. don't go), and it was nonetheless difficult to answer in the face of such high stakes, so many subjective variables, and the human condition (daily decision fatigue, general fallibility). Can you imagine how much harder it is for investors who are trying to 'solve for' not only a similarly complex primary buy/sell decision but also the second-order decision of "how much"? I can, because I see it every day and I'm amazed how many sophisticated investors effectively try to "white-knuckle it" using their mental heuristics when they don't have to.

Next time you or your significant other have a life decision, try using Alpha Theory. You may not achieve true objectivity and we can’t guarantee a right answer, but the structure and discipline will help drive a more ‘subjectively rational’ decision and maybe…just maybe…no significant other will be injured in the making of this decision.

July 18, 2014

Probability Inflation: The Risk of Ignoring Batting Average – Part 2

In my last post, I discussed how the probability of success in investing is grossly overestimated compared to historical batting average (funds assume a 75% probability of success versus a realized batting average of 55%). This causes miscalculations in portfolio allocation. In this post, I will discuss how Alpha Theory has been partially controlling for this phenomenon through our proprietary risk-adjusted return calculation (Alpha Theory Risk-Adjusted Return). Our method basically averages the arithmetic and geometric probability-weighted returns. Note the difference between the Arithmetic and Geometric returns below.

Geometric return more heavily weights the downside node and reduces the risk-adjusted return for assets with high probabilities of extreme loss (see “Which Way is Up?” for further explanation). Not only does the geometric adjustment highlight the same gradation of the lower probabilities between the 75% and 55% arrays, it also is more intuitive to a portfolio manager. When surveyed, we found that portfolio managers would not size the top three positions equally when only shown the arithmetic return. There is an inherent heuristic that portfolio managers employ to control for this phenomenon. The problem is that heuristics are easily miscalculated and inconsistently applied.

Alpha Theory uses a hybrid Arithmetic/Geometric return to calculate the Alpha Theory Risk-Adjusted Return. This allows managers to properly account for portfolio impact and avoid heuristics. Downside is more impactful than upside in a compounding vehicle like a portfolio, so overemphasis of downside is practical. To properly allocate capital, it is critical for funds to first, make explicit forecasts of price targets and probability and then second, properly account for the asymmetry of upside and downside. This is difficult (impossible really) to accomplish without a systematic approach and we encourage every fund to capture and calculate risk-adjusted return for every investment in their portfolio.
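
A minimal sketch of the two calculations and a simple average of them follows. The scenario returns and probabilities are illustrative only, and Alpha Theory's actual Risk-Adjusted Return calculation is proprietary and more involved than this average.

```python
# Arithmetic vs. geometric probability-weighted return, and a simple hybrid
# (their average). Scenario returns/probabilities are illustrative only;
# Alpha Theory's actual Risk-Adjusted Return calculation is more involved.

scenarios = [
    # (probability, return)
    (0.55,  0.60),   # reward case
    (0.45, -0.60),   # risk case
]

arithmetic = sum(p * r for p, r in scenarios)          # +6.0%

geometric = 1.0
for p, r in scenarios:
    geometric *= (1.0 + r) ** p                        # probability-weighted compounding
geometric -= 1.0                                       # ~ -14.3%

hybrid = (arithmetic + geometric) / 2                  # ~ -4.1%

print(f"Arithmetic PWR: {arithmetic:+.1%}")
print(f"Geometric  PWR: {geometric:+.1%}")
print(f"Hybrid (avg):   {hybrid:+.1%}")
# The geometric figure penalizes the large downside scenario much more heavily.
```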

In portfolio management, preventing loss is paramount. Using realistic probabilities, more closely in line with the fund’s historical batting average, and Alpha Theory Risk-Adjusted Return, will properly skew the portfolio towards those assets that have large asymmetry and little downside. In a compounding vehicle, like a portfolio, avoiding these “bad bets” will generate higher long-term geometric expected return, the ultimate goal of portfolio management.

June 17, 2014

Probability Inflation: The Risk of Ignoring Batting Average – Part 1

My company, Alpha Theory, started performing historical analysis of clients’ data about six months ago. We’ve only finished work on a dozen or so clients but one trend seems to be consistent, probability inflation. Let me explain what that means. Our clients use Alpha Theory to estimate the risk-adjusted return of each investment and use that information to properly allocate capital across their portfolio. Part of estimating risk-adjusted return is assigning probabilities to various potential outcomes. What we find is that clients generally have probabilities of success (scenarios where they make money) that fall in the 70-80% range. The issue is that their historical batting averages are more in the 50-60% range (batting average is how often they ACTUALLY make money on their bets).

"Of the almost 100 U.K. and U.S. fund managers in Investment Intelligence’s database, Chaban says, the best hit rate he’s seen is 64 percent; the median is just over 50 percent. " - Taras Chaban, chief executive officer of Investment Intelligence Ltd. (Bloomberg article)

Inflated probability of success causes two issues. One, the risk-adjusted return is inflated. Two, the probability of loss is too low which results in an underestimation of risk. The net effect is overly optimistic assumptions and bets on assets that are too risky.

Here’s an example. Below we have three potential investments with equal 75% probabilities of success. As you can see, the return and risk characteristics vary dramatically for each asset but the risk-adjusted return is a constant 30%. Imagine a portfolio manager deciding between this set of investment options with inflated probabilities and determining to weight all three positions equally.

[Table: three investments, each with a 75% probability of success and a 30% risk-adjusted return]

Now imagine a portfolio manager, presented with the same investment choices, but with more realistic probabilities of success (55% instead of 75%). There is no way that these would be equally sized. In fact, #3 wouldn’t even be considered.

[Table: the same three investments with a 55% probability of success]

The return reduction for investment #1 is 8%, which is meaningful, but nothing like that for #3, which falls from 30% to -5%! Assuming a 55% probability of success creates an entirely different (and more realistic) set of investment possibilities for the portfolio manager to choose between. The complexion of the portfolio shifts toward bets with less downside. I strongly encourage funds to use more realistic probabilities (averaging closer to their historical batting average). If not, they will almost certainly suffer from overexposure to the #3s of the investment world.
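
A hedged sketch of the effect: the upside/downside figures below are hypothetical stand-ins (the original tables did not survive formatting), calibrated only so that both investments show a 30% probability-weighted return at the inflated 75% probability.

```python
# How inflating the probability of success flatters a probability-weighted return.
# The upside/downside figures are hypothetical stand-ins (the post's tables did
# not survive formatting); both are calibrated to show 30% at a 75% probability.

investments = {
    "#1 (modest upside, little downside)": (0.40,  0.00),
    "#3 (big upside, big downside)":       (0.60, -0.60),
}

def prob_weighted_return(p_success, upside, downside):
    return p_success * upside + (1 - p_success) * downside

for name, (up, down) in investments.items():
    inflated  = prob_weighted_return(0.75, up, down)   # assumed probability
    realistic = prob_weighted_return(0.55, up, down)   # closer to batting average
    print(f"{name}: 75% -> {inflated:+.0%}, 55% -> {realistic:+.0%}")

# #1: 75% -> +30%, 55% -> +22%   (an 8-point haircut)
# #3: 75% -> +30%, 55% ->  +6%   (and worse still under a downside-penalizing
#                                  measure like the hybrid return in the Part 2 post)
```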

Be on the lookout for next month’s post discussing Alpha Theory’s novel approach to calculating return and its relevance to the issue of Probability Inflation.

 

January 28, 2014

Explicit Lyrics - Why Implicit Assumptions are Dangerous

“Objectivity is gained by making assumptions explicit so that they may be examined and challenged.” – Richards Heuer, Psychology of Intelligence Analysis

Alpha Theory asks investors for a few basic inputs (used to calculate an expected return):

• How much can I make if I’m right?

• How much could I lose if I’m wrong?

• What are the probabilities of each?

When I tell folks that they MUST have these forecasts to make investment decisions, I often get a response of "sure, I can come up with them, but I have no idea if they are going to be right." They're basically conceding that since they're not sure if they're going to be accurate, then they're not going to do it. The problem with that logic is that firms are using something to pick stocks. Position sizes don't come out of thin air. When pressed to describe how a decision is made, these firms will describe a process that sounds very similar to the expected return calculation. They "generally" come up with a price target. They discuss and debate downside risk. They talk about conviction level. My belief is that managers feel better about discussing the inputs in the abstract or implied sense, rather than making them explicit, because they can't be sure how "right" their explicit assumptions will be. If they do make the inputs explicit, they would rather have them all componentized on a sheet, instead of combined into a single expected return. I believe this is because of the misconception that one bad input spoils the whole calculation.

Granted, a bad input reduces the efficacy of the result, but it doesn't nullify it. And this train of thought still misses the point. The real issue is that the same good or bad inputs are going into the manager's own "mental" calculation of expected return and position size. The "garbage in-garbage out" dilemma dominates whether the process is explicit or in the manager's head. Only by making the calculation explicit do you avoid the cognitive errors of mental calculation (see the quote at the beginning of this article). Intuition, instinct, and experience aren't diminished by making inputs explicit; they're just externalized so they can be properly weighed and judged.

Try an experiment. Talk through a portfolio position, going through every aspect you find relevant and ask the manager, “what is the expected return and what is the right position size?” Now do the same thing and determine an explicit reward price, risk price, and the probability of each. Use those to calculate an expected return and position size. See which process is more accurate, more repeatable, and more easily monitored. I believe you’ll find that the explicit process gives you greater confidence, better communication, and improved returns with less risk.

I've been "spreading the gospel" about using expected return in portfolio management for eight years and have had over 2,000 meetings. I've noticed a change in investor mentality over that time, and the biggest shift is the attitude towards process. In the beginning, I had to convince managers that they needed an explicit process to be successful. Now, my anecdotal estimate is that half of the managers I meet with already realize they need to create a more explicit process. The "chasm" has been crossed, and the advantage gained from using an explicit process to pick and size stocks is moving from a competitive edge to a cost of doing business. If a fund is still relying on instinct and heuristics in a few years, it is going to get left behind by those that embrace process. As an analog to the shift towards process, look at the adoption of Moneyball across sports over the 90s and 00s. Moneyball went from a competitive advantage to a cost of doing business in a matter of a decade. But unlike sports franchises, which can weather long droughts of poor performance, a fund that doesn't lead will cease to exist. Good research and stock selection will always be paramount to success. But great process is the only way to make sure great research turns into great results.

July 31, 2013

Alpha Theory Best Practices (Part One)

I have been working in the investment industry for my entire sixteen-year professional career and have had the opportunity to meet and advise hundreds of portfolio managers, analysts, and traders. In addition, I've read as much psychology, investment, and decision process research as I could get my hands on. Because of this background, clients often ask for best practices I've observed. I'll usually rattle off a few that are top of mind, but I thought a more thorough list of the Best Practices was warranted.

Some of the best practices are narrowly applicable to Alpha Theory but most are broadly applicable to investing and even to decisions we make in everyday life. The best practice list is a living document that continues to grow and improve. I suspect that I’ll never stop refining this list but I believe there are a few central tenets:

1. Process is important

2. Good decisions can have bad outcomes…and vice versa

3. Emotion is the enemy of good decisions

4. Only explicit assumptions can be properly judged and evaluated

5. A simple model almost always beats an educated guess

 

Over the coming months, I will memorialize some of the Best Practices through a series of blog posts. I’ll start out with what we’ve observed to be the single most important Best Practice:

Best Practice #1: RISK-ADJUSTED RETURN

“Objectivity is gained by making assumptions explicit so that they may be examined and challenged” – Richards Heuer, CIA Head of Analytic Methods and author of Psychology of Intelligence Analysis

It will come as no surprise to anyone that knows Alpha Theory’s work that using Risk-Adjusted Return1 is the first Best Practice. So many ills are healed by using Risk-Adjusted Return that it is unacceptable for a fund not to use it. Here is a litany of reasons why:

1)      Decision Tool. Risk-Adjusted Return is the ultimate culmination of the research process. Every piece of information gathered through the research process can be incorporated into a probability weighted analysis. Every new piece of information will alter it. A Risk-Adjusted Return effectively conveys the learned information in a form that can be used to make subsequent decisions like, should I buy this asset, and if so, how much?

2)      Explicit. Risk-Adjusted Return is explicit (see quote above). In a conversation without a Risk-Adjusted Return, the important data can lose context because the listener (or reader) is required to build their own mental model of how to think about risk and reward. Explicit estimates of reward, risk, and probability allow for the information learned through the research process to have context with regards to how they impact either the risk, reward, or probability.

3)      Accountability. Explicit estimates create accountability and auditability. Implicit assumptions can be misinterpreted or allow equivocation. Accountability is gained when estimates are written down, tracked, and audited.

4)      Downside. Downside is disproportionately more important than upside in a fund because of compounding. Downside estimation is critical to position sizing and is often given short shrift. Of course, downside is discussed in research overviews, but is it effectively accounted for? Risk-Adjusted Return requires an explicit estimate of downside that must be justified and defended.

5)      Thesis Myopia. It is easy to get lost in the story of an idea and forget about the value. Stories are enticing but without value, there is no inefficiency to take advantage of. When an analyst is forced to describe both reward and risk, the myopia that comes with focusing on a single thesis is stripped away and the importance of price paid becomes paramount.

6)      Maximize Fund Return. If the goal of the fund is to maximize Risk-Adjusted Return then it is imperative that it be calculated for each position. How else could you calculate the Risk-Adjusted Return of the fund?
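
A small sketch of point 6: once each position carries an explicit risk-adjusted return, a fund-level figure can be rolled up as the position-weight-weighted sum. All weights and returns below are hypothetical, and this is a simplification of how a real book would be aggregated.

```python
# Fund-level risk-adjusted return as the position-weight-weighted sum of each
# position's risk-adjusted return. All weights and returns are hypothetical.

positions = [
    # (name, weight in fund, position risk-adjusted return)
    ("Long A",  0.10,  0.25),
    ("Long B",  0.07,  0.10),
    ("Short C", 0.05,  0.15),   # weight expressed as capital at risk
    ("Long D",  0.04, -0.05),   # negative RAR -- a candidate to trim or exit
]

fund_rar = sum(w * rar for _, w, rar in positions)
print(f"Fund risk-adjusted return contribution: {fund_rar:+.2%}")   # +3.75%
```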

This is the first installment of Alpha Theory Best Practices. Stay tuned for more over the coming weeks and months (maybe even years). As we release these, we’d love to know some of your Best Practices and where you may disagree with our conclusions.

 

1 Risk-Adjusted Return in this context refers to a probability weighted return with an estimate of reward, risk, and probability of each.

March 18, 2013

March Madness 2013 - Your Mom Always Wins

I first posted the blog below in 2012. As I'm preparing my brackets for 2013, I was referencing this post as a reminder of how to operate the machine I built last year. First step, download data for 2013. I typed in KenPom.com to get Winning Percentage scores to drive my March Madness Machine. KENPOM.COM WAS DOWN!!! Apparently, I'm not the only one heading to the KenPom site to get my stat fix (apparently the NCAA site was down for a little while too). So, what do I do? Be patient and wait for the mad rush to subside? No, I do a Google search for KenPom to see if I can find a backdoor. No luck, but I do come across this article that may change everything: KenPom vs Your Mom

The article was written by Peter Tiernan, who has been using stats to analyze March Madness for 22 years (this according to his bio) and is an expert who writes for CBS during the tourney. The point of the article was to compare the new BPI system to KenPom. No need to go into results here, but basically no real difference. The more interesting section of the article was the comparison of KenPom to Your Mom. "Your Mom" is basically just picking the higher seed in every round.

"Okay. So I did one other analysis yesterday—and I think it’s more relevant than determining round-by-round prediction accuracy. This analysis compared the accuracy of KenPom to YourMom in filling out your entire bracket and living with the consequences of lost games in previous rounds. I was able to go back nine years for this analysis. What did I find? Using KenPom efficiency data would’ve predicted 376 of the 567 tourney games played between 2004 and 2012. That’s a 66.3 percent accuracy rate. And how would’ve YourMom done? Amazingly, two games better—for a 66.7 percent accuracy rate. That’s right. Picking by seeds and margin beats out using KenPom."

Your Mom wins!!! Actually, this isn't that surprising. I've filled out my bracket using KenPom and Vegas odds for many years and I always notice that I'm pretty much picking the higher seed. Very rarely (and I'm not counting 9 vs. 8) does the underdog have a higher KenPom Winning Percentage, so I pretty much end up filling out my bracket picking the higher seed. I knew I ended up picking more favored seeds than the average bracketeer, but now I have some empirical evidence to support my choice.

Here's the rub: most NCAA pools only pay 3 places. Over the long run, I will almost certainly be better than average, but randomness fills the top 3 spots more often than not. For example, imagine I'm competing in a pool with 99 others. Even if all 99 other brackets are randomly generated and my method is the superior statistical method, I still have a very low probability of finishing in the top 3. That probability goes down even further if there are 1,000 participants.

What does all this mean? I'm sticking to small office pools or pools with Cinderella points (extra points for picking upsets) and I will not be offended when someone calls me a moron for picking every favorite.

2012 Blog Post:

AUTHOR’S NOTE: My second child was born less than 24 hours ago but I felt like I had to get this out by tip-off. Please excuse any errors.

 

I love college basketball. I’m a graduate of UNC-Chapel Hill (home of Michael Jordan) and grew up a fan living thirty minutes away. Needless to say, I spend a little too much time filling out brackets and watching hoops during business hours this time of year. And in the spirit of all things Alpha Theory, I have a systematic approach to filling out my NCAA brackets. But my system needs a little fine tuning. I’ll give a little background to set up the problem and hopefully someone will have an answer.

GENERATION ONE. Creating a systematic approach to fill out the brackets requires good input. From 2008-2011, I took Vegas odds for each team to win the national championship to serve as a proxy for team quality and strength of the path they'll have to travel. For an example of the calculation, see the chart below. Kentucky is the favorite at 8/5 odds. If I bet $5 on Kentucky and they win, I receive $8. That assumes that 8 times out of 13 (8+5) Kentucky will win, or 61.5%1 (8/13). The next step was to calculate the percentage for every team in the tourney, sum up all the percentages, and divide each individual team's win percentage by the sum of all the percentages to get a true probability of winning the tourney2. The next step was to use those probabilities to create a forecasted probability of winning for one team versus another. For example, if Kentucky (29% chance of winning it all) plays Missouri (4.6% chance), then the adjusted probability of Kentucky winning is 86% (29% / (29% + 4.6%)). At this point I could have filled out my brackets using random generation (i.e. use a random number generator to pick a number between 0 and 100; if it falls above 86 then Kentucky loses, and if it falls below, they win), or I could have just used the Vegas probabilities to pick the winner, which pretty much means picking the Vegas favorite in each round. But here is the problem: Vegas odds don't give the granularity necessary to pick the early round games because they make very little differentiation between teams (see the clusters of odds below).

[Chart: Vegas championship odds, showing teams clustered at similar odds]
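
A small sketch of the Generation One steps described above, using only the Kentucky and Missouri figures from the post. Since the full field is omitted here, the 215% total from footnote 2 is hard-coded, and Missouri's raw figure is back-solved so that its normalized probability matches the post's 4.6%.

```python
# Generation One sketch: normalize each team's raw Vegas-implied championship
# probability by the whole field's total (the overround), then combine two
# teams' normalized probabilities into a head-to-head win probability.
# Only Kentucky and Missouri are shown; the 2.15 total comes from footnote 2.

raw_champ_prob = {
    "Kentucky": 0.615,   # the post's reading of Kentucky's 8/5 championship odds
    "Missouri": 0.099,   # back-solved so the normalized figure matches the post's 4.6%
}
field_total = 2.15       # sum of raw probabilities across every team in the field

true_prob = {team: p / field_total for team, p in raw_champ_prob.items()}
# Kentucky ~0.29, Missouri ~0.046 -- matching the figures in the post

ky, mo = true_prob["Kentucky"], true_prob["Missouri"]
p_kentucky_wins = ky / (ky + mo)
print(f"P(Kentucky beats Missouri) ~ {p_kentucky_wins:.0%}")   # ~86%
```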
 

 GENERATION TWO. While looking for a solution, I came across KenPom.com. This is a sophisticated site steeped in the teachings of Bill James, the pioneer that created the statistics that led to the Moneyball movement. KenPom creates an adjusted winning percentage that controls for a multitude of factors. This winning percentage allows for comparison of teams across the spectrum. Additionally, in the KenPom blog there is reference to the Log 5 method which takes two teams’ winning percentages to determine the probability of one team beating the other. This is the breakthrough I was looking for.

[Image: KenPom adjusted winning percentages and the Log 5 method]


This allowed my NCAA bracket generator to have much more precise winning probabilities and thus a much more accurate forecasting engine. Here are the brackets based off KenPom adjusted winning percentages and Log 5-based probability of head-to-head success:

[Image: 2012 NCAA bracket generated from KenPom winning percentages and Log 5 probabilities]

THE PROBLEM. Now here is the rub. Let's say that I'm playing in an NCAA pool that has Cinderella points (1 point added for each point of difference in the seed). I can calculate an expected return using the probability of winning (i.e. Wichita St. (#5 seed) versus VCU (#12 seed) – KenPom is 93.5% and 80.4% respectively). Using Log 5, we calculate a 77.1% chance that Wichita St. wins the game. The expected return for Wichita St. is 77.1% * 1 pt = .8 pts and the ER for VCU is 22.9% * (1 pt for win + 7 pts for seed) = 1.8 pts. In this case you would pick VCU because they give you an expected value of 1.8 versus .8 for Wichita St. If the tournament were one round, this method would maximize expected return. But if I pick VCU to beat Wichita St. in the first round, then I can't use Wichita St. in future rounds. If I wouldn't have picked Wichita St. or VCU to move beyond the second round, then I should definitely pick VCU in the first round, because there is nothing lost by not having the right team lose in the second round. But if I would have had Wichita St. winning a future round, which I would have in this case, then I've done myself a disservice by eliminating them in the first round to maximize first-round expected return.
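
A sketch of the Log 5 head-to-head probability and the Cinderella-points expected value from the example above, using the KenPom percentages quoted in the post. The Log 5 result lands at roughly 77-78%, in line with the 77.1% cited.

```python
# Log 5 head-to-head probability from two teams' winning percentages, plus the
# Cinderella-points expected value from the Wichita St. vs. VCU example above.

def log5(p_a, p_b):
    """Probability that team A beats team B given their winning percentages."""
    return (p_a - p_a * p_b) / (p_a + p_b - 2 * p_a * p_b)

wichita, vcu = 0.935, 0.804          # KenPom winning percentages quoted in the post
p_wichita = log5(wichita, vcu)       # ~0.77-0.78
p_vcu = 1 - p_wichita

seed_diff_bonus = 7                  # 12-seed beating a 5-seed in a Cinderella-points pool
ev_wichita = p_wichita * 1                     # ~0.8 points
ev_vcu     = p_vcu * (1 + seed_diff_bonus)     # ~1.8 points

print(f"P(Wichita St. wins) = {p_wichita:.1%}")
print(f"EV pick Wichita St. = {ev_wichita:.1f} pts, EV pick VCU = {ev_vcu:.1f} pts")
# For a single round, the expected-value-maximizing pick is VCU.
```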

I’m looking for a good way to account for this dynamic. I’m assuming someone with a good grasp of Bayesian/Stochastic tools may have a solution. Anyone have a suggestion? I would be glad to send my sheet along to anyone that would like to try and take a stab at solving the problem or would just like to use the sheet. Please help me solve this March Madness Math.

1 Clearly 61.5% is too high a chance of Kentucky winning the tournament. But if Vegas used realistic probabilities, then they wouldn’t make money from the spread. But how unrealistic are the probabilities that they use? Let’s assume that Kentucky is the best team in the country by a wide margin and figure out the odds of winning the six games necessary to win the national championship:

First game – 100%, no chance they lose. If they play 100 times, they don’t lose once.

Second game – 95%, they’ll be playing an 8/9 seed, play 100 times, they lose 5.

Third game – 80%, playing a 4/5 seed

Fourth game – 70%, playing a 2/3 seed

Fifth game – 65%, playing a 1/2 seed

Sixth game – 60%, playing a 1/2 seed

 

Multiply them all together and the probability of winning is 21% for the best team in the country. Even if I raise the winning percentage to 80% for all the rounds from the third round on, it is still only 39%. So 62% odds are off the chart and tell you how expensive it is to do prop bets in Vegas. What this also tells us is that fans really shouldn’t be upset when their team doesn’t win the national championship. The odds of the best team in the country are only slightly better than rolling a die and landing on 1. Take solace in the fact that single elimination tournaments are subject to all kinds of luck and be happy that your team is dancing.

 

2 The sum is greater than 100% because sportsbooks make their money from the spread. If the odds were reflective of the real probability, then Vegas would just break even…and we can't have that, can we? For example, Kentucky is forecasted by Vegas to have a 62% chance of winning it all, but adjusting by the sum total of all teams' odds of winning, which is 215%, the true Vegas probability of Kentucky winning it all is 29% (61.5%/215%). So instead of returning $2.60 on each dollar bet ($1.60 in winnings at 8/5, plus the stake), Vegas should actually be returning about $3.50 for each dollar bet. Needless to say, the futures bet on the NCAA champion is a real sucker's bet.