


Alpha Theory Blog - News and Insights

38 posts categorized "Risk Management"

April 06, 2018

Positive Skew is Negative for Active Managers

 

Let’s play a game. In this game, there are 10 random poker chips in a bag. Nine of these chips will give you a return between -8% and +8% on the money that you bet. The 10th chip will give you a 100% return. The distribution of returns for this game has a positive skew.

 

 

If offered the chance to put money down on this proposition, you would take it, because you would expect a 10% return if you could play the game over and over.

 

Now let’s add a wrinkle. Your goal isn’t just to make a positive return; you have to beat the bag. The bag puts 10% of its money on each chip and pulls them all. Voila, a 10% return. One last wrinkle: you can only pick one chip at a time.

 

How many times out of 10 would you beat the bag? Only 1 in 10. 90% of the time you would lose to the bag. It doesn’t matter if we expand the number of chips, as long as the bag maintains the same positive skew (we could increase the bag to 100 chips and let you pick 10, or 1,000 chips and let you pick 100, etc.).
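The game is easy to check by simulation. Here is a quick Monte Carlo sketch (a hypothetical illustration; the nine small chips are modeled as uniform between -8% and +8%):

```python
import random

def play_game(rng):
    """One round: realize the 10 chips, then compare your single pick to the
    bag, which bets 10% of its money on every chip."""
    chips = [rng.uniform(-0.08, 0.08) for _ in range(9)] + [1.00]
    bag_return = sum(chips) / 10      # the bag owns every chip equally
    your_return = rng.choice(chips)   # you get exactly one chip
    return your_return, bag_return

rng = random.Random(42)
games = [play_game(rng) for _ in range(100_000)]
avg_return = sum(you for you, _ in games) / len(games)
beat_rate = sum(you > bag for you, bag in games) / len(games)
print(f"average return per pick: {avg_return:.1%}")       # ~10%
print(f"share of games beating the bag: {beat_rate:.1%}")  # ~10%
```

Your expected return is a healthy ~10%, yet you beat the bag only about 1 time in 10, which is exactly the asymmetry described above.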

 

By now, you’ve probably guessed that the bag is the market, the chips are stocks, and you are, well, you. This is the game we play when trying to beat an index. True, you can be better than the market at figuring out which chips are good, but when the initial conditions of a random game mean you lose 9 times out of 10, it’s really hard to beat the market. Add fees and the likelihood of beating the market goes down even further.

 

Positive skewness has gotten a decent amount of press over the past year because of the championing of JB Heaton, who wrote a paper1 researching the impact of positive skew on manager underperformance. Heaton’s paper is similar to research from Dr. Richard Shockley in 19982. See below for an article written by Bloomberg News on the topic.

 


Source: Bloomberg News (“Lopsided Stocks and the Math Explaining Active Manager Futility” by Oliver Renick)

 

Given that many of the conversations active managers have today revolve around active versus passive, “positive skew” should be top of mind. This is my push to increase awareness.

 

Given that active managers can’t change market skew, what should we do? We could measure skill in a different way. Let’s say I want to measure a manager’s skill. I take all of the stocks in the markets they invest in and randomly build 100,000 portfolios with the same number of securities as the manager holds. I can then plot where that manager falls on the resulting distribution and give them a Z-Score for how far away from the norm they are. I could do the same thing for hedge funds by randomly buying and selling securities in the same universe as the investor.
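That skill measure can be sketched in a few lines (hypothetical function and variable names; random portfolios are equal-weight draws from the manager’s universe):

```python
import random
import statistics

def manager_z_score(universe_returns, manager_return, n_positions,
                    n_sims=100_000, seed=0):
    """Build random equal-weight portfolios from the manager's universe with
    the same number of securities, then Z-score the manager's actual return
    against that simulated distribution."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sims):
        picks = rng.sample(universe_returns, n_positions)
        sims.append(sum(picks) / n_positions)
    return (manager_return - statistics.fmean(sims)) / statistics.stdev(sims)
```

A Z-Score near zero means the manager did about what random selection from the same universe would have done; a high positive Z-Score is evidence of skill beyond the skew of the market itself.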

 

I’m not saying that this excuses active managers from underperforming passive strategies, but it should at least be a more realistic assessment of their skill. My hope is that positive skew becomes just as common an explanation as fees when discussing active manager underperformance. Only by knowing the causes will we be able to make changes that allow active managers to outperform.

 

1 Nicholas Polson and Jan Hendrik Witte; Hendrik Bessembinder of Arizona State University

2 “Why Active Managers Underperform the S&P 500: The Impact of Size and Skewness,” published in the inaugural issue of the Journal of Private Portfolio Management. One of the original authors of the study is Richard Shockley.

 

March 12, 2018

Capital Allocators Podcast with Ted Seides: Moneyball for Managers

 

Learn how to enhance your investment results in this great podcast from Ted Seides and his guests, Clare Flynn Levy from Essentia Analytics and Cameron Hight from Alpha Theory.

This conversation covers the founding of these two respective businesses, the mistakes portfolio managers commonly make, the tools they employ to help managers improve, and the challenges they face in broader adoption of these modern tools. The good news is the clients of Essentia Analytics and Alpha Theory have demonstrated improvement in their results after employing these techniques. If you ask Clare and Cameron, you may develop a whole new appreciation about the potential for active management going forward.

 


 

By creating a disciplined, real-time process based on a decision algorithm with roots in actuarial science, physics, and poker, Alpha Theory takes the guessing out of position sizing and allows managers to focus on what they do best – picking stocks.

In this podcast, you will learn how Alpha Theory allows portfolio managers to convert their implicit assumptions into an explicit decision-making process.

 

To learn how this method could be applicable to your decision-making process:

 

LISTEN NOW

 


 

 

March 02, 2018

Size-Based Batting - A Different Perspective on Stock Selection

 

How do you determine if an investor is a good stock picker? One commonly used measure is to count the number of positions that make money (winners) and divide by the total number of positions. This metric is commonly called a Batting Average, analogizing stock picking to baseball hit rates.

The problem with Batting Average is that several inconsequential positions that lose money can really bring down the total. We saw this with our clients. They have historically outperformed other funds (every year for the past six), but have a batting average, adjusted for the move in the benchmark, of only 51%.

We decided to take a different approach and measure the total exposure of positions that made money versus the total gross exposure of the fund. For instance, if positions totaling 60% of exposure made money on an alpha-adjusted basis and the fund was 120% gross exposed, then the fund had a Size-Based Batting Average of 50% (60/120).
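The two metrics differ only in whether positions are weighted by exposure. A minimal sketch (hypothetical function names; exposures as fractions of NAV, one alpha number per position):

```python
def batting_average(alphas):
    """Traditional batting: share of positions with positive alpha."""
    return sum(a > 0 for a in alphas) / len(alphas)

def size_based_batting(exposures, alphas):
    """Size-Based Batting: gross exposure held in winning positions divided
    by total gross exposure."""
    gross = sum(abs(e) for e in exposures)
    winning = sum(abs(e) for e, a in zip(exposures, alphas) if a > 0)
    return winning / gross

# The example above: 60% of a 120% gross fund made money -> 50%
exposures = [0.30, 0.30, 0.30, 0.20, 0.10]
alphas = [0.05, 0.02, -0.01, -0.03, -0.04]
print(round(size_based_batting(exposures, alphas), 3))  # 60/120 = 0.5
print(batting_average(alphas))                          # 2 of 5 = 0.4
```

Note how the same book scores 50% on a sized basis but only 40% on a counted basis: the two losing small positions drag the traditional number down.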

Our clients had a Size-Based Batting Average of 54% versus the non-size-based average of 51%. That means that our clients were good at selecting investments and at sizing them, but they were harming their overall returns with small losing investments.

Alpha-Adjusted Batting Average1

 


 

In the table above, Size-Based Batting, while not perfectly consistent, is generally better from year to year for our clients (the exceptions being 2012 and 2015).

We’ve performed other analyses that prove this point, specifically that our clients’ positions under 1% dramatically underperform the rest of the portfolio, but Size-Based Batting presents a compelling way to highlight the “small position” issue (see the “Concentration Manifesto” for other issues with small positions).

In our profession, it is incredibly difficult to detangle skill from luck and, as cathartic as it would be to just rely on returns, returns are actually negatively correlated with next year’s returns for most funds (i.e., funds that outperform in year N have a higher likelihood of underperforming in year N+1 – there are multiple research sources that analyze mean reversion in funds; here is one).

Size-Based Batting is a nice addition to the allocator’s tool bag for finding managers with stock-picking skill. In much the same way, managers should use Size-Based Batting to highlight their strengths and compare it to the traditional Batting Average as a way to potentially point out weaknesses.

 

1 S&P 500 for US securities and MSCI WEI for non-US securities

2 Why is “All Time” so low compared to each year? Reason #1: There are many more observations in the more recent years, which skews the overall results to be more similar to those years. Reason #2: There were many assets that were losers over “All Time” while being winners for multiple years (a small win in 2015, a small win in 2016, and a big loss in 2017 = 2 winning periods vs. 1 losing period, but a loser in the All-Time bucket).

 

 

February 07, 2018

Alpha Theory Case Study: Top Performing Funds of 2017

Alpha Theory’s clients have historically outperformed (see 2017 Year in Review from last month), but 2017 was special as our most active client was also the 2nd best performing equity fund. We have worked with them since their launch, and their focus on discipline and process is a testament to how to build a fund. If you would like to learn more about the client, their challenges, their solution, and the data supporting their process, check out the Case Study.

 

DOWNLOAD NOW

 

 

January 05, 2018

2017 Year in Review

 

Alpha Theory’s product helps investment managers reduce emotion and guesswork in position sizing. The result is fewer errors and improved returns. For six consecutive years, Alpha Theory clients have outperformed their peers (see table below – we use the Major Equity Hedge Index as the benchmark because 86% of Alpha Theory clients are hedge funds). Our clients have consistently outperformed their competitors, more than doubling their returns over the period.

 


*Totals do not include 2017 data

In 2017, our average client generated 18.9% returns and, when the index is released, I anticipate that we’ll beat the Hedge Index again. These results are consistent with other blog posts we’ve written highlighting our clients in 3rd-party rankings: Reuters / WSJ / Novus.

 

NEW 13-F ANALYSIS

This year, we expanded our analysis with a new 13-F dataset covering all publicly filing funds. The upside of this dataset is that it enables us to compare results against every reporting fund in 2017. The downside is that it only includes US equity long positions. The results indicate that, once again, Alpha Theory clients outperform their peers.

The average Alpha Theory client performance in 2017 (13-F data) was 27.6% vs. 19.9% for all others (3,013 total funds with over 20 positions). That is almost one full standard deviation (8.8%) above the mean, and the client-group average has a Z-Score of 2.03 (statistically significant above the 95% confidence level).
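To make the arithmetic explicit: a single fund is measured in population standard deviations, while a group average is scored against the standard error of the mean, which depends on the number of funds in the group. The post does not state the client count, so `n` below is purely illustrative:

```python
import math

def z_single(fund_return, peer_mean, peer_sd):
    """How many standard deviations a single fund sits above the peer mean."""
    return (fund_return - peer_mean) / peer_sd

def z_group_mean(group_mean, peer_mean, peer_sd, n):
    """Z-score of a group's average return, using the standard error of the
    mean (peer_sd / sqrt(n)) for a group of n funds."""
    return (group_mean - peer_mean) / (peer_sd / math.sqrt(n))

z1 = z_single(27.6, 19.9, 8.8)  # ~0.88: "almost one full standard deviation"
```

The group Z-Score rises with the square root of the group size, which is why the client average can be statistically significant even though each individual fund sits less than one standard deviation above the mean.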

Even more interesting were the individual performance results of our clients: one Alpha Theory client was the 2nd best performing fund in 2017 (this client thanked us more than once for our contribution to their success) and four clients landed in the top 40 performers. We also had six of the top 100 and 10 of the top 200. Statistically, we’d anticipate less than 1% in all categories because Alpha Theory clients are less than 1% of all funds. Instead, as in previous periods, there is a concentration of Alpha Theory clients amongst the top performers.


Simply put, Alpha Theory clients outperform their peers. The traits these firms share are discipline, intellectual honesty, and a focus on process. They gravitate to Alpha Theory because it is their toolkit to implement and measure that process.

 

PROCESS EQUALS PERFORMANCE

Alpha Theory clients use process to reduce the impact of emotion and guesswork as they make position sizing decisions. Alpha Theory highlights when good ideas coincide with the largest position sizes in the portfolio. This rules engine codifies a discipline that:


1. Centralizes price targets and archives them in a database

2. Provides notifications of price target updates and anomalies

3. Calculates probability-weighted returns (PWR) for assets and the portfolio as a whole

4. Enhances returns

5. Mitigates portfolio risk

6. Saves time

7. Adds precision and rigor to the sizing process

8. Incorporates market and individual asset moves into sizing decisions in real time
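The probability-weighted return in item 3 is the scenario-weighted expected return for each asset, rolled up by position weight. A minimal sketch of the calculation (illustrative only, not Alpha Theory's actual implementation):

```python
def asset_pwr(scenarios):
    """PWR for one asset: sum of probability * return over scenarios
    (e.g. bull / base / bear price targets)."""
    total_p = sum(p for p, _ in scenarios)
    assert abs(total_p - 1.0) < 1e-9, "scenario probabilities must sum to 1"
    return sum(p * r for p, r in scenarios)

def portfolio_pwr(positions):
    """Portfolio PWR: position-weight-weighted sum of asset PWRs.
    positions maps name -> (weight, scenarios)."""
    return sum(w * asset_pwr(s) for w, s in positions.values())

# 60% chance of +30%, 30% chance of -10%, 10% chance of -40% -> PWR of +11%
print(asset_pwr([(0.6, 0.30), (0.3, -0.10), (0.1, -0.40)]))
```

Ranking the book by PWR is what lets the rules engine flag positions whose size is out of line with their probability-weighted opportunity.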

DISCIPLINED USAGE REDUCES RESEARCH SLIPPAGE

Alpha Theory’s research suggests not only that adoption of the application by itself leads to improved performance, but that greater usage intensity further enhances results.

Usage intensity is determined by:

1. Percent of Positions with Research

2. Correlation with Optimal Position Size

3. Login Frequency

 


1. Measured as the annualized ROIC where data was available, for a sample of 48 clients, 12 for each quartile

 

OPTIMAL POSITION SIZING REDUCES RESEARCH SLIPPAGE

Comparing clients’ actual versus optimal returns shows:

HIGHER TOTAL RETURNS
ROIC is 4.5% higher.

IMPROVED BATTING AVERAGE
Batting Average is 8% higher. Explanation: many of the assets that don’t have price targets or have negative PWRs are held by the fund but recommended as 0% positions by AT. Those positions underperform and allow AT’s batting average to prevail.


1. Measured as the average full-year return for clients where full-year data was available, adjusted for differences in exposure, net of trading costs

2. Before trading costs

 

ALPHA THEORY CLIENTS OUTPERFORM NON-CLIENTS
Alpha Theory clients have outperformed Major Equity Hedge Indices every year since Alpha Theory started collecting historical data. Our clients are a self-selecting cohort who believe in process and discipline, and that process orientation goes hand-in-hand with the Alpha Theory software, which serves as a disciplining mechanism to align the best risk/reward ideas with their rankings in the portfolio.


PRICE TARGETING REDUCES RESEARCH SLIPPAGE

Alpha Theory has further found that ROIC for assets with price targets is 5.6% higher than for those without price targets. Some investors chafe at price targets because they smack of “false precision.” These investors are missing the point: the key to price targets is not their absolute validity but their explicit nature, which allows for objective conversation about the assumptions that went into them. Said another way, the requirements of calculating a price target, and the questions that targets foster, are central to any good process.

*Long-only, as many short positions are hedges and have no price targets

 

December 15, 2017

Superforecasting for Investors: Part 2

Alpha Theory hosted a book club on December 6th, with portfolio managers, analysts, and allocators coming together to discuss “Superforecasting” by Phil Tetlock. We were lucky enough to have a Superforecaster, Warren Hatch, moderate and run forecasting exercises with the group. We spent two hours together and only scratched the surface of applying Superforecasting to investing.

 

Here are a few key takeaways:

1. COMMON ATTRIBUTES OF SUPERFORECASTERS:

INTELLIGENCE: Above average but genius isn’t required

QUANTITATIVE: Not only understand math but apply it to everyday life

FOXES, NOT HEDGEHOGS: Speak in terms of possibilities, not absolutes

INTELLECTUALLY HUMBLE: Understand the limits of their knowledge

SYSTEM 2 DRIVEN: Use the logic-driven instead of instinct-driven portion of their brain

DO NOT BELIEVE IN FATALISM: Life is not preordained

CONSTANTLY REFINE: Make frequent small updates to their forecast based on new information (but not afraid to make big changes when warranted)

COUNTERFACTUALS: Believe that history is one of many possible paths that could have occurred

OUTSIDE VIEW: Incorporate the internal and external views

GROWTH MINDSET: Constantly search for ways to improve their forecasting process

 

2. IDENTIFYING TALENT: There are identifiable attributes that can be used in hiring and have a profound impact on forecasting skill

 

Active Open Mindedness*


Fluid Intelligence*


 

* At a prior book club, we measured participants and the results showed they had the attributes of Superforecasters, with high Active Open-Mindedness (3.99 out of 5) and high Fluid Intelligence (8 out of 10 – the highest score that the Good Judgment folks have seen).

Active Open-Mindedness and Fluid Intelligence are two measurable traits that managers can use to select talent. The improvement impact of these definable attributes equates to about 40% of forecasting skill over standard forecasts.


3. DEVIL’S ADVOCATE: Firms should appoint a Devil’s Advocate for each investment to expand critical thinking (someone to ask questions like, “I see your downside is $40. How can that be if the 52-week low is $22 and the trough multiple would put it at $25?”)

 

4. OUTSIDE VIEW: Firms should require an Outside View for every investment idea (“While everyone I’ve spoken to says this deal will close, only 20% of deals with one party under SEC investigation close.”)

 

5. REFINEMENT: New information should always be incorporated into the forecast (think Bayesian).

 

6. TEAMS MAKE BETTER FORECASTS: Team dialog generally improves forecasting accuracy.

 

7. FORECAST CULTURE: Firms should embrace “forecast” as part of their vernacular and conversations should revolve around how information impacts the forecast.

 

8. MEASURE TO BE BETTER: We all forecast, but we rarely measure. That fact needs to change if we really want to improve.

 

9. CLUSTERING: Break complex topics into individual components that are easier to forecast, and use the combination of the smaller forecasts to forecast the more complex topic (e.g., “Will AAPL break $200?” is a complex forecast that can be broken down into “Will iPhone X ship more than 400m units?”, “Will Samsung’s technology outpace Apple’s?”, etc.).

 

10. INDEXING: Individual clustering questions can be weighted to come up with a forecast for the complex topic instead of using simple equal weighting.

 

11. DIVERSITY OF FORECASTS MATTER: Forecasts made from similar perspectives are less accurate than those made from multiple perspectives (see Boosting below).

 

12. BOOSTING: If you have three forecasters with different perspectives that all arrive at a 70% probability of an event occurring then the actual probability is greater than 70%.

 

13. GISTING: We didn’t get to spend much time here, but the idea is that complex subjects, reports, presentations, etc. can be distilled down into gists that the team votes on and refines into a supergist. Full understanding is never just quantitative or qualitative. Superforecasting is quantitative; supergisting attempts to provide the qualitative piece.

 

14. HYBRID FORECASTING COMPETITION: IARPA, the intelligence community’s research agency that sponsored the forecasting tournament that launched the Superforecasters (Good Judgment), is sponsoring a new Man + Machine Forecasting Tournament. For those interested in forecasting and machine learning, this is your spot: https://www.iarpa.gov/index.php/research-programs/hfc
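The boosting intuition in takeaway 12 can be formalized. One common approach from the forecast-aggregation literature (not something discussed at the book club; the exponent `a` is an illustrative tuning parameter) is to average independent forecasts in log-odds space and then "extremize" the result away from 50%:

```python
import math

def extremized_aggregate(probs, a=2.0):
    """Average independent probability forecasts in log-odds space, then
    extremize by multiplying the mean log-odds by a > 1."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_lo = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-a * mean_lo))

# Three independent 70% forecasts aggregate to well above 70%
print(round(extremized_aggregate([0.7, 0.7, 0.7]), 2))
```

The logic: if three forecasters with different information all independently land at 70%, their combined evidence is stronger than any one of them alone, so the aggregate belongs above 70%.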

 

November 10, 2017

Predictably Insightful: Recap of the Behavioral Alpha Conference

 

This is a picture of me and Dan Ariely, author of “Predictably Irrational” and five other great books on the decision pitfalls we all fall into. Dan was the keynote speaker at Behavioral Alpha 2017, an event put on by our friends at Essentia Analytics that we were proud to help sponsor.

 


 

The day was packed with great speakers including:

- Dan Ariely: “Behavioral Finance in Practice”

- Denise Shull: “Your Senses, Feelings & Emotions are the Ultimate Dataset”

- Clare Flynn Levy: “Applying Behavioral Finance to Your Own Investment Process”

- Fireside Chat with Mark Baumgartner: “Why Asset Allocators Care About Behavioral Analysis”

- Cameron Hight: “Mistakes Managers Make & How to Fix Them”

- Peer Idea Exchange with Paul Sonkin and Paul Johnson: “Pitching the Perfect Investment: Managing the Tensions Between Analysts and Managers”

- Dave Winsborough: “How the Collective Personality of Your Team Affects Performance”

 

Here’s a quick recap of some of the takeaways:

Dave Winsborough discussed ways that we can build better teams by understanding the goal we’re trying to accomplish, the needed components to accomplish that goal, and measuring the team participants to make sure that the team has all of the necessary components. It’s a relatively straightforward idea that should be applicable to almost any team.

Denise Shull discussed ways we can become better in tune with our feelings and emotions with the idea of learning when and how to leverage those feelings. Learning how to identify our own emotions is a powerful first step towards being able to mute the negative emotions and take advantage of the positive (signals).

Much of the conference was about emotion and bias and how they cause us to make poor decisions. I completely agree, but that’s not my expertise. I spent much of my time talking about the processes that help mitigate bias. This primarily involves making our assumptions and decision process explicit so that they can be judged and analyzed.

Dan Ariely gave several fascinating anecdotes: how casinos are the best at applying behavioral tools; how company internal satisfaction surveys have predictive power for stock performance; how Intuit is giving teams time and money to try bold new initiatives to help them get over the risk of projects that fail; how a weight scale that doesn’t show your weight (but tracks it over time) is a much better way to lose weight than one that gives immediate feedback subject to good-habit-breaking volatility; and how people in the next-to-lowest tax bracket are the ones most opposed to minimum wage hikes, because a hike could push them into the lowest rung of society. His major takeaway was that bias and personality are tough to eliminate, so you have to create habits, rules, and routinized behaviors that help us do the things we say we want to do (very Alpha Theory😉).

Clare Flynn Levy showed how investors can make better decisions by capturing some basic information about themselves and their decisions. Taking the time to tie those data points together can help us better understand when we make good decisions and when we make poor ones. By recognizing these cues as they happen, we can take advantage of the positive and avoid the negative.

Mark Baumgartner discussed his time at the Ford Foundation and the Institute for Advanced Study and some of the things he’s seen in the managers he evaluates. He said that about 10% of the managers he meets have some form of structured process around behavioral science, decision making, portfolio management, position sizing, etc. He believes that the primary value of a manager isn’t based on these processes, but that there is a lot of easy-to-pick-up alpha from implementing process. He would like to see his managers embrace it more actively, but says the industry moves glacially while the products that help improve the process are evolving very fast.

The room was full of managers and allocators. There was a self-selection bias, but the crowd truly embraced the concepts discussed during the day for how to be better using behavioral science. In fact, the crowd asked amazing questions, and one of my favorite parts of the day came from a member of the audience who expanded on his thoughts about the difficulties of capturing alpha. He said the number of investors has increased from 5,000 to 1 million over 50 years. How do you reverse that trend when it is one of the highest-paid professions, where you get to work with amazing people, research a broad range of interests, meet leaders in industry, academics, and government, and be exposed to an array of amazing ideas? If I’m ambitious and at the top of my class, why would I not pursue that profession?

Hmmmmm, maybe we can ask Dan Ariely if he has some creative way to change that behavior.

 

October 06, 2017

Poker: Art vs Science

 

“People describe poker as a game of art and science. Both intuition and science have merit, but the best players approach the game very quantitatively.” – Liv Boeree, Professional Poker Player

Our COO, Graham Stevens, and I met over a poker table. We’ve been playing together for many years, and he recently turned me on to an Oxford Lecture Series video by Liv Boeree.

Liv is a very successful poker player with a physics degree from the University of Manchester. She was discussing the use of Game Theory Optimal (GTO) play and the use of GTO Tables to break decisions down into ranges of hands based on different situations to aid poker players in knowing the optimal decision (Bet (big or small), Check, Fold).

She stated that the best players in the world all employ GTO. And even though all players assume their opponents are playing GTO, it is incredibly difficult to exploit those predicted decisions because they are optimal. In an interesting exchange in the video, Igor Kurganov, another very successful poker player who was in the audience, said that intuition (playing the player instead of playing the cards) factors into his decisions, but only to a small degree. He said the best that intuition can do is change a 50/50 bet to a 55/45.

The parallels to investing and Alpha Theory are clear. At Alpha Theory, we allow firms to build their own Game Theory Optimal system to figure out the “optimal” amount to bet on each position in their portfolio. And we find that firms that use intuition instead of their model lose to the hypothetical model performance about 75% of the time.
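Alpha Theory's actual model isn't spelled out here, but the classic formalization of "the optimal amount to bet" is the Kelly criterion, sketched below for a simple binary bet (illustrative only, not Alpha Theory's implementation):

```python
def kelly_fraction(p_win, net_odds):
    """Kelly criterion: optimal fraction of bankroll to wager when you win
    net_odds per unit staked with probability p_win. A negative result
    means the bet has no edge and should be skipped."""
    return p_win - (1 - p_win) / net_odds

# A 60/40 edge at even money suggests betting 20% of bankroll
print(kelly_fraction(0.60, 1.0))
```

Note how Kurganov's point maps on: shifting a 50/50 even-money bet to 55/45 moves the Kelly bet from zero to 10% of bankroll, a real edge but a modest one.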

The reasons portfolio managers choose to vary from their model are numerous but have a common theme: there is an intuition that the model isn’t capturing. Granted, investing is not poker. Poker has a finite set of variables and permutations compared to the seemingly infinite number of variables to consider in investing. But even still, just like in poker, the world-class players are going to be the ones that follow the model and only make small tweaks for intuition.

Do I practice what I preach? A note on my own poker play: I do not play GTO because I have not memorized the tables. I know some of the shortcut rules for when to bet and fold pre-flop, and I can do a rough calculation of pot odds post-flop, but that’s the extent of my skills. If my buddies would let me pull out my computer while I’m sitting at the table, I would follow GTO. If I were playing for a living, I would learn and follow the model.
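That "rough calculation of pot odds" is simple enough to show (standard poker arithmetic, not specific to any GTO table):

```python
def pot_odds(pot, call):
    """Minimum equity (win probability) needed to break even on a call:
    call / (pot + call)."""
    return call / (pot + call)

# Facing a 25 bet into a 100 pot, you need at least 20% equity to call
print(f"{pot_odds(100, 25):.0%}")  # prints "20%"
```

If your hand's chance of winning exceeds that break-even equity, calling is profitable in expectation; otherwise, fold.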

 

September 14, 2017

Asset Manager Reliance on Human Judgement vs Machine

Asset management is the industry most reliant on human judgement, according to a recent PwC study on data analytics.

 


 

Asset managers rely on human judgement 3x more than the next industry. For an industry with some of the best and brightest, we seem to be far behind. There is no expectation that this will change overnight, but at a bare minimum we need to be experimenting with ways to enhance our judgement with machines.

Alpha Theory has been doing just that for over 10 years and our clients have outperformed the average hedge fund by over 2x. Getting started is not hard. Adopting “machine” does not require a wholesale change as all of our clients operate with Man + Machine. What it does require is an acceptance that Man alone is generally inferior to Man + Machine and a cultural embrace of the “machine” as an enhancement to the daily judgements we all make.

The reliance on human judgement will fall over time for asset managers. Do not be the last to change.

 

 

August 18, 2017

Man Versus Model of Man: Lewis Goldberg

I recently read an article by Jason Zweig and saw a reference to Lewis Goldberg’s “Man Versus Model of Man” expert study, published in the 1970 Psychological Bulletin. There are hundreds of published studies with a similar theme: give an expert any and all available data that they want and ask them to make a judgement germane to their field of expertise (examples include an oncologist – how long will a patient live; a parole board – who is most likely to recidivate; a wine expert – the price of wine at auction; etc.).

The experts tell the scientist which variables are most important in their decision, and the scientist goes off, builds a model, and compares the model’s results to the forecasts of the “experts.” Over the past 60 years, hundreds of expert studies have been performed, and they show that the model beats or ties the expert 94% of the time (1).
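The "model of the man" in these studies is usually nothing more than a linear regression fit to the expert's own past judgments. A minimal single-cue sketch with synthetic data (all names and numbers are illustrative):

```python
import random

def fit_model_of_man(cues, judgments):
    """Least-squares line through (cue, expert judgment) pairs: the 'model
    of the man' that can then replace the expert on new cases."""
    n = len(cues)
    mean_x = sum(cues) / n
    mean_y = sum(judgments) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(cues, judgments))
    var = sum((x - mean_x) ** 2 for x in cues)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Synthetic expert: follows the true relationship (2x + 1) but adds noise.
rng = random.Random(0)
cues = [rng.uniform(0, 10) for _ in range(200)]
expert_judgments = [2.0 * x + 1.0 + rng.gauss(0, 2.0) for x in cues]
slope, intercept = fit_model_of_man(cues, expert_judgments)
```

The fitted line recovers the expert's policy while averaging away the expert's own inconsistency, which is exactly why the model of the man so often out-predicts the man.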

One of Goldberg’s quotes about the use of models versus clinical decision making made me laugh:

Such an enterprise, originally viewed with considerable disdain by clinical psychologists, has recently weathered a period of intense controversy (Gough, 1962; Meehl, 1954; Sawyer, 1966), and may soon become a reasonably well accepted procedure in psychology—if not in medicine, stock forecasting, and other professional endeavors.

Consequently, it now seems safe to assert rather dogmatically that when acceptable criterion information is available, the proper role of the human in the decision-making process is that of a scientist: (a) discovering or identifying new cues which will improve predictive accuracy, and (b) constructing new sorts of systematic procedures for combining predictors in increasingly more optimal ways.

This quote was written 46 years ago yet clinical judgement still dominates psychology, medicine, and stock forecasting. Given the evidence, it is hard to argue against model-based decision making or man + model, but expert judgement still dominates.

The experts that will dominate the future (and are already beginning to do so) are the ones that embrace models as an extension of their own expertise. Models do not replace human judgement. The parameters models are built upon are determined by experts. Experts are also required to intuit when exceptions to the model are necessary.

My belief is that Lewis Goldberg’s prediction will come true in the next decade as computing power, statistical techniques, software, and zeitgeist have grown to a point where Man + Machine will become the rule instead of the exception.

Here’s a few other great quotes from Lewis Goldberg’s article:

- Mathematical representations of such clinical judges can often be constructed to capture critical aspects of their judgmental strategies.

- The results of these analyses indicate that for this diagnostic task models of the men are generally more valid than the men themselves. Moreover, the finding occurred even when the models were constructed on a small set of cases, and then man and model competed on a completely new set.

- Ten years of research on the clinical judgment process have demonstrated that for many types of common clinical decisions and for many sorts of clinical judges, a simple linear regression equation can be constructed which will predict the responses of a judge at approximately the level of his own reliability. For documentation of this assertion and for details of the methodology, see Hoffman (1960), Hammond, Hursch, and Todd (1964), Naylor and Wherry (1965), and Goldberg (1968). While such regression models have been utilized (probably somewhat inappropriately) to explain the manner in which clinicians combine cues in making their diagnostic and prognostic decisions (see Green, 1968; Hoffman, 1968), there is little controversy about their power as predictors of the clinical judgments.

 

(1) “Comparative Efficiency of Informal and Formal Prediction Procedures” – William Grove and Paul Meehl, published in Psychology, Public Policy, and Law (1996)