Thursday, November 17, 2011

November 15, 2011

CPDEP - Gumball 2011 Blog

Anyone know Kent Rinehart? Well, his parents and family must be very proud of him. He has achieved big things in his life, none bigger than winning our Gumball Challenge. As many of you who have regularly attended the annual CPDEP forum know, every year we run a little contest to see how “awesome” you are at estimating the number of gumballs in an oddly shaped container. This year was no different from past conferences, though there may have been a few more “squeezie Owls” present, which could have been a distraction.

At this year’s forum, those who chose to play the game were asked for their P10-P90 estimates of the number of gumballs in the container. They were then asked to provide a single best estimate of how many were actually in the jar. The ballots were recorded, the gumballs were counted, and the results were reviewed.

Bottom line, Kent was up to the challenge and used his impressive gumball counting skills to come closest to the actual number. His estimate of 1320 was less than 1% away from the actual count of 1330. When asked about his gumball counting abilities, he refused to reveal his techniques, saying only, “I’m multi-talented.” This impressive feat does not go unrewarded: he has won himself a brand new Kindle Fire.

Now, why do we run this game? We are big believers in The Wisdom of Crowds, a book by James Surowiecki. In it, he argues that we put too much emphasis on a single expert’s opinion, and that if you solicit a group of individuals with some relevant knowledge and aggregate their responses, you may be pleasantly surprised by how accurate the resulting forecast is. Where there is a finite, knowable answer that lies within a definable range (e.g., the number of gumballs in a container, the actual weight of a bull, or the population of Milwaukee), the average of the estimates may be as close to, or closer to, the actual number than any single best estimate.

So for those of you who would like to look through the numbers, here you go:

Number of estimates: 104
Range of estimates: 72 to 50,000

Average/mean of all estimates: 2679
Actual number of gumballs: 1330
Kent’s estimate: 1320
Mean over just the P10-P90 range of the estimates: 1311

Number of people whose estimate was below the actual result: 60
Number of people whose estimate was above the actual result: 43
Percentage of P10-P90 estimates wide enough to include the actual result: 30.8%
Percentage of P10-P90 estimates which missed on the low side: 26.9%
Percentage of P10-P90 estimates which missed on the high side: 42.3%
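
For anyone who would like to reproduce these summary figures from a ballot sheet of their own, here is a minimal Python sketch. The example estimates and the 10% trim on each side are illustrative assumptions, not the actual conference data.

# Minimal sketch: summarizing point estimates from a gumball ballot.
# The 'estimates' list is illustrative only -- substitute the real ballots.
import statistics

ACTUAL = 1330
estimates = [72, 850, 1100, 1320, 1500, 2400, 5000, 50000]

mean_all = statistics.mean(estimates)
below = sum(1 for e in estimates if e < ACTUAL)
above = sum(1 for e in estimates if e > ACTUAL)

# Trimmed mean over the P10-P90 range of the estimates: drop the lowest 10%
# and the highest 10% of ballots, then average what remains.
ranked = sorted(estimates)
cut = round(0.10 * len(ranked))
trimmed = ranked[cut:len(ranked) - cut]
mean_trimmed = statistics.mean(trimmed)

print(f"Mean of all estimates:  {mean_all:.0f}")
print(f"Estimates below actual: {below}")
print(f"Estimates above actual: {above}")
print(f"P10-P90 trimmed mean:   {mean_trimmed:.0f}")

Applied to the full set of 104 ballots, a trim of this kind is what produces the P10-P90 mean of 1311 in the list above.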

Figure A: Distribution of Gumball Estimates
(Shown in Decision Frameworks’ new decision tree tool, TreeTop™)

So here are some early observations:

1) Chevron needs to revisit its employee optical plan – with estimates ranging from 72 to 50,000, either we had people trying to game the results or there was something in what they were drinking.

2) As a result, there were a number of outliers among the estimates. You can see from the distribution in Figure A that there was a big skew to the upside. It doesn’t take very many extremely high guesses to distort the picture. In this case, more people underestimated the number of gumballs than overestimated it, but those who overestimated, way overestimated.

3) Surowiecki’s test failed here, as the average of all the estimates was way off. In fact, the average of all estimates was more than double the actual number of gumballs. In other trials we’ve run, the average has come out much closer.

4) Interestingly (and we’ve seen this before with this game), if we take out the outliers by averaging only over the P10-P90 range of all the estimates, the result is much closer to the actual count. In this case, removing 21 estimates (i.e., the majority of the outliers) and averaging the remaining P10-P90 range gives 1311. This is quite thought provoking, because only one estimate came closer, and that was Kent’s. You can draw your own conclusions from this exercise. All we can say is that this result is pretty consistent with other gumball trials of this nature we’ve run.

5) Let’s turn our attention to people’s ability to come up with an effective range. When we look at how well people provided a P10-P90 range that would include the resulting number of gumballs, we see that only about 30% were able to provide an inclusive range. This means that roughly 70% of people were too narrow with their estimates and missed the result. Is this unusual? No, especially without some expert interviewing techniques.

One of the big failings we see is that people tend to be too narrow in their estimates. One person’s range was only 36 gumballs wide. You would have to be pretty good to be that precise, and in this case the individual was off by a factor of five. How do you avoid this bias? Through effective expert interviewing techniques, which teach those providing the estimates to start their thinking from the outer edges of what they know and work inward. A quick sketch of this kind of range check follows below.
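
As a rough illustration, here is a short Python sketch that scores each person’s P10-P90 range against the actual count, where a “low miss” is read as a range that falls entirely below the actual and a “high miss” as one entirely above it. The sample ranges are made-up values for illustration, not the actual ballots.

# Minimal sketch: scoring each person's P10-P90 range against the actual count.
# Each tuple is one (P10, P90) ballot -- the values are illustrative only.
ACTUAL = 1330
ranges = [(800, 1600), (1200, 1236), (2000, 6000), (100, 900), (1000, 3000)]

contained = sum(1 for lo, hi in ranges if lo <= ACTUAL <= hi)
missed_low = sum(1 for lo, hi in ranges if hi < ACTUAL)    # entire range below the actual
missed_high = sum(1 for lo, hi in ranges if lo > ACTUAL)   # entire range above the actual

n = len(ranges)
print(f"Ranges wide enough to include the actual: {contained / n:.1%}")
print(f"Ranges that missed on the low side:       {missed_low / n:.1%}")
print(f"Ranges that missed on the high side:      {missed_high / n:.1%}")

Run over the actual conference ballots, a check like this is what yields the roughly 30% / 70% split described above.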

If you are interested in learning more about expert interviewing, you can check out one of our free webinars in 2012 and/or attend one of our 3-day courses. Additionally, there are great expert interview templates in DTrio, which is available for download from the GIL Option Panel.

In summary, we hope you will stop by Kent’s office and congratulate him on his impressive abilities. We also hope to see many of you again through project work and/or in one of our upcoming courses. Remember, great decisions are derived, not divined!

Cheers,
Jeremy Walker
jeremy@decisionframeworks.com