This article was written by Mark Trowbridge and was published in the August 2019 edition of ISM's Inside Supply Management Magazine.
"Lies, Darn Lies, and Statistics"
In 1907, the popular author Mark Twain made this colorful phrase famous. Commenting on how "beguiling" statistics can be, Twain observed that there are three kinds of lies: "lies, d_mn lies, and statistics" (the full quote is softened here to protect the innocence of our readers).
Are statistics a helpful tool for interpreting data? Of course. Statistics play a very important part in supply management. Examples include:
· Proposal Evaluation Weights
· Supplier Performance Scorecards
· Calculation of Savings and Cost Increase Impacts
· Internal Customer and Supplier Surveys
· Discounts on Supplier Pricing
But misapplication of statistical principles can significantly skew results, and can lead an organization to make the wrong business decision. This article will discuss several ways that procurement professionals can serve our employers through better application of statistical processes, and how to avoid common mistakes in statistical analysis:
Principle #1 – Always Remember the Basis for Calculation. Very innocent mistakes can be made with statistics by choosing the wrong factors in a calculation. One common error is for procurement professionals to calculate the percentage difference between two supplier proposals, and assume that a percentage increase is the same as a percentage decrease.
Example – Supplier A proposes $980,000 for a particular service. Supplier B proposes $720,000. Question – How much of a percentage difference is this? Answer – It depends.
Supplier A’s proposal is $260,000 more expensive than Supplier B’s offering. Percentage-wise, it is 36.1% more expensive = [($980,000 - $720,000)/$720,000].
But looked at in reverse, Supplier B’s proposal is $260,000 less expensive than Supplier A’s offering. Same dollar differential, but percentage-wise it is 26.5% less expensive = [($980,000 - $720,000)/$980,000].
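As a quick illustration, here is a minimal Python sketch using the figures above; it computes the same $260,000 gap both ways and shows how the choice of denominator changes the percentage.

```python
# Percentage difference depends on which proposal serves as the baseline (denominator).
supplier_a = 980_000  # Supplier A's proposal
supplier_b = 720_000  # Supplier B's proposal

difference = supplier_a - supplier_b          # $260,000 either way

pct_more_expensive = difference / supplier_b  # baseline = Supplier B's price
pct_less_expensive = difference / supplier_a  # baseline = Supplier A's price

print(f"Supplier A is {pct_more_expensive:.1%} more expensive than Supplier B")  # 36.1%
print(f"Supplier B is {pct_less_expensive:.1%} less expensive than Supplier A")  # 26.5%
```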
Our company was asked by a Fortune 100 company to review the savings claims of a well-known supply chain consulting firm it had used to "source" a multi-million-dollar spend category. Through our research, we found that the consulting firm had calculated the percentage difference between the old price and the new price incorrectly, and had then extrapolated that percentage across a larger spend base, thus overstating actual savings by a wide margin for its client. Note that in our little example above, the simple switch of the denominator results in nearly ten percentage points more differential being claimed. Until we were asked to review the calculations, the client hadn't caught this, and the large consulting firm had been rewarded based on the "savings" it claimed.
Principle #2 – Samplings Must Be Sufficient in Quantity. Ever get worried when the media forecasts which candidate will win a national election based on a "survey" of just 500 potential voters? (Author's note – yeah, me too.) A bit scary when the electorate numbers well over 200 million people...
A foundational concept in statistics is that, to be statistically valid, a large sampling is almost always better than a small one. You cannot predict data trends accurately from an extremely small sampling. For example, consider the size and makeup of a sourcing team when it "scores" supplier proposals. If you have a six-person team, don't let three people score one group of proposals while the remaining three score another group.
With a population that small, individual differences will skew the results. Fred might be a member of Group A who always sees the glass as half-full and consistently gives out high scores. Kelly might be a member of Group B who always sees the glass as half-empty. The right way to score the evaluations is to have all six people score all of the proposals. Otherwise, Fred and Kelly will materially skew the results.
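To see the effect, here is a minimal sketch with made-up scores, assuming Fred grades roughly a point high and Kelly roughly a point low; it compares split scoring with having all six evaluators score every proposal.

```python
from statistics import mean

# Hypothetical 1-10 scores; Fred runs high, Kelly runs low.
group_a = {"Fred": 9.0, "Ann": 7.0, "Raj": 7.5}     # scores Proposal X only
group_b = {"Kelly": 5.0, "Dana": 7.0, "Luis": 7.5}  # scores Proposal Y only

split_x = mean(group_a.values())   # 7.83 -- inflated by Fred
split_y = mean(group_b.values())   # 6.50 -- deflated by Kelly
print(f"Split scoring:      Proposal X = {split_x:.2f}, Proposal Y = {split_y:.2f}")

# If all six evaluators score both proposals, the optimist and the pessimist
# affect both averages equally, so their individual biases largely cancel out.
all_scores_x = [9.0, 7.0, 7.5, 5.5, 7.0, 7.5]  # all six evaluators, Proposal X
all_scores_y = [8.5, 6.5, 7.0, 5.0, 6.5, 7.0]  # all six evaluators, Proposal Y
print(f"Full-panel scoring: Proposal X = {mean(all_scores_x):.2f}, Proposal Y = {mean(all_scores_y):.2f}")
```

In the split case the proposals appear more than a full point apart, even though the full panel sees them as only about half a point apart.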
Principle #3 – Samplings Must Be Accurate in Quality. Anomalies in samplings can significantly skew the results. For example, consumer surveys conducted over the telephone around the dinner hour are skewed before the phone is even answered, because the nature of the sampling has already been altered by the methodology. In this case, the methodology eliminates people who don't have a landline telephone, those with unlisted numbers, those who work evenings, and those who simply don't answer the phone during the dinner hour.
In the same way, "random samplings" in the supply chain world must truly be random. An example is TQM sampling of products. Too many supply management professionals have found that product quality can vary over time, by production lot, by manufacturing facility, and so on. You can't just sample the first production run to catch quality errors. Samples must be taken at random intervals throughout the range of production you wish to stabilize.
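One possible way to operationalize this (a sketch under assumed lot sizes, not a prescribed method) is to draw the inspection sample at random across the entire production history rather than from the first run only:

```python
import random

# Hypothetical production history: 20 lots of 500 units each,
# each unit identified by (lot_number, unit_number).
units = [(lot, unit) for lot in range(1, 21) for unit in range(1, 501)]

# Biased approach: inspect only the first 50 units of the first production run.
first_run_sample = units[:50]

# Better approach: spread the same 50 inspections at random across all lots.
random_sample = random.sample(units, k=50)

first_run_lots = sorted({lot for lot, _ in first_run_sample})
random_lots = sorted({lot for lot, _ in random_sample})
print(f"First-run sample covers lots: {first_run_lots}")           # only lot 1
print(f"Random sample covers {len(random_lots)} different lots")   # typically most of the 20
```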
Principle #4 – Scoring Methods Must Be Objective in Design. My family's personal email service is provided by an ISP that occasionally puts surveys on its webpage. The political leanings of the ISP's management team are clear when it asks questions about a politician like this:
I feel that this politician is doing (select one):
___ a wonderful job
___ a good job
___ an acceptable job
___ a poor job
Note that the design of the survey forces a participant to choose among four options, three of which give the politician an "acceptable" or better rating. This faulty design skews the survey results to the positive.
This practice sometimes shows up in supplier scorecards or procurement customer surveys. As I train supply management organizations around the world, I hear some great examples. One biased supply chain survey actually included the following “loaded” question: “Has your business operation ever been completely shut down by the procurement group’s failure to deliver materials? (Yes/No)”.
Needless to say, even if several departments answered “yes”, the percentages would still indicate an extremely positive rating!
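To make the arithmetic concrete (using hypothetical response counts), even a handful of "yes" answers to such an extreme question still produces a flattering headline number:

```python
# Hypothetical responses to the "loaded" yes/no question above.
total_respondents = 40
complete_shutdowns = 5   # departments answering "yes"

positive_rate = (total_respondents - complete_shutdowns) / total_respondents
print(f"Departments never completely shut down by procurement: {positive_rate:.0%}")  # 88%
```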
Principle #5 – Avoid Subjectivity in Rating Scales. Some organizations rate supplier performance variables on a three-point scale (Excellent, Average, Poor). Others use a five-point scale (Very Good, Good, Average, Poor, Very Poor). But these scales don't give participants many choices, and often result in less-than-meaningful statistics.
I personally like a ten-point scale that allows the participant to award fractional scores (like "8.5"). This lets the person carefully select a score that reflects his or her perception of the supplier's performance. And the group's averages provide much more detailed information, because evaluators can see the difference between suppliers scoring 7.5 and 9.5 respectively (either of which would have received an "Excellent" rating on a three-point scale).
Another helpful technique is to provide the evaluators with written guidelines for scoring the performance variables. The guideline for a score between 4.0 and 6.0 might be: "This supplier is consistently able to deliver product on-time, but rarely able to deliver earlier than the purchase order delivery date."
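One way this might be wired together (a sketch; the score bands and guideline wording below are hypothetical, loosely following the 4.0-6.0 example above) is to key the written guidelines to bands of the ten-point scale and report each supplier's fractional average alongside the matching guideline:

```python
from statistics import mean

# Hypothetical written guidelines keyed to lower bounds on a 10-point scale.
GUIDELINES = [
    (8.1, "Routinely delivers on-time or early, with proactive communication."),
    (6.1, "Delivers on-time and occasionally ahead of the PO delivery date."),
    (4.0, "Consistently delivers on-time, but rarely earlier than the PO delivery date."),
    (0.0, "Frequently misses purchase order delivery dates."),
]

def guideline_for(score: float) -> str:
    """Return the written guideline whose band contains the fractional score."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"Score {score} is outside the 0-10 scale")
    for lower_bound, text in GUIDELINES:
        if score >= lower_bound:
            return text

# Fractional scores from six evaluators for one supplier.
scores = [8.5, 7.5, 9.0, 8.0, 8.5, 9.5]
average = mean(scores)
print(f"Average score: {average:.2f} -> {guideline_for(average)}")
```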
Statistics can have great meaning for an organization, but only when they illustrate accurate and meaningful information.