It’s Time to Pull Procurement Research Out of the Gutter


For most things in life, there’s the good, the bad and the ugly. So it is with procurement research.

I’ve been doing procurement research for a long time, and have seen some really good stuff from all sorts of firms.

However, I’ve noticed lately a marked increase in the quantity of research surveys – and an unfortunate, commensurate drop in quality.

“So what?” you say. “What’s the problem here?”

The problem is that practitioners are surveyed to DEATH by one another (e.g., advanced firms getting hit up by their peers) and by third-party firms from numerous provider sectors. And when the research is sketchy, it drags everyone down. I often say that the biggest cost in procurement is the opportunity cost of spending your time on one thing relative to what else you could be doing. There’s only one truly precious commodity: time. And life is too short for crappy procurement research.

So, I would like to share some helpful strategies and tips to make your research efforts better – whether you’re a practitioner or a provider of any sort. Let’s raise the tide for all our collective boats. Here are some ways to help do that:

  • Make study participants clear on your role as a “researcher.” Too many consultants and software providers use research as a way to value-sell their solutions to the problems and priorities captured in the surveys. There’s nothing wrong with that if prospective survey takers know what they’re getting into, and the same goes for provider-sponsored research. For our part at Spend Matters, when a survey is commissioned by a partner rather than run internally, we never give individual study participant data (or contact information) to the provider who commissioned it. Once the research is done, the provider can host the content for download, which is a straightforward trade of a contact name for the insights – but research study participants are kept 100% confidential. We try very hard to live by the principles of the Market Research Code of Conduct from the Market Research Society (yes, there is such a thing – and 60 years old no less!) and feel that everyone should strive for this ideal. Unfortunately, much of the traditional media, including those with research/“intelligence” services, has devolved into marketing shills for providers.
  • First, be clear on your research objective. What is the problem you are trying to solve? Maybe it’s finding better ways to get innovation from your suppliers. In research terms, you may want to specify a hypothesis – for example, that certain techniques lead to higher spend influence and savings. You can then figure out the best way to get that research (e.g., make, buy, or both). If you’re a research firm, be clear about what you’re trying to do with the research. Do not do a “bait and switch.” And on a related note…
  • Be clear on the type of study it is. “Research,” like other terms (e.g., “benchmark,” “savings,” “category,” “procurement,” etc.), means different things to different people. Is it [supply] market research? Is it rigorous academic research? Is it just any type of analysis?
  • Know that research takes different forms, and that’s OK – but be true to the format. Short topical research studies are very different from formal benchmark studies, and asking for larger investments of people’s time should come with commensurate expected value (more on this later). For example, I saw one of the many blog posts that get sprayed across various LinkedIn groups. It was about “piecing together a benchmarking strategy,” and the title, of course, piqued my interest given my background and my research pieces on the topic (see “related links” below). It ended up not being a post on benchmarking at all, but a “benchmark” survey itself, where users would find themselves answering more than 70 questions. I put “benchmark” in quotes because a real benchmark is more than just a survey instrument blasted to the masses. So, to return to my original point: don’t sneak your way into benchmark data collection. If you’re going to run a proper benchmark, design it and roll it out appropriately for what it is. Otherwise, you “drag down” the term benchmarking itself. OK, let’s move on…
  • Don’t overwhelm your study respondents. Good CPOs don’t fill out 99% of the massive surveys they receive – they’re way too busy. They’ll do it when their service providers call in a favor, but people are surveyed to death. I’m amazed at the length of some of these things. As an example, a former procurement analyst from an established firm set up his own shop and immediately launched a survey of literally hundreds of data points. But it’s not just the raw length of these things that’s troublesome – it’s also the quality I so often see. When I design research, I “value engineer” every question: not just to keep it focused on the objectives, but also to maximize its clarity and the “ROI” on the collective time of the people filling it out. If you do research, work backwards from the objective/hypotheses and the data/insights that you want to provide.
  • Don’t just do a data dump. Stratify the data. Analyze it well and find the trends, correlations and other insights that can help. Stratifying on some type of performance metric or practice-adoption metric (or a composite of the two) is always a good way to tease out insights – see the stratification sketch after this list.
  • Don’t run from qualitative insights – they’re often the best. You’ll get some great “quotable quotes” too!
  • Make an online survey as effective and efficient as possible:
    • Tell participants how long the study will really take them. Modern survey tools give the survey designer highly predictive estimates on this. If a survey doesn’t show an estimate – be wary. If there is no progress percentage bar – be wary. In both cases, they may not want to scare you off. And if an estimate is shown but is wildly off, the designers either haven’t tested it with real subject matter experts (similar to “user acceptance testing” in software development) or they’re outright lying. I’ve seen some doozies on this, where they say it’ll take 10-15 minutes – and there are 100 data points (see the arithmetic sketch after this list). Good luck with that.
    • Use some basic best practices. Use conditional logic. Pick the right question types. Provide a “don’t know” option. Use enumerated lists when possible rather than just numeric fields. And for gosh sakes, provide some definitions and help text! Set aside the definitions of savings/avoidance (see links below); even something as simple as “cost per PO” – which Procurement Leaders asked for in the survey mentioned above (go through it and you’ll see many other examples) – needs defining. Is it the procurement budget divided by the number of POs? Or the cost of the people who process POs? What if they’re outside of corporate purchasing? Which costs go in the cost pool? Does the number of POs include completely automated ones – or exclude them? You get the point (see the cost-per-PO sketch after this list). You need to provide some guidance or you’ll get garbage coming out the back end.
    • Use the right mix of study mechanisms: phone interviews, online surveys, face-to-face meetings, etc. – depending on your objectives.
    • If you use an online survey tool, pick a good one. I think Qualtrics is the best tool out there, but SurveyGizmo is the best value.
    • Close the loop with study respondents, work as quickly as possible to get insights back to them, and provide a good “ROI” to them (more than just a gift card or a chance to win an iPad). I personally agonize over making sure I can deliver value-added insights to the survey taker – regardless of research objective.
  • Most of all, don’t be sketchy, and do no harm. See the first bullet at the top of this list. Be clear about who you are, what you’re researching (and why), what data/insights you want to gather, and the value participants will get for taking part.
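
To make the “stratify, don’t dump” point concrete, here is a minimal sketch in Python (using pandas). The column names and figures are hypothetical, purely to show the shape of the cut:

```python
# A minimal sketch of stratifying survey results. The DataFrame columns
# ("savings_pct" for reported annual savings, "uses_category_mgmt" as a
# practice-adoption flag) and all figures are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "savings_pct":        [2.1, 4.5, 7.8, 3.2, 9.1, 1.5, 6.4, 8.0],
    "uses_category_mgmt": [0,   1,   1,   0,   1,   0,   1,   1],
})

# Stratify respondents into performance tiers on the savings metric.
responses["tier"] = pd.qcut(responses["savings_pct"], q=2,
                            labels=["laggard", "leader"])

# Compare practice adoption across tiers - the kind of cut that surfaces
# a trend instead of a raw data dump.
print(responses.groupby("tier", observed=True)["uses_category_mgmt"].mean())

# And a simple correlation between practice adoption and performance.
print(responses["savings_pct"].corr(responses["uses_category_mgmt"]))
```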
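
On honest time estimates, a quick back-of-the-envelope check – using the numbers from the “doozy” above – shows what a claimed 10-15 minute survey with 100 data points actually implies:

```python
# Sanity-check a survey's claimed completion time against its length.
# A "10-15 minute" survey with 100 data points implies 6-9 seconds per
# answer - implausible for any question that requires reading a
# definition or looking up a number.
data_points = 100
for claimed_minutes in (10, 15):
    seconds_per_answer = claimed_minutes * 60 / data_points
    print(f"{claimed_minutes} min -> {seconds_per_answer:.0f} sec per data point")
```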
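
And on definitions, here is a hypothetical illustration of why a term like “cost per PO” needs help text. Every figure below is invented; the point is only that two defensible readings of the same metric give materially different answers:

```python
# A hypothetical illustration of why "cost per PO" needs a definition.
# All figures below are invented for illustration only.
po_count_manual    = 4_000      # POs touched by a buyer
po_count_automated = 16_000     # fully automated POs
procurement_budget = 2_000_000  # total department budget
po_staff_cost      = 600_000    # loaded cost of PO-processing staff only

# Reading 1: total department budget over all POs, automated included.
cost_per_po_v1 = procurement_budget / (po_count_manual + po_count_automated)

# Reading 2: PO-processing staff cost over manually touched POs only.
cost_per_po_v2 = po_staff_cost / po_count_manual

print(f"Reading 1: ${cost_per_po_v1:.2f} per PO")  # $100.00 per PO
print(f"Reading 2: ${cost_per_po_v2:.2f} per PO")  # $150.00 per PO
```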

Whatever you promise, keep it and deliver that value. The better we all become at doing research, whether you’re a practitioner or a provider of any form, the more we can lift each other up and lead by example. If I’ve missed anything, or you want to share your thoughts, please do so publicly or privately.

Related Links

A 21-Point Punch List to Diagnose Your Procurement Scorecard [Plus +]

Benchmarking the Procurement Benchmarkers – An Insider’s Guide (Part 1) [PRO]

Benchmarking the Procurement Benchmarkers – An Insider’s Guide (Part 2) [PRO]

Benchmarking the Procurement Benchmarkers – An Insider’s Guide (Part 3) [PRO]

Using DMAIC 2.0 to Blow Up the N-step Procurement Process [PRO]

Supply Performance Management: Critical For Procurement Measurement [PRO]

A Procurement Transformation Cookbook: Part 1 – Creating the Value Menu [PRO]

Top 10 Ways to Radically Expand Category Management Value Creation [PRO]

Don’t Avoid Cost Avoidance (Part 1 – The Rant)

Don’t Avoid Cost Avoidance (Part 2 – The CFO and CPO Fireside Chat)

Why Purchase Price Variance (PPV) Should Be Banished From Procurement Measurements and KPIs [PRO]

Purchase Price Variance (PPV) in Procurement and Savings Strategy: Limitations and One Potential Use [PRO]

Diving Into the Hackett Performance Exchange: Automated P2P Benchmarking for SAP and Oracle ERP Users [Plus +]

Putting GRC, Innovation, and Profit Improvement in Procurement Terms [Plus +]

Procurement and Supply Analytics: Spend Analysis is Only the Beginning [Plus +]

Ask the Expert: Top 10 Benchmarks in Procurement [Plus +]

New Procurement Metrics: Cost Per Outcome in the Private Sector (Part 2) [Plus +]

A.T. Kearney’s ROSMA Procurement Benchmark: The Good, The Bad, and The Implications (Part 1, The Good)

A.T. Kearney’s ROSMA Procurement Benchmark: Is the Metric Ready for Prime Time?

A.T. Kearney’s Procurement Benchmark: Does ROSMA Really Help Elevate Procurement’s Position in the Firm?

A.T. Kearney’s ROSMA: Motivations, Open Standards, and Summary Observations [PRO]

Voices (2)

  1. Pierre:

    Kris, I totally agree. There needs to be much better cooperation across various providers who serve the same industry. Personally, I’m happy to collaborate with any other provider of any form. Doing good research is like “God’s work” – propagating good insights and improving the ‘gene pool’ of insights. A task best done with others.

  2. kris colby:

    Excellent advice, thanks, Pierre. I would also argue that since so much of the research in our industry centers on similar topics (spend under management, effectiveness, utilization of technology, etc.), there’s a mutually beneficial path forward in combining much of the collection and compilation. From there, the various analysts could slice, dice and analyze the data as they see fit without asking practitioners to fill out the same information multiple times.
