Editor’s note: This is part of a new Spend Matters series of personal tales from the procurement trenches. Know someone with a great procurement story? Send us a note.
Contingent workforce procurement and management in a professional services company is often a distinct activity. Performance metrics, while similar to those of other programs, are prioritized differently. Costs are monitored more closely because they are directly tied to operating margins, and time to fill is a crucial metric. For a large program that hires thousands of contractors each year, every day of cycle-time reduction can add millions to the top line.
For seven years I managed contingent staffing at Wipro; for the last four of those years, I managed all contingent staffing globally. The program encompassed 52 countries, more than 300 suppliers, more than $400 million in spend and more than 10,000 contract staff hired each year.
Among the contingent staff we hired, 95% were for client projects and would be considered technical staff (software developers, designers, testers, infrastructure staff). They were assigned to outsourced projects, projects delivered at a client site and, in some cases, “staff aug” opportunities. Demand was classified using the simple numeric scale below.
- Type 1 was staff augmentation — time and materials billing; the client selected and managed staff; this was oftentimes competitive
- Type 2 was co-managed — T&M billing; the client selected staff, but Wipro and the client jointly managed them
- Type 3 was outsourced — SOW-based work; typically not at a client location; Wipro selected and managed contingent staff
- Type 4 was consulting — nominally staff augmentation but typically true consulting, which could have a hybrid payment model
Three years ago, while reviewing performance metrics, we decided to try evaluating program performance by type of demand. Until then, metrics were measured and reported by geography, business unit or practice (and at the account level for large accounts), but neither demand nor its associated performance metrics were measured by demand type.
After we began using these criteria, we found that for demand that was competitive in nature (Type 1 or 2 above), the conversion rate of the opportunities was less than 10%. This meant that 90% of my team’s time and my suppliers’ activity provided no return. For non-competitive situations (Type 3 and 4 above), the conversion rate was above 80%.
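The segmentation exercise itself is simple arithmetic. A minimal sketch of it is below; the opportunity records are purely illustrative placeholders, not figures from the actual program:

```python
# Hypothetical sketch of conversion-rate analysis by demand type.
# The records below are made-up examples, not real program data.
from collections import defaultdict

# Each record: (demand_type, converted) using the 1-4 scale defined above.
opportunities = [
    (1, False), (1, False), (1, True), (2, False), (2, False),
    (3, True), (3, True), (3, False), (4, True), (4, True),
]

# demand_type -> [converted_count, total_count]
counts = defaultdict(lambda: [0, 0])
for demand_type, converted in opportunities:
    counts[demand_type][1] += 1
    if converted:
        counts[demand_type][0] += 1

for demand_type in sorted(counts):
    won, total = counts[demand_type]
    print(f"Type {demand_type}: {won}/{total} converted ({won / total:.0%})")
```

Grouping the same opportunities first by geography or business unit and only then by type would blur exactly the competitive-versus-non-competitive contrast the article describes.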
In addition, we analyzed the performance metrics in more detail. While some metrics were better in demand types 1 and 2, others were better in types 3 and 4. The primary objective was to increase conversion of opportunities in types 1 and 2, while also decreasing time to fill and increasing the margins we were able to capture. But as we dug deeper into the data, we found areas within types 3 and 4 that performed better, so we began resetting performance metrics for each demand segment to improve the performance of the entire program, not just the problem area identified initially.
Having the performance data (and the gaps it exposed) allowed us to begin investigating the root cause of the large disparity between the two areas. Through successively deeper analysis and process review, we identified the process and workflow steps that inhibited opportunity conversion in competitive situations.
We then designed a fulfillment process that addressed the workflow steps deemed too slow or cumbersome for the agility and speed these demand types required, while keeping the corporate risk management and compliance controls in place.
The newly designed “rapid fulfillment process” was implemented on a trial basis. After minor adjustments, we rolled it out within the U.S. The result? Performance metrics improved, and the conversion rate of opportunities grew from less than 10% to over 50%. Time to fill was reduced by over 40%.
Due to the documented success and validated improvement in metrics, the process went live globally after six months. Over the course of year one, the newly designed program was credited with contributing $50 million in incremental revenue to the organization, and in year two with over $200 million.
The program's success rested on understanding the differences in demand fulfillment, classifying demand accordingly and setting clearly defined purchasing guidelines so that demand type could be classified effectively. Even though all fulfillment was for contingent staffing, technically a single procurement category, the differences within it required different approaches.
Contingent staffing is not like buying infrastructure, hardware or office supplies, especially in a services organization. There are subtle differences in demand types that require determining how to classify your demand and how best to fulfill each classification. Proper tracking and measurement of performance metrics should define both the targets and the desired objectives. In our case, the metrics, and most importantly the contribution to both top and bottom line, proved the process and approach were sound.