RFQ

I’ve spent enough time in the technical pre-sales process with my customers at PowerVR to understand a bit about how the request for quotation, or RFQ, process usually goes. Sadly, the short answer is often “terribly”. For a technology IP business like PowerVR, RFQs are everywhere, but primarily between us and our customer, and between our customer and theirs. You’d be surprised just how many links there sometimes are in the chain between my IP and the end product you eventually buy.

RFQs almost always contain the types of terms you’d expect for that kind of business: a technology evaluation, licensing terms, royalty rates, support structures, evaluation periods…if it’s something you want to check before handing over money, you can imagine it’s in the RFQ. They’re lengthy documents, packed to the rafters with information, ostensibly so the customer can purchase the best solution for the money they have to hand, for the products they want to build.

The problem I’ve encountered time and time again, which ties in quite heavily to my mini essay on building great benchmarks, is that the technology evaluation is often measured against markers that have limited, and sometimes no direct, relevance to the end product. I struggle to remember an evaluation process that consisted entirely of benchmarks and other measurements that were actually relevant to the thing the customer wanted to ship to you in the end.

Technology buyers tend not to understand how to measure and evaluate the performance of what they’re buying in a robust and meaningful way. In my specific field, I’ve seen games benchmarks used to estimate UI performance, a single pair of low-level (and poorly written) micro-benchmarks used to estimate the performance of complex workloads, and most gradations in between.

There’s a gap in the market, especially in technology analysis, for a knowledgeable 3rd party to come in and help the buyer select the right technology for the end customer product. I’d rather sell my technology to a company that knows it does exactly what they need it to do, because they evaluated it with rigour and deep understanding, than help them muddle through an overly broad and haphazard analysis based on contrived and often irrelevant measurements. I’ll cheerfully help PowerVR book revenue no matter how we happen to win it, but when it comes via a bad RFQ process it dulls the lustre.

Bad evaluation of the core technology in the IP space can leave a company compensating elsewhere in the RFQ, in areas that affect the post-sales process and their bottom line.

For the buyer, a better understanding of what they’re using to evaluate the technology can only result in a better experience for the end user. The benefits of that are often measured in the tens or hundreds of millions of dollars in my field. Certain companies understand that and put deep trust in the process, with clear end results. Others have no solid idea how they’re selecting IP, or why, and the results are the exact opposite.

The RFQ is the very beginning of the journey from technology to product. It’s where the worst mistakes will be made. So it pays to understand how to execute an RFQ with all the precision and deep full-product understanding you can muster.