Published by Paul Swenson, EVP Business Development, After, Inc. on September 16th, 2016.
After, Inc. is privileged to work with several great OEMs and retailers in many sectors – including automotive, marine, consumer electronics/appliances, power sports, outdoor power, sporting equipment, heavy equipment, and many others. Often these relationships begin with finding ways to improve warranty programs and extended service contracts.
We always encourage our clients to better understand the true cost per unit (CPU) of their warranty and extended service programs (ESPs). Accurate CPUs are a prerequisite for real warranty and ESP strategy development, including reserve levels and earning, pricing, terms, age limits, and renewal programs. And here’s the surprise to many in the business: in all the engagements we’ve started over the past 23 years, we’ve found only one program with accurate CPU forecasts. One. The rest of the program CPUs were significantly off – most often forecasted too high, but occasionally too low. This holds for forecasts developed internally, by outside actuaries, and by insurance companies. Both ‘too high’ and ‘too low’ have significant impacts, and neither is acceptable.
Sometimes our engagements begin because a company has reached a ‘crisis’ point due to a large rate increase, a calculated shortfall, or an inability to answer external auditors’ questions. Other times, our work begins because ESP and warranty program managers recognize that they can drive competitive advantage and customer/dealer loyalty when they have accurate, data-driven decision support.
One constant we see across industries is that traditional actuarial methods and software are wholly inadequate for accurate CPU calculation and forecasting whenever liability terms run longer than 6–12 months. The reasons fall into two areas: statistical methods (science) and knowledge of the business/industry (art).
Traditional actuarial methods often rely on ‘loss triangles’ that arrange known variables – the number of claims, claim costs, and so on – into a table by effective date, creating a triangle-shaped layout. Development factors are then derived from known history to estimate the unknown portion of the triangle. Many of the available software ‘solutions’ use the same rudimentary statistical methods, sometimes with user inputs that ‘customize’ the output for a particular business.
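To make the triangle idea concrete, here is a minimal chain-ladder sketch in Python. The figures and the simple averaging of development factors are invented for illustration only – they are not client data, and real actuarial practice uses more refined factor selections.

```python
# Minimal chain-ladder sketch on an invented loss triangle.
# Rows are policy cohorts (by effective year); columns are cumulative
# claim cost by development period. The lower-right is the unknown
# portion to be estimated.
triangle = [
    [100.0, 150.0, 170.0, 175.0],  # oldest cohort: fully developed
    [110.0, 160.0, 185.0],
    [120.0, 175.0],
    [130.0],                        # newest cohort: one period observed
]

def development_factors(tri):
    """Average ratio of column d+1 to column d over rows having both."""
    n = len(tri[0])
    factors = []
    for d in range(n - 1):
        ratios = [row[d + 1] / row[d] for row in tri if len(row) > d + 1]
        factors.append(sum(ratios) / len(ratios))
    return factors

def complete_triangle(tri):
    """Project each cohort to full development using the factors."""
    factors = development_factors(tri)
    completed = []
    for row in tri:
        row = list(row)
        for d in range(len(row) - 1, len(tri[0]) - 1):
            row.append(row[-1] * factors[d])
        completed.append(row)
    return completed

# Estimated ultimate cost per cohort = last column of the completed triangle.
ultimates = [row[-1] for row in complete_triangle(triangle)]
```

The fragility the article describes is visible even here: the newest cohort’s ultimate is the product of every development factor, so any instability in recent history compounds through the whole projection.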
These methods work well for industries that are stable in terms of technology, underlying platforms, drivetrains, governmental requirements, and so on. However, that doesn’t describe any of the industries we serve! So, what can be done to ensure that CPUs are forecasted accurately, that risk never spirals out of control, and that you can rely on your forecasts for strategic decision support?
For over 20 years, we’ve been developing and evolving new quantitative methods that consistently drive accurate forecasts. These methods have been innovated and adapted from other fields – engineering, sociology, marketing, public health, epidemiology – and include failure-time analysis, reliability analysis, censored-data methods, and event history analysis.
Notable methods we use include: Kaplan-Meier (nonparametric); Accelerated Failure Time models (parametric); Cox Proportional Hazards (semi-parametric); and other new techniques being co-developed with leading schools and professors.
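As a sketch of the first of these, here is a bare-bones Kaplan-Meier estimator on invented, right-censored warranty records. The data values are hypothetical, and production work would use a tested library (for example, lifelines in Python) on real claim histories rather than hand-rolled code.

```python
# Kaplan-Meier survival estimate on a small, invented set of warranty
# records. Each record is (months_observed, claimed): claimed=False means
# the unit left observation without a claim (right-censored), which is
# exactly the data a naive claims-rate calculation mishandles.
records = [
    (3, True), (5, False), (7, True), (7, True),
    (9, False), (12, True), (12, False), (15, False),
]

def kaplan_meier(data):
    """Return [(time, survival_prob)] at each distinct claim time."""
    surv = 1.0
    curve = []
    for t in sorted({ti for ti, claimed in data if claimed}):
        d = sum(1 for ti, c in data if ti == t and c)   # claims at time t
        n = sum(1 for ti, _ in data if ti >= t)         # units still at risk
        surv *= 1 - d / n                               # KM product-limit step
        curve.append((t, surv))
    return curve

curve = kaplan_meier(records)
```

The point of the estimator is that censored units count toward the at-risk denominator for as long as they were observed, so units still under warranty contribute information without being treated as failure-free forever.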
An example of how inaccurate ‘traditional’ forecasting methods can be comes from a very recent analysis we were asked to perform. A large OEM came to us because their actuarial partner was forecasting a multi-million-dollar reserve shortfall. The OEM complained that this ‘surprise shortfall’ should not have happened, since they had been reviewing results with the actuary quarterly and the most recent quarterly update showed no such shortfall. A second actuary from the same firm then looked at the same data two weeks later, and the estimated shortfall grew by over 100%. The OEM expressed surprise and shock, so a third actuary from the same firm analyzed the same data, and the estimated shortfall grew again, this time by more than an additional 50%. We’re talking millions of dollars, and the OEM didn’t know what to believe. They decided not to pursue further discussions with the third-party actuaries (surprise!) and came to us for our input. Our analysis uncovered several errors in both the art and the science of the forecasting methodology, and the shortfall (yes, there is one, but it is much smaller than the actuarial estimates) is now accurately forecasted and under strategic control. We see this over and over in every sector we touch.
The message is clear and simple. Products and technologies across industries are constantly evolving, making historical claim data much more difficult to interpret and treat for accurate forecasting – and getting it right takes both art and the right science. It’s time the forecasting methods moved from the 1950s into the 21st century.
If you would like to speak with After, Inc. about your warranty programs, please give us a call at 800-374-4728. We’re passionate about the work and are here to help!