The Institute for Clinical and Economic Review (ICER) is a non-profit organization in the US that evaluates evidence on the value of medical tests, treatments, and delivery system innovations, with the goal of improving and informing the healthcare system.
Unlike the UK, the US does not possess a single health technology assessment (HTA) body, but rather relies on a multitude of different payers, all of which have different measures of the value of new products. ICER was founded in 2006 with the aim of providing a non-legally binding evaluation of the long-term value for money and short-term affordability of new medical products in the US, aligning it with the aim of the National Institute for Health and Care Excellence (NICE) in the UK.
ICER’s influence is rising in the US. A presentation led by the organization’s Chief Scientific Officer, Dan Ollendorf, at the International Society For Pharmacoeconomics and Outcomes Research (ISPOR) 20th Annual European Congress in Glasgow, Scotland, indicated that 59% of US payers use ICER’s reports as part of their formulary decisions. However, questions have arisen regarding how ICER’s new cost-effectiveness framework differs from the guidelines set out by NICE, one of the most stringent evaluators of cost-effectiveness.
ICER and NICE: common ground
On closer inspection, the difference does not appear to be that great. The two organizations share key framework similarities: both use cost-utility analysis as their predominant model, focus on patient subgroups, draw on the same input data (mainly randomized clinical trials and health utilities), and assess outputs in terms of cost per quality-adjusted life year (QALY), clinically relevant endpoints, and total and incremental costs. In their approach to cost-effectiveness analysis, then, the two frameworks differ little.
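Both frameworks report results as cost per QALY, and the core arithmetic behind that output is the incremental cost-effectiveness ratio (from which the organization takes its acronym). The sketch below uses entirely hypothetical figures, chosen only to illustrate the calculation, not drawn from any actual appraisal.

```python
# Illustrative incremental cost-effectiveness ratio (ICER) calculation.
# All figures are hypothetical and serve only to show the arithmetic.

def incremental_cost_effectiveness_ratio(cost_new, cost_old, qaly_new, qaly_old):
    """Return the extra cost per QALY gained by the new treatment."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical comparison: a new treatment costing $80,000 that yields
# 6.0 QALYs, versus standard care costing $30,000 that yields 5.0 QALYs.
ratio = incremental_cost_effectiveness_ratio(80_000, 30_000, 6.0, 5.0)
print(f"Incremental cost per QALY gained: ${ratio:,.0f}")  # $50,000
```

A payer or HTA body would then compare this ratio against its willingness-to-pay threshold; the value of that threshold, and how strictly it is applied, is where the two organizations' environments diverge.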
The organizations even adopt similar approaches to special populations and circumstances. NICE has a Highly Specialised Technologies programme that sets out specific criteria for products targeting rare diseases, allowing for a higher cost-effectiveness threshold. NICE has also implemented a standard budget impact threshold, which flags products likely to strain the budget of the National Health Service (NHS) by costing more than £20m ($26m) in any one of their first three years on the market.
Similarly, ICER has a program in place for products focusing on rare diseases that also allows for a higher threshold, and uses a standard budget impact threshold based on gross domestic product (GDP) growth and FDA approval volume. As such, although the values and specific details of these policies differ, the fundamental principles remain the same: to increase patient access while monitoring cost.
The key differences between ICER and NICE come not from their structure but from their environment, chiefly their differing social and political climates. In the UK healthcare system, the key principle is to maximize health gain for everyone. In the US, the guiding principle centers more on fair innings, where everyone should have an equal opportunity to receive the best care.
Another key difference lies in tradeoffs. Because NICE is a government-run institute, investment in the NHS comes at the expense of investment in other public sectors, such as education or national security. The US healthcare system, by contrast, is a business in itself, and demand is independent of government affordability. These differences translate into differences in how cost-effectiveness is evaluated.
In the UK, there is a strong need to evaluate the cost-effectiveness of investments to ensure the most effective prioritization of resources, and this has become the core of NICE's mandate. By comparison, given the decentralized US healthcare system, the focus is on clinical benefit; cost-effectiveness is a useful tool, but not a necessity in determining a product's price.
Going forward, it will be interesting to see if the differing social and political climates that define these two organizations will lead to a disparity in model results and appraisals, or if their common underlying structure will result in similar analyses of cost-effectiveness.