Effectively Navigating Denials Management
Most providers are aware that denials are a significant source – perhaps the most significant source – of lost revenue. Understanding this, many seek to manage their claim denials more effectively, yet struggle to overcome the challenges of turning oceans of information into insight. Payer denials are often ambiguous and inconsistent, and the sheer volume of denials that wash over the decks of a PFS shop can be overwhelming.
The result is not only the lost revenue of denied claims that are not successfully appealed, but the labor cost to work them and the opportunity cost of not working other claims that might require attention.
By using sophisticated data analysis and technology, providers can understand where they are in the sea of denials and get a good sense of where they want to go. But to reach that destination, they must also deploy the insight and employ good change management techniques.
For the sailors of earlier centuries, estimating latitude was relatively simple: a sextant measured the angle from the horizon to a known celestial object, like the sun or the North Star. But longitude proved far more difficult. Calculating longitude required knowing the exact time, and since clocks were unreliable at sea, a reading was really only possible when the sun was at its zenith. The difficulty of judging high noon introduced the potential for error, not to mention the impossibility of taking a sighting on a cloudy day.
But the challenges of calculating longitude evaporated when John Harrison invented a chronometer that could accurately keep time, even in the stormiest sea. The ship’s captain could know the exact time, and with the right knowledge and training, could use celestial tables to calculate both latitude and longitude, greatly improving the accuracy of navigation.
Like those early sailors, many providers know they need to navigate more effectively, but lack the instruments to fix their position and chart a course.
An effective denials management process starts with measurement, proceeds through analysis to root cause, and ends with process changes that prevent denials in the first place.
To better understand denial-related losses, it is important to define two stages of the process: initial denials and writeoffs (or adjustments).
Initial denials are a payer response (paper remittance or electronic 835 file) indicating no payment or only partial payment for services.
Writeoffs are provider transactions to adjust balances off active AR and record the lost revenue on the general ledger.
There are commonly accepted benchmarks for both of these metrics. Most industry literature would suggest normal performance around a 10% initial denial rate and a writeoff rate of 1%. In other words, 90% of initial denials can be corrected and only 10% are fatal errors.
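The arithmetic behind these two metrics can be sketched in a few lines. This is a minimal illustration, not a production measurement; the field names (`billed`, `denied`, `writeoff`) are invented placeholders for data that would really come from 835 remittances and the general ledger.

```python
# Sketch: computing the initial denial rate and writeoff rate.
# Field names are illustrative; real 835 and GL feeds will differ.

def denial_metrics(claims):
    """Each claim is a dict of billed, initially denied, and written-off dollars."""
    total_billed = sum(c["billed"] for c in claims)
    initially_denied = sum(c["denied"] for c in claims)
    written_off = sum(c["writeoff"] for c in claims)
    return {
        "initial_denial_rate": initially_denied / total_billed,
        "writeoff_rate": written_off / total_billed,
        # Share of initially denied dollars that is ultimately recovered
        "recovery_rate": 1 - written_off / initially_denied if initially_denied else 1.0,
    }

# Invented example figures matching the benchmark ratios
claims = [
    {"billed": 1000, "denied": 100, "writeoff": 10},
    {"billed": 500, "denied": 50, "writeoff": 5},
]
m = denial_metrics(claims)
print(m)  # initial_denial_rate 0.10, writeoff_rate 0.01, recovery_rate 0.90
```

Note that both rates share the same denominator (total billed), which is what makes the "90% recovered" inference valid.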
In our experience, most providers are in the range of a 15%-20% initial denial rate (though few can accurately measure this metric) and a 2%-3% writeoff rate. But as with most benchmarks, comparing these guidelines to actual performance comes with significant pitfalls.
Too often providers fool themselves into thinking they are close to benchmark performance by redefining the inputs and moving process failures out of scope. For example, it is easy to assume a duplicate denial or a non-covered denial has no cash impact and therefore should not be included, but if those categories are in the benchmark then they should be included in the calculation. Furthermore, it is likely that some of those denials – even those that appear to have no cash value – can be collected. It is not uncommon for a payer to inappropriately deny a claim as a duplicate when there are multiple services on the same day or if a modifier was left off. If a provider ignores these denials, they are ignoring lost revenue.
Another challenge in applying these benchmarks is the lack of specificity. Knowing your overall rate relative to 10% is useful information, but the next level of information – the breakdown by denial type – is even more useful. While variations in performance, payer mix, service mix, etc. mean every facility is likely to have a different experience, a common breakdown of the 10% initial denial rate should look something like this:
The previous table can help providers understand where they should be, but it doesn't yet tell them where they are. Like a captain’s sextant and chronometer, providers need good tools to help sort through the constant flow of denials to identify not only where the biggest opportunities are but also the root cause of those problems.
For example, it would be easy to look at the payer with the highest number of denials, or even the largest denial type, and opportunity might pop up there. But it is equally likely that the payer with the largest number of denials is simply the largest payer. Looking at denial rates is a more valuable approach.
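The distinction between denial counts and denial rates is easy to demonstrate. In this sketch the payer names and figures are invented: the largest payer generates the most denials, but the smaller payer has the worse rate.

```python
# Sketch: why denial *rate* beats denial *count* when ranking payers.
# Payer names and figures are invented for illustration.
payers = {
    "Payer A": {"claims": 10000, "denials": 900},  # biggest payer by volume
    "Payer B": {"claims": 2000, "denials": 400},
}

by_count = max(payers, key=lambda p: payers[p]["denials"])
by_rate = max(payers, key=lambda p: payers[p]["denials"] / payers[p]["claims"])

print(by_count)  # Payer A - simply the largest payer (9% rate)
print(by_rate)   # Payer B - the real problem (20% rate)
```

Ranking by raw count would point the team at Payer A; normalizing by claim volume reveals that Payer B is where the process is actually failing.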
The best analytical tools make it easy for users to quickly and logically drill deeper, peeling layers of the onion until the heart of the issue is revealed. The payer scorecard enables you to focus on payer-specific issues. Another technique is to compare volume and financial impacts in two dimensions and observe the most significant impacts.
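A two-dimensional view can be produced with a simple aggregation: count and dollars at risk per denial category. The categories and amounts below are illustrative, not benchmarks.

```python
from collections import defaultdict

# Sketch: a two-dimensional view - denial volume vs. dollars at risk per
# category. Categories and amounts are invented for illustration.
denials = [
    ("Authorization", 5200.0), ("Duplicate", 150.0), ("Duplicate", 90.0),
    ("Eligibility", 800.0), ("Authorization", 4100.0), ("Eligibility", 650.0),
]

summary = defaultdict(lambda: {"count": 0, "dollars": 0.0})
for category, amount in denials:
    summary[category]["count"] += 1
    summary[category]["dollars"] += amount

# Sort by dollars so high-impact categories surface even when counts are low
for cat, s in sorted(summary.items(), key=lambda kv: -kv[1]["dollars"]):
    print(f"{cat:15s} count={s['count']}  dollars={s['dollars']:,.2f}")
```

Here duplicate denials tie eligibility denials on volume but are trivial in dollars, while authorization denials dominate the financial impact – exactly the kind of contrast the two-dimensional comparison is meant to expose.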
Next, it is important to drill down into details with context. Denials come in many flavors, and the various categories have their own details that need to be considered in context. For example, Eligibility, Coordination of Benefits, and Authorization denials reflect front-end processes and should be analyzed with the context of who did the registration/verification, at what service location the patient arrived, etc. Medical Necessity and Non-Covered Service denials usually occur at the service level and require CPT code level analysis. "Lacks Information" is a catch-all that payers use to mean almost anything. Usually one or more remark codes are used to describe in detail what is truly "lacking" for adjudication.
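In practice this categorization often starts from the claim adjustment reason codes (CARCs) on the remittance. The mapping below is a deliberately simplified example – payers apply codes inconsistently, so a real mapping needs ongoing review – and the remark code shown is just a sample value.

```python
# Sketch: grouping claim adjustment reason codes into the denial categories
# discussed above. The code-to-category mapping is a simplified example;
# payers apply codes inconsistently, so real mappings need review.
CATEGORY_BY_CARC = {
    "16": "Lacks Information",   # meaningless without its remark codes
    "18": "Duplicate",
    "50": "Medical Necessity",
    "197": "Authorization",
}

def categorize(carc_code, remark_codes=()):
    category = CATEGORY_BY_CARC.get(carc_code, "Other")
    # "Lacks Information" is only actionable with remark codes attached
    if category == "Lacks Information" and remark_codes:
        return f"{category} ({', '.join(remark_codes)})"
    return category

print(categorize("18"))            # Duplicate
print(categorize("16", ("M51",)))  # Lacks Information (M51)
```

The key design point is that code 16 is never reported on its own: carrying the remark codes along preserves the context needed to work the account.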
Finally, quantitative analysis is an important step to prioritize and find the most impactful opportunities, but also to get to the step of understanding root cause. To design better processes, you need detail and you need to look at real examples. Your analysis tool needs to provide a mechanism to peel the onion all the way to the heart – account examples you can review in detail.
Once a common problem has been identified and a series of examples have been reviewed, the root cause problem should be clear. In the best case scenario, it might be an erroneous setting in the CDM or bill editor. These should be easily fixed. A more challenging obstacle might be a piece of information that is not being collected at the time of service or elsewhere in the process that is leading to payer denials.
In any case, once the failure point is identified, it is important to engineer solutions to get claims paid. For denials sitting in active AR, it is important to make a quick determination on the likelihood of collection. For example, a billing error is almost certainly correctable, and even a relatively small balance is probably worth working. On the other hand, failure to obtain an authorization for a non-urgent outpatient procedure may be impossible, and even though the claim balance is much greater, the effort required to work it is probably not justified by the expected return.
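That triage decision can be framed as a simple expected-value calculation: likely recovery minus the cost to work the account. The probabilities and costs below are invented assumptions for illustration, not benchmarks.

```python
# Sketch: a triage score for denials in active AR - expected recovery minus
# estimated cost to work. Probabilities and costs are invented assumptions.
RECOVERY_ODDS = {"billing_error": 0.95, "no_authorization": 0.05}
COST_TO_WORK = {"billing_error": 25.0, "no_authorization": 150.0}

def expected_value(denial_type, balance):
    return RECOVERY_ODDS[denial_type] * balance - COST_TO_WORK[denial_type]

# A small correctable balance beats a large, likely-unrecoverable one
print(expected_value("billing_error", 300.0))      # 260.0 - work it
print(expected_value("no_authorization", 2000.0))  # -50.0 - skip it
```

Even with crude inputs, ranking the queue by expected value rather than raw balance keeps staff off accounts that cost more to work than they will ever return.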
But simply fixing denials as they occur is like trying to sail into the wind – you can do it, but it requires a lot of work for relatively little return. You can really put the wind in the sails of your collection efforts if you fix processes upstream to avoid the denials altogether. Avoiding initial denials eliminates the need for more resources in PFS, reducing costs while increasing (and accelerating) cash collections. However, it is easy to underestimate the degree of difficulty and the effort required to make those upstream changes.
But looking at a map and even drawing out a route doesn't get you anywhere if you don't hoist the sails, put a hand on the tiller, and take the helm. Identifying the problems is good, developing solutions is better, but you have to implement to really get any benefit.
Like any change management effort, implementing process changes to improve denials requires buy-in from stakeholders, but these stakeholders may be more challenging than most.
For process failures that are occurring upstream – likely in clinical areas or in patient access functions potentially managed by clinical or other non-revenue cycle staff – fixes will require the assistance, or at least the approval, of a clinical resource, perhaps even a physician with administrative oversight. Those resources, particularly physicians, will likely not be satisfied with anecdotes or even best practices developed in other organizations – they will want to see hard facts and data.
By using the reporting capabilities that helped identify the problems, these resources can be converted from skeptics to enthusiasts, from obstacles to fellow navigators, who will help turn great solutions into tangible process changes and ultimately better performance.
Managing denials is not a new concept for providers. As payers have increasingly used denials as a mechanism to limit reimbursement, most organizations have made an effort to limit the revenue losses associated with denials. Too often, those efforts have led to unwieldy and often ineffective solutions that can provide inaccurate outcomes and create audit risk for hospitals.
While information about denials is readily available, improvement efforts have been hampered by a lack of insight, which makes it harder to identify areas of focus, harder to uncover root causes, and harder to win over stakeholders who are vital to not just overturning denials, but to eliminating them in the first place.
Providers seeking better performance should deploy business intelligence and analytic tools that help delve below the surface of the sea of denials to understand root causes – what services are being denied? Which payers use various codes to reflect what kinds of denials? What is the context for the various denial rates?
Analytic tools that help answer those questions are now available to those who seek performance improvement in these areas, and they can show the way through what are otherwise very treacherous waters – even more effectively than a sextant and chronometer.