EDITOR’S NOTE: This is the final installment in a two-part series examining how to manage a CDI program using Key Performance Indicator management strategies.
If the goals you have set are not achievable, you may need to go back and examine the process. How can you eliminate duplicative work, administrative tasks, distractions, and other non-CDI functions currently required of the employee? A bit of process engineering and change management may be in order before progress can be made. Over a period of about one year, you will find that many of these non-productive tasks can be eliminated, and you eventually will gain a better understanding of the true par numbers that are possible as each CDI specialist pushes a little harder to gain small incremental improvements. At some point they are going to tap out and/or raise quality concerns, both of which are legitimate limits on production, and you should take those conversations seriously as you determine the correct long-term goals for the following year.
If you still are not able to review all of the eligible records even with these performance improvements, increasing full-time equivalents (FTEs) is probably in order. Lastly, one needs to consider the type of review being conducted and distinguish between the initial review rate and the follow-up review rate. An initial review can reasonably be expected to take longer than a follow-up. For instance, an initial review may take 15 minutes for a recent admission, 30 minutes for a typical record, and an hour or more for a complex case. In contrast, a follow-up review should only require catching up on the events that have occurred since the last review, which could take as little as five minutes or as much as 15.
In general, initial reviews should be conducted within 24 to 48 hours. The 48-hour mark is the best point at which to begin clarifying documentation. Query sooner than 48 hours, and the physician likely will not yet have all of the diagnostic testing required to answer the query. Query later than 48 hours, and you run the risk of the physician not responding regarding a now-resolved diagnosis, or being locked in to documenting only the symptomology.
In contrast to the review rate, the query rate is our first peek into the skill level (as opposed to the raw work effort) of a CDI specialist. A very low query rate often can be explained by the patient population the CDI specialist is covering. Patients in the ICU who are ventilated and have neurological or cardiac complications and infections, for example, are likely to be well documented. You likely will be starting in a fully optimized DRG, already carrying both a high severity of illness and a high risk of mortality. Understanding that the need to query a patient population like this is minimal will account for the low query rate of the CDI specialist tasked with reviewing such records. Instead, a CDI specialist working an area such as this may have a slightly different CDI focus. They may spend extra time making sure that present-on-admission status is explicitly documented for all of the diagnoses. They may look at patient safety indicators or quality problems tied to additional diagnoses, which mitigates quality concerns by providing the correct risk adjustment.
My point here is that when shifting focus, the CDI specialist may once again find reason to begin placing queries on charts. Alternatively, consider someone working a regular telemetry floor, where the charts often are vaguely documented and full of symptoms rather than diagnoses. A low query rate in this area is most likely a performance concern for the individual CDI specialist. It isn’t only low query rates one should be concerned about, however. Very high query rates could indicate a CDI specialist who is placing queries with very weak or inappropriate clinical indicators; such is the case with my pet peeve, the “single-indicator query” employee. This behavior raises concerns about the quality of the questions being asked, which jeopardizes physician participation and compliance while increasing audit risk and inviting self-induced quality profile reductions. While the query rate can vary depending on the technology being used, some self-evident standards have emerged. In a random review process, the query rate should be around 35 to 40 percent for a new program and around 28 percent for an established program with good documentation.
Keep in mind, however, that as CDI evolves to include value-based purchasing, incorporating factors such as patient safety indicators, risk adjustment, and mortality, you would expect those percentages to go even higher. When using a technology tool that eliminates some of the grunt work of reviewing low-ROI records, the query rate should rise. If you are only presented with cases that have multiple indicators that a query is needed, the query rate could be as high as 50 to 70 percent or more. As technology improves and proves able to find more indicators while eliminating more false positives, we could see expected query rates rise into the 80-percent range.
The agree rate introduces a couple of new managerial elements: specifically, using different KPIs in tandem and evaluating physician buy-in. The agree rate must be viewed not as a standalone metric, but in combination with the query rate. A high query rate with a low agree rate could indicate the dreaded poor-quality “single-indicator” CDI process. Conversely, a low query rate with a very high agree rate could be your first clue that a CDI specialist is cherry-picking only the lowest-hanging fruit and/or basically placing queries only when physicians are already in the process of documenting the questionable diagnosis anyway. Both practices can be detrimental to the success of a CDI program. In general, an agree rate of around 75 percent is a good starting point. If your agree rate is lower, you may have opportunities to improve the clinical indicators you are choosing and utilizing within each query. If your agree rate is higher than 85 percent, you may very well not be placing enough queries.
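As a minimal sketch of using these two KPIs in tandem, the screening logic above can be expressed in a few lines. The function names and thresholds here are assumptions for illustration, built from the benchmarks discussed in the text (roughly 28 to 40 percent query rate, 75 to 85 percent agree rate); substitute whatever your CDI software actually exports.

```python
def query_rate(queries_placed: int, records_reviewed: int) -> float:
    """Queries placed as a share of records reviewed."""
    return queries_placed / records_reviewed if records_reviewed else 0.0

def agree_rate(queries_agreed: int, queries_answered: int) -> float:
    """Physician 'agree' responses as a share of answered queries."""
    return queries_agreed / queries_answered if queries_answered else 0.0

def flag_pattern(qr: float, ar: float) -> str:
    """Rough screen combining the two rates, using the article's
    benchmarks as hypothetical cutoffs. Not a substitute for reviewing
    the actual queries."""
    if qr > 0.40 and ar < 0.75:
        return "possible single-indicator querying"
    if qr < 0.28 and ar > 0.85:
        return "possible cherry-picking"
    return "within expected range"
```

A specialist placing queries on half of reviewed records with only 60 percent physician agreement, for example, would be flagged for a closer look at clinical-indicator quality rather than disciplined on the raw numbers alone.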
Another factor to consider, however, is viewing the agree rate as part of your physician participation metrics. Data is needed on a per-physician basis in order to determine physician bias in the patterns of both “agrees” and “disagrees.” You do not want a “just tell me what to write” physician, as you run the risk of inappropriate clinical diagnoses being reported. Likewise, we often see very low agree and physician response rates, which are indicative of poor physician buy-in and/or lack of proper orientation/training for the program. The three keys you absolutely have to have in terms of the people element include expert CDI specialists, coder buy-in, and physician buy-in. You cannot have a successful program without all three elements working at the peak of their function.
Comparing the capture rate from quarter to quarter or for a peer group of hospitals is relatively straightforward. If the CC capture rate is low, focus on diagnoses that qualify as CCs. If the MCC rate is low, focus on diagnoses that qualify as MCCs. Many capture reports offer the opportunity to drill down into the specific DRG pairs with low capture rates. From there, the process of directing efforts could not be simplified any further without someone handing you the actual records that need increased scrutiny.
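The quarter-over-quarter comparison above reduces to simple proportions. The sketch below assumes each discharge has already been tagged by the severity tier of its final MS-DRG assignment ("MCC", "CC", or "none"); that tagging scheme is an assumption for illustration, not a feature of any particular report.

```python
from collections import Counter

def capture_rates(drg_tiers):
    """CC and MCC capture rates from a list of per-discharge severity
    tiers ('MCC', 'CC', or 'none'). Returns each tier's share of total
    discharges."""
    counts = Counter(drg_tiers)
    total = sum(counts.values())
    return {
        "cc_rate": counts["CC"] / total,
        "mcc_rate": counts["MCC"] / total,
    }
```

Running this per quarter, per specialist, or per peer hospital gives directly comparable numbers, and a drop in either rate points you at the corresponding DRG pairs to drill into.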
Simply direct your CDI specialists to spend extra time on records whose principal diagnosis places them into the low-performing DRG pairs or triplets. Have your CDI specialists log, record, and trend any troubling patterns observed within the low-performing groups. For example, if the capture rate is low for pneumonia cases, ask the CDI specialists to look for a pattern of specific providers documenting symptoms rather than diagnoses, or dropping resolved diagnoses from their documentation without any confirming statement of the diagnosis’s status, resolution, or existence in the hospital-course summary. This is a good segue into two additional metrics: top DRG opportunity by value and top DRG opportunity by volume.
Using the specific capture rates and a set of predefined goals, a top 10 or top 20 list of the DRGs with the highest opportunity values can be readily identified. Usually, when comparing to a peer group benchmark performing at a certain level, all that is required is the claims data to produce these lists. If you need to find areas on which to concentrate your efforts, this is a good place to start. Often we see that up to 50 percent of the available opportunity exists within these lists: certainly for a top 20, and sometimes even for a top 10. Armed with this information, focusing CDI efforts on these areas is relatively simple. Direct your CDI specialists to spend extra time combing through every page of these records, looking for anything that indicates the presence of an undocumented or confusingly documented medical diagnosis.
So you have buttoned up the top 10 list by value, and now you want new areas toward which to direct your efforts? That opportunity may exist within the top query opportunities by volume. While the dollars per record may be low for certain cases, the sheer number of records within these categories can provide opportunities for additional improvement. Once again, the process is as simple as directing your CDI specialists to spend additional time combing through these records to find what is missing.
The linchpin severity metric that is easily digestible from a 10,000-foot view is case mix index (CMI). The higher the index, the sicker the patients. The nice thing about the CMI is that with a bit of work, it is relatively easy to calculate each individual CDI specialist’s CMI, a physician’s CMI, the CMI quarter over quarter, or even how the CMI compares to a peer group. At times, CMI may be very close to the goal even though the individual capture scores fluctuate from specialty to specialty and diagnosis to diagnosis (and over time). Since those individual metrics are often in flux, use the CMI as a summary of the overall health of your program.
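Since CMI is just the average MS-DRG relative weight across a set of discharges, the per-specialist and per-physician slicing described above is a small grouping exercise. This is a sketch under the assumption that each case can be reduced to a (grouping key, relative weight) pair; real weights come from the annual CMS MS-DRG relative-weight tables, and the ones in any example would be placeholders.

```python
def case_mix_index(relative_weights):
    """CMI = sum of MS-DRG relative weights / number of cases."""
    return sum(relative_weights) / len(relative_weights)

def cmi_by(cases):
    """Per-group CMI from (key, relative_weight) pairs, where the key
    might be a CDI specialist, a physician, or a quarter."""
    totals = {}
    for key, weight in cases:
        n, s = totals.get(key, (0, 0.0))
        totals[key] = (n + 1, s + weight)
    return {key: s / n for key, (n, s) in totals.items()}
```

Feeding a quarter's claims through `cmi_by` keyed on reviewing specialist gives the individual CMIs side by side, ready to compare against the program goal.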
What is the expected death rate for your facility, considering the reported severity and risk adjustment diagnoses in your patient populations? If it is lower than your actual death rate, you have a quality problem. The good news is that usually, the quality problem is not poor patient care, but rather poor reporting and capture of the necessary risk adjustment diagnoses, which leaves the expected death rate understated. Again, this is an easy-to-digest metric for directing efforts at capturing risk adjustment diagnoses, some of which may not impact a standard MS-DRG.
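The comparison above is usually expressed as an observed-to-expected (O/E) mortality ratio. This sketch assumes the expected-death figure comes from a risk-adjustment model scoring the severity and risk diagnoses actually reported on claims; the function name is illustrative.

```python
def oe_mortality_ratio(observed_deaths: int, expected_deaths: float) -> float:
    """Observed-to-expected mortality ratio. A value above 1.0 means
    more deaths occurred than the reported documentation predicts,
    which is often a documentation-capture problem rather than a
    patient-care problem."""
    return observed_deaths / expected_deaths
```

A facility with 12 observed deaths against 10 expected sits at 1.2; better capture of risk-adjustment diagnoses raises the expected figure and pulls the ratio back toward (or below) 1.0 without a single change in care.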
Not always available to the CDI team, a denial report is a fabulous tool for directing CDI toward quality-related improvements. When a pattern of denials is reported, the CDI specialists need to be informed about what that pattern is and what rationale is being leveraged to adjudicate the denials. In many cases, the diagnoses are valid, but the claims are being denied on some technicality or poorly documented physician rationale for why the patient received the diagnosis.
The best defense against an audit is a strong CDI program that seeks to clarify the very issues that have been shown to result in denials. That was true with MS-DRGs going back to 2008, and it will remain true with value-based issues and risk adjustment going into the 2020s.