To query or not to query, that is the question.
Querying is the lifeline of clinical documentation integrity (CDI) professionals and many coding professionals. In fact, the CDI profession has many key performance indicators (KPIs) based on querying – so isn’t it funny that, as an industry, we have so much variation in how we track query metrics?
Think about it: query rates are often used to assess CDI professionals at the individual level, as well as the department level. Additionally, query response rates and agreement rates are used to measure physician engagement. But what are we really measuring? And why don’t we have standardized metrics for query rates, response rates, and agreement rates? Yes, most departments use these same measures, but how they are calculated varies greatly. Even what is considered a response or “agreement” varies greatly, especially when clinical validation queries are thrown into the mix.
Let’s start by discussing query rates. A question I am often asked is, “what should your query rate be?” My response is always, “it depends.” Just as review rates vary with what a CDI review encompasses, so do query rates. The most basic type of CDI query is one that will “impact” an MS-DRG; in other words, some CDI departments only query if they can add a CC or MCC to the DRG. As the patient population increases in complexity, and as the CDI profession has become more prevalent, as I’ve discussed in prior articles, it is becoming harder and harder to find the needle in the haystack: the review that will yield a coveted DRG change. It’s like the old adage says: “you have to kiss a lot of frogs to find your prince.” If you have a mature CDI department that only queries when there is a potential DRG shift, it is likely that your query rate has decreased over time, but does that mean the CDI department is not successful or valuable? When I started in CDI, a consulting company popularized the idea of a 35-percent query rate, and that metric has stuck around.
So, let’s take a deeper dive. When you compare query rates, response rates, and agreement rates to those of your peers, you may think you are comparing apples to apples, but you likely are not. What is the denominator of the query rate? Is it the CDI in-scope patient population (admissions), the number of CDI reviews, or the number of patients reviewed? The denominator will heavily influence the query rate. Back in the day, most CDI departments only reviewed Medicare patients, so the expectation was that 35 percent of Medicare patients reviewed would produce a query opportunity that resulted in a DRG impact (I guess I would believe that for a brand-new CDI program). What I didn’t like about that metric, as a CDI manager, was when it was applied at the individual level and the staff worked along service lines, which was common when records were paper. Under such a scenario, the CDI professional assigned to the orthopedic floor rarely, if ever, could reach a 35-percent query rate. However, those who staffed cardiology could easily exceed 35 percent thanks to the ever-present heart failure clarification queries. So, a 35-percent query rate was not reasonable, in my opinion, across each area, but may be reasonable for the overall department. However, as provider documentation and coding have improved through the growth of CDI, and the population reviewed by CDI departments has expanded to almost all payors, does a 35-percent query rate for DRG impact still feel achievable? Moreover, does it really make sense to review all patients if only about a third of all reviews will likely result in a query?
When the CDI profession emerged, as noted, the focus was on the Medicare population, which is likely to include patients with chronic conditions who are actively seeking medical care; as CDI efforts moved to all payors, coverage expanded to a younger, healthier population less likely to generate query opportunities. Now, if we measure our query rate by total admissions, our query rates will likely decrease as more payors are added. However, if we measure the query rate by CDI reviews or patients, our rates are likely to be more stable and higher. So, what is the difference between a per-review rate and a per-patient rate? As the role of CDI has expanded, so has the number of reasons why we query. When the focus is on documentation “integrity” and not just “improvement,” there are a lot more query opportunities, so there can be multiple queries per patient, which is likely to drive up your query rate. If you measure your query rate at the per-patient level, the number of queries per patient doesn’t impact the query rate. Should there be an industry standard for how organizations calculate the query rate? It just seems to me that dividing the number of CDI queries by the number of admissions promotes inefficiency, because not all patients need CDI review – and even if they did, few if any CDI departments are sufficiently staffed to review every inpatient, which is likely why we are seeing the growth of case prioritization technology in CDI.
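To make the denominator effect concrete, here is a minimal sketch of the three calculations described above. All of the figures are made-up, illustrative numbers for a hypothetical department, not benchmarks; the variable names are my own.

```python
# Illustrative (made-up) monthly figures for one hypothetical CDI department
admissions = 1200         # all in-scope inpatient admissions
reviews = 1000            # total CDI reviews performed, including re-reviews
patients_reviewed = 900   # distinct patients receiving at least one review
queries = 280             # queries issued; some patients received more than one
patients_queried = 230    # distinct patients with at least one query

def rate(numerator, denominator):
    """Return a percentage, rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

# Three "query rates" from the same month of work:
per_admission = rate(queries, admissions)                 # 23.3
per_review = rate(queries, reviews)                       # 28.0
per_patient = rate(patients_queried, patients_reviewed)   # 25.6

print(per_admission, per_review, per_patient)
```

Same department, same month, three different "query rates" depending on the denominator. Note that the per-review rate is inflated by multiple queries on the same patient, while the per-patient rate is not, which mirrors the distinction drawn above.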
If you think about it, a lot of what we have historically done in CDI is a bit arbitrary. It might have made sense when CDI was a new profession, but as we’ve grown from documentation improvement to documentation integrity (and from the Medicare population to all payors), have we changed our processes? And what about one of the biggest impacts on our profession, the electronic medical record? It has been somewhat revolutionary for our profession. Could you have imagined the past year and a half if we did not have electronic records? Have we accounted for the impact of an electronic record in our workflow? Do we have to perpetuate arbitrary rules now that we can leverage technology? Take, for example, the timing of CDI reviews. There is no standard for when a first CDI review should occur. Some departments wait 48 hours, because they want to give providers enough time to get the history and physical (H&P) on the chart – because before electronic records, you often had to chase down documentation. That isn’t the case anymore, as electronic documentation makes physician notes and diagnostic results immediately accessible, and the CDI professional no longer must visit the floor to find the paper chart and see if the H&P is present. Even if the H&P is on the record and the CDI professional has a query opportunity, many still wait to send a query to allow the provider the opportunity to write the applicable diagnosis or correct their documentation. Why do we view queries as punitive, rather than as an extension of bringing important information to the provider’s attention? Why wait? If the role of CDI is to promote a complete and accurate record, which often requires a query (and if we measure the success of a CDI department by its query rate), wouldn’t it make sense to be as proactive as possible when it comes to querying?
Why are we apologetic for doing our job when the provider, the patient, and the facility all benefit from a complete and accurate record that can be reflected in precise coding? Waiting to query really makes us inefficient, and could minimize our impact by affecting our query rates. My experience is that the decision to query or not is heavily influenced by query metrics. Not only are CDI professionals considering query rates, but also response rates and agreement rates. I know many such professionals who won’t ever query a particular doctor, because that doctor never responds to a query. Really? How will we ever change that behavior if we don’t hold providers accountable by tracking response rates at the physician level? I also know of many who won’t query unless they think the physician will agree with the query, to preserve their query agreement rate. The point is, I can see where having a target query rate may seem like a good idea to encourage CDI professionals to query, but most CDI professionals value the response rate and agreement rate over the query rate itself. Conversely, I’ve also seen some unnecessary and terrible queries made just to pad a query rate.
Not only do the methodology and timing of the query impact the rates, but so do CDI and coding workflow, as well as organizational policies. When determining the query rate, is it just based on CDI professionals, or is it based on all queries, regardless of whether they originate with CDI or coding? If coding also issues queries, then the timing of queries has increased importance, because some queries could be delayed until the coding process to allow the provider time to address the issue in the discharge summary. Additionally, who does post-discharge queries? Does CDI’s responsibility end at discharge, or continue until the coding process starts or until the claim is dropped? What happens to open queries upon discharge? If coders also issue queries, do they close the CDI concurrent query and reissue a new post-discharge query on the same topic? An example of an organizational policy that can inflate query rates would be one that requires all DRG-impacting diagnoses (e.g., CCs and MCCs) to be present in the discharge summary, so CDI must query the provider to either add the diagnoses to the discharge summary or rule them out. Another more recent example that lowered many query rates was the decision by some organizations to limit querying during times of high COVID patient volumes. I could go on and on with examples of how different processes can affect query rates. The bottom line is that we are not comparing apples to apples across CDI departments when we talk about query metrics.
As you can see, there are so many things that can affect a query rate that it makes it difficult to set an industry benchmark. I like to say that if you’ve seen one CDI department, you’ve seen one CDI department. Although most CDI departments are tasked with similar roles, the culture of every organization is different, so there is variability in how we accomplish tasks associated with our roles – and we are inconsistent as an industry in how we measure our performance.
As the CDI profession looks to update our metrics to better reflect the diversity of our role, maybe we should also work to standardize those metrics, especially if we are going to promote industry benchmarks.