Computer-assisted coding (CAC) software is becoming more widespread in the coding industry, particularly in the coding of inpatient claims.
It’s not a surprising trend, given the many known benefits of implementing a CAC system. Computer-assisted coding software helps streamline the coding workflow, reducing backlogs by increasing productivity. In particular, the software helps coders navigate long, difficult chart documentation more quickly, and because of this benefit, some facilities have seen the effects of their coder shortages lessened.
Computer-assisted coding software works by analyzing medical record documents and assigning ICD-10-CM and PCS codes to the analyzed data. Using natural language processing (NLP), a type of artificial intelligence similar in concept to spellcheck, the software automatically generates codes based on specific key terms and phrases found directly in the clinical documentation.
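As a rough illustration of the idea, a CAC engine's phrase-to-code matching can be sketched as a lookup over key phrases found in the documentation. This is a toy, not any vendor's actual algorithm; the rule table and note text below are hypothetical, and the ICD-10-CM codes are shown for illustration only.

```python
# Toy sketch of CAC-style code suggestion: map key phrases found in
# clinical documentation to candidate ICD-10-CM codes. Real engines use
# far more sophisticated NLP; this only illustrates the concept.

# Hypothetical key-phrase-to-code lookup (codes shown for illustration).
SUGGESTION_RULES = {
    "acute pancreatitis": "K85.90",
    "chronic pancreatitis": "K86.1",
    "superior mesenteric artery syndrome": "K55.1",
}

def suggest_codes(note_text: str) -> list[str]:
    """Return candidate codes whose key phrase appears in the note."""
    text = note_text.lower()
    return [code for phrase, code in SUGGESTION_RULES.items() if phrase in text]

note = "History of chronic pancreatitis; admitted for superior mesenteric artery syndrome."
print(suggest_codes(note))  # ['K86.1', 'K55.1']
```

Note that the matcher suggests every code whose phrase appears anywhere in the note; it is the coder, not the engine, who must decide which suggestion reflects the reason for the admission.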
The AHIMA Foundation conducted a research study in collaboration with the Cleveland Clinic to examine the interface of technology and the HIM professional. Funded by a research grant from 3M, the study examined the impact of computer-assisted coding on timeliness and data quality. The study process included 25 Cleveland Clinic inpatient charts coded by 12 coders: six using CAC technology and six coding traditionally. All 25 records were complex, with an average length of stay of 16 days and an average case mix index of 2.45. The coders’ code choices were measured against a gold standard set, the set of correct diagnosis and procedure codes for each medical record, established and validated by Cleveland Clinic’s coding leadership and quality team. It is important to note that the coders’ sequencing of the codes was not validated, only the presence or absence of gold standard diagnosis and procedure codes on each claim.
This limited study demonstrated several benefits of computer-assisted coding. First, when a coder is paired with computer-assisted coding, the coding time of inpatient records is significantly reduced compared with traditional coding. On these records, coder productivity increased by more than 20 percent, and time per chart was 22 percent shorter than that of the traditional coding peers. This result may be music to a facility’s ears: increased productivity and a reduction in time spent coding charts would certainly have a positive effect on reimbursement. The study also found that a CAC system, when used by a coder, did not reduce coding accuracy. The Cleveland Clinic was able to reduce coding time without sacrificing quality as measured by recall and precision for both diagnoses and procedures. Again, this result would have a positive effect on a facility’s reimbursement, and it is good news for facilities considering implementing a CAC system.
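The accuracy measures the study relied on, recall and precision against the gold standard code set, can be sketched with the standard formulas. This is an assumed reconstruction of the metrics named in the text, not the study's actual scoring software, and the code sets below are hypothetical; sequencing is ignored, matching the study's method.

```python
# Recall and precision of a coder's submitted codes against a gold
# standard set (standard definitions; sequencing is not considered).

def recall_precision(submitted: set[str], gold: set[str]) -> tuple[float, float]:
    """Recall: share of gold-standard codes the coder captured.
    Precision: share of submitted codes that are in the gold standard."""
    true_positives = len(submitted & gold)
    return true_positives / len(gold), true_positives / len(submitted)

# Hypothetical chart: the coder captured 4 of 5 gold codes and added 1 extra.
gold = {"K55.1", "E11.9", "I10", "N17.9", "D64.9"}
submitted = {"K55.1", "E11.9", "I10", "N17.9", "K86.1"}
print(recall_precision(submitted, gold))  # (0.8, 0.8)
```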
Another advantage of computer-assisted coding is the natural language processing (NLP) engine itself, which can learn from coder input over time and improve the recall and accuracy of its suggested codes. The same capability, however, is also a disadvantage. While the engine’s ability to learn over time can be a positive selling point, it learns from every input of every coder, and it cannot decipher the accuracy of what it is being “taught.”
Over the years as an auditor, I have had the opportunity to review records in many different EMR systems, some utilizing computer-assisted coding, and others utilizing traditional coding. One error discovered during an audit at a facility using computer-assisted coding stands out to me, because of the difference in the DRG and reimbursement amount.
A patient with a history of bouts of pancreatitis with reported occasional flares of pain was admitted with abdominal pain found to be due to superior mesenteric artery syndrome. The CAC system suggested acute pancreatitis as the patient’s diagnosis, and listed this code in the first, or principal, diagnosis position. The coder accepted this code as the principal diagnosis and submitted the claim with DRG 438, with a relative weight of 1.6612, reimbursed at approximately $57,581.13. After thorough review of the clinical documentation, it turned out that the patient actually had a diagnosis of chronic pancreatitis that was not the cause of the current abdominal pain, and for this particular admission, the principal diagnosis was the superior mesenteric artery syndrome. The appropriate DRG to be submitted for this claim was actually DRG 394, with a relative weight of 0.9502, reimbursed at approximately $16,933.28. The estimated difference between these two DRGs was an overpayment to the facility of about $40,647.85.
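The dollar impact in this case is straightforward arithmetic on the figures reported above; the reimbursement amounts are taken from the text as given, and only the difference is computed here.

```python
# Worked arithmetic for the audit example: overpayment is the difference
# between the reimbursement as submitted and the corrected reimbursement.

drg_438_paid = 57_581.13   # DRG 438, relative weight 1.6612 (as submitted)
drg_394_paid = 16_933.28   # DRG 394, relative weight 0.9502 (correct DRG)

overpayment = drg_438_paid - drg_394_paid
print(f"${overpayment:,.2f}")  # $40,647.85
```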
It cannot be overemphasized that the use of computer-assisted coding alone does not replace certified coders. The software is limited and does not have the ability to apply guidelines or make decisions about code application and the circumstances of each admission. It does not have the ability to “choose” a principal diagnosis or a principal procedure, and in many cases, does not have the ability to build ICD-10-PCS procedure codes.
In support of this, an additional finding of the AHIMA Foundation’s collaborative study with Cleveland Clinic was that when the CAC system is used alone, without credentialed coder intervention, coding accuracy was reduced. Further, as demonstrated by the acute pancreatitis case above, computer-assisted coding software does not have the ability to decipher whether or not it is being “taught” accurately by the coders who are using it.
Computer-assisted coding can be a helpful and advantageous tool when used appropriately by certified coders. Whether your facility is considering purchasing or implementing CAC or is already using it, there are some important things to remember. The CAC system is designed to be an additional coding tool. It cannot replace certified coding staff, and while it can improve productivity and reduce coding time, regular checks and balances of the software and coding accuracy must be performed to ensure accurate “teaching” of the tool, as well as appropriate use of the tool by coding staff.
Regular coding reviews of both the CAC system and coders are recommended to measure the success and accuracy of the tool. Additionally, most computer-assisted coding software can run reports to monitor and evaluate coder use of the system, including which suggested codes are being validated or rejected by coders and whether coders are utilizing the system appropriately.
Contacting the specific CAC system vendor may be necessary in situations where the computer-assisted coding software needs retraining due to errors in the code validation and rejection process.