Allow me to preface this by saying I love coders. Many of my best work relationships have been with coders. I am a coder myself, and have been since 1998. Coding gave me my career path into consulting, and I owe much of what I have learned to all of those years poring through records day in and day out in a tiny cubicle. Great relationships aside, however, certain unpleasant realities sometimes have to be discussed. Before we can fix a problem, we must first admit we have a problem.
We have all heard of the familiar telltale red flags of a toxic and ineffective work culture. “That is not my job.” “That is the way we have always done it.” “We can’t question.” “We can’t assume.” “Our hands are tied.” “We don’t see the value in this.”
These are all the battle cries that hail resistance to change, the impeding of progress, and a greater likelihood of leaning toward mediocrity based on outdated work models than being a top-tier competitor. We know this, we can demonstrate this, and those of us in the consulting world can relay anecdote after anecdote of it occurring. Interestingly enough, even the organizations that are guilty of this would likely agree that it describes a dysfunctional work culture.
Change management is a tricky thing. Being a consultant delivering a service into a culture that seems destined to reject it, well, that is something of a unique experience. Have you ever wondered why Fortune 500 companies and successful businesses often seek a top-down culture change before shifting focus to individual programs and incremental departmental improvement efforts? Quite simply, it is because rolling out performance improvement into a toxic corporate culture is doomed to fail. What, then, for us consultants working on products sold into such cultures as a “quick fix” for a flailing organization? Unfortunately, again, those efforts are most often doomed to fail as well.
The standards of practice for reporting ICD codes were changing long before ICD-10 was implemented, and the variation in practice from facility to facility and from state to state is striking. Go to a metropolitan hospital in Boston, for example, and make even the hint of a suggestion to a room full of coders that coders are guilty of reporting verbatim what the doctor says without regard for medical necessity, listing incomplete or contradictory diagnoses, and lacking solid clinical understanding, and they will quickly disagree with you. They will cite their extensive training, which requires them to assign only codes that appear clinically validated.
Surprisingly, however, many coders are very thirsty for any knowledge that helps them identify such problematic documentation. In contrast, the very next week you may find yourself at a Midwestern facility having the same conversation, only now the coders are holding steady to the 1980s and 1990s mantra of “we can’t second-guess” and “we must report exactly what is on the record.” You may also hear that “identifying clinical inaccuracy before reporting the code is not our job” or “we would never ask that question.”
The contrast is, frankly, startling.
At the end of the day, ask yourself this question: is the point of capturing the physician’s documentation in codes meant to be an exercise in a bureaucratic, pedantic interpretation of how following specific rules leads one to report the wrong diagnosis? If so, let me be the first to say congratulations! You followed your rules to the letter (at least your particular version or interpretation of the rules). But you also arrived at the completely wrong diagnosis for what is really wrong with the patient!
I would argue that the entire point is to capture the clinical truth of what is wrong with the patient, every time, on every case. I suspect you would be hard-pressed to get a statement from the Centers for Medicare & Medicaid Services (CMS) indicating that it disagrees with me here. Pointing out the obvious, they really don’t want the wrong diagnoses reported either. So why then do we as an industry continue to indulge coders who seem to relish getting the wrong code as long as they can claim an “ah-ha, gotcha” moment on some overly narrow interpretation of the specific wording?
By the way, the geographical locations I cited were for narrative purposes only, and are not meant to be a commentary on actual geographic variation. I have encountered very progressive and advanced coding programs in South Dakota, for example, and some very outdated and undertrained coding programs on both coasts.
The point is that standards of practice, levels of coder training and expertise, and cultural philosophy vary sharply from facility to facility, and some of the outdated cultures are in fact extremely inhospitable to performance improvement. None of this is news to anyone inside the industry, of course, but we all need to be reminded from time to time that the data we are reporting is currently not very actionable, because the capture of the clinical truth in that data varies so wildly.
This brings me back to my original point. Process and culture are the enemies of advancement. When making excuses and resisting change are elevated above the goal of actually having records reflect the clinical truth of what is wrong with the patient, we are moving backwards.
All is not lost, however, as there is hope. In my experience, moving away from the usual “I think it is x,” “no I think it is y” debates and focusing on an educational discussion about the true nature of the disease processes and codes that reflect them will often result in buy-in. From there, one can make a smooth transition into more productive conversations about how to do a better job of reporting the data.
Unfortunately, before this transformation can occur, often the culture itself has to change. You have to take an honest look at how coders are trained, audited, given feedback, respected, and appreciated. A coder who constantly receives feedback in the form of punitive management and contradictory advice from auditors, clinical outcomes staff, CDI, and others is going to be pretty much fed up, and resistant to change across the board.
A coder who is encouraged to speak their mind, learn new things, and question inconsistencies, and who is rewarded for advanced understanding leading to actionable data, will be a superstar in the quest for improved accuracy and clinical validity. The other challenge you should be aware of is that such change does not come cheap.
A transformation such as this will not occur with the usual low-budget audit strategy and a “here are the results of what you missed” delivery method. If those are the tactics you are still employing, you are pretty much set up for failure.
You are now on notice. It is time to move into the current standards of practice or risk being left behind.