Guest Post: Metrics ideas and thoughts
by Donald A. Butler
I thought I'd discuss a few metrics I use that are not the standard ones such as percentage of cases reviewed, query rate, response rate, agree rate, etc. One of my motivations in considering and reflecting on metric choices is to find fairly simple concepts and numbers (simple to communicate, explain, and understand at a glance) that at the same time provide a good, solid picture over time and fairly and broadly reflect the work and benefits of CDI. Metrics need to be chosen both for executive-level data and for detailed program-level conversations; some metrics are not suited for both.
Comment: I am including at least one purely financial metric. I believe it unwise for a CDI program not to be aware of its total impact on the financial health of its organization. In today's evolving environment of providing more care for less reimbursement, this must be considered. However, we all know that financial return is not the sole benefit, and many agree it should not be the primary consideration of CDI activity. We need some way to concretely demonstrate and measure CDI activity and successes in addition to direct financial returns.
Chart value
One analysis I particularly like is a metric I refer to as chart value. I calculate it for the entire program as well as for each individual CDI specialist. I believe it helps to partially level comparisons among different clinical areas. For example, surgical areas have fewer query opportunities (see poll: Do you find many query opportunities in your surgical unit?), but there is often a larger financial impact with those surgical CCs or MCCs.
The calculation for any given time frame is simple: total financial gain divided by total number of cases reviewed equals chart value.
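As a minimal sketch of that arithmetic (the function name and the dollar and case figures below are hypothetical, chosen only for illustration), it might look like this in Python:

```python
def chart_value(total_financial_gain, total_cases_reviewed):
    """Chart value = total financial gain / total number of cases reviewed."""
    return total_financial_gain / total_cases_reviewed

# Hypothetical quarter: $150,000 in attributed financial gain across 500 reviewed cases
print(chart_value(150_000, 500))  # 300.0 dollars of gain per chart reviewed
```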
I find this useful in several aspects:
- It provides a nice perspective on a quarterly (or monthly) basis for program or individual performance.
- When there is a dynamic transfer process between units, this metric helps to clarify and explain a lower individual performance when compared with the activity on a particular discharge unit. Many opportunities from previous units were already captured for CCs or principal diagnosis clarifications, and the final CDI specialist was picking up what remained.
- Also, as mentioned, it helps to 'level' the comparison of individual performances.
- Variation over time in the number of queries, the number of cases reviewed, and so on is also somewhat leveled out.
Net ROI (Return on Investment)
This is another simple calculation that is very common in a variety of settings. It directly addresses the question of how strong a CDI program's contribution is relative to its cost. But this does NOT mean only financial ROI. There is an interesting perspective and possibility with ROI: if one would like to use some other measure of improvement in the numerator position (other than financial impact), it can demonstrate how much it cost to gain a given amount of 'X'. X could certainly be increased documentation of risk of mortality (ROM)/severity of illness (SOI), or some measure of coding complexity (I will ask for examples from our coding professionals), etc. As the denominator, I have used my total department budget and costs, including payroll, consultant costs, equipment, etc.
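The post does not spell out a single formula, so the sketch below assumes the common "net" form, (gain minus cost) divided by cost, for the financial case, plus a simple cost-per-unit ratio for a non-financial numerator. The function names and all figures are made up for illustration.

```python
def net_roi(financial_gain, total_department_cost):
    """Net financial ROI as a ratio: (gain - cost) / cost."""
    return (financial_gain - total_department_cost) / total_department_cost

def cost_per_unit_gained(units_of_x_gained, total_department_cost):
    """For non-financial numerators (e.g., ROM/SOI capture counts):
    how much it cost to gain one unit of 'X'."""
    return total_department_cost / units_of_x_gained

# Hypothetical figures: $1.2M attributed gain against a $400K department budget
print(net_roi(1_200_000, 400_000))         # 2.0 -> $2 of net return per $1 of cost
# Hypothetical figures: 800 ROM/SOI improvements against the same budget
print(cost_per_unit_gained(800, 400_000))  # 500.0 -> $500 spent per improvement
```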
Impact percentage
(If anyone wants to suggest a snazzier name for this metric, please do!) This is a new metric (or at least I have not seen anyone else using it). Essentially, I've rolled up all of the customary metrics into a single figure of merit.
First, determine what your CDI department considers a successful query. Whether that measure is improved finance, improved ROM/SOI, or additional/more specific ICD-9 coding (beyond what would have been captured without the query) does not really matter. The measure is the percentage of all cases reviewed that end in a 'win'. To calculate, simply divide the number of wins by the total number of cases reviewed.
Note that I deliberately excluded the factor of case coverage (how many of the targeted cases are actually reviewed). However, every other step or metric is rolled up into this one figure of merit: query rate, response rate, and agree rate, as well as whether or not the intended gain was actually reflected in the coding process.
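A minimal sketch of that division, using made-up counts (the function name and figures are hypothetical, and "win" means whatever your program has defined as success):

```python
def impact_percentage(wins, total_cases_reviewed):
    """Share of all reviewed cases that ended in a 'win',
    per your program's own definition of success."""
    return wins / total_cases_reviewed * 100

# Hypothetical quarter: 90 wins across 600 reviewed cases
print(impact_percentage(90, 600))  # 15.0 (percent)
```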
Adjusted case review volumes
Fair notification: this metric came from another hospital program, though I can't remember from whom specifically. In essence, it answers the question: how much volume would the CDI specialist have contributed if they had worked at the same level of efficiency but taken no time off? This metric partially derives from personnel factors. Depending on the seniority of individual team members and their accrued time off, individual health incidents, full-time versus part-time status, etc., simply counting the number of cases reviewed by each individual does not provide a level comparison.
The adjustment is simple. Take the actual number of cases reviewed and divide by the fractional value of time worked. Examples of the fractional value of time worked include:
- If there were a maximum possible 160 hours, but the CDI specialist is half time and took four hours of vacation time, then they worked only 76 hours, so the fraction would be 76/160 = 0.475.
- For a second, full-time CDI specialist who took eight hours of vacation and 16 hours of sick time, the fraction would be 136/160 = 0.85.
So, if the first CDI specialist reviewed 80 cases and the second reviewed 120, their respective adjusted volumes are 80/0.475 = 168 and 120/0.85 = 141. It looks to me like the part-time CDI specialist was more efficient. Of course, there are other variables that might influence a CDI specialist's productivity, but at least this metric provides a starting point from which to delve deeper.
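A minimal sketch of that adjustment in Python, reusing the two specialists above (the function name and the assumption of a 160-hour maximum are mine, for illustration only):

```python
def adjusted_volume(cases_reviewed, hours_worked, max_possible_hours=160):
    """Adjusted volume = cases reviewed / fraction of available time actually worked."""
    fraction_worked = hours_worked / max_possible_hours
    return cases_reviewed / fraction_worked

# The half-time specialist: 80 cases over 76 of 160 possible hours
print(round(adjusted_volume(80, 76)))    # 168
# The full-time specialist: 120 cases over 136 of 160 possible hours
print(round(adjusted_volume(120, 136)))  # 141
```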
I would certainly like to hear from others about their own unique metrics. What do you feel are your most important measurements? I would greatly appreciate feedback as I am always looking for a better answer (as well as help identifying the right answer)!
Editor's note: Butler entered the nursing profession in 1993, and served 11 years with the US Navy Nurse Corps in a wide variety of settings and experiences. Since CDI program implementation in 2006, he has served as the Clinical Documentation Improvement Manager at Vidant Medical Center (an 860 bed tertiary medical center serving the 29 counties of Eastern North Carolina). Searching for better answers or at least questions, Butler says he has the privilege to support an outstanding team of CDI professionals, enjoys interacting with his CDI peers and is blessed with a wonderful family. This information was up-to-date at the time of this article's original release.