Audacious AI Blog

With MIPS, CMS Isn’t Always Right

Doug Blinco is a colleague of mine who works at Wellstar Health Systems, which is also part of a large ACO in Georgia.  Doug recently shared an experience with me that we are calling “CMS doesn’t always get it right.”  Over the several months after CMS published ACO MIPS scores, he observed a number of errors.  Here is Doug’s account:

Shortly after the 2018 QPP/MIPS final scores were published in July, in which CMS incorporated a massive amount of new data, we began to notice discrepancies in the reports.  Because I primarily manage the Promoting Interoperability (PI) category for my medical group and for ACO tracking, I was singularly interested in confirming that the analysis I did throughout the 2018 program year held up against the data provided by CMS. 

Much to my dismay, the data provided was general in nature and mostly representative of the ACO aggregate.  I discovered scores for groups I expected to be exempt, either identified as Hospital-Based or Non-Patient Facing or holding approved PI Hardship applications, that appeared to carry a PI score and to have affected the final ACO PI score.  Given that there are no tools readily available on the QPP website to actually validate PI attestations for each Affiliate TIN, short of requesting Staff User rights to each TIN, I opted to open tickets with the QPP help desk for each TIN to make sure I had a valid understanding of their impact. 

Based on my experience from the prior year, I had little confidence in the Targeted Review process, primarily because CMS made it very clear that an exact citation of the error was required.  True to that criterion, our request for Targeted Review was declined for lack of information.  I reasoned that opening help desk tickets would give me the CMS feedback needed for any citation if I decided to open a Targeted Review.  Based on the ticket feedback, it is likely I'll be able to forgo that process this year, but I still have some outstanding items.

With more than 100 group TINs and 4,500 individual NPIs, tracking and following up on my open tickets became a full-time job.  Without calling out every individual issue identified, I can attest that opening all of these tickets improved our final score to a full 100 out of 100 points.

You have to ask yourself, "Why do all this work?"  The answer is that CMS does not always get it right.  Was all that extra work worth the end result?  In this case, our final score improved by only 0.07 points, and our MIPS adjustment and exceptional performance adjustment did not change.  So why does it matter?  The goal is to ensure our ACO's reputation and brand maintain their excellence, and knowing the MIPS data is published to Physician Compare compelled me to challenge the data CMS provided as "evidence of our PI attestation aggregate."  More broadly, our ACO providers aggregate over $180 million in Medicare payments annually, so even a small future adjustment could be significant. 

As I've moved on to 2019 affiliate performance and the ongoing effort to identify MIPS clinicians and groups, I've found participation statuses on the QPP participation list for our ACO that seem odd, or at least suspect.  Two key statuses identified are PI Hardship and Extreme and Uncontrollable Circumstance.  While these statuses are typically tied to PI Hardship applications, I'm not aware of anyone having submitted them, and the state of Georgia has not been determined by CMS to fall under the automatic extreme and uncontrollable circumstance policy.  Needless to say, I'm opening tickets with the QPP help desk yet again, because, as I mentioned, CMS doesn't always get it right. 

The bottom line: I took away three action items based on Doug’s experience. 

  1. Trust, but verify – It is not enough to simply accept the scores CMS provides. As CMS phases out its MIPS transitional rules, the MIPS dollars at stake go up dramatically.  It is well worthwhile to calculate your own scoring expectations and compare them against the CMS values (see the sketch after this list).
  2. Take the time for Targeted Reviews – Even a simple error can have a measurable impact, and from this story it is clear that CMS is willing to correct errors when supported by a well-stated Targeted Review. 
  3. Monitor throughout the Reporting Period – Ongoing monitoring is the best way to improve and sustain scores and to maintain quality assurance over accuracy. If we find discrepancies to file with CMS after submission, we need confidence in claiming what we are due. 
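To make the first takeaway concrete, here is a minimal sketch of the kind of reconciliation I have in mind: it assumes you keep your own expected PI scores per TIN in one CSV file and export the CMS-published values to another, then flags any TIN where the two disagree, or where a TIN you expected to be exempt shows a score at all. The file names, column names, and tolerance are illustrative assumptions on my part, not a CMS export format.

```python
import csv

# Hypothetical inputs -- file names, column names, and tolerance are assumptions,
# not a CMS export format.
EXPECTED_FILE = "expected_pi_scores.csv"   # columns: tin, expected_score, exempt (yes/no)
CMS_FILE = "cms_published_pi_scores.csv"   # columns: tin, cms_score
TOLERANCE = 0.01                           # treat smaller differences as rounding noise


def load_by_tin(path):
    """Read a CSV keyed by TIN and return {tin: row_dict}."""
    with open(path, newline="") as f:
        return {row["tin"].strip(): row for row in csv.DictReader(f)}


def main():
    expected = load_by_tin(EXPECTED_FILE)
    published = load_by_tin(CMS_FILE)

    for tin, row in expected.items():
        pub = published.get(tin)
        if row.get("exempt", "").strip().lower() == "yes":
            # A TIN we believe is exempt (Hospital-Based, Non-Patient Facing, or an
            # approved PI Hardship) should not carry a PI score at all.
            if pub is not None and pub.get("cms_score", "").strip():
                print(f"{tin}: expected exempt, but CMS shows a score of {pub['cms_score']}")
            continue
        if pub is None:
            print(f"{tin}: no CMS-published score found")
            continue
        diff = abs(float(row["expected_score"]) - float(pub["cms_score"]))
        if diff > TOLERANCE:
            print(f"{tin}: expected {row['expected_score']}, CMS shows {pub['cms_score']} "
                  f"(difference {diff:.2f})")


if __name__ == "__main__":
    main()
```

Each flagged TIN then becomes a candidate for a QPP help desk ticket or, with specific enough documentation, a Targeted Review request.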

I fear that too many of us consider MIPS to be “done” once the annual data is submitted.  From Doug’s experience, there is clearly value in closing the loop on MIPS submissions with your own diligence. 


Jay R. Fisher, MACRA Monitor, and Doug Blinco, Wellstar Health Systems, LLC

September 2019