See also: “The case study came together through partnership with KPI Ninja and the Johns Hopkins ACG® research team.” From the written description of the Hint Summit talk by Nextera’s CEO. “[KPI Ninja] brought in the Johns Hopkins research that has significant expertise in what is called population risk measurement.” Nextera video presentation at Hint Health meeting. “We took that extra step and brought on the Johns Hopkins team that has this ability to run analysis. It’s in their wheelhouse and they applied that[.]” Nextera video presentation at Hint Health meeting. Nextera slide from Hint meeting.
“We were not directly involved in this analysis.” Associate Director, HopkinsACG.org.
Silver linings matter.
Flipping the coin, let us give KPI Ninja and Nextera credit where credit is indeed due. They have disclosed, and at least attempted to address, the complete absence from the study data of both pharmacy cost data and employee/member cost-sharing data. These data limitations have been present in some widely distributed self-studies by direct primary care firms, but are usually hidden or ignored.
Roughly 30% of the total health care spend is going to fall into that gap. That’s easily over $4M of data missing from the KPI Ninja study of Nextera’s SVVSD case, an amount that pretty much dwarfs the $580K that Nextera thinks it saved.
Still, we will see that, despite having recognized these data gaps, the KPI Ninja report vastly underestimated how much resolving them might reduce Nextera’s money savings claims.
Nextera also stands tall for acknowledging the relevance of population health metrics to assessments that compare results between direct primary care populations and other populations. Words and concepts like “risk scoring”, “risk measurement”, “risk adjustment” or “selection bias” are rarely even mentioned in DPC firm self-studies. And, in its remarks to CMS on prospective payment models, the Direct Primary Care Alliance expressed a deep and burning hostility to the use of population risk data in assessing direct primary care providers.
Despite KPI Ninja having recognized the relevance of population health measurement, as we will see, there are significant problems in how KPI Ninja handled risk measurement and its relationship to other issues, including plan benefit structure and the structure of the study itself.
There are other problems in the report, as well. This post is long, because these flaws are numerous.
Good on you, Nextera, for stepping up your self-assessment game. However, you still need to get things right.
IP admit rates matter.
Foremost, perhaps, is a matter discussed fully in the previous post. KPI Ninja reports an astonishing inpatient hospital admission rate for non-Nextera patients of 246 per 1k. That’s four times as large as the largest IP admit rate ever reported for ANY cohort in ANY study of direct primary care to date. This bespeaks a massive data error or massive cherry-picking or both.
Grade school arithmetic. For example, on that just mentioned issue of IP admit rate reduction, the Nextera report calculates the percentage difference between 246 admits per 1k and 90 admits per 1k as 93%. It’s not; it’s 63%. See report, p. 10, top table, bottom row. All four of the other entries on that row are wrong as well, mis-computing the percentage difference for urgent care visits, ED visits, OP admissions, and E&M visits. All five errors make the Nextera savings look better.
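The correct figure is easy to verify. A one-line check, using only the rates the report itself prints, reproduces the percentage reduction:

```python
# Percentage reduction in inpatient admits, using the report's own figures.
non_nextera_rate = 246  # IP admits per 1,000 reported for non-Nextera members
nextera_rate = 90       # IP admits per 1,000 reported for Nextera members

reduction = (non_nextera_rate - nextera_rate) / non_nextera_rate * 100
print(f"{reduction:.0f}%")  # prints "63%", not the 93% shown in the report
```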
I recommend caution in regard to any number appearing in the Nextera report.
Study design details matter; KPI Ninja’s design distorts the single most important comparison — Nextera vs non-Nextera claims spend.
Third-party claim payments normally rise substantially over the course of a plan coverage year as plan members satisfy their deductibles or mOOPs. Accordingly, plans pay a significantly smaller portion of the claims of part-year plan members, particularly of newly employed enrollees. Many newly employed enrollees arrive too late to hit these landmarks before the end of their initial enrollment year. Year in and year out, third-party payers everywhere catch the part-year cost-sharing break.
Like other employers in the education world, the school district in the Nextera study sees a lot of new hiring in connection with the school year cycle, which begins roughly half-way through the benefit plan year.
Without a single word of justification, however, KPI Ninja made two unexplained, inter-related analytical choices for this study, in a combination that no certified actuary would have been likely to make. KPI Ninja elected to draw claim cost data for a mid-plan-year to mid-plan-year period that corresponds to a school year. But, at the same time, they chose to open the study cohort to plan members who had not been covered for an entire year’s claims cycle. In all, the average studied member had only 10.8 months of coverage. Even so, KPI Ninja’s unusual and unexplained choices might have been innocuous, if part-year members had been equally represented in both the Nextera and non-Nextera cohorts.
Unfortunately, the representation of part-year members is sharply higher for Nextera members than for non-Nextera members. The Nextera cohort members average only 10.1 months of coverage; the others average 11.1 months. No matter the reason for the skew, the school district catches the part-year enrollment break for about one-third of Nextera members, but for only about one-sixth of its non-Nextera members.
The upshot is that the reported employer-paid claims for the two cohorts are not directly comparable. Some apparent “savings” result entirely from the oversize part-year membership of the Nextera cohort, a bulge that enters the comparison because of KPI Ninja’s cohort selection criteria. Had KPI Ninja used a study cohort selection criterion that addresses or avoids the over-representation of cheaper-to-the-district, part-year members in the Nextera cohort, a significant part of the purported $913 spend gap would have vanished.
I estimate the appropriate adjustment would be solidly over $200 PMPY, based on an assumption of straight line growth of spend-to-date through the plan year. More important than anyone’s specific estimate is that KPI Ninja knew, or should have known, that the part-year members were present in higher proportion in the Nextera cohort but apparently never asked itself, “What difference does cohort selection make?” or “Can we eliminate any potential problem by using a slightly different study design?”
In their direct primary care report, Milliman Actuaries reduced this kind of problem by requiring a minimum twelve-month period of cohort membership. There are official Actuarial Standards of Practice that require actuaries to address these kinds of situations, e.g., Section 3.7.10(b) of Actuarial Standard of Practice No. 6 from the Actuarial Standards Board: “The actuary should consider the effect enrollment practices (for example, the ability of participants to drop in and out of a health plan) have had on health care costs.” See also Sections 2.26 and 3.7.3 of the same standard.
KPI Ninja’s work on developing total employer spend was literally substandard, and the result inflated the apparent cost-effectiveness of its client, Nextera.
Employee cost-sharing matters, Part 1. It counts as spending.
After admitting the data limitation of absent employee cost-sharing figures, KPI Ninja somewhat diminished concern about this $2,000,000 data omission by suggesting that its inclusion could only showcase additional savings for Nextera members. They even generated an example of how some missing employee cost-sharing data might be skillfully reconstructed from consideration of known costs, known utilization, and known aspects of benefit design. Specifically, they noted the Nextera plan members had no cost-sharing for Nextera’s primary care services. They then recalled the number of primary care visits by the Nextera cohort, computed an approximate $115 as the cost of a primary care visit from the claims data of the non-Nextera patients, and then applied cost-sharing information that they proudly announced had been “pulled directly” from the district’s benefit guide, to come up with about $12,000 in Nextera employee cost savings. Skillful!
KPI Ninja could have told us a lot more about employee cost-sharing, if it had the will to apply its skills to reveal matters both favorable and adverse to Nextera. Had they glanced around while they were looking in benefit guides, say while they were on the page previous to, and on the two pages following, the one from which they “pulled directly” cost-sharing information that cast $12,000 in Nextera’s favor, they might have noticed that there was an HRA, not available to Nextera members. Had they “pulled directly” that HRA information and developed it, it would have cast $791,000 against Nextera.
The district picked up an average of $498 PMPY1 of first-dollar cost-sharing, only for non-Nextera members, through an HRA.
Or, if leaving the page from which they “pulled directly” $12,000 of Nextera gold was too much trouble, they might have noticed another vein on that page with well over $100,000 in pay dirt ready to be “pulled directly”, albeit in favor of the other guys. That’s because non-Nextera members paid only 10% in coinsurance, the district having doubled the coinsurance rate for Nextera members for the express purpose of mitigating some of the additional employer cost associated with Nextera.
Had the HRA and coinsurance cost-sharing information been “pulled directly” and then developed, the result would have taken a $558 PMPY2 bite out of Nextera’s claimed $913 PMPY overall margin of victory.
By the way, each of the twenty or so Nextera mini-brags about individual cost categories ($1102 PMPY on ED visits! $594 PMPY on asthmatics!) would bear its share of the near-million-dollar cut-down.
1 Average district contribution PMPY to the HRA computed by blending adult-only payments and those for families with children, the latter being lower on a per-person basis. I used the family mix data for the Nextera group, as it was the only one reported. Accordingly, this is a conservative estimate, because the Nextera population has proportionally more children. Even for adults with the full $750 HRA, the average contribution to the HRA fund is only about $600, an actuarial value that reflects the significant number of members with claims totals even lower than $600.
The average cost-sharing difference attributable to the district’s grant of deductible relief to non-Nextera members must, for a fair comparison, be capped by the average amount of deductible paid by Nextera members. That number is a bit harder to know with precision absent Nextera employee OOP data.
But it is a number that can be fairly estimated. Estimating an employee’s average liability under deductibles of various kinds and sizes is something health care actuaries do with great regularity. It is part, for example, of the process for determining actuarial value of insurance plans for metal status determination under the Affordable Care Act; CMS has a publicly available actuarial value calculator with relevant tools and tables. Similarly, Milliman Actuaries use the Milliman Health Cost Guidelines (HCGs), based on national statistics; the tools Milliman uses are also sold to analysts. While purchase of the full HCG product is beyond my means, Milliman’s report on direct primary care serendipitously gave a glimpse of the HCGs in action that is more than sufficient for our purposes.
For inclusion in its landmark DPC study (Figure 12, line H), the Milliman team used the HCGs to compute the combined value to DPC members of the waiver of two separate deductibles. One applied to the first $150 of claims; the other applied to claims costs between $900 and $1500. Using the HCGs, Milliman estimated the combined value of the two deductibles at $372 PMPY. Necessarily, a single deductible that covered a band from $0 to $750 would have a value of at least $372. The maximum value of a $150 deductible is $150; it follows that the minimum value of the deductible covering the $600 band from $900 to $1500 was $222. And if a deductible in that band is worth 37 cents on the dollar, it follows that the minimum value of a deductible covering the band between $750 and $900 exceeds $55. Totaling the minima for the three bands that cover the range of $0 to $1500, and conservatively declining to add even a penny of value for the band between $1500 and the full $2000 in deductible, yields a floor of $699 PMPY in average spend subject to deductible for the direct primary care cohort. Accordingly, every bit of the $498 in HRA money represents costs that Nextera members pay and non-Nextera members avoid.
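The band-by-band arithmetic can be sketched numerically; the only inputs are Milliman’s $372 PMPY figure and the band widths:

```python
# Minimum deductible-band values implied by Milliman's $372 PMPY combined figure.
combined_value = 372                          # HCG value of waiving the $0-150 and $900-1500 bands
v_0_150_max = 150                             # a $150 deductible can be worth at most $150 PMPY
v_900_1500_min = combined_value - v_0_150_max # so the $900-1500 band is worth at least $222
per_dollar = v_900_1500_min / 600             # at least 37 cents per deductible dollar in that band
v_750_900_min = v_900_1500_min * 150 / 600    # at least $55.50 for the 150-dollar-wide $750-900 band
print(v_900_1500_min, per_dollar, v_750_900_min)
```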
Again, KPI Ninja could have told us a lot more about employee cost-sharing, if it had the will to apply its skills to reveal matters both favorable and adverse to Nextera.
2 Includes the HRA amount of $498 and the coinsurance difference. The 20% Nextera coinsurance is calculated by dividing the KPI Ninja-reported Nextera employer claims spend by 4; the 10% non-Nextera coinsurance cost is the non-Nextera employer claims spend divided by 9. The coinsurance difference works out to $60. The total is $558.
Here’s an example in table form of how employee cost-sharing plays out for a single employee with overall cost near average, but who needs a lot of primary care. So, yes, KPI Ninja is right that Nextera employees save on primary care cost-sharing. But then Nextera employees are somewhat fleeced on downstream cost-sharing.
Employee cost-sharing matters, Part 2. It shapes employee utilization patterns.
The different cost-sharing landscape faced by Nextera and non-Nextera members not only shapes employees’ incurred costs, it also shapes incentives that affect whether an employee chooses to incur costs. Facing doubled coinsurance and going without $500 in HRA funds is quite likely to reduce “induced utilization” by Nextera members.
As expected in the case of high cost-sharing, the Nextera report indicates that a disproportionately large segment (20%) of the Nextera cohort never seeks downstream care. Even while admitting that it does not actually know whether ANY of these members actually visited a Nextera clinic for primary care, KPI Ninja suggests that this is the result of Nextera proficiency at reducing utilization. But there are decades of high-quality research showing that cost-sharing differences have a large impact on utilization, while the Milliman study remains the sole high-quality, neutral work suggesting that the direct primary care model reduces overall utilization.
This is not rocket science. Nextera’s members have to pay more out of pocket for downstream care. Because they face higher out of pocket costs for downstream care, they use less of it.
To avoid having to pay more or to use less, informed patients who anticipate significant downstream care needs will tend to avoid Nextera and its adverse benefit design. Selection bias and induced utilization intertwine. Part of HHS’s risk adjustment process incorporates quantitative measures of utilization that take into account cost-sharing differences. More on this later.
The ACG® technical manual addresses the important connection between benefit design differences and ACG® concurrent risk measurement, the very type attempted by KPI Ninja. Milliman’s direct primary care team expressly noted that induced utilization should be considered in assessing the effectiveness of direct primary care.
KPI Ninja does not appear to have been acquainted with induced utilization.
Pharmacy costs matter.
As was the case with employee out-of-pocket costs, after honorably admitting the complete absence of pharmaceutical cost data, KPI Ninja dismissed concern about this data omission by suggesting that its inclusion could only showcase additional savings for Nextera members. No data was offered to support that proposition; the high point of support was KPI Ninja’s trust-me statement that its own, unidentified, and unpublished “research” showed that direct primary care presented opportunities for drug cost savings.
The opposite conclusion — that pharmacy data might well reveal additional direct primary care cost — is supported by actual evidence. A study comparing a direct primary care clinic’s patient utilization measures against those of a propensity-matched control population strongly suggests that one successful DPC strategy is to reduce high-expense downstream costs by increasing medication compliance. The DPC Coalition has even presented the clinic involved, an Iora clinic in Las Vegas, as a DPC poster child to the Senate Finance Committee; the cited work showed net overall savings for Iora’s members — along with, and very possibly because of, a 40% increase in prescription refills.
Indeed, both the current Nextera study and Nextera’s previous success brag suggested an important role for care plan adherence. Pharmaceutical spend data would certainly be salient, not only as important cost data, but as a way to help Nextera itself figure out how much, if any, of its value is the result of an implicit, de facto “Iora strategy”. If Nextera even vaguely tracks Iora’s path on pharma spend, it could easily bite off another $104 PMPY3 from Nextera’s bragged savings.
The reason KPI Ninja has given for not using pharma spend data is, “We did not receive pharmaceutical data.”
Why the hell not?
3 Potential pharmaceutical cost increase for Nextera is conservatively estimated on the following assumptions: pharma spend is about 18% of all spending. The outlier adjusted employer only spend for Nextera members, excluding pharma spend, is $2360; implying a $518 pharma spend. Assuming that a partial “Iora strategy” entails only a 20% pharma increase, vs. 40% at Iora itself, results in a $104 increase.
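Footnote 3’s arithmetic can be reproduced directly from its stated assumptions:

```python
# Reproduce footnote 3: pharma spend implied by an 18% pharma share of total spend.
non_pharma = 2360                      # outlier-adjusted employer-only spend, excl. pharma ($ PMPY)
pharma_share = 0.18                    # assumed pharma fraction of all spending
pharma = non_pharma * pharma_share / (1 - pharma_share)
partial_iora_increase = pharma * 0.20  # partial "Iora strategy": 20% rise vs. Iora's 40%
print(round(pharma), round(partial_iora_increase))  # prints "518 104"
```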
Prescription information records the classes of therapeutic drugs used and supports risk assessment.
In addition to being salient on the costs issue, member prescription information gleaned from pharmaceutical claims reveals the classes of therapeutics used by each patient, which is remarkably useful for risk measurement. In fact, the ACG® team strongly recommends that this information be used in population risk assessment alongside age-gender-diagnosis based risk measurement of the type selected by Nextera. But the ACG® team also notes that age-gender-pharma based risk adjustment can be credible even without diagnosis information. Equally aware, the team from Milliman used an “age-gender-pharma” methodology rather than “age-gender-diagnosis” in their study of direct primary care, based on their assessment that the direct primary care model results in potentially distorted diagnostic data.
The reason KPI Ninja has given for not addressing pharma data is, “We did not receive pharmaceutical data.” With pharma data, KPI Ninja could have followed Milliman’s lead in avoiding the risk of distortion associated with diagnostic data in the DPC context.
Get the pharma data, please.
Risk adjustment matters; it needs to be done right.
Part 1. There is no indication that any raw data were actually adjusted for risk.
KPI Ninja computed the risk scores for the two populations at 0.358 (Nextera) and 0.385 (non-Nextera), a difference of about 7.6%. The norm for statistical work is to present unadjusted raw claims data, apply the computed difference to the raw claims data (regardless of its size), and then present the adjusted claims data with a confidence interval to assist those using the data in making such judgments as they wish. As the Milliman report shows, this is done even when presenting differences deemed not statistically significant.
Instead of following standard statistical practice, the analyst declared the two populations comparable and then excused himself from actually applying any risk adjustment to modify the raw claims data. But 7.6% of health care costs is big enough to bother with. Nextera’s cost reduction is itself pegged at only 27%; deducting the 7.6% risk differential takes a $256 PMPY haircut out of Nextera’s brag.
Adding that $256 to the $558 in omitted employee cost-sharing differences cuts Nextera’s $913 savings claim down to $99 per member per year. If part of the reason for Nextera’s success is improving pharma compliance, it could hit zero.
And even a Nextera $99 PMPY savings scenario turns on the accuracy of KPI Ninja’s calculation of a 7.6% risk differential. If that figure is too low by a little over 3 percentage points, Nextera’s cost savings claim is not just cancelled, it is reversed to a loss.
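The running arithmetic, using the figures above (the non-Nextera baseline here is the approximate spend level implied by a 27% reduction), can be checked directly:

```python
# Cumulative haircut to Nextera's claimed $913 PMPY savings.
claimed_savings = 913      # KPI Ninja's headline PMPY advantage for Nextera
cost_sharing_gap = 558     # omitted HRA + coinsurance differences, $ PMPY
risk_haircut = 256         # ~7.6% of the baseline spend implied by 913/0.27

baseline = claimed_savings / 0.27   # ~$3,381 PMPY implied non-Nextera spend
remaining = claimed_savings - cost_sharing_gap - risk_haircut
print(remaining)                    # prints "99"

# Extra risk-differential percentage points needed to zero out the remainder:
print(round(remaining / baseline * 100, 1))  # prints "2.9" -- about 3 points
```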
Part 2. There are sound reasons to believe that KPI Ninja’s diagnosis based risk measurements are skewed heavily in Nextera’s favor.
The Nextera population skews heavily toward children; this is entirely predictable, because Nextera employees pay $1600 per year less in premiums to add children than do non-Nextera employees. 24% of the Nextera cohort is less than 15 years old, compared with only 13% of the non-Nextera cohort. On the other side of the spectrum, those over 65 were nearly four times as likely to reject Nextera. Upshot: the Nextera population is about 6.5 years younger on average and is less heavily female. Based on age and gender alone, per a landmark data set prepared by Dale Yamamoto for the Society of Actuaries, a risk adjustment of about 21% is called for.
The mere 7.6% risk difference between the cohorts that KPI Ninja ran across (in its maiden voyage on risk adjustment waters) requires that the illness burden data for the two populations severely slash the risk gap indicated by age and gender alone. That suggests a perfect storm of the odd: a surfeit of young, but relatively sick, Nextera members coupled with a surfeit of old, but relatively healthy, non-Nextera members.
That seems deeply implausible. Especially so, in light of a uniquely relevant report of heavy selection bias in Nextera’s favor at Nextera’s own clinic, a report prepared and widely circulated by Nextera itself.
About two and one-half years before Nextera got its first recruits into the school district employee cohort studied here, Nextera enrolled new members from a similar employee population of an employer whose headquarters was in the process of relocating from within three miles of the school district’s headquarters to a spot less than 20 miles more distant. Nextera’s flagship clinic is near both employers, and employees of both use the same doctors at the same clinics. In its own “whitepaper”, Nextera reported that the “Digital Globe” employees who chose Nextera had pre-Nextera medical claims that were 30% lower than the pre-Nextera medical claims of those who chose the non-Nextera option.
Did Nextera go, in a mere two and one-half years, from attracting a very healthy population to attracting a still young population now weirdly sick beyond its years? Cutting down a 30% selection bias by three-quarters? Really?
I will absolutely believe that is what happened when the Johns Hopkins ACG® Research Team tells me it happened.
Part 3. ACG® concurrent risk score comparisons, the type supposedly attempted by KPI Ninja in this study, are vulnerable to a bias that results from benefit design.
The ACG® technical manual notes that “where differences in ACG concurrent risk are present across healthcare organizations, it is almost universally attributable to differences in covered services reflected by different benefit levels and cost structures”. But, if different benefit designs can produce different ACG® concurrent risk score differences for equally risky populations, might there be occasions when different benefit designs will produce similar ACG® concurrent risk scores for populations that have different levels of underlying risk?
So it would seem. Members in a group with higher cost-sharing will under-present for care relative to a group with lower cost-sharing. If the higher cost-sharing group is also the less risky group, this “benefit design artifact” would artificially shrink the “true” ACG® concurrent risk score gap.
This artifact is a corollary of induced utilization, and illustrates why the Milliman authors expressly called for studies of direct primary care to address induced utilization, and why CMS’s risk adjustment processes incorporate, as needed, quantitative correction of risk measures by induced utilization factors.
One particular result of a benefit design artifact would be a discrepancy between risk measurements that incorporate clinical information and those that rely solely on demographics; specifically, a younger population with less generous benefits will have ACG® concurrent risk scores that make it look sicker than it is relative to an older population with more generous benefits.
The Nextera cohort is younger; it looks sicker than its years on ACG® concurrent risk scores; its benefit package requires significantly more cost-sharing; and cohort members present less frequently for care. The Nextera cohort lands squarely atop a benefit design artifact.
Note: The entirety of the preceding discussion of benefit design artifact may be above my pay grade.
Part 4. KPI Ninja’s risk measurements rest on undisclosed and unvalidated methods that were purpose-built by KPI Ninja to increase direct primary care population risk scores. Anyone see a red flag?
ACG® risk adjustment, in the absence of pharma data, is fueled by standard diagnostic codes usually harvested from standard insurance claims data. But direct primary care physicians do not file insurance claims, and a great many of them actively resist entering the standard diagnostic codes used by ACG® into patient EHRs. Indeed, direct primary care doctors typically do not use the same EHR systems used by nearly all other primary care physicians. KPI Ninja has referred to a “data donut hole” of missing standard diagnostic codes which it sees as depriving direct primary care practitioners of the ability to defend themselves against charges of selection bias.
Milliman Actuaries are a world leader in health care analysis. The Society of Actuaries grant-funded a team from Milliman to conduct a comprehensive study of direct primary care. That highly qualified team ended up relying on age-gender-pharma risk measurement because, after carefully addressing the data donut hole problem, they found no satisfactory solution to it.
But KPI Ninja implicitly claims to have found the solution that eluded the Milliman team; they just do not care to tell us how it works. The cure apparently involves using “Nextera Zero Dollar Claims (EHR)” to supply the diagnostic data input to ACG® software. Nextera does not explain what “Nextera Zero Dollar Claims (EHR)” actually are. It appears — but there is no way to tell — that KPI Ninja’s technology scours EHRs that typically lack diagnosis codes, even long after the records were written, to synthesize an equivalent to insurance claim diagnosis codes which can then be digested by ACG®.
Concerns about the validity of such synthetic claims led the Milliman team away from using a claims/diagnosis based methodology. KPI Ninja boldly goes exactly there, without telling us exactly how. Only a select few know the secret-sauce recipe that transformed direct primary care EHR records into data that is the equivalent of diagnosis code data harvested from the vastly different kind of diagnostic code records in claims from fee-for-service providers.
There is no evidence that KPI Ninja’s magical, mystery method for harvesting diagnosis codes has been validated, or that KPI Ninja has the resources to attempt a validation, or, even, that KPI Ninja has ever employed or contracted a single certified actuary.
Validation of its methods by KPI Ninja would be of at least moderate importance, given KPI Ninja’s general business model of providing paid services to the direct primary care community. But validation becomes of towering significance for risk-related data, precisely because KPI Ninja’s methodology for risk data was developed for the clearly expressed purpose of helping direct primary care clinics address charges of cherry-picking by developing data specifically to justify increases in direct primary care member risk scores.
Validation in this context means that KPI Ninja should demonstrate that its methodologies are fair and accurate. Given KPI Ninja’s stated goal of increasing direct primary care risk scores, the most obviously pressing concern is that the method increases population risk scores only in proportion to actual risk.
For example, the ACG® technical manual itself warns about final risk scores being manipulated by upcoding. There is no evidence that KPI Ninja’s secreted data development process, whatever it may be, includes any protection from deliberate upcoding by providers. Moreover, in a secreted process, upcoding may have been baked into the “Nextera Zero Dollar Claims (EHR)” cake, even if the baker had only the best of intentions.
In a sense, filling the data donut hole is a “good” form of upcoding, if done correctly. That’s a big if.
No matter how good ACG® software may be in turning accurate diagnostic codes into accurate risk predictions, the risk measurements cranked out for Nextera patients can be no more valid than the diagnostic data generated by KPI Ninja’s secrets.
As there is no real transparency on KPI Ninja’s part as to how it generates, from Nextera EHRs, the data needed for ACG® risk adjustment, and no evidence that the methodology has been validated, it is impossible to confirm that KPI Ninja’s risk measurement of the Nextera cohort has any meaningful connection to reality.
Any bona fide intellectual property in KPI Ninja’s proprietary methods can be protected even as it is made transparent and validated in precisely the same way ACG® protects its own intellectual property. KPI Ninja can show us the goods any time it wants to.
Here’s a simple plan for assessing risk levels in the two cohorts that we can use while we wait for KPI Ninja to reveal and validate its methodology.
- Run age-gender-only risk measurements.
- Get the pharmacy data.
- Run age-gender-pharma risk measurements.
If the clearly younger Nextera cohort is disproportionately sick and the older non-Nextera cohort disproportionately well, it should not be hard to spot. And, we would learn how well Nextera’s brags hold up under the transparent, validated age-gender-pharma approach taken by Milliman.
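As a sketch of the first step in that plan, here is what an age-gender-only risk measurement looks like in miniature. The weight table and the toy cohorts are entirely hypothetical stand-ins; real work would use a published table such as Yamamoto’s Society of Actuaries data.

```python
# HYPOTHETICAL age-gender relative-cost weights (illustrative values only; a real
# analysis would pull these from a published source such as Yamamoto's SOA tables).
WEIGHTS = {
    ("M", "0-14"): 0.45, ("F", "0-14"): 0.40,
    ("M", "15-44"): 0.65, ("F", "15-44"): 1.00,
    ("M", "45-64"): 1.40, ("F", "45-64"): 1.55,
    ("M", "65+"): 2.30, ("F", "65+"): 2.40,
}

def cohort_risk(members):
    """Average relative risk for a cohort given as (sex, age_band) tuples."""
    return sum(WEIGHTS[m] for m in members) / len(members)

# Toy cohorts: 'dpc' skews young (as the Nextera cohort does), 'other' skews older.
dpc = [("M", "0-14")] * 3 + [("F", "15-44")] * 5 + [("M", "45-64")] * 2
other = [("M", "0-14")] * 1 + [("F", "15-44")] * 4 + [("F", "45-64")] * 5

gap = cohort_risk(other) / cohort_risk(dpc) - 1
print(f"age-gender-only risk gap: {gap:.0%}")
```

If an age-gender-only gap computed this way is far larger than the diagnosis-based gap, the burden falls on the diagnosis data to explain the difference.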
If you do not already know where my money is, read my prior post, “Nextera’s Next Era in Cherry-Picking Machine Design.”
In comparing chronic conditions costs, Nextera’s fees matter.
A table on page 12 of the Nextera report ostensibly shows various employer cost differences between Nextera patients and non-Nextera patients associated with a selection of sixteen chronic conditions. For six of these conditions, the shown costs are actually higher for the Nextera patients; the report thus shows 10 “wins” and 6 losses. Kudos again to KPI Ninja and Nextera for reporting the losses, implicitly acknowledging the possibility that DPC may not have ALL the answers to EVERY chronic condition.
Some entries on the (page 12) chronic conditions table seem likely to reflect inpatient hospital admissions. These, as we have already explained at length in a prior post, appear to be vastly over-reported for non-Nextera members. For that reason alone, the reported chronic conditions table may well over-report non-Nextera members’ costs and, as a result, over-report the Nextera “wins” for ten chronic conditions and under-report the extent of the six Nextera losses.
Again, as well, the chronic conditions report is based only on the district’s share of costs. Any complete account must include employee cost-sharing, likely to be much lower for non-Nextera employees by virtue of the aforementioned HRA and reduced coinsurance. Then, too, as an astute reader may have noticed, the table may also reflect induced utilization. An HRA and reduced coinsurance may invite non-Nextera members to take better care of their chronic conditions.
Even more importantly, however, the table results are systematically skewed in Nextera’s favor, because the table has been built strictly from claims costs. For non-Nextera members, employer claim costs may flow from both primary care payments and downstream care payments. For Nextera members, there are employer claim costs only from downstream care.
But Nextera members receive primary care — some of the most expensive primary care in the world. Nextera’s subscription fees average $832 PMPY. A fair comparison of employer costs for chronic conditions requires a proper accounting of Nextera’s fees as part of those costs. By leaving out Nextera’s fees, KPI Ninja joins a long, and sadly still growing, line of DPC cost savings analysts who somehow “forget” that DPC fees are costs.
Inclusion of Nextera’s fees would turn the table against Nextera substantially.
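The arithmetic here is simple enough to sketch. In the toy comparison below, all claims figures are hypothetical; only the $832 PMPY subscription fee comes from the report. The point is that any apparent Nextera “win” smaller than the fee flips to a loss once the fee is counted as an employer cost.

```python
# Illustrative sketch: a fair per-member-per-year employer cost comparison
# for a chronic condition must add the DPC subscription fee to the Nextera side.

NEXTERA_FEE_PMPY = 832  # Nextera subscription fee, per member per year (from the report)

def employer_cost(claims_pmpy: float, dpc_member: bool) -> float:
    """Employer cost per member per year, counting DPC fees as costs."""
    return claims_pmpy + (NEXTERA_FEE_PMPY if dpc_member else 0)

# Hypothetical claims costs for one chronic condition:
nextera_claims = 3_000      # downstream care only; primary care is fee-funded
non_nextera_claims = 3_500  # primary care and downstream care, all via claims

claims_only_gap = non_nextera_claims - nextera_claims  # looks like a $500 "win"
true_gap = employer_cost(non_nextera_claims, False) - employer_cost(nextera_claims, True)

print(claims_only_gap)  # 500
print(true_gap)         # -332: the apparent "win" becomes a loss once fees count
```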
Giving methodological detail matters.
Despite being “a team of experienced professionals in clinical knowledge, public health, healthcare analytics, data modeling, academic research and software development,” KPI Ninja has here presented extremely little technical description of its methods, data sources, underlying assumptions, data inclusion criteria, statistical description of the study results, and other important matters. Indeed, despite a full paragraph wrapping itself in the “statistical validity” of the ACG® methodology, KPI Ninja did not supply a confidence interval for even a single statistic in its report.
Above, I have also addressed (a) KPI Ninja’s failure to describe its method for harvesting diagnostic data from EHRs, (b) its failure to account for induced utilization and (c) a cost analysis problem arising from KPI Ninja’s unexplained cohort selection decisions and their effect on cost comparisons.4
4The cohort selection problem may also have distorted the report’s many utilization comparisons. Recall that the average study member in the Nextera group had just over 10 months of coverage, versus just over 11 for his counterpart. Since KPI Ninja did not adjust for the part-year difference in its cost comparisons, it seems quite likely that KPI Ninja also failed to make the same kind of adjustments in its comparisons of utilization measures. Since KPI Ninja does not explain its calculations, we are left to guess, for example, whether its counts of ED admissions for the Nextera and non-Nextera cohorts were normalized for their respective 10- and 11-month coverage periods.
It matters. If the presented ED data was indeed properly normalized, Nextera can fairly point to a reduction of 9%. If the data has yet to be normalized, then Nextera produced no reduction in ED at all. I cannot say one way or the other whether KPI Ninja was aware of this issue, but I should not have to. It is the job of an analyst to anticipate issues that arise from the data presented and explain how they were handled. The cohort differences in part-year participation should have been weeded out by better study design, or addressed where they might have affected the reported study results.
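To see why the 10-versus-11-month difference alone can manufacture a 9% “reduction,” consider this sketch. The raw counts below are hypothetical, chosen only to make the arithmetic visible: a Nextera cohort with 10/11 the coverage of its counterpart will show roughly 9% fewer raw ED visits even when the two cohorts use the ED at exactly the same rate.

```python
# Illustrative sketch with hypothetical counts: unnormalized ED totals can show
# a ~9% "reduction" that disappears once visits are put on a per-member-year basis.

def ed_rate_per_member_year(ed_visits: int, members: int, avg_months: float) -> float:
    """ED visits per member per 12 covered months."""
    return ed_visits / (members * avg_months / 12)

# Hypothetical cohorts with identical true ED usage, but different coverage lengths:
nextera = ed_rate_per_member_year(ed_visits=100, members=1_000, avg_months=10)
non_nextera = ed_rate_per_member_year(ed_visits=110, members=1_000, avg_months=11)

# Raw counts: 100 vs 110 looks like a 9% reduction for Nextera.
# Normalized: both cohorts see 0.12 ED visits per member-year, i.e. no reduction.
print(round(nextera, 4), round(non_nextera, 4))  # 0.12 0.12
```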
Here are some other significant gaps in how KPI Ninja has set out the methods used in developing the analysis.
There is, for example, a glaring gap regarding an essential non-technical question about how KPI Ninja computed the employer’s spend for non-Nextera patients. KPI Ninja does not tell us whether, in addition to claims, its account of employer spend also included the nearly $800K the employer spent to fund the HRA. I certainly hope so, it seems so, and I assume so, but this is exactly the kind of thing that needs to be explicitly disclosed for the study to be fully and fairly evaluated.
Even when KPI Ninja gives its best appearance of having analytical chops, the lack of detail and discussion raises serious questions. So, for example, even as we again give kudos to KPI Ninja (this time for addressing the problem of outlier claimants), it is frustrating to see about 40% of total spend hacked away from consideration without being told either the threshold for excluding a member’s claims as outliers or the reasoning behind the particular choice made.
KPI Ninja makes clear that it excluded outlier claims amounts from the total spend of each cohort. But it is of additional importance to know whether members incurring high costs were also excluded from the cohort in compiling utilization rates and/or in computing the risk measurements.
From the report itself, KPI Ninja appears to have identified a total of 11 outlier patients between the two cohorts. The analysts explain how, if included, a million-dollar claim would heavily skew cost comparisons. They did not explain that, if the same million-dollar member had daily infusion therapy, this could heavily skew KPI Ninja’s OP visit utilization comparison. Nor did they mention that, if the member’s million-dollar, daily-infusion woes were the result of a medical condition which ACG® scores with an exceptionally high risk value, KPI Ninja’s population risk measurement might also be heavily skewed.
To avoid all skew from outliers, the Milliman report excluded members with outlier claims from the cohort for all determinations, whether of cost, utilization, or risk. In their report, KPI Ninja addressed outlier members only in terms of their claims cost. There is no indication whatsoever that KPI Ninja appropriately accounted for outlier patients in its determination of either utilization rates or population risk. A good chunk of that astonishing IP admit figure for non-Nextera patients might have vanished had they done so.
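The distinction is easy to state in code. The sketch below is hypothetical throughout: the data structure, the $100,000 threshold (the report never discloses its actual threshold), and the member figures are all made up for illustration. The point is that a Milliman-style exclusion drops the outlier member before every summary, so cost, utilization, and risk are all computed over the same trimmed population.

```python
# Hypothetical sketch of consistent outlier-member exclusion: drop the member
# entirely, rather than trimming only their claim dollars from the cost totals.

from dataclasses import dataclass

@dataclass
class Member:
    total_claims: float
    op_visits: int
    risk_score: float

OUTLIER_THRESHOLD = 100_000  # hypothetical; the report does not disclose its threshold

def exclude_outliers(cohort: list[Member]) -> list[Member]:
    """Remove outlier members before ANY determination, so cost, utilization,
    and risk summaries all describe the same trimmed population."""
    return [m for m in cohort if m.total_claims <= OUTLIER_THRESHOLD]

cohort = [
    Member(total_claims=4_000, op_visits=6, risk_score=0.9),
    Member(total_claims=1_200_000, op_visits=300, risk_score=9.5),  # daily-infusion patient
]

trimmed = exclude_outliers(cohort)
avg_op = sum(m.op_visits for m in trimmed) / len(trimmed)
avg_risk = sum(m.risk_score for m in trimmed) / len(trimmed)
print(avg_op, avg_risk)  # 6.0 0.9 -- versus 153.0 and 5.2 if only claim dollars were trimmed
```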
Whatever the provenance of the data used, the ACG® system presented KPI Ninja with an array of cost adjustment models and parameters with which to assess that data. The ACG® manual has 16 pages of tightly spaced text explaining its various models and the considerations for selecting one rather than another. KPI Ninja eventually reported unscaled ACG® concurrent risk scores. But the manual has six dense pages discussing the whys, wherefores, and pitfalls just of the different kinds of concurrent risk scoring factors — scaled and unscaled, with or without regression, age-gender only or with the diagnosis factors. All of the ACG® concurrent models were found to somewhat overestimate costs for those in the lowest risk quintile and to somewhat underestimate costs for those in the highest risk quintile, and the size of that effect varies among models. Would that matter?
One hopes the crackerjack KPI Ninja team of “experienced professionals in clinical knowledge, public health, healthcare analytics, data modeling, academic research, and software development” had full command of all the relevant considerations for model selection and made a good choice.
The Milliman team explained how it made its choice of risk model in considerable detail. On explaining its choice of risk model, as on nearly every other methodological question, KPI Ninja explained bupkes.
The Nextera study:
- misappropriates the prestige of a Johns Hopkins research team;
- contains errors of arithmetic;
- makes poor cohort selection choices that skew the employer claims cost comparison, and perhaps the utilization comparisons as well;
- excludes employee cost-sharing and pharmacy data;
- ignores that employee cost-sharing will skew heavily against the study’s savings claim;
- does not address induced utilization;
- relies on questionable risk measurement results from KPI Ninja’s rookie effort at producing them;
- relies on an unknown and unvalidated methodology for obtaining diagnostic codes;
- omits the employer’s cost for direct primary care itself when computing employer savings on chronic conditions;
- fails to disclose and discuss many other key details of the study methodology, e.g., the criteria for and the scope of outlier exclusion or how, and even whether, $800,000 in employer HRA costs were counted; and
- accepts without batting an eye a blended IP admission rate for the combined cohorts of a teacher-heavy school district’s employees and their families that exceeds by 30% the IP admit rate for Medicare patients who are three decades older.
Some members of the DPC tribe have hailed the KPI Ninja/Nextera school district report as a “thorough analysis”.
Please don’t stake the health of your employees on it.
As always, there is a comment section and a contact section. If you think I’ve erred, fire away. Better still, join the issues with me in a public forum.