DPC cherry-picking: the defense speaks. Part 2.

Update: In the fall of 2020, KPI Ninja released the first study that relies on its new risk-information technology. I find it sadly opaque.

Recap of Part 1

The direct primary care community has long tried to support claims that DPC reduces overall health care costs by 20% to 40% with non-risk-adjusted cost-reduction data drawn from employer health plans that allowed employees to elect between DPC and FFS primary care options. But the first and, so far, only time that independent, neutral, certified professional actuaries looked hard at such a program, careful risk adjustment showed that the claimed savings were merely an artifact of heavy selection bias. A DPC poster child, the Union County employer program — previously lauded for its claimed 23% cost reduction — was shown by Milliman Actuaries to have had a DPC cohort so young and healthy that it explained away all the observed cost savings.
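
To make the mechanics concrete, here is a deliberately simplified sketch of how risk adjustment can erase an apparent saving. Every figure in it is invented for illustration; none comes from the Milliman report.

```python
# Hypothetical illustration of selection bias masquerading as savings.
# All figures are invented; they are NOT drawn from the Milliman study.

dpc = {"pmpm_cost": 385.0, "avg_risk_score": 0.80}   # younger, healthier electors
ffs = {"pmpm_cost": 500.0, "avg_risk_score": 1.05}   # older, sicker stayers

raw_savings = 1 - dpc["pmpm_cost"] / ffs["pmpm_cost"]
print(f"Unadjusted 'savings': {raw_savings:.0%}")          # 23% -- looks impressive

# Normalize each cohort's per-member-per-month cost by its average risk score.
dpc_adjusted = dpc["pmpm_cost"] / dpc["avg_risk_score"]
ffs_adjusted = ffs["pmpm_cost"] / ffs["avg_risk_score"]
adjusted_savings = 1 - dpc_adjusted / ffs_adjusted
print(f"Risk-adjusted 'savings': {adjusted_savings:.0%}")  # roughly zero -- the savings vanish
```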

Any reasonably informed employer or policy maker facing claims about the cost-effectiveness of direct primary care should insist that DPC provider boasts be scrutinized for evidence of selection bias.

In my last post, I noted that the DPC advocacy community has not even bothered to address the simplest of selection bias indicators, the younger average age of DPC cohort members compared to their FFS counterparts. I also noted that Milliman Actuaries have an age/gender risk-adjustment model1 and considered using it for the Union County study.

I further noted that in its Union County study, Milliman relied on a more complex model (“MARA-Rx”) which considers age, gender, and the various therapeutic classes of prescription medications used by DPC and FFS group members. Milliman’s Rx risk-adjustment methodology, like its other risk-adjustment models, was developed and validated as a health care cost predictor using health care cost data for millions of patients. There is nothing I can see, and nothing in the literature, to suggest that the Rx model has any inherent features that would unfairly penalize a DPC cohort in an employer health plan offering a DPC option. As yet, no one in the DPC community has objected to Milliman’s use of the Rx methodology to assess the Union County DPC program, or to its future use in evaluating any other similar program.

There are even more complex and expensive methodologies, like MARA-Cx, that add diagnostic data harvested from payment claims to the factors used in MARA-Rx. In the Part 1 post, I also mentioned an arguably surprising difference in approach to selecting among risk-adjustment methodologies between Milliman Actuaries, who have no financial interest in the direct primary care movement, and KPI Ninja, a data-analytics group closely connected to the direct primary care movement. Both Milliman and KPI Ninja concurred that risk-adjustment methodologies like “Cx” are likely to understate risk scores for DPC cohort members because direct primary care physicians do not file claims.

KPI Ninja pointedly laments this “donut hole” in the claims-based data. But there is no anti-DPC donut hole in Rx based risk adjustment methodology.2
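
To see why the donut hole matters for one methodology and not the other, here is a toy sketch. The condition weights, drug names, and scores are hypothetical and are not taken from MARA, ACG, or any real risk-adjustment engine.

```python
# Toy sketch of the "donut hole": a claims-based (Cx-style) score misses conditions
# documented only in primary-care claims, while an Rx-style score built on pharmacy
# fills still sees them. Weights, drugs, and conditions are invented for illustration.

CX_WEIGHTS = {"diabetes_dx": 0.30, "hypertension_dx": 0.15}   # from diagnosis codes on claims
RX_WEIGHTS = {"metformin": 0.28, "lisinopril": 0.14}          # from drug therapeutic classes

def cx_score(diagnoses_on_claims):
    return 1.0 + sum(CX_WEIGHTS.get(d, 0.0) for d in diagnoses_on_claims)

def rx_score(drug_fills):
    return 1.0 + sum(RX_WEIGHTS.get(d, 0.0) for d in drug_fills)

# The same diabetic, hypertensive patient, managed entirely in primary care:
ffs_claims = ["diabetes_dx", "hypertension_dx"]   # the FFS PCP files claims with diagnoses
dpc_claims = []                                   # the DPC physician files no claims at all
pharmacy_fills = ["metformin", "lisinopril"]      # pharmacy data exists under either model

print(cx_score(ffs_claims), cx_score(dpc_claims))          # 1.45 vs 1.00: the donut hole
print(rx_score(pharmacy_fills), rx_score(pharmacy_fills))  # 1.42 vs 1.42: no hole
```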

Although it possesses a fully validated claims-based risk-adjustment methodology (“MARA-Cx”), Milliman’s common-sense response to the data donut hole problem was to set that Cx methodology aside and determine risk scores for the Union County cohorts using only the Rx age/gender/prescription-drug methodology. Like Milliman, KPI Ninja has access to risk-adjustment software engines, in a package known as ACG®, with equivalents to both Rx and Cx. Unlike Milliman, KPI Ninja seemingly rejects Rx methodology and, instead, embraces Cx-type methodologies that have the very donut hole KPI Ninja laments.

Why complain about the donut hole in Cx, then reject Rx which has none, and then return to and embrace Cx? Might it be precisely because KPI Ninja knows that Rx based risk adjustment will produce results that are sound, but not happy, for its cherry-picking DPC clients? On the other hand, when convenient, a donut hole can first be performatively disparaged as biased, then filled with custom data products developed by KPI Ninja to tell stories more to DPC’s liking.


Context: DPC docs feel coding makes their patients sick.

Direct primary care practitioners avoid third-party payment of health care claims and the (often digital) paperwork that accompanies it. While the logic of subscription practice renders coding procedures for reimbursement unnecessary and, therefore, a target of scorn, D-PCPs also disdain the recording of diagnostic codes that attends claims for third-party reimbursement.

Here’s what Dr Jeff Gold, a co-founder of the Direct Primary Care Alliance, had to say in a post entitled, “ICD-10: It’s Nice Not Knowing You.”

This is nothing more than another layer of bureaucratic red-tape that does nothing to enhance the quality or cost of your care, but rather furthers the disease process. All it does is waste more of your physician’s and office staff’s time – time that should be spent working towards your care. . . .

Luckily for us [Direct Primary Care doctors], we have nothing to do with this nonsense. [Emphasis supplied.]

ICD-10: It’s Nice Not Knowing You

Tell us how you really feel, Dr Gold.

DPC doctors, in other words, not only decline to file claims and code their procedures, they also hold industry-standard diagnosis coding in fiery contempt. As a result, the donut hole problem cannot be solved by simply collecting DPC patient diagnosis codes from their direct primary care physicians.


Enter KPI Ninja — with balm for DPC’s abstinence

Missing data reduces population risk score. Meaning it will look like you are treating healthier patients in the eyes of those who use these risk scores (employer data vendors, brokers, health plans) … aka they can argue that you are cherry picking.

KPI Ninja Blog Post: Claims vs. EHR data in Direct Primary Care

While the details remain murky, KPI Ninja seemingly plans to meet cherry-picking charges by filling the donut hole with information somehow scoured from such EHR records as direct primary care doctors may have.

But how?

In theory, KPI Ninja could develop a way to reverse engineer a DPC’s EHR entries and other information to generate the diagnostic codes at which the DPC physician would have arrived had she not thought that participation in diagnostic coding was wasteful. How proceeding “backwards” to arrive at codes would result in any less waste is difficult to imagine, so that effort strikes me as misguided.

In any event, validation of a reverse engineering model would likely require resources beyond those of KPI Ninja and the DPC advocacy community. It would also likely require the participation of a control group of D-PCPs willing to do extensive coding.

However, for physicians like Dr Gold who have identified ICD-10 coding as “furthering the disease process”, such participation in a coding control group would be an ethical violation; it would “do harm”. Furthermore, if Dr Gold is correct, the members of a patient panel studied in such an experiment would have to give informed consent.

Furthermore, to whatever extent EHR mining for diagnosis codes omitted by a D-PCP produces fully accurate risk codes for members of a DPC cohort, the same mining techniques should be applied to the EHRs of FFS patients to correct any omissions by FFS-PCPs. What’s sauce for a DPC goose is sauce for an FFS gander.

Finally, fair implementation of a model in which diagnosis codes for risk scoring are derived from DPC EHRs for comparison with diagnosis codes from FFS claims would require safeguards against DPCs deliberately larding EHRs with entries that result in up-coding, just as FFS claims data is subject to procedures that check up-coding. Again, goose/gander.


Perhaps KPI Ninja merely has in mind developing a direct method of converting mined EHR data into risk factors that are not directly commensurate with those from diagnosis-based risk models, but that are instead presented to inquiring employers and policy-makers as an alternative model of risk adjustment.

Precautions would still apply, however. If EHR data on, say, biometrics or family history is brought in to demonstrate that the DPC population is less healthy than average, a knowledgeable employer should insist on counterpart data from the EHRs of FFS patients.

A recent addition to KPI Ninja’s website suggests their emphasis may rest on pre-post claims comparisons. It will of course be important to include both pre- and post-period data on DPC patients and their FFS counterparts. That will be somewhat revealing if the pre-choice claims data for the two populations are similar. But if the results of the Milliman study are representative, that will most likely not be the case.

In the more likely event of a higher level of prior claims in the FFS population, any “difference in difference” analysis of pre-post claims between the DPC and FFS populations will still require an attempt to see whether the FFS population’s higher “pre” claims might be accounted for by intractable chronic conditions. Such a finding would impeach any inference that DPC group pre-post gains can simply be projected onto an FFS group. And such a finding seems likely, if the Milliman actuaries were correct to ascribe selection bias to the tendency of the ill to stick with their PCPs; a “sticky” relationship to a PCP seems quite likely to correlate with the “sticky” chronic health conditions that bind the patient and the PCP.
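
For readers who want the arithmetic spelled out, here is a minimal difference-in-difference sketch on pre/post per-member-per-month claims. The figures are invented; the point is the mechanics, not the result.

```python
# Hypothetical difference-in-difference on pre/post PMPM claims; all figures invented.

dpc = {"pre": 310.0, "post": 280.0}   # DPC electors: healthier at baseline
ffs = {"pre": 520.0, "post": 530.0}   # FFS stayers: more chronic illness at baseline

dpc_change = dpc["post"] - dpc["pre"]    # -30 PMPM
ffs_change = ffs["post"] - ffs["pre"]    # +10 PMPM
did_estimate = dpc_change - ffs_change   # -40 PMPM attributed to DPC

print(f"Difference-in-difference estimate: {did_estimate:+.0f} PMPM")

# The attribution holds only if the two groups would have followed parallel trends
# absent DPC. If the FFS group's higher "pre" claims reflect intractable chronic
# conditions, projecting the DPC group's improvement onto the FFS group overstates
# what DPC would save for them.
```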

Further explication from KPI Ninja as to how its plans will work was said to be forthcoming. I appreciate the insights they have given in the past and look forward to learning from them in the future.


There are many risk-adjustment software packages available from neutral academics, accountants, and actuaries. They can be expensive; even access to the packages supplied by CMS for use by insurers participating in the Affordable Care Act runs a bit over $2 per enrollee per year. Importantly, some of the packages rely on proprietary algorithms but nonetheless tend to be generally transparent; in most cases they come from neutral academic or actuarial sources.

Per se, I fault KPI Ninja neither for its close connection to the DPC industry nor for offering DPC clinicians the best “risk adjustment” bang that can be generated from the least record-keeping buck. To the extent that KPI Ninja delivers DPC data that is generally transparent in its methods and assumptions, that work will speak for itself. Should KPI Ninja lay on data produced by secreted assumptions and methodology, however, it should expect that its business relationship to DPC will affect the credibility of that data.3

Update: In the fall of 2020, KPI Ninja released the first study that relies on its new risk-information technology. I find it sadly opaque.


For completeness, two parting thoughts.

First, none of the foregoing is intended to deny that membership in a direct primary care practice can significantly reduce usage of emergency departments. Of course it can! The question is what it costs to avoid such visits. It is very rare for United States Presidents to go to emergency rooms, as there’s always a doctor within a few hundred feet. DPC docs are easier to reach at odd hours and are more available for same-day visits. They have fewer patients, and that costs a lot more money per patient.

Second, I have discussed in many posts the many sources of selection bias. Use the search box in the menu to find examples, using words like “selection”, “bias”, or “cherry”. It’s real, and the only truly debatable proposition is whether DPC advocates who deny it are unintentionally fooling themselves or deliberately fooling you.


1 A college freshman IT student could probably develop an age/sex risk-adjustment engine and apply it to age/sex data from paired DPC/FFS cohorts over a weekend. If DPC clinics don’t report age/sex risk comparisons with their FFS counterparts, it’s because they know full well what the results would show.
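
For what it is worth, that weekend project might look something like the sketch below. The factor table is invented for illustration; real tables, such as Milliman’s, are calibrated against cost experience for millions of members.

```python
# A toy age/sex risk-adjustment engine of the sort imagined above.
# The factor table is invented; real tables are calibrated against claims
# experience for millions of members.

AGE_SEX_FACTORS = {  # (age band, sex) -> relative expected cost
    ("0-17", "F"): 0.55, ("0-17", "M"): 0.60,
    ("18-34", "F"): 0.85, ("18-34", "M"): 0.65,
    ("35-49", "F"): 1.00, ("35-49", "M"): 0.90,
    ("50-64", "F"): 1.45, ("50-64", "M"): 1.55,
}

def age_band(age):
    if age < 18: return "0-17"
    if age < 35: return "18-34"
    if age < 50: return "35-49"
    return "50-64"

def cohort_risk_score(members):
    """Average age/sex factor over a list of (age, sex) tuples."""
    return sum(AGE_SEX_FACTORS[(age_band(a), s)] for a, s in members) / len(members)

dpc_cohort = [(29, "M"), (33, "F"), (41, "M"), (26, "F")]   # hypothetical electors
ffs_cohort = [(58, "F"), (47, "M"), (61, "M"), (52, "F")]   # hypothetical stayers

print(cohort_risk_score(dpc_cohort), cohort_risk_score(ffs_cohort))   # ~0.81 vs ~1.34
```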

2 It may even be that Rx methodology favors DPCs.

I expect to hear soon that Rx risk adjustment discriminates in favor of “bad”, overprescribing PCPs by making their patients seem sicker than those of “good” doctors who never overprescribe. But this can become an argument that Rx discriminates against DPC if, and only if, it is also assumed that DPC has fewer “bad” overprescribers than does FFS. There is no clear factual basis for assuming, in essence, that DPC doctors are simply better doctors and prescribers than their FFS counterparts.

If anything, DPC self-promotion suggests that Rx data would be skewed, if at all, in the opposite direction. DPC advocates regularly claim that DPC avoids expensive downstream care by discovering illness earlier, managing chronic conditions better, coaching patient compliance, and lowering the cost of medications. Another recurrent theme among DPC advocates is that FFS doctors rely too heavily on specialists; but, if true, FFS cohort patients would be more likely than DPC patients to have over-prescription “wrung out” of their regimens. In these ways, under a risk-adjustment methodology based on prescription medication data, DPC’s own bragging suggests that any skew is likely to make the FFS cohort appear healthier.

3 The following fairly new statement on KPI Ninja’s website does not bode well, suggesting both secrecy and a predetermination of an answer favorable to DPC.

“We analyze historical claims data from self-insured employers by utilizing our proprietary algorithms to identify cost savings and quality improvement opportunities. Get in touch to learn more about how we can help show your value to employers!”
