The raw downstream claims cost data fed into Milliman’s “isolation” model were imprinted with Union County’s (likely pro-DPC) model of downstream cost-sharing.

Note. The foregoing post was essentially completed and copied to Milliman in late May or early June 2020. In a footnote to an internet essay at the end of June, two members of the Milliman team presented new material addressing the issues raised below. I will address this new material in a new post.

In its recent assessment of the direct primary care model, Milliman took a two-track approach. The first, an employer-ROI approach, compared claims experience for a group of employees who opted to receive primary care from a DPC clinic against that of employees using traditional FFS PCPs; the ROI analysis also assigned to the employer the costs of the differing cost-sharing structures it required of the two groups. This computation led to the conclusion that, in the plan under study, DPC actually increased employer costs.

In a second approach, however, Milliman looked only at a claim cost comparison between the two groups. According to Milliman, “this cost comparison attempts to isolate the impact of the DPC delivery model on the overall level of demand for health care services.” [Italics in original.] Milliman should be proud of thinking of this approach. It seems likely to work well where both the DPC and FFS cohorts face identical cost-sharing incentives for downstream care. But that was not the case in Union County.

Since Union County offered very different cost sharing landscapes to the two groups, the purported isolation is simply not possible.

For example, all FFS employees (and no DPC employees) had $750 HRAs, which present FFS employees with “use it or lose it” choices affecting their decisions on incurring claims. In that situation, FFS employees will be prone to over-consume health services, largely on downstream care, leaving a significant swath of spending devoid of cost-sharing discipline. By Milliman’s calculation (Fig 12, Line I), the average FFS member does not exceed the HRA-protected level of usage in a normal year.

Accordingly, the average FFS member never reaches a second feature of the Union County plan, a 20% coinsurance level. On the other hand, all the downstream medical claims of the average DPC member are subject to the discipline of 20% coinsurance.

Combining the effect of the HRA and the coinsurance, the average DPC group member is likely to under-consume downstream health care services when compared to the average FFS member.

These real decisions by real employees surely affected Union County’s downstream claims experience. A cost sharing plan is not merely a dispensable ornament in an ROI computation. Rather, Union County’s cost-sharing structure presents important “confounders” that Milliman’s isolation model, as originally conceived and executed, simply ignored — with a likely effect of overstating direct primary care’s ability to reduce the downstream care costs of average utilizers.

Because the raw downstream claims cost data fed into the model intended to isolate DPC impact from the vicissitudes of Union County’s cost-sharing scheme had already been imprinted with the vicissitudes of Union County’s cost-sharing scheme to an unknown degree, Milliman’s attempted reinvention of the wheel results in an indeterminately bumpy ride.

To be fully clear, it is certainly possible that, if applied across all members at all levels of downstream care consumption, Union County’s unique and complex cost-sharing landscape for downstream care for non-DPC members, and its simpler, coinsurance-focused scheme for downstream care for DPC members, might combine to have a negligible net effect, or even tilt in favor of DPC. I don’t know. As of the date of its May 2020 report, neither did Milliman.

Until it completes the hard actuarial work needed to quantify the skew of the downstream claims cost data that fueled its isolation model, Milliman should back off its claim to have successfully isolated and determined the health care utilization impact of DPC.

Milliman and Milliman wannabes would be especially well-advised to use caution before attempting to apply the Union County isolation methodology in cases where a DPC option is automatically tethered to an HDHP, even though the HDHP may be available on a voluntary basis to a quasi-control group.

Virtually all DPC practices take pains to sign on members with HDHPs, through employer arrangements or otherwise. Once such price-shoppers are signed on, they can be relied on to incur relatively modest downstream costs. The enrolling provider is, of course, happy to claim any reduced downstream costs as attributable to herself and/or to the DPC model. Borrowed valor. Good example here.

For more about Milliman or Union County, use Search.

Did Ratner Industries uncover the secret of health care cost reductions?

To kick off our open enrollment period two years ago, we at Ratner Industries held a company-wide employee meeting. There we unveiled our brand-new high deductible plan option. To get a rough idea how many employees planned on electing each option, we offered free bags of M&Ms to employees as they left, requesting that those leaning toward the high deductible option each take a bag of peanut M&Ms, while those remaining in our standard plan were asked to take only traditional M&Ms. About equal numbers took bags of each of the two kinds of M&Ms.

We just ran some claims data (fully risk adjusted, of course) for the first full year under these plan choices. We found that those who received peanut M&Ms had 12% lower utilization of services overall compared to those receiving traditional M&Ms.

Have we proven the power of peanuts? Caiaphas, a commenter, has suggested that this is a case of borrowed valor, with the HDHP having done the hard work for which the pro-peanut caucus has “improperly” taken credit. Caiaphas’s anti-peanut bias is evident.

Attn: AEG/WP. Milliman study implies 12.6% downstream care cost reductions for DPC.

The AEG/WP plan still isn’t likely to work. A $95 PMPM fee, increasing at the same rate as other medical expenses, and coupled to a 12.6% downstream reduction, would evaporate all of AEG/WP’s claimed billion-dollar savings.

“Healthcare Innovations in Georgia: Two Recommendations”, the report prepared by the Anderson Economic Group and Wilson Partners (AEG/WP) for the Georgia Public Policy Foundation, made some valuable contributions to deliberations about direct primary care. The AEG/WP team clearly explained their computations and made clear the assumptions underlying their report.

This facilitated the public discussion that the Georgia Public Policy Foundation sought to foster in publishing the report. I have examined those assumptions in many prior posts. A large number of them addressed a focal assumption in the AEG/WP report’s calculations: that DPC participation could reduce the claims cost for downstream care by 15%, a number represented by AEG/WP as a low-end estimate. The sole support offered in the AEG/WP report for this 15% presumption is a statement that “the factor is based on research and case studies prepared by Wilson Partners.”

In addressing the 15% claim, I looked over all evidence then available in its support. I found a lot of corporate brags, and some propaganda from partisans with their own smash-public-spending agenda, but nothing in the way of independent, actuarially sound evidence derived from risk-adjusted data from DPC clinics.

That has changed with a study by actuaries from Milliman in a May 2020 report to the Society of Actuaries concerning a thinly disguised employer (Union County, NC). Although I take issue with some of the lessons others have drawn from that report, the report implies that deploying the direct primary care model can reduce downstream care cost by 12.6%. [Maybe even 13.2%.]

Some cautions.

  • Milliman’s is only one study, and so far one of a kind.
  • The Milliman study confirms nearly all of what I have said about how deeply flawed all the prior studies were, largely because they did not look at issues around risk adjustment.
  • In light of the Milliman study indicating reductions of 12.6%, the AEG/WP suggestion that the 15% figure represents a low-end estimate should be rejected.
  • For downstream care cost reductions, replacing AEG/WP’s 15% with Milliman’s 12.6% takes away one-sixth of AEG/WP’s claimed savings.
  • My other two major criticisms of the AEG/WP report still stand, and one of them is actually reinforced:
    • AEG/WP’s assumption that effective DPC can be purchased for $70 per month even more clearly lowballs the likely cost
      • the clinic Milliman studied was paid $95 per member
      • a $95 per member fee would reduce AEG/WP’s claimed savings in the individual market by nearly half, and would result in no net savings in the employer markets. See calculator here.
    • AEG/WP’s assumption that a $70 fee for direct primary care will remain flat for a decade is still incorrect.
  • A $95 PMPM fee, increasing at the same rate as other medical expenses, and coupled to a 12.6% downstream reduction, will evaporate all of AEG/WP’s claimed billion-dollar savings.
  • The final point.
    • There is good reason to suspect that a direct primary care clinic receiving resources of $95 PMPM will outperform a direct primary care clinic receiving resources of $70 PMPM.
      • Milliman studied a clinic that invested $95 PMPM in direct primary care and attained a presumed 12.6% downstream cost reduction; the increase in the spend for primary care exceeded the downstream savings; the employer had a net loss for using direct primary care.
      • a $70 direct primary care clinic will need a larger patient panel than its $95 competitor; its PCPs will have less time with patients and less availability; it will be less able to deliver same day appointments; so there is strong reason to expect that AEG/WP’s proposed $70 DPC will fall well short of the 12.6% downstream cost reduction level.
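
For those who want to check the one-sixth figure, here is a quick Python sketch of the arithmetic (my reconstruction, not AEG/WP’s or Milliman’s):

```python
# Replacing AEG/WP's assumed 15% downstream reduction with the 12.6%
# implied by Milliman scales any claimed downstream savings proportionally.
aeg_wp_reduction = 0.15      # AEG/WP's "low end" assumption
milliman_reduction = 0.126   # implied by the Milliman report

lost_share = (aeg_wp_reduction - milliman_reduction) / aeg_wp_reduction
print(f"share of claimed downstream savings lost: {lost_share:.2f}")  # 0.16, about 1/6
```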

Milliman’s valuation of DPC health care services at $8 PMPM rests on faulty data.

If I were a direct primary care practitioner, I’d be mildly miffed at Milliman’s reducing what I do to a series of CPT codes. I’d be more worried by Milliman’s team setting the value of my health care services at $8 PMPM.
The $8 PMPM figure Milliman declared as the health care service utilization needed to deliver primary care to DPC patients was based on apparent underreporting, by the studied direct primary care provider, of a single class of data: the quantum of primary patient care actually delivered.
Although this data was of central importance and would have warranted a validation process for that reason alone, Milliman evidently took no steps to validate it. But there were clear warning signs warranting extra attention, including the employer’s public reports — known to Milliman — that DPC patients were visiting the DPC clinic three times a year.
Correcting the $8 PMPM to something reasonable shows that Milliman has vastly overstated net savings associated with DPC.

Note: Update of 6/24/2020.

The resources used by direct primary care go beyond what has been recorded in CPT codes. DPC docs and advocates used to be the first to tell us that. Here’s a DPC industry leader, Erika Bliss, MD, telling us “how DPC helps”.

A large amount of DPC’s success comes from slowing down the pace of work so physicians can get to know our patients. While it might sound simplistic, having enough time to know a patient is fundamental to providing proper medical attention. Every experienced DPC physician understands that walking into the exam room relaxed, looking the patient in the eye, and asking personal questions dramatically improves treatment. [Emphasis supplied.]

Slower-paced and longer visits use real resources. As do all the other elements claimed to generate DPC success, such as same-day appointments, nearly unlimited 24/7 access, and extended care coordination. A principal justification for the subscription payment model is that too much of the effort required for comprehensive primary care escapes capture in the traditional coding and billing model.

The Milliman report found no net cost savings to Union County from the money it spent on its DPC plan, a negative ROI. But some DPC advocates seek salvation in Milliman’s claim that application of its novel, CPT-code based, isolation model to Union County’s claims data turns that lemon into lemonade.

[T]he DPC option was associated with a statistically significant reduction in overall demand for health care services (−12.64%).

Milliman report at page 7.

As noted, that computation marks overall demand reduction across the system, in which lowered downstream care demands are measured as part of all demanded health care services including the health care services demanded by direct primary care itself.

Lemonade by Milliman — initial steps.

Downstream care utilization for both DPC and FFS patients, along with primary care utilization for non-DPC patients, was assumed to be represented by the County’s paid claims. Milliman, in other words, felt it was actuarially sound to use the employer’s negotiated fee schedule as the appropriate yardstick to measure health care services utilization.

But DPC providers are not paid on a claims basis; they are paid on a subscription basis for nearly unlimited 24/7 access, same-day appointments, long, slowed-down visits, extensive care coordination and the like. How then is the “utilization” of direct primary care services to be determined? Is there anything comparable to Union County’s negotiated fee schedule for fee-for-service medical services that might fit the bill for subscription primary care?

How about Union County’s negotiated fee schedule for subscription direct primary care from the DPC? An average of $95 PMPM. Had that number been used in Milliman’s alternative model, I note, direct primary care delivered by the DPC would have been “associated with” a substantial increase in overall demand for health care services. Milliman, having found that Union County’s ability to negotiate fees was sauce for the FFS goose, did not find that Union County’s negotiating skill was an appropriate condiment for the subscription DPC gander.

How about setting the utilization of direct primary services at an approximation of market price for subscriptions to bundled primary care services, using perhaps the reports of DPC fees gathered in a survey that was part of the Milliman report? An average of $61 PMPM. Had that number been used in Milliman’s alternative model, I note, direct primary care delivered by the DPC would still have been “associated with” a modest increase in overall demand for health care services. But, hey, what do markets know? Milliman went a different route.

A cost approach, perhaps? I expect that Paladina, Union County’s DPC vendor, would have declined, if asked, to provide data on the prices it paid for the inputs needed to provide Union County with the contracted direct primary care services. And it could well be that Paladina is as bad a price negotiator as Union County itself.

But these costs can be estimated, and the result would have more general applicability. A very conservative estimate of those costs would be $39 PMPM (based on Union County’s panel size of less than 500, a low PCP compensation package of $175k/yr, and overhead at a low 33% of PCP compensation). Had that number been used in Milliman’s alternative model, I note, direct primary care delivered by the DPC would have been “associated with” a modest decrease in overall demand for health care services of about 5%. Using AAFP reports of average PCP salaries and overhead instead of conservative assumptions would turn that number negative.
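
The $39 PMPM figure can be reproduced from those conservative inputs; a minimal Python sketch of my arithmetic:

```python
# Cost-based estimate of DPC utilization, using the conservative inputs above.
panel_size = 500            # Union County panel size ("less than 500")
pcp_compensation = 175_000  # low PCP compensation package, $/yr
overhead_rate = 0.33        # low overhead, as a share of PCP compensation

annual_practice_cost = pcp_compensation * (1 + overhead_rate)
pmpm = annual_practice_cost / panel_size / 12
print(f"cost-based estimate: ${pmpm:.2f} PMPM")  # about $39
```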

Using an estimate of the actual costs of putting a PCP into a DPC practice as a means of putting a value on the health care services demanded when a PCP is actually put into a DPC practice seems sensible.

But Milliman took a different course.

Breakthrough in Lemonading: the elements of the Milliman method for computing the health services utilization of direct primary care.

  • Assume that utilization of subscription-based holistic, integrative direct primary care can be accurately modeled using the same billing and coding technology used in fee for service medicine.
  • Ignore that a very frequently given, explicit justification for subscription-based direct primary care is that the fee-for-service billing and coding methodology cannot accurately model holistic, integrative direct primary care.
  • Ignore that direct primary care physicians as a group loudly disparage coding as a waste of their valuable time, strongly resist it, and do not use standard industry EHRs that are designed for purposes of payment, relying instead on software streamlined for patient care only.
  • Rely on disbelieving, reluctant DPC physicians, using EHRs ill-equipped for the task, to have accurately coded all services delivered to patients, used those codes to prepare “ghost claims” resembling those used for payment adjudication, and submitted the ghost claims to the employer’s TPA, not to prompt payment, but solely for reporting purposes.
  • Have the TPA apply the FFS fee schedule to the ghost claims.
  • Carefully verify the accuracy of the FFS fee schedule amounts applied to the ghost claims.
  • Do precisely nothing to verify the accuracy of the ghost claims to which the verified FFS fee schedule amounts were applied.
  • Perform no reality check on the resulting estimate of health care services utilization
    • Do not compare the results to those articles on Union County you have consulted, referred to and even quoted in your own study.
    • Do not compare the results to the market prices for direct primary care services revealed in your own study’s market survey.

Anyone see a potential weakness in this methodology?

This methodology resulted in $8 PMPM. That, I note, was the number which, when used in Milliman’s alternative model, showed that direct primary care delivered by the DPC was “associated with” a decrease in overall demand for health care services of 12.6%.

Milliman identifies its methodology as a tidy “apples-to-apples” comparison of FFS primary care services and direct primary care services measured by a common yardstick. But that look comes with the feeling that Milliman has emulated Procrustes, gaining a tidy fit to the iron bed of the fee schedule by cutting off the theoretical underpinnings of the direct primary care model.

DPC practitioners, however, are very much bottom-line people who will endure repudiation of their ideology in Milliman’s study details as long as the ostensible headlines serve up something they might be able to monetize: a supposedly “actuarially sound” demonstration that the direct primary care model saves big bucks.

That demonstration, however, hinges on the $8 PMPM result being somewhere near accurate. But that result is at war with reality.

Milliman’s $8 PMPM result defies known facts and common sense — and does indeed contradict the core values of the DPC model.

Whether for the average patient panel size (~450) reported in Milliman’s survey of DPC practices, or for the specific panel size (~500) for the DPC practice in Milliman’s case study, $8 PMPM ($96 PMPY) works out to less than $50,000 per PCP per year. That’s not credible.
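
The revenue arithmetic is simple enough to verify; a sketch:

```python
# What $8 PMPM implies in annual revenue per PCP, at the panel sizes
# reported in the Milliman study.
pmpm = 8
for panel_size in (450, 500):   # survey average; case-study panel
    annual_per_pcp = pmpm * 12 * panel_size
    print(f"panel of {panel_size}: ${annual_per_pcp:,} per PCP per year")
```

Both panel sizes land well under $50,000 per PCP per year, which is the point.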

That Union County DPC patients see their PCP around three times a year is apparent from the public statements of the employer’s then-director of human resources and his successor, and even from an article on Union County from which the Milliman study’s literature review quoted verbatim. The three visits are said to have lasted at least half an hour, sometimes as long as a full hour, and to have been available on a same-day basis. $96 a year does not pay for that.

Consider also the logical implications of accepting that the $8 PMPM yielded by Milliman’s process accurately reflected actual office visit duration and frequency for the DPC population. That’s roughly one garden-variety visit per year. In that case, what exactly is there to account for the downstream care cost reduction?

Were those reductions in ER visits caused simply by writing “Direct Primary Care” on the clinic door? Were hospital admissions reduced for patients anointed with DPC pixie dust?

What Milliman misses is magic, just not that kind.

It’s the magic of hard, but slowed-down, work by DPC practitioners. It’s their time doing things for which CPT codes may not or, at least, may not yet exist.* It’s relaxed schedules that assure availability for same-day appointments. It’s 24/7 commitments. It’s knowing your patient well enough to ask the personal questions that Dr Bliss mentioned. Collectively this demands more health service resources than are captured by the CPT codes for little more than a single routine PCP visit.

The data set from which Milliman calculated utilization of direct primary care services underreported the patient care given at the clinic.

The only visible path to Milliman’s $8 PMPM figure for health services demand for the delivery of direct primary care is that the direct primary care physicians’ ghost claims were consistently underreported. That’s the kind of outcome that can reasonably be anticipated when disbelieving, reluctant DPC physicians, using EHRs ill-equipped for the task, are expected to accurately code all services delivered to patients, use those codes to prepare “ghost claims” resembling those used for payment adjudication, and submit those ghost claims to the employer’s TPA, not to prompt payment, but solely for reporting purposes.

In fact, Milliman even knew that the coding habits of the DPC practitioners were inconsistent, in that the ghost claims sometimes contained diagnosis codes and sometimes did not. Report at page 56.

Yet, Milliman did nothing to validate the “ghost claims”.

Whatever the justification for Milliman’s reconstructing the utilization of direct primary care health services from CPT codes collected under these circumstances, no meaningful conclusions can be drawn if the raw data used in the reconstruction are incomplete. Milliman does not appear to have investigated whether this key data set was accurate.

As a result of its apparent failure to capture the true resource costs of DPC-covered services rendered by the DPC, Milliman’s determination that the DPC model reduces overall utilization by 12.6% is far too high.

A plausible estimate of the demand for health care services for direct primary care services could be derived from widely accepted estimates of primary care physician compensation and practice overhead. Substituting any estimate of those costs greater than $45 PMPM for the $8 PMPM at which Milliman arrived would bring the calculated OVERALL medical services utilization gap between DPC and FFS well below four percent.

Another plausible estimate of the demand for health care services for direct primary care services is the market price of DPC services. Milliman’s estimate of that number was $61 PMPM. Substituting that market price for the $8 PMPM at which Milliman arrived turns the health care services utilization gap between DPC and FFS in favor of FFS.
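
My arithmetic here is necessarily inferential: the report does not publish the FFS baseline utilization in PMPM terms. Purely as an illustration, a baseline of about $405 PMPM can be backed out from the $39-PMPM/5% computation earlier in this post; on that assumption, the two substitutions play out as follows:

```python
# ASSUMPTION: the baseline FFS overall utilization is inferred, not reported.
# It is backed out from the "$39 PMPM -> ~5% decrease" arithmetic above:
# 0.1264 * B - (39 - 8) = 0.05 * B  =>  B is roughly $405 PMPM.
baseline = (39 - 8) / (0.1264 - 0.05)
gap_at_8 = 0.1264 * baseline          # gap implied by Milliman's $8 PMPM

for substitute in (45, 61):           # cost-based floor; market-price estimate
    new_gap = gap_at_8 - (substitute - 8)
    print(f"${substitute} PMPM -> gap of {new_gap / baseline:+.1%}")
```

At $45 the gap shrinks to roughly 3.5%; at the $61 market price it goes negative, i.e., in favor of FFS.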


* After the period Milliman studied, for example, CMS came up with 99491.

ATTN: Milliman. Even if Union County had not waived the $750 deductible, the County still would have lost money on DPC.

The lead actuary on Milliman’s study of direct primary care has suggested that the employer (Union County, NC, thinly disguised) would have had a positive ROI on its DPC plan if it had not waived the deductible for DPC members. It ain’t so.

Here’s the Milliman figure presumed to support that point.

It is true that removing the $31 figure of Line H would lead to a tabulated result of total plan cost of $347, which would suggest net savings.

The problem is that the $61 figure of Line J of the Milliman report has been too low all along — and by more than $31.

Milliman got the $61 by estimating the plan cost of DPC membership, rather than learning what it actually was. $61 was the result of Milliman applying a 60:40 adult-child split to fee levels drawn from Milliman’s survey: $75 adult and $40 child. But the publicly recorded contract between the DPC provider, Paladina, and Union County set the fees at $125 adult and $50 child, making $95 the correct composite that should have been in Line J, a difference of $34 PMPM missed by Milliman.

Accordingly, even if the $31 cost that fell on the County for waiving the deductible is expunged from the calculation, the total plan costs for DPC would work out to $381 and would still exceed the total plan costs for FFS. The County’s ROI was indeed negative.
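
The Line J correction is easy to reproduce; a short sketch of my arithmetic:

```python
# Reconstructing the Line J correction from the fee schedules cited above.
adult_share, child_share = 0.6, 0.4

survey_composite = adult_share * 75 + child_share * 40     # Milliman's estimate: $61
contract_composite = adult_share * 125 + child_share * 50  # actual Paladina fees: $95

total_without_hra = 347  # Fig 12 total with the $31 of Line H removed
corrected_total = total_without_hra + (contract_composite - survey_composite)
print(survey_composite, contract_composite, corrected_total)  # 61.0 95.0 381.0
```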

I cannot tell you why Milliman used estimated fees of $61 rather than actual fees of $95. But doing so certainly made direct primary care look like a better deal than it is.

Risk adjustment, and more, badly needed for KPI Ninja’s Strada-brag

Amended 6/26/20 3:15AM

The Milliman report’s insistence on the importance of risk adjustment will no doubt see the DPC movement pouring a lot of its old wine into new bottles, and perhaps even creating new wine. In the meantime, the old gang has been demanding attention to some of the old wine still in the old bottle, specifically, the alleged 68% care cost reductions attributed to Strada Healthcare in its work with a plumbing company of just over 100 persons in Nebraska.

Challenge accepted.

KPI Ninja’s study of Strada’s direct primary care option with Burton Plumbing illustrates why so much of the old DPC wine turns to vinegar in the sunlight.


At an extreme, there will be those who anticipate hitting the plan’s mOOP in the coming year — perhaps because of a planned surgery or a long-standing record of having “mOOPed” year-in and year-out due to an expensive chronic condition; these employees will be indifferent to whether they reach the mOOP by deductible or other cost-sharing; for them, moreover, the $32 PMPM in fixed costs needed for the DPC option is pure disincentive. Furthermore, any sicker cohort is more likely to have ongoing relationships with non-Strada PCPs with whom they wish to stay.

An average non-Strada patient apparently has claims costs of about $8000. With a $2000 deductible and, say, 20% coinsurance applied to the rest, that’s an OOP of $3200 and a total employee cost of about $6100; with a $3000 deductible, that’s an OOP of $4000 and a total cost of $7250. Those who expect claims experience of $8000 are unlikely to have picked the DPC/$3K plan. Why pay $1100 more and have fewer PCPs from which to choose?

But what about an employee who anticipated claims only a quarter that size, $2000? With the $2000 deductible that would come to an OOP of $2000 and a total cost of $4860. With the $3000 deductible that would also come to an OOP of $2000 and a total cost of $5250. For these healthier employees, the difference between plans is now less than $400. Why not pay $400 more if, for some reason, you hit it off with the D-PCP when Strada made its enrollment pitch?
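
The OOP arithmetic above can be sketched with a small helper. This is my own simplification: it ignores the mOOP cap and copays, and the “total cost” figures in these paragraphs add plan fixed costs (premium contributions plus, for the DPC plan, the $32 PMPM fee) that I have not itemized:

```python
def out_of_pocket(claims, deductible, coinsurance=0.20):
    """Member OOP: the deductible, plus coinsurance on claims above it.

    Simplified: ignores the mOOP cap and any copays.
    """
    return min(claims, deductible) + coinsurance * max(claims - deductible, 0)

# Sicker employee, ~$8000 in expected claims
print(out_of_pocket(8000, 2000))  # 3200.0 (standard plan)
print(out_of_pocket(8000, 3000))  # 4000.0 (DPC/$3K plan)

# Healthier employee, ~$2000 in expected claims: OOP is $2000 either way,
# so only the plans' fixed costs separate them.
print(out_of_pocket(2000, 2000), out_of_pocket(2000, 3000))
```

The lesson is visible in the numbers: the sicker you are, the more the higher-deductible DPC pairing costs you.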

The sicker a Burton employee was, the harder this paired-plan structure worked to push her away. It’s a fine cherry-picking machine.

Strada’s analyst, KPI Ninja, recently acknowledged Milliman’s May 2020 report as a breakthrough in the application of risk adjustment to DPC. In doing that, KPI Ninja tacitly confessed its own failure to work out how to reflect risk in assessing DPC in its string of older reports.

To date, as far as I can tell, not one of KPI Ninja’s published case studies has used risk-adjusted data. If risk adjustment were something that Milliman invented barely yesterday, it might be understandable that KPI Ninja’s “data-analytics” team had never used it. But CMS has been doing risk adjustment since the year 2000. It’s significantly older than Direct Primary Care.

KPI Ninja should take this opportunity to revisit its Strada-Burton study, and apply risk adjustment to the results. Same for its Palmetto study and for its recently publicized, but risk-adjustment-free study, for DirectAccessMD. Or this one about Nextera.

Notice that, precisely because they have a higher-deductible plan than their FFS counterparts, the Strada-Burton DPC patients faced greater cost-sharing discipline when seeking downstream care. How much of the savings claimed in the Strada report owes to the direct primary care model, and how much to a plan design that forced greater shopping incentives on DPC members?

It’s devilishly clever to start by picking the low-risk cherries and then use the leveraged benefit structure to make the picked cherries generate downstream cost savings.

The conjoined delivery of Strada DPC and the enhanced HDHP turns the enhanced HDHP into a “confounder” which, unless resolved, makes it virtually certain that even a risk-adjusted estimate of DPC’s effect on utilization will still be overly favorable to Strada DPC.

I have no doubt that risk adjustment and resolution of the confounding variable will shred Strada’s cost reduction claims. But, of course, if Strada is confident that it saved Burton money, they can bring KPI Ninja back for re-examination. It should be fun watching KPI Ninja learn on the job.

I’m not sure it would be fair for KPI Ninja to ask Strada to pay for this work, however. KPI Ninja’s website makes plain that its basic offering is data analytics that make DPC clinics look good. Strada may not like the result of a data analytic approach that replaces its current, attractive “data-patina” with mere accuracy.

I’ll skip explaining why the tiny sample size of the Strada-Burton study makes it of doubtful validity. Strada will see to that itself, with vigor, the moment it hears an employer request an actuarially sound version of its Burton study.

Special bonus segment. Burton had a bit over 100 employees in the study year, and a large fraction were not even in the DPC. I’m stumped that Burton had a one-year hospital admission rate of 2.09 per thousand. If Strada/Burton had a single hospital admission in the study year, Strada/Burton would have had to have had 478 covered lives to reach a rate as low as 2.09. See this spreadsheet. If even one of 200 covered lives had been admitted to the hospital, the inpatient hospitalization rate would have been 5.00.
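
The spreadsheet arithmetic behind those figures is one line; a sketch:

```python
def covered_lives_for_rate(admissions, rate_per_thousand):
    """Covered lives needed to produce a given admission rate per 1,000."""
    return admissions * 1000 / rate_per_thousand

print(covered_lives_for_rate(1, 2.09))  # ~478 lives needed for a 2.09 rate
print(1 * 1000 / 200)                   # 5.0: the rate if 1 of 200 lives is admitted
```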

The use of the 2.09 figure suggests that the hospital admission rate appearing in the whitepaper was simply reported by Strada to the KPI Ninja analyst. A good guess is that it was a hospitalization rate Strada determined for all of its patients. Often, DPC practices have a large number of uninsured patients. And uninsured patients have low hospitalization rates for a fairly obvious reason.

For Qliance, a plausible net savings is 6.8%

There are three main steps to get from a 19.6% savings claim by Qliance to a plausible number: (1) examining the validity of Qliance’s claim that it collected $251 more per employee than the employers were spending for fee-for-service primary care; (2) including the drug costs that Qliance chose to omit from the data set; and (3) borrowing a generic DPC risk adjustment per Milliman, which brings the number down to 6.8%. Still, I probably wouldn’t bet that DPC can reduce net costs.

STEP ONE — Address the credibility of Qliance’s core claim

In early 2015, Qliance issued a press release that included a table of internal data, a package purporting to show that engagement of Qliance as a direct primary care provider for a subgroup of the employers’ employees resulted in total claims costs “19.6 percent less than the total claims” of those employees of the same employers who obtained primary care through traditional fee-for-service primary care practices. In dollar terms, the reported savings was $679 per person per year. No attempt was made to examine the degree to which the apparent savings might be due to differences in medical risk between the two populations. Some ambiguous wording in the text of the release was clarified by the table itself, which made clear that the 19.6% savings was intended to be net of Qliance’s monthly direct primary care fee. And a footnote to the table also mentioned that the claims costs analyzed included all claims data except for prescription drug claims.

Here’s the table as presented by Qliance.

The press release and table did not mention the amount of Qliance’s monthly direct primary care fees. These fees do appear, however, in contemporaneous publications such as this article about the Qliance clinic at Expedia. Qliance’s fees to employers were age-dependent, ranging from $49 to $89 per month. Assuming those 65 and older have a top bracket all to themselves, and at least roughly linear age-based pricing for the remaining employees, $64 per month ($768 per year) corresponds to a mid-point and a reasonable estimate of Qliance’s average per employee receipts.

Qliance’s table indicates that the employers’ annual primary care outlay for non-Qliance patients was $251 less than Qliance’s annual fee. That would mean these employers were paying $547 per year for primary care per employee.

Qliance’s table equates $679 and 19.6% of claims costs, excluding prescription drug costs. Dividing $679 by 19.6% yields $3464 as the total claim costs, excluding the cost of prescription drugs, for non-Qliance patients. The Health Care Cost Institute indicates that in 2014 the average annual prescription drug costs in the West Region where Qliance operates were $684 per person. Adding that amount to their $3464 of other claims costs, brings the total annual employer cost of care for non-Qliance member employees to $4148.
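The back-of-the-envelope version of this step, using only the figures above, is:

```python
# Qliance's table equates a $679 per-person savings with 19.6% of
# (drug-excluded) claims costs, so total non-drug claims follow by division.
savings_dollars = 679
savings_pct = 0.196

non_drug_claims = savings_dollars / savings_pct
print(round(non_drug_claims))  # 3464

# Add HCCI's 2014 West Region average annual prescription drug spend.
rx_cost = 684
total_claims = non_drug_claims + rx_cost
print(round(total_claims))  # 4148
```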

For non-Qliance employees, then, the $547 primary care spend corresponds to over 11.3% of total health care spend. This is a remarkable number. The American Academy of Family Physicians expresses horror when it tells us that primary care spending falls in the 5-8% range; a recent outgoing AAFP president took great pride in his role in two state initiatives that pulled the primary care spending percentage up to 12-13%. See also Family Medicine for America’s Health, an alliance of the American Academy of Family Physicians and seven other family medicine leadership organizations.

Presumably, we are to believe that, even though the non-Qliance employees were already approaching the pearly gates of primary care heaven with 11.3% invested, Qliance swooped in and brought them a further 19.6% cost reduction.

Committing $251 more dollars to primary care while netting down $679 on total spend would mean that primary care for the Qliance patients reached 22% of total spend, a fifty percent increase above the 14% seen in European countries thought to be top performers. A level of primary care spending unknown anywhere other than in Qliance clinics!

All this strongly suggests that Qliance’s math or method is just wrong. But Qliance has not disclosed how the calculation was done. Indeed, as already noted, the Qliance news release proudly claiming large savings did not even disclose the monthly fees being paid by employers, an amount that is central to that calculation.

The Qliance table also presents some puzzling details about downstream care. Qliance patients are noted as having 14% fewer ER visits than their FFS counterparts. But the next cell in the same table reports that the average cost of ER claims for Qliance patients was higher, by $5 per annum, than the average cost of ER claims for non-Qliance patients. On a per-visit basis, then, an average Qliance patient incurred ER costs that were somewhat more than 14% higher than those of non-Qliance patients.

It is certainly plausible that Qliance patients visiting ERs might present a somewhat different case mix than their counterparts. But Qliance patients having greater average ER costs than their fee for service counterparts stands in sharp contrast to one of the most stressed talking points advanced by advocates for direct primary care.

The Direct Primary Care Coalition is a lobbying organization for direct primary care. Its current chairman was one of the founders of Qliance. In its advocacy to Congress and others, the Coalition often relies on the 2015 Qliance press release. The DPC Coalition has addressed the apparent anomaly regarding ER costs in an interesting way. In a letter to members of the Senate Committee on Finance, the DPC Coalition produced a modified version of the 2015 table that solved the apparent problem — by simply deleting the data on ER visits!

Letter, DPC Coalition 1/26/2016

Despite removal of the ER visit claims data, the Coalition version of the table continues to represent that it contains all-claims data, and has at least one error of arithmetic.

In the wake of Qliance’s nondisclosure of details and subsequent closure of operations through bankruptcy, I have done a computation based on assumptions which I believe would have been made in the course of due diligence by a potential investor from whom Qliance sought capital. The assumptions are:

  • that Qliance received, on average, the mid-point fee of $64 per member per month;
  • that $684 in prescription drug costs, being an average annual expenditure in the US West Region, should be included in total health care spend;
  • that non-Qliance patients incurred primary care claims at a rate of 6.7% of total health care spend, which corresponds to the mid-point of AAFP estimates.

Computed in this way, the non-risk adjusted percentage net savings for Qliance patients is 10.7%.

Here is a link to the computation in a downloadable spreadsheet showing all cell formulas. That spreadsheet is imaged here:

Google Spreadsheet

If there’s a better way to make this computation, or if I’ve totally blown it, let me know in the comments section.

The reader is also invited to visit the spreadsheet, get it onto their own device, and run their own variations. There is a table that notes how the “$251 assumption” and “19.6% assumption” combine to fix the relationship between putative Qliance fees and corresponding primary care spend percentages for non-Qliance employees. If, for example, you were to assume that the average Qliance fee was $49 (despite the report that $49 was the minimum fee), the implicit non-Qliance primary care spend percentage would be 8.1% (a number actually over the top of the range that the AAFP reported as typical of the US) and the implicit Qliance primary care spend would be 16.9% (still well beyond the AAFP’s wildest dreams). In that scenario, the final figure, after risk adjustment and accounting for prescription meds, would still be less than half of Qliance’s initial claim of 19.6%.
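For readers who prefer not to open the spreadsheet, the $49 scenario can be reproduced from the figures above. The reading assumed here is my reconstruction, not Qliance’s disclosure: the non-Qliance primary care outlay equals the annual Qliance fee less $251, measured against the roughly $4,148 all-in total, while the Qliance-side primary care outlay is the fee itself, measured against a total that is $679 lower.

```python
annual_fee = 49 * 12                # $588, assuming the minimum fee

non_q_total = 679 / 0.196 + 684     # ~ $4,148 including drug costs
q_total = non_q_total - 679         # ~ $3,469 after the claimed savings

non_q_primary_pct = (annual_fee - 251) / non_q_total * 100
q_primary_pct = annual_fee / q_total * 100

print(round(non_q_primary_pct, 1))  # 8.1
print(round(q_primary_pct, 1))      # 16.9
```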

STEP TWO — Adjust computation to include prescription drug costs

The table above fills the gap created by Qliance’s exclusion of prescription drug claims. I do not know the reason for this exclusion. We do know that inclusion of prescription drug claims in “total claims” would have lowered the amount and/or percentage of savings that Qliance reported.

Iora, a direct care practice leader whose work has been featured right alongside Qliance in the legislative advocacy of the Direct Primary Care Coalition, has reported data that certainly make it seem that substantial reductions in other downstream spending can, in effect, be purchased by large increases in prescription usage. In one of its clinics, a 40% increase in prescription refills seems to have largely countered huge drops in hospital costs, including ER visits, so that the true total spending reduction remained modest.

If Qliance was like Iora in this regard, the inclusion of prescription drug expenses would have significantly reduced what was literally the bottom line of its press-release table.

We don’t know whether Qliance was like Iora.

There appears to be only one careful study explicitly addressing the usage of prescription drugs by DPC patients relative to FFS patients. It’s not Iora’s.

Milliman’s case study for the Society of Actuaries carefully compiled DPC and FFS patient utilization data in all areas of medical care services for an employer contract similar to Qliance’s contracts. For prescription drugs, they measured a 1% greater utilization by the DPC patients.

Applying the Milliman study number to the Qliance work decreases the estimated total annual savings from using DPC by $7. But the largest impact comes not from deducting $7 from the net savings. Figuring in drug costs increases the denominator of the percentage savings calculation by $684, so that the overall cost savings fall to 16.2% even without any adjustment other than including drug costs.
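A minimal sketch of that adjustment, assuming the $7 figure is roughly 1% of the $684 regional drug spend:

```python
non_drug_claims = 679 / 0.196   # ~ $3,464 implied by Qliance's table
rx_cost = 684                   # HCCI 2014 West Region average
rx_offset = 7                   # ~1% greater DPC drug utilization

adjusted_savings = 679 - rx_offset
adjusted_total = non_drug_claims + rx_cost
savings_pct = adjusted_savings / adjusted_total * 100
print(round(savings_pct, 1))  # 16.2
```

The $7 deduction barely moves the needle; the larger denominator does nearly all the work.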

STEP THREE — Risk adjustment

We urge readers to use caution when reviewing analyses of DPC outcomes that do not explicitly account for differences in population demographics and health status.

Milliman study for the Society of Actuaries

The Milliman study for the Society of Actuaries stands alone (as of June 2020) as the only examination by independent experts of the effect on health care costs of demographic and health risk differences between employees who elect direct primary care and those who elect traditional primary care. For Milliman’s employer, raw costs needed to be adjusted downward by 36% to account for health factors favorable to the employee group that elected direct primary care.

Should we assume a general similarity between the employees studied by Milliman and the employees of the employers served by Qliance with respect to the health risk characteristics of the sub-population that elected direct primary care? The Milliman study authors note that when an employer offers a direct primary care option — with its exclusive PCP relationship — employees with lower health care needs and fewer anticipated PCP encounters may, ceteris paribus, be more likely to elect DPC. Milliman connects this with the fact that, historically, narrow-access plans like HMOs see favorable selection effects relative to PPOs.

The heroic equating of the employee groups from the Qliance and Milliman studies is probably the best available way to address risk issues in the Qliance data. Qliance has been out of business since 2017; it left the field without giving us risk-adjusted data. Milliman is, at least for now, the best we have to try to fill that gap.

Applying the 36% adjustment from Milliman results in a plausible estimate that Qliance adoption was associated with a total cost savings (inclusive of prescription drugs) of 6.8% of all employer costs.*
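The headline number falls out of one multiplication, applying Milliman’s 36% adjustment to the 10.7% estimate from steps one and two:

```python
unadjusted_savings = 0.107    # steps one and two, drug costs included
milliman_adjustment = 0.36    # downward risk adjustment per Milliman

risk_adjusted = unadjusted_savings * (1 - milliman_adjustment)
print(round(risk_adjusted * 100, 1))  # 6.8
```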

An alternative way to fill the gap draws upon work that is both less sophisticated and less impartial in its analysis: the Nextera/DigitalGlobe white paper addressed in this prior post. This was also a much smaller study than Milliman’s and covered a far shorter period of time. It points to a level of risk adjustment within striking range of Milliman’s. Nextera obtained employee claims data for a five-month period prior to the availability of a DPC option. The DPC cohort had pre-option claims costs that were more than 30% lower than those of the FFS cohort. Applying an alternative Nextera-based risk adjustment to the Qliance data would have resulted in an estimate that Qliance adoption was associated with a 7.4% overall cost reduction. Because of its provider-independent sourcing, its development by professional actuaries, its larger size, and its longer duration, I choose to rely on the Milliman adjustment for my headline.
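One way to reproduce the 7.4% figure (my reconstruction: the Nextera pre-option claims gap applied multiplicatively to the same 10.7% estimate):

```python
dpc_pre_pmpm = 283.11    # Nextera cohort, pre-option claims PMPM
ffs_pre_pmpm = 408.31    # non-Nextera cohort, pre-option claims PMPM

nextera_adjustment = 1 - dpc_pre_pmpm / ffs_pre_pmpm
print(round(nextera_adjustment * 100, 1))  # 30.7

adjusted = 0.107 * (1 - nextera_adjustment)
print(round(adjusted * 100, 1))  # 7.4
```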

Addendum: July 12, 2020. Benefit plan structure can have a substantial impact on costs and utilization. Although, for the foregoing analysis, I assumed that the employers involved offered employees in the two Qliance and non-Qliance cohorts effectively equivalent cost-sharing obligations, an additional layer of selection bias may well be present if these employers offered significantly different benefit structures to the different cohorts.

In all, I advise against staking anything of value on any claim that Qliance produced net cost reductions. That issue was, in effect, crowd-sourced in early 2017 when Qliance desperately searched for fresh capital before declaring bankruptcy in late Spring.

*This would imply a downstream care cost reduction of about 14%.

DPC is uniquely able to telemed: a meme that suffered an early death.

An update to this post.

Larry A Green Center / Primary Care Collaborative’s Covid-19 primary care survey, May 8-11, 2020:

In less than two months, clinicians have transformed primary care, the largest health care platform in the nation, with 85% now making significant use of virtual health through video-based and telephone-based care.

Larry A Green Center and Primary Care Collaborative.

These words spelled the end of the meme that direct primary care was uniquely able to telemed. “DPC-Telly”, as the meme was known to her close friends, was briefly survived by her near constant companion, “Covid-19 means FFS is failing financially, but DPC is fine”. Further details here.

The Nextera/DigitalGlobe study design made any conclusion on the downstream effect of subscription primary care impossible.

The study indiscriminately mixed subscription patients with pay-per-visit patients. Selection bias was self-evident; the study period was brief; and the study cohort was tiny. Still, the study suggests that choosing Nextera and its doctors was associated with lower costs; but the study’s core defect prevents the drawing of conclusions about subscription primary care.

UPDATED JUNE 2020. Here’s a link to an earlier version.

The Nextera/DigitalGlobe “whitepaper” on Nextera Healthcare’s “direct primary care” arrangement for 205 members of a Colorado employer’s health plan is such a landmark that, in his most recent book, an acknowledged thought leader of the DPC community footnotes it twice on the same page, in two consecutive sentences, once as the work of a large DPC provider and a second time, for contrast, as the work of a small DPC provider.

The defining characteristic of direct primary care is that it entails a fixed periodic fee for primary care services, as opposed to fee-for-service or per-visit charges. DPC practitioners, their leadership organizations, and their lobbyists have made a broad, aggressive effort to have that definition inscribed into law at the federal level and in every state.

So why, then, does the Nextera whitepaper rely on the downstream claims costs of a group of 205 Nextera members, many of whom Nextera allowed to pay a flat fee per visit rather than compensating Nextera solely through a fixed monthly subscription fee?

This “concession” by Nextera preserved HSA tax advantages for those members. This worked tax-wise because creating a significant marginal cost for each visit in this way actually brings this form of non-subscription practice within the intended medical economic goals for which HDHP/HSA plans were created— in precisely the way that a subscription plan, which puts a zero marginal cost on each visit, cannot.

The core idea is that having more immediate “skin in the game” prompts patients to become better shoppers for health care services, and lowers patient costs. Those who pay subscription fees and those who pay per-visit fees obviously face very different incentive structures at the primary care level. It would certainly have been interesting to see whether Nextera members who paid under the two different models differed in their primary care utilization.

More importantly, however, precisely because the fee-per-visit cohort all had HDHP/HSAs, they had enhanced incentives to control their consumption of downstream care compared to those in the subscription plan, who did not have HDHP/HSA accounts. The per-visit cohort can, therefore, reasonably be assumed to have been responsible for greater downstream cost reduction per member than their subscription counterparts.

Had the whitepaper broken the plan participants into three groups — non-Nextera, Nextera-subscriber, Nextera per-visit — there is good reason to believe that the subscription model would have come out a loser.

Instead, Nextera analyzed only two groups, with all Nextera members bunched together. And, precisely because the group mixed significant numbers of both fixed-fee members and fee-for-service members, it is logically impossible to say from the given data whether the subscription-based Nextera members experienced downstream cost reductions that were greater than, the same as, or less than those of the per-visit-based Nextera members. So, while the study does suggest that Nextera clinics are associated with downstream care savings, it could not demonstrate that even a penny of the observed benefit was associated with the subscription direct primary care model.

Here are the core data from the Nextera report.

Nextera Healthcare + DigitalGlobe: A Case Study

205 members joined Nextera; they had prior claim costs PMPM of $283.11; the others had prior claim costs PMPM of $408.31. This is a huge selection effect. The group that selected Nextera had pre-Nextera claims that were over 30% lower than those declining Nextera.

Rather than award itself credit for that evident selection bias, Nextera more reasonably relied on a form of “difference in differences” (DiD) analysis. It credited itself, instead, for Nextera patients’ claims costs declining during seven months of Nextera enrollment by a larger percentage (25.4%) than the claim costs of their non-Nextera peers (5.0%), which works out to a difference in differences of 20.4%.
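The DiD arithmetic, as reported, is a single subtraction of percentage declines:

```python
nextera_decline_pct = 25.4       # claims decline for Nextera members
non_nextera_decline_pct = 5.0    # claims decline for non-members

did_pct = nextera_decline_pct - non_nextera_decline_pct
print(round(did_pct, 1))  # 20.4
```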

Again, the data from mixed subscription and per-visit members can only show the beneficial effect of choosing Nextera, rather than declining Nextera. The observed difference appears to be a nice feather in Nextera’s cap; but the data presented are necessarily silent on whether that feather can be associated with a subscription model of care.

It cannot be presumed that Nextera’s success could have been replicated on other DigitalGlobe members.

In the time since the report, Nextera has actively claimed that its DigitalGlobe experience demonstrates that it can reduce claim costs by 25%. Nextera should certainly amend that number to reflect the smaller difference in differences that its report actually shows (20%). But even that substituted claim of 20% cost reduction would require significant qualification before extension to other populations.

Even before they were Nextera members, those who eventually enrolled seem to have had remarkably low claims costs. Difference in differences analysis relies on a “parallel trend assumption“. The Nextera population may be so different from those who declined Nextera that the trend observed for the Nextera cohort cannot be assumed even for the non-Nextera cohort from DigitalGlobe, let alone for a large, unselected population like the entire insured population of Georgia.

Consider, for example, an important pair of clues from the Nextera report itself: first, Nextera noted that signups were lower than expected, in part because many employees showed “hesitancy to move away from an existing physicians they were actively engaged with”; second, “[a] surprising number of participants did not have a primary care doctor at the time the DPC program was introduced”.

As further noted in the report, the latter group “began to receive the health-related care and attention they had avoided up until then.”

A glance at Medicare reminds us that routine screening at the primary care level is uniquely cost-effective for beneficiaries who may have previously avoided costly health care. Medicare’s failure to cover regular routine physical examinations is notorious. But there is one reasonably complete physical examination that Medicare does cover: the “Welcome to Medicare” exam.

First attention to a population of “primary care naives” is likely a way to pick the lowest-hanging fruit available to primary care. Far more can be harvested from a population enriched with people receiving attention for the first time than from a group enriched with those previously engaged with a PCP.

Accordingly, a “parallel trend” cannot be assumed; and the 20% difference in differences savings in the Nextera group cannot be directly extended to the non-Nextera group.

Relatedly, the comparative pre-Nextera claim cost figure may reflect that the Nextera population had a disproportionately high percentage of children, of whom a large number will be “primary care naive” and similarly present a one-time only opportunity for significant returns to initial preventative measures. But a disproportionately high number of children in the Nextera group means a diminished number of children in the remainder — and two groups that could not be presumed to respond identically to Nextera’s particular brand of medicine.

A similar factor might have arisen from the unusual way in which Nextera recruited its enrollees. A group of DigitalGlobe employees with a prior relationship with some Nextera physicians first brought Nextera to DigitalGlobe’s attention and then apparently became part of the enrollee recruiting team. Because of their personalized relationships with particular co-workers and their families, the co-employee recruiters would have been able to identify good matches between the needs of specific potential enrollees and the capabilities of specific Nextera physicians. But this patient panel engineering would result in a population of non-Nextera enrollees that was inherently less amenable to “Nexterity”. Again, it simply cannot be assumed that the improvement seen with the one group would hold for any other.

Perhaps most importantly, let us revisit the Nextera report’s own suggestion that the difference in populations may have reflected “hesitancy to move away from an existing physicians they were actively engaged with”. High claims seem somewhat likely to match active engagement rooted in friendship resulting from frequent proximity. But consider, then, that the frequent proximity itself is likely to be the result of “sticky” chronic diseases that have bound doctor and patient through years of careful management. It seems likely that the same people who stick with their doctors are more likely to have a significantly different and less tractable set of medical conditions than those who have jumped to DPC.

Absent probing data on whether different types of health conditions prevail in the Nextera and non-Nextera populations, it is difficult to draw any firm conclusion about what Nextera might have been able to accomplish with the non-Nextera population.

These kinds of possibilities should be accounted for in any attempt to use the Nextera results to predict downstream cost reduction outcomes for a general population.

Perhaps the low pre-Nextera claims costs of the group that later elected Nextera reflect nothing more than the Nextera group having a high proportion of price-savvy HDHP/HSA members. If that is the case, Nextera can fairly take credit for making the savvy even savvier. But it cannot be presumed that Nextera could do as well working with a less savvy group or with those who do not have HDHPs.

Whether or not Nextera inadvertently recruited a study population that made Nextera look good, that study population was tiny.

Another basis for caution before taking Nextera’s 20% claim into any broader context is the limited amount of total experience reflected in the Nextera data — seven months of experience for 205 Nextera patients. In fact, Nextera’s own report explains that before turning to Nextera, DigitalGlobe approached several larger direct primary care companies (almost certainly including Qliance and Paladina Health); these larger companies declined to participate in the proposed study, perhaps because it was too short and too small. The recent Milliman report was based on tenfold greater claims experience – and even then it had too few hospitalizations for statistical significance.

Total claims for the short period of the Nextera experiment were barely over $300,000; the 20% difference in differences in claimed savings comes to about $60,000. That’s a pittance.
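The scale of the claimed savings, taking the report’s own round numbers:

```python
total_claims = 300_000    # total claims over the seven-month study
did_pct = 20              # the 20.4% DiD, rounded to the nearest point

claimed_savings = total_claims * did_pct // 100
print(claimed_savings)  # 60000
```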

Consider that two or three members may have elected to eschew Nextera in May 2015 because, no matter how many primary care visits they might have been anticipating in the coming months, they knew they would hit their yearly out-of-pocket maximum and, therefore, not be any further out of pocket. Maybe one was planning a June maternity stay; another, a June scheduled knee replacement. A third, perhaps, was in hospital because of an automobile accident at the time of election. Did Nextera-abstention of these kinds of cases contribute importantly to the pre-Nextera claims cost differentials?

The matter is raised here primarily to suggest the fragility of a purported post-Nextera savings of a mere $60,000 over seven months. An eighth-month auto accident, hip replacement, or Cesarean birth could evaporate a huge share of such savings in a single day. The Nextera experience is too small to be reliable.

Nextera has yet to augment the study numbers or duration.

Nextera has not chosen to publish any comparably detailed study of downstream claims reduction experience more recent than the 2015 data — whether for DigitalGlobe or any other group of Nextera patients. That’s a long time.

Nextera now has over one-hundred doctors, a presence in eight different states, and patient numbers in the tens of thousands. Shouldn’t there be newer, more complete, and more revealing data?


Because of its short duration and limited number of participants, because it has not been carried forward in time, because of the sharp and unexplained pre-Nextera claims rate differences between the Nextera group and the non-Nextera group, and because its reported cost reductions do not distinguish between subscription members and per-visit members, the Nextera study cannot be relied on as giving a reasonable account of the overall effectiveness of subscription direct primary care in reducing care costs.

Nextera’s case study also had errors of arithmetic, like this one:

The reduction rounds off to 5.0%, the number I used in the larger table above.

Iora’s Las Vegas experience is an inapt model for DPC, and shows no real cost reduction.

While the DPC Coalition features an Iora clinic in Las Vegas as a data model of the joys of direct primary care, it is simply not representative of a general population. That clinic focused on a very high-need population, every member chronically ill. We are looking at people with $11,000 claim levels at 2014 prices; they are “superutilizers” in what is called a “hotspotters” program.

A close look at the very Iora data that the DPC Coalition presented to the United States Senate makes clear how inappropriate it is to use outlier populations.

The green bars at left were drawn from what Iora and the DPC Coalition regard as “well-matched controls with equivalently sick populations”. Like the blue bars, which represent the Iora study group, the green bars are clearly far higher than the red bars that represent the general local population (Las Vegas). Since the blue bars are also a bit higher than the green bars, however, the question arises: in what sense were these controls “well-matched controls with equivalently sick populations”? To be meaningful, I suggest, these controls would have had to be matched by some assessment of chronic conditions and risk scores.

But why, if the groups were closely matched, is the pre-treatment bar at $935 for the study group but only $806 for the control group? If the controls were indeed well matched, then there must have been some sort of selection effect. I suggest that this was the result of members being recruited into the Iora program when they presented with acute exacerbations of their underlying chronic conditions.

A treatment year passed. A mix of medical inflation and noise brought the costs for the control group up to $861. But the total costs of the study group at the end of the treatment year, including the fees paid to Iora, were $889 and remained higher than those of the “well-matched controls”.

And why did the total costs of the treatment group drop by 5%? Because they got better — regressing to the mean in their need for services. Indeed, if the controls were well matched by risk scores at the start of the year, this is entirely predictable.

Is this wild speculation on my part? Hardly. A NEJM research piece noted regression to the mean of matched controls when a study of one of the pioneering superutilizer programs (Camden) showed no difference in hospitalization rates during the original study period between plan members and non-members. Sure, there might be some other explanation for the exact levels and changes seen in the bar graphs.

But what remains is this: During the treatment period, the total costs of the Iora study group, including the fees paid to Iora, were 3% more than the total costs of what Iora claimed were well matched controls.
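The bar-graph figures support that conclusion directly (values read from the chart, so approximate):

```python
study_pre, study_post = 935, 889       # Iora group, including Iora fees
control_pre, control_post = 806, 861   # the "well-matched" controls

# The study group's roughly 5% drop over the treatment year.
print(round((study_pre - study_post) / study_pre * 100, 1))  # 4.9

# Yet the study group still ended roughly 3% above the controls.
print(round((study_post / control_post - 1) * 100, 1))       # 3.3
```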

If “well matched controls” means anything sensible, the Iora study does not demonstrate a net cost reduction.

Back in the 90s, I did some federal legislative advocacy on health policy. I once asked Representative Sander Levin (D -Michigan) to present an amendment that would swing about $15 Billion in the direction of low income citizens.

“I’d love to do that, Mr. Ratner. But you’ll have to show me a $15 Billion offset. If you can find it, get it to me this week.”

I actually found the money. The CBO had missed a $15 billion item in scoring the bill. You can bet I busted my ass (and confirmed my understanding with my betters like Stan Dorn) to make sure that I was not casually handing a load of careless bullshit to a federal legislator.

I guess times have changed.