
California Denti-Cal records show Anaheim's payment intensity rose after the March 2015 ownership transition while every peer office fell or held flat. The divergence survives difference-in-differences, synthetic control, and every leave-one-out subset.

Against the Trend: Denti-Cal Payments at Children's Dental Group After 2015

Article by John Tribbia

Analysis covers January 2013 – September 2016 · Source: Data obtained under the California Public Records Act · Procedure-level extracts and internal dashboards not used · All data in this article is drawn from shareable state records only.
The Finding

Children's Dental Group ran pediatric Denti-Cal clinics across California. In March 2015, the chain changed hands. A legal case can feel abstract; a billing pattern that moves against every peer office is concrete.

California paid $62.23 million across ten offices between 2013 and 2016. After March 2015, every office except Anaheim flat-lined or fell; the system average dropped 3.6%. Anaheim rose $14.49 per visit day above peer offices in a two-way fixed-effects DiD (t = 4.28), growing to +$21.99 under trend adjustment. The synthetic control puts the post-period gap at $18.67. No single control office drives the result. At provider level, three dentists show elevated post-2015 payments in baseline DiD, but pretrend diagnostics split them: two show acceleration that predates the March 2015 cutoff and one does not. That divergence in internal validity, not raw effect size, determines how much weight each signal can carry.

$62.23M
Total Denti-Cal payments in office records (Jan 2013-Sep 2016)
+$14.49
Anaheim DiD effect per visit day after March 2015
+$18.67
Post-period Anaheim gap vs synthetic control
[Timeline: Jan 2013 – Mar 2015 transition – Sep 2016 · baseline period vs. divergence window tested]

This article makes two claims. First, the office-level Anaheim signal is strong and remains strong under harder specifications. Second, the provider-level Diaz signal is important but less clean causally once pre-existing trend differences are modeled.

How the main results hold up: Anaheim's DiD estimate (+$14.49/visit-day, t = 4.28) survives synthetic control (+$18.67 gap), every leave-one-out subset ($12.16–$15.94), and trend adjustment (+$21.99). At provider level, Pham is the cleanest signal—flat pretrend, negative falsification, trend-adjusted effect that strengthens (+$3,521, t = 4.63). Diaz and Tarnavsky show larger raw effects but pre-period acceleration that attenuates or reverses under stricter specification.

Case Context and Research Question

Children's Dental Group (CDG) operates pediatric dental clinics across California, serving low-income populations. Litigation alleged infection-control failures at Anaheim (73 children hospitalized, hundreds exposed) and unnecessary high-reimbursement procedures chain-wide. Court-appointed co-lead counsel represented dozens of families in consolidated suits.

Scope note: This article does not adjudicate those legal claims. It examines a narrower question using state billing records only: whether payment-intensity patterns changed after the March 2015 ownership transition.

In March 2015, Sam Gruenbaum acquired CDG's clinic chain. The question here is narrower: do state payment records show a change?

Data source: a California Public Records Act request containing state-reported payment totals by office-month (ten clinics, eight with full pre/post coverage) and by dentist-week (25 providers). No internal dashboards, patient-level extracts, or private benchmarks: only shareable, verifiable state records.


Office-Level Results: Where the Divergence Starts

The office-level records span 45 months (January 2013–September 2016): ten clinics, $62.23 million in Denti-Cal payments. Two clinics (Baldwin Park, Whittier) appear only post-2015 and cannot support comparison. The eight with full-period coverage average $144 per visit day.

Pre-2015 average: $146.45. Post-2015: $141.13, a 3.6% system-wide decline. One office moved in the opposite direction.

REQUEST 1 & 2 · DENTI-CAL OFFICE RECORDS
Payment per Visit Day: Pre vs. Post March 2015
Dollars per patient visit day · office-level totals · March 2015 cutoff

Most offices show flat or declining payment intensity after the transition. Anaheim is the exception, rising 6% while the system average fell 3.6%. †Baldwin Park and Whittier appear only in post-2015 records and have no pre-period baseline.

Anaheim rose from $144.53 to $153.25 (+6%). Every other clinic stayed flat or fell. San Jose dropped most (−12%), Santa Ana fell 4.6%, South Gate stayed flat. The divergence is measurable and moves against the system. It does not prove wrongdoing: office-level records omit procedure detail, and shifts in case mix, patient volume, or staffing can all produce the same signal.


The Causal Test: Did Anaheim Diverge or Just Float?

To test whether Anaheim's trajectory changed after March 2015, the tool is difference-in-differences: measure Anaheim's change against the other seven offices, netting out movement common to the whole system.

Effect: +$14.49 per visit day (SE $3.39, t = 4.28, 95% CI: $7.85–$21.12). Standardized effect (relative to pre-period SD): 0.75. After netting out system-wide trends, Anaheim rose $14.49 more per visit day than its peer clinics.
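As a minimal sketch of the DiD logic, with made-up numbers rather than the actual Denti-Cal figures: on a balanced panel with a single treated unit, the two-way fixed-effects coefficient on treated × post reduces to the familiar difference of group-mean changes.

```python
# Toy illustration of the DiD arithmetic (invented numbers, not the
# real office panel). On a balanced panel with one treated unit, the
# TWFE coefficient on treated x post equals the simple
# difference-in-differences of group means.
from statistics import mean

# payment per visit day: office -> (pre-period values, post-period values)
panel = {
    "Anaheim":   ([144.0, 145.0, 146.0], [158.0, 160.0, 159.0]),  # treated
    "Carson":    ([140.0, 141.0, 142.0], [139.0, 138.0, 140.0]),
    "EagleRock": ([150.0, 151.0, 149.0], [147.0, 148.0, 146.0]),
}

def did_estimate(panel, treated):
    t_pre, t_post = panel[treated]
    c_pre = [v for k, (pre, _) in panel.items() if k != treated for v in pre]
    c_post = [v for k, (_, post) in panel.items() if k != treated for v in post]
    # (treated change) minus (pooled control change)
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))

effect = did_estimate(panel, "Anaheim")
print(round(effect, 2))  # treated rose while controls fell
```

The article's actual estimate comes from the full TWFE regression with HC1 robust standard errors; this toy version only shows the arithmetic that regression generalizes.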

Two robustness checks: the pretrend slope is flat (−$0.21/month, t = −0.93), supporting the parallel-trends assumption, and the trend-adjusted DiD is larger: +$21.99 (t = 4.02, CI: $11.26–$32.71). The office-level result strengthens under stricter specification.
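The pretrend check can be illustrated the same way: fit a line to the pre-period treated-minus-control gap and look at the slope. A toy version with invented gap values (not the real monthly series):

```python
# Pretrend sketch: OLS slope of the pre-period gap between the treated
# office and the control mean. A slope near zero supports the
# parallel-trends assumption. Invented gap values, for illustration.
def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

months = [0, 1, 2, 3, 4]
gap = [1.0, 0.8, 1.1, 0.9, 1.0]  # treated minus control mean, pre-period only
print(round(slope(months, gap), 3))  # hovers near zero: no differential drift
```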

SYNTHETIC CONTROL · ANAHEIM VS. CONSTRUCTED BENCHMARK
Anaheim Payment Intensity vs. Synthetic Counterpart
Dollars per visit day · monthly · synthetic control weighted from 7 control offices
[Series: Anaheim (actual) · Synthetic control]

Pre-treatment RMSE = $9.02. The synthetic series tracks Anaheim closely before March 2015. Post-treatment mean gap = +$18.67. The vertical line marks the Gruenbaum transition in March 2015.

Synthetic control confirms the pattern. Pre-2015, a weighted combination of seven offices tracks Anaheim with RMSE $9.02 (Carson 32.5%, Eagle Rock 18.4%, Sunnyvale 16%). Post-2015, gap opens. Post-period mean gap: +$18.67 per visit day, largest mid-2015 through late 2016.
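The appendix describes the weighting step as non-negative least squares on the pre-period series. The sketch below implements a toy coordinate-descent NNLS; the offices, values, and solver are invented for illustration, and production code would use a library routine.

```python
# Minimal non-negative least squares via coordinate descent, as a
# sketch of the synthetic-control weighting step. Each control office
# is a column of A; b is the treated office's pre-period series.
# Toy data: b is built as exactly 0.7*control1 + 0.3*control2, so the
# recovered weights should be close to (0.7, 0.3).
def nnls_cd(A, b, iters=2000):
    n_rows, n_cols = len(A), len(A[0])
    w = [0.0] * n_cols
    for _ in range(iters):
        for j in range(n_cols):
            col = [A[i][j] for i in range(n_rows)]
            # residual excluding column j's current contribution
            r = [b[i] - sum(A[i][k] * w[k] for k in range(n_cols) if k != j)
                 for i in range(n_rows)]
            num = sum(c * ri for c, ri in zip(col, r))
            den = sum(c * c for c in col)
            w[j] = max(0.0, num / den)  # project onto the nonnegative orthant
    return w

A = [[10.0, 20.0], [12.0, 18.0], [11.0, 22.0], [13.0, 19.0]]
b = [0.7 * row[0] + 0.3 * row[1] for row in A]
w = nnls_cd(A, b)
print([round(x, 3) for x in w])
```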

SENSITIVITY CHECK · LEAVE-ONE-OUT
DiD Estimate Holds Across All Control-Office Subsets
95% confidence intervals · each bar drops one control office

LOO range: $12.16 to $15.94. All estimates exclude zero. The San Jose LOO produces the smallest effect because that office has the sharpest post-2015 decline; removing it lifts the control group's post-period average and narrows Anaheim's measured divergence.

Estimate stable across leave-one-out: $12.16 (San Jose dropped) to $15.94 (Eagle Rock dropped). All intervals exclude zero. No single office drives the result.
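The leave-one-out procedure is simple to sketch: re-run the DiD after dropping each control office in turn. Toy numbers, not the real panel; note how dropping the sharpest-declining control yields the smallest estimate, the same mechanics described above.

```python
# Leave-one-out sketch (invented numbers). A narrow range of estimates
# across subsets means no single control drives the result.
from statistics import mean

panel = {
    "Anaheim":   ([145.0, 146.0], [159.0, 160.0]),  # treated office
    "Carson":    ([140.0, 141.0], [138.0, 139.0]),
    "EagleRock": ([150.0, 149.0], [147.0, 146.0]),
    "SanJose":   ([160.0, 161.0], [148.0, 149.0]),  # sharpest decliner
}

def did(panel, treated):
    t_pre, t_post = panel[treated]
    c_pre = [v for k, (pre, _) in panel.items() if k != treated for v in pre]
    c_post = [v for k, (_, post) in panel.items() if k != treated for v in post]
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))

loo = {}
for office in [k for k in panel if k != "Anaheim"]:
    subset = {k: v for k, v in panel.items() if k != office}
    loo[office] = round(did(subset, "Anaheim"), 2)
print(loo)  # smallest estimate comes from dropping the sharpest decliner
```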

The placebo check is less decisive, though still informative: Anaheim ranks first among the eight offices, and the permutation p-value is 0.25. With only eight offices, that's the best possible ranking, but the sample is too small to achieve conventional significance thresholds through permutation alone. The LOO stability and the synthetic control's pre-period fit are the stronger tests. Both hold.


Twenty-Five Dentists, $24 Million, Not Evenly Distributed

The weekly provider records offer a second angle on the same system. Request 6 covers 25 individual dentists, identified by NPI number in the original data and matched to names here using a separate NPI lookup table. NPI numbers are public identifiers registered in the national NPI registry maintained by CMS. The records track weekly Denti-Cal payments from early 2013 through September 2016. Total payments in this extract are $24.09 million, distributed unevenly.

REQUEST 6 · WEEKLY PROVIDER PAYMENTS
Total Denti-Cal Payments by Dentist, Full Panel
All weeks in panel · Jan 2013 – Sep 2016 · top 15 providers shown

Lisa Vo Nguyen leads at $2.40M total. The top five providers together account for 44.9% of the $24.09M panel. The bottom half of the provider list accounts for less than 20%.

The top five providers (Lisa Vo Nguyen, Irina Mihaela Tarnavsky, Trinh Thuy Pham, Helen Hoi-Yen Ching, and Pamela Abraham) account for 44.9 percent of total panel payments. That concentration is not unusual by itself: dental billing often clusters among high-volume practitioners. The key question is whether the distribution changed after March 2015.

REQUEST 6 · PRE vs. POST COMPARISONS
Weekly Payment Mean: Before vs. After March 2015
Average weekly Denti-Cal payments · providers with observations in both periods
[Series: Pre-March 2015 · Post-March 2015]

David Michael Diaz shows the largest post/pre ratio (1.45×) among providers present on both sides of the cutoff. Irina Mihaela Tarnavsky is the second-largest mover (1.35×). Lisa Vo Nguyen and others stayed roughly flat.

Among providers with records on both sides of the March 2015 cutoff, the variation is striking. David Michael Diaz moved from a weekly mean of $12,039 to $17,509, a 45 percent increase. Irina Mihaela Tarnavsky rose 35 percent. Trinh Thuy Pham rose 14 percent. Lisa Vo Nguyen, the panel's highest earner by total, barely moved. Allison Lynnae Olex fell 12 percent. Caroline Hu fell 37 percent and left the active panel well before the end of the study period.
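The pre/post comparison itself is a small computation: split each provider's weekly series at the cutoff and take the ratio of mean weekly payments. A sketch with illustrative figures (not the real Request 6 extract):

```python
# Post/pre ratio sketch. Values are in $K per week and are invented;
# the cutoff index stands in for the week of March 2015.
from statistics import mean

cutoff = 10
weeks = {
    "Diaz":   [12.0] * 10 + [17.4] * 10,  # large post-period jump
    "Nguyen": [15.0] * 10 + [15.1] * 10,  # essentially flat
}
ratios = {
    name: round(mean(series[cutoff:]) / mean(series[:cutoff]), 2)
    for name, series in weeks.items()
}
print(ratios)
```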

This is not uniform movement across providers. Post-March 2015 gains concentrated in a subset of dentists while others declined or exited. That concentration is consistent with workload or billing intensity being redistributed within the network.


Three Provider Signals: Testing Diaz, Tarnavsky, and Pham

Three dentists stand out in the pre/post comparison. David Diaz shows the largest raw increase (45%), but two others moved sharply too: Irina Tarnavsky (35%) and Trinh Thuy Pham (14%). The standard causal test is difference-in-differences: use nine other providers as controls, absorb week-to-week system variation, and measure whether each dentist's trajectory changed relative to peers after March 2015.

Across the ten providers with sufficient pre- and post-period data, the baseline DiD estimates are: Diaz +$4,757/week (t = 6.27), Tarnavsky +$3,811/week (t = 7.80), Pham +$1,208/week (t = 2.73). All three rank positive in placebo rotation. But pretrend diagnostics matter: they reveal whether each increase actually began after the policy date or earlier.

Diaz has a pre-existing differential slope of +$177/week/week (t = 2.15) and a positive falsification cutoff at March 2014 (+$733, t = 2.68). When trend-adjusted, his estimate shrinks to +$1,284 with a confidence interval that crosses zero. Large raw effect, but pre-period acceleration undermines the post-March signal.

Tarnavsky shows a smaller pretrend (+$26/week/week, t = 2.76) and a positive falsification cutoff (+$1,462, t = 2.64). When trend-adjusted, her effect turns negative at −$653 (CI includes zero). Substantial raw effect, but also signs of earlier movement.

Pham shows a flat pretrend (−$12/week/week, t = −1.49)—no significant acceleration before March 2015. His falsification cutoff is negative (−$1,141, t = −2.31), consistent with no shift at the placebo date. When trend-adjusted, his effect grows: +$3,521 (t = 4.63, 95% CI: $1,599–$5,443, excludes zero). The smallest raw effect but the cleanest causal structure.
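One simple way to see what trend adjustment does: fit a line to the pre-period treated-minus-control gap, extrapolate it into the post period, and measure the average post-period deviation from that extrapolation. This captures the spirit of the trend-adjusted DiD, not the article's exact regression, and the gap values below are invented.

```python
# Trend-adjustment sketch: a provider whose gap was already rising
# $0.5/week pre-period, then jumps above the extrapolated line after
# the cutoff. Only the jump beyond the trend counts as an effect.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

pre_t, pre_gap = [0, 1, 2, 3], [1.0, 1.5, 2.0, 2.5]
post_t, post_gap = [4, 5, 6], [6.0, 6.5, 7.0]

a, b = fit_line(pre_t, pre_gap)
adjusted = sum(g - (a + b * t) for g, t in zip(post_gap, post_t)) / len(post_t)
print(round(adjusted, 2))  # post-period excess over the extrapolated pretrend
```

A provider whose entire raw increase lay on the extrapolated line would show an adjusted effect near zero, which is the mechanism behind the Diaz and Tarnavsky attenuation.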

PROVIDER PANEL · MONTHLY TRENDS
Monthly Average Weekly Payments by Dentist
Monthly average of weekly Denti-Cal payments · eligible providers in DiD panel · Jan 2013 – Sep 2016
[Series: David Michael Diaz · Irina Tarnavsky · Lisa Vo Nguyen · Trinh Thuy Pham · Allison Olex]

Diaz and Tarnavsky show sharp upward acceleration mid-2015 onward. Pham's rise is gentler but sustained. Lisa Vo Nguyen and Allison Olex track flat or downward trajectories. The monthly view is essential: Pham, despite a lower raw effect, shows no pre-March acceleration.

Diaz enters in late 2013 at $6–$12K/week, reaching $19–$23K by late 2015–2016 (peaks of $21.7K and $20.2K in July–August 2016). Tarnavsky follows a similar trajectory from a lower base, rising from $11–$13K to $14–$19K post-2015. Pham starts at $11–$13K and rises more gradually to $13–$14K. The key difference: Diaz and Tarnavsky show acceleration beginning before the March 2015 cutoff, while Pham's rise begins at or after it. Lisa Vo Nguyen, despite $2.4M in total payments, shows no post-2015 step change. Allison Olex trends downward. This timing is what matters for causal inference.


Robustness: Ranking and Causal Quality

Baseline size is not the measure of causal quality. The relevant question is internal validity: does the estimate hold when pre-period trends are accounted for? Placebo rotation ranks all three among the top four providers, but their pretrend profiles diverge sharply.

PLACEBO ROTATION · PROVIDER PANEL
DiD Estimate When Each Provider Is Designated "Treated"
Weekly payment treatment effect · March 2015 cutoff · 10 eligible providers

Diaz ranks #1 ($4,757/week), Tarnavsky #2 ($3,811), Pham #4 ($1,208). Permutation p = 0.10 one-sided for Diaz. The bottom half of providers show negative effects. Ranking by baseline effect is different from ranking by causal robustness.

In baseline DiD ranking: Diaz first ($4,757), Tarnavsky second ($3,811), Pham fourth ($1,208). Permutation p = 0.10. But placebo rank is separate from pretrend quality. Diaz and Tarnavsky both show significant pre-period acceleration. Pham does not. Leave-one-out checks confirm baseline Diaz and Tarnavsky estimates are stable across control subset removals; Pham's smaller magnitude is stable too.

SENSITIVITY CHECK · PROVIDER LEAVE-ONE-OUT
Diaz DiD Estimate Across Control-Provider Subsets
95% confidence intervals · each bar drops one control provider · March 2015 cutoff

LOO range: $4,032 (drop Caroline Hu) to $5,139 (drop Tarnavsky). All estimates exclude zero. No single control provider drives the Diaz result.

PROVIDER LEVEL · THREE-PROVIDER CAUSAL COMPARISON
Baseline DiD, Pretrend, Falsification, and Trend-Adjusted Estimates
DiD model with 10 eligible providers as controls · March 2015 cutoff
Provider · Baseline DiD · Pretrend ($/wk/wk) · Falsification (Mar 2014) · Trend-Adjusted DiD · Summary
David Diaz · +$4,757 (t=6.27) · +$177 (t=2.15) · +$733 (t=2.68) · +$1,284 (CI includes 0) · Large baseline; pretrend undermines it
Irina Tarnavsky · +$3,811 (t=7.80) · +$26 (t=2.76) · +$1,462 (t=2.64) · −$653 (CI includes 0) · Strong baseline; early acceleration evident
Trinh Pham · +$1,208 (t=2.73) · −$12 (t=−1.49) · −$1,141 (t=−2.31) · +$3,521 (t=4.63) · Modest baseline; cleanest causal structure

Pham alone shows a flat pretrend and negative falsification cutoff, consistent with no shift before March 2015. When trend-adjusted, his effect strengthens and excludes zero. Diaz and Tarnavsky both show pre-period acceleration that weakens their causal interpretation when explicitly modeled.


ROBUSTNESS SUMMARY · STRICTER DIAGNOSTICS
How the Main Effects Hold Up Under Harder Specifications
Anaheim office panel vs. Diaz provider panel
Check · Anaheim (Office Panel) · Diaz (Provider Panel) · Read
Baseline TWFE DiD · +$14.49 (t = 4.28) · +$4,757 (t = 6.27) · Both positive and statistically strong
Clustered SE sensitivity · SE $3.23, t = 4.49 · SE $917, t = 5.19 · Inference remains strong under clustering
Pretrend differential slope · −$0.21/month (t = −0.93) · +$177/week/week (t = 2.15) · Office supports parallel trends; provider does not
Falsification date (Mar 2014) · −$4.41 (t = −1.67) · +$733 (t = 2.68) · Provider shows early acceleration before policy date
Trend-adjusted DiD · +$21.99 (t = 4.02) · +$1,284 (SE $1,220; CI includes 0) · Office result survives; provider effect attenuates

Bottom line: the office-level Anaheim result remains robust across all diagnostics. At provider level, Pham's causal signal is the cleanest: flat pretrend, negative falsification, and a trend-adjusted effect that strengthens and excludes zero. Diaz and Tarnavsky show larger baseline effects but pre-existing acceleration that complicates causal interpretation.


What the Records Show and What They Don't

State records log payments, not procedures, necessity, or patient harm. They show one clear signal: Anaheim's payment intensity rose after March 2015 while peers declined or flat-lined. That divergence holds across synthetic control, two-way fixed-effects DiD, clustered-SE sensitivity, null pretrend tests, and LOO checks. At provider level, Diaz remains the strongest outlier in baseline models and ranks first in placebo rotation, but pretrend diagnostics suggest part of that increase predates March 2015.

Summary of main estimates: Anaheim DiD +$14.49/visit-day (t = 4.28), trend-adjusted +$21.99 (t = 4.02), synthetic post-gap +$18.67. Provider level: Diaz baseline +$4,757 but trend-adjusted +$1,284 (CI includes zero); Tarnavsky baseline +$3,811 but trend-adjusted −$653 (CI includes zero); Pham baseline +$1,208 and trend-adjusted +$3,521 (t = 4.63, excludes zero).

Analytical hierarchy: office-level evidence carries strongest causal weight. At provider level, Pham's signal outperforms on internal diagnostics. His effect strengthens under stricter specification, while Diaz and Tarnavsky weaken.

Discovery turned on procedure-level records, dashboards, patient-level data, and documents. This article uses only state billing records and standard causal methods. Even so, the Anaheim divergence is visible and persistent.


DATA APPENDIX · ALL PROVIDER WEEKLY TOTALS
Complete Provider Summary Table
Ranked by total Denti-Cal payments · Jan 2013 – Sep 2016
# · Dentist · Total Payments · Avg / Week · Pre Mean · Post Mean · Post/Pre

Post/Pre ratio uses March 2015 as cutoff. Providers with records only on one side show "not available" for the ratio. Providers who left before March 2015 have no post-period data.

Data source: California Public Records Act request "Request 1 and 2" (office-month payments and visit days, 10 offices observed, Jan 2013–Sep 2016, 8 with full pre/post coverage) and "Request 6" (weekly NPI-level payments, 25 providers, Jan 2013–Sep 2016). NPI numbers appear in the state records; NPI-to-name mapping uses the public CMS NPI registry.

Payment per visit day (PPD): Total Denti-Cal payments divided by patient visit days for the same office-month. Visit days measure the count of days on which at least one patient was seen. It is not a headcount.

Difference-in-differences: Two-way fixed-effects OLS with office (or NPI) and calendar-period fixed effects, HC1 heteroskedasticity-robust standard errors. Treatment cutoff is March 2015 (the Gruenbaum transition). Second specification uses January 2016. Cluster-robust standard errors are reported as a sensitivity check (clustered by office or provider).

Synthetic control: Non-negative least squares weighting of control offices to minimize pre-period RMSE on the Anaheim PPD series. Pre-RMSE = $9.02. Weights by office: Carson 32.5%, Eagle Rock 18.4%, Sunnyvale 16.0%, Norwalk 14.9%, South Gate 13.5%, Santa Ana 4.7%, San Jose 0%.

Eligibility for provider DiD panel: Providers with at least 8 observed pre-treatment weeks and 6 observed post-treatment weeks relative to the March 2015 cutoff. 10 of 25 providers in the panel meet this threshold.
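The eligibility rule above can be sketched as a simple filter. Observation counts here are invented; only the 8-pre / 6-post thresholds come from the text.

```python
# Panel-eligibility sketch: require at least 8 observed pre-treatment
# weeks and 6 observed post-treatment weeks around the March 2015
# cutoff. Provider names and counts are illustrative placeholders.
MIN_PRE, MIN_POST = 8, 6

obs = {
    "ProviderA": {"pre": 90, "post": 70},  # present throughout
    "ProviderB": {"pre": 85, "post": 3},   # exited shortly after cutoff
    "ProviderC": {"pre": 2,  "post": 60},  # joined near the cutoff
}
eligible = [p for p, c in obs.items()
            if c["pre"] >= MIN_PRE and c["post"] >= MIN_POST]
print(eligible)  # only the provider observed on both sides qualifies
```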

Placebo rotation: The DiD model is re-estimated for each of the 10 eligible providers as the designated treated unit. The permutation p-value is the fraction of providers whose |beta| meets or exceeds the Diaz estimate. Two-sided = 0.10, one-sided positive-tail = 0.10.
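The rotation reduces to: collect one DiD estimate per eligible provider as the designated treated unit, then count how many |estimates| meet or exceed the one of interest. In this sketch the first three numbers echo the reported baseline estimates and the remaining seven are invented placeholders, so the printed p-value matches the reported 0.10 by construction.

```python
# Placebo-rotation sketch. Estimates are weekly DiD betas; only the
# Diaz, Tarnavsky, and Pham values come from the article, and the rest
# are hypothetical stand-ins for the other eligible providers.
estimates = {
    "Diaz": 4757.0, "Tarnavsky": 3811.0, "Pham": 1208.0,
    "Prov4": -2100.0, "Prov5": -1500.0, "Prov6": -900.0,
    "Prov7": 700.0, "Prov8": -400.0, "Prov9": 300.0, "Prov10": 150.0,
}
target = abs(estimates["Diaz"])
# fraction of providers whose |beta| meets or exceeds the Diaz estimate
p = sum(abs(v) >= target for v in estimates.values()) / len(estimates)
print(p)
```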

Pretrend and falsification diagnostics: Additional tests include (1) differential pretrend slopes estimated on pre-period data only and (2) a placebo policy date at March 2014. Office-level diagnostics are broadly supportive of a post-2015 divergence. Provider-level diagnostics indicate pre-existing acceleration for Diaz, which attenuates the trend-adjusted treatment estimate.

Analysis code: All statistical models, synthetic control, placebo rotations, and chart data are produced by cgd_stat_analysis.py. No other scripts contribute to the numbers in this article.

No causal claims about procedures or patient outcomes are made or implied. Payment intensity is a billing-record measure that does not distinguish procedure type, medical necessity, or clinical outcome.

AI Usage

Ideas, analysis, and opinions are my own. Generative AI was used as an editor after the writing and analysis were complete — sentence restructuring and light copy-editing. The author reviewed all suggested changes.