This vignette demonstrates basic usage of surveil for public health research. The package is designed for routine time trend analysis, namely trends in disease incidence or mortality rates. Models were built using the Stan modeling language for Bayesian inference with Markov chain Monte Carlo (MCMC), but users only need to be familiar with the R language.
The package also contains special methods for age-standardization, for printing and plotting model results, and for measuring and visualizing health inequalities. For age-standardization, see `vignette("age-standardization")`. For discussion and a demonstration analysis, see Donegan, Hughes, and Lee (2022).
To use the models provided by surveil, the surveillance data must minimally contain case counts, population at risk estimates, and a discrete time period variable. The data may also include one or more grouping variables, such as race-ethnicity. Time periods should consist of equally spaced intervals.
This vignette analyzes colorectal cancer incidence data by race-ethnicity, year, and Texas MSA for ages 50-79 (data obtained from CDC Wonder). The race-ethnicity grouping includes (non-Hispanic) black, (non-Hispanic) white, and Hispanic, and the MSAs include those centered on the cities of Austin, Dallas, Houston, and San Antonio.
head(msa) |>
  kable(booktabs = TRUE,
        caption = "Glimpse of colorectal cancer incidence data (CDC Wonder)")
Year | Race | MSA | Count | Population |
---|---|---|---|---|
1999 | Black or African American | Austin-Round Rock, TX | 28 | 14421 |
2000 | Black or African American | Austin-Round Rock, TX | 16 | 15215 |
2001 | Black or African American | Austin-Round Rock, TX | 22 | 16000 |
2002 | Black or African American | Austin-Round Rock, TX | 24 | 16694 |
2003 | Black or African American | Austin-Round Rock, TX | 34 | 17513 |
2004 | Black or African American | Austin-Round Rock, TX | 26 | 18429 |
The primary function in surveil is `stan_rw`, which fits random walk models to surveillance data. The function expects the user to provide a `data.frame` with specific column names. There must be one column named `Count`, containing case counts, and another column named `Population`, containing the sizes of the populations at risk. The user must provide the name of the column containing the time period (the default is `time = Year`, to match CDC Wonder data). Optionally, one can provide a grouping factor. For the MSA data printed above, the grouping column is `Race` and the time column is `Year`.
We will demonstrate using aggregated CRC cases across Texas’s top four MSAs. The `msa` data from CDC Wonder already has the necessary format (column names and contents), but these data are disaggregated by MSA. So for this analysis, we first group the data by year and race, and then combine cases across MSAs.
The following code chunk aggregates the data by year and race-ethnicity:
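A minimal sketch of that step, assuming dplyr is available (the msa2 name matches the aggregated data frame used below):

library(dplyr)

# combine cases and populations across MSAs, within year and race-ethnicity
msa2 <- msa |>
  group_by(Year, Race) |>
  summarise(Count = sum(Count),
            Population = sum(Population)) |>
  ungroup()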
The following code provides a glimpse of the aggregated data:
head(msa2) |>
  kable(booktabs = TRUE,
        caption = "Glimpse of aggregated Texas metropolitan CRC cases, by race and year")
Year | Race | Count | Population |
---|---|---|---|
1999 | Black or African American | 471 | 270430 |
2000 | Black or African American | 455 | 283280 |
2001 | Black or African American | 505 | 298287 |
2002 | Black or African American | 539 | 313133 |
2003 | Black or African American | 546 | 329481 |
2004 | Black or African American | 602 | 346886 |
The base surveil model is specified as follows. The Poisson model is used as the likelihood: the probability of observing a given number of cases, $y_t$, conditional on a given level of risk, $e^{\phi_t}$, and a known population at risk, $p_t$, is

$$y_t \sim \text{Pois}(p_t \cdot e^{\phi_t})$$

where $t$ indexes the time period.
Next, we need a model for the log-rates, $\phi_t$. The first-difference prior states that our expectation for the log-rate at any time is its previous value, and we assign a normal probability distribution to deviations from the previous value (Clayton 1996). This is also known as the random-walk prior:

$$\phi_t \sim \text{Gau}(\phi_{t-1}, \tau^2)$$

This places higher probability on a smooth trend through time, specifically implying that underlying disease risk tends to have less variation than crude incidence.
The log-risk for time $t = 1$ has no previous value to anchor its expectation; thus, we assign a prior probability distribution directly to $\phi_1$. For this prior, surveil uses a normal distribution. The scale parameter, $\tau$, also requires a prior distribution, and again surveil uses a normal model, which is diffuse relative to the log incidence rates.
In addition to the Poisson model, the binomial model is also available:

$$y_t \sim \text{Binom}(p_t, \; g^{-1}(\phi_t))$$

where $g$ is the logit function and $g^{-1}(x) = \frac{exp(x)}{1 + exp(x)}$ (the inverse-logit function). If the binomial model is used, the rest of the model remains the same as stated above. The Poisson model is typically preferred for rare events (such as rates below .01); otherwise, the binomial model is usually more appropriate. The remainder of this vignette will proceed using the Poisson model only.
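If the binomial model were wanted instead, it would be selected when fitting the model; the following is only a sketch, assuming `stan_rw` uses the common R convention of a `family` argument:

# hypothetical call: `family = binomial()` is an assumption here,
# not confirmed usage; see ?stan_rw for the actual interface
fit_binom <- stan_rw(msa2, time = Year, group = Race, family = binomial())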
The time series model is fit by passing surveillance data to the `stan_rw` function. Here, `Year` and `Race` indicate the appropriate time and grouping columns in the `msa2` data frame.
fit <- stan_rw(msa2, time = Year, group = Race, iter = 1e3)
#> Distribution: normal
#> Distribution: normal
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#bulk-ess
#> [1] "Setting normal prior(s) for eta_1: "
#> location scale
#> -6 5
#> [1] "\nSetting half-normal prior for sigma: "
#> location scale
#> 0 1
#>
#> SAMPLING FOR MODEL 'RW' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 2.6e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.26 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 1: Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.242 seconds (Warm-up)
#> Chain 1: 0.143 seconds (Sampling)
#> Chain 1: 0.385 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL 'RW' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 1.3e-05 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 2: Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 2: Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 2:
#> Chain 2: Elapsed Time: 0.23 seconds (Warm-up)
#> Chain 2: 0.144 seconds (Sampling)
#> Chain 2: 0.374 seconds (Total)
#> Chain 2:
#>
#> SAMPLING FOR MODEL 'RW' NOW (CHAIN 3).
#> Chain 3:
#> Chain 3: Gradient evaluation took 1.2e-05 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3:
#> Chain 3:
#> Chain 3: Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 3: Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 3: Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 3:
#> Chain 3: Elapsed Time: 0.283 seconds (Warm-up)
#> Chain 3: 0.148 seconds (Sampling)
#> Chain 3: 0.431 seconds (Total)
#> Chain 3:
#>
#> SAMPLING FOR MODEL 'RW' NOW (CHAIN 4).
#> Chain 4:
#> Chain 4: Gradient evaluation took 1.3e-05 seconds
#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
#> Chain 4: Adjust your expectations accordingly!
#> Chain 4:
#> Chain 4:
#> Chain 4: Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 4: Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 4: Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 4:
#> Chain 4: Elapsed Time: 0.235 seconds (Warm-up)
#> Chain 4: 0.144 seconds (Sampling)
#> Chain 4: 0.379 seconds (Total)
#> Chain 4:
The `iter = 1e3` argument controls how long the MCMC sampling continues (in this case, 1,000 iterations per chain: 500 warmup, then 500 more for inference). The default is 3,000, which is more than sufficient for this example model. By default there are four independent MCMC chains, which here yield 500 post-warmup samples each (for a total of 2,000 MCMC samples used for the estimates).
To speed things up, we could take advantage of parallel processing using the `cores` argument (e.g., add `cores = 4`) to run on 4 cores simultaneously. You can suppress the messages seen above by adding `refresh = 0`.
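For example, the same model with parallel sampling and the progress messages suppressed:

fit <- stan_rw(msa2, time = Year, group = Race, iter = 1e3,
               cores = 4,   # run the four MCMC chains in parallel
               refresh = 0) # suppress the sampling progress output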
The `print` method will print the estimates with 95% credible intervals to the console; adding `scale = 100e3` will display rates per 100,000:
print(fit, scale = 100e3)
#> Summary of surveil model results
#> Time periods: 19
#> Grouping variable: Race
#> Correlation matrix: FALSE
#> time Race mean lwr_2.5 upr_97.5
#> 1 1999 Black or African American 170.33627 157.89832 183.47029
#> 2 2000 Black or African American 166.25709 155.91522 177.24787
#> 3 2001 Black or African American 168.13691 158.00815 178.16649
#> 4 2002 Black or African American 169.06644 159.09835 179.05615
#> 5 2003 Black or African American 167.13254 157.88156 177.00752
#> 6 2004 Black or African American 166.85426 157.51589 177.10549
#> 7 2005 Black or African American 158.91796 149.57105 168.61072
#> 8 2006 Black or African American 154.56980 146.15770 163.59576
#> 9 2007 Black or African American 152.63748 143.77058 161.87341
#> 10 2008 Black or African American 149.82285 141.95456 158.06923
#> 11 2009 Black or African American 143.62798 136.16250 151.22073
#> 12 2010 Black or African American 138.90791 131.91263 146.65986
#> 13 2011 Black or African American 130.80531 123.74910 137.70816
#> 14 2012 Black or African American 125.09925 117.74630 131.80395
#> 15 2013 Black or African American 124.84944 118.27895 131.46386
#> 16 2014 Black or African American 126.19285 119.53100 133.19746
#> 17 2015 Black or African American 122.30833 115.88816 128.77039
#> 18 2016 Black or African American 121.90578 115.54966 128.40574
#> 19 2017 Black or African American 122.51787 115.88469 129.18335
#> 20 1999 Hispanic 101.61683 94.62684 108.34906
#> 21 2000 Hispanic 104.19978 98.17773 110.95956
#> 22 2001 Hispanic 102.57242 97.07974 108.24075
#> 23 2002 Hispanic 101.50590 96.31276 107.00604
#> 24 2003 Hispanic 100.52686 95.52737 105.79378
#> 25 2004 Hispanic 100.85159 95.94472 106.69614
#> 26 2005 Hispanic 98.51065 93.39600 103.60096
#> 27 2006 Hispanic 96.78905 92.00167 101.91762
#> 28 2007 Hispanic 94.16045 89.68521 98.88435
#> 29 2008 Hispanic 90.53700 85.68957 95.03950
#> 30 2009 Hispanic 88.73558 84.36729 92.93874
#> 31 2010 Hispanic 87.87137 83.59341 91.86566
#> 32 2011 Hispanic 87.39946 83.41884 91.40878
#> 33 2012 Hispanic 87.83250 84.08717 91.92827
#> 34 2013 Hispanic 87.32980 83.56054 91.21428
#> 35 2014 Hispanic 85.81898 82.09110 89.46567
#> 36 2015 Hispanic 85.84780 82.33908 89.74098
#> 37 2016 Hispanic 84.88363 81.16265 88.51812
#> 38 2017 Hispanic 85.94445 81.90665 89.95500
#> 39 1999 White 135.19850 129.96038 140.23127
#> 40 2000 White 136.50813 131.77336 141.38120
#> 41 2001 White 134.07624 129.42512 138.97390
#> 42 2002 White 130.18033 125.59071 134.81820
#> 43 2003 White 127.48366 123.40033 131.76092
#> 44 2004 White 120.02967 115.64296 124.31315
#> 45 2005 White 115.14726 110.98258 119.34904
#> 46 2006 White 109.75415 105.86537 113.60642
#> 47 2007 White 108.20419 104.54599 112.04688
#> 48 2008 White 103.48119 99.79886 107.03083
#> 49 2009 White 98.99339 95.50058 102.51113
#> 50 2010 White 96.26304 92.91076 99.57790
#> 51 2011 White 93.70821 90.59168 96.96476
#> 52 2012 White 91.07672 88.04431 94.13333
#> 53 2013 White 90.12198 87.09711 93.21023
#> 54 2014 White 91.47322 88.39080 94.54667
#> 55 2015 White 93.11186 90.06600 96.38829
#> 56 2016 White 89.05492 86.12120 92.04415
#> 57 2017 White 92.32416 88.90862 95.65476
This information is also stored in a data frame, `fit$summary`:
head(fit$summary)
#> time mean lwr_2.5 upr_97.5 Race Year Count
#> 1 1999 0.001703363 0.001578983 0.001834703 Black or African American 1999 471
#> 2 2000 0.001662571 0.001559152 0.001772479 Black or African American 2000 455
#> 3 2001 0.001681369 0.001580082 0.001781665 Black or African American 2001 505
#> 4 2002 0.001690664 0.001590984 0.001790561 Black or African American 2002 539
#> 5 2003 0.001671325 0.001578816 0.001770075 Black or African American 2003 546
#> 6 2004 0.001668543 0.001575159 0.001771055 Black or African American 2004 602
#> Population Crude
#> 1 270430 0.001741671
#> 2 283280 0.001606185
#> 3 298287 0.001693000
#> 4 313133 0.001721313
#> 5 329481 0.001657152
#> 6 346886 0.001735440
The `fit$summary` object can be used to create custom plots and tables.
If we call `plot` on a fitted surveil model, we get a `ggplot` object depicting risk estimates with 95% credible intervals.
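For example, plotting rates per 100,000:

plot(fit, scale = 100e3)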
The crude incidence rates (observed values) are also plotted here as points.
The `plot` method has a number of optional arguments that control its appearance. For example, the `base_size` argument controls the size of labels. The size of the points for the crude rates can be adjusted using `size`, and `size = 0` removes them altogether. We can also use ggplot2 to add custom modifications:
fig <- plot(fit, scale = 100e3, base_size = 11, size = 0)
#> Plotted rates are per 100,000
fig +
  theme(legend.position = "right") +
  labs(title = "CRC incidence per 100,000",
       subtitle = "Texas MSAs, 50-79 y.o.")
The plot method has a `style` argument that controls how uncertainty is represented. The default, `style = "mean_qi"`, shows the mean of the posterior distribution as the estimate and adds shading to depict the 95% credible interval (as above). The alternative, `style = "lines"`, plots MCMC samples from the joint probability distribution of the estimates.
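For example:

plot(fit, scale = 100e3, style = "lines")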
By default, `M = 250` samples are plotted. The `style` option is available for all of the surveil plot methods. This style is sometimes helpful for visualizing multiple groups when their credible intervals overlap.
The `apc` method calculates percent change by period and cumulatively over time:
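# compute period and cumulative percent change from the fitted model
fit_pc <- apc(fit)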
The object returned by `apc` contains two data frames. The first contains estimates of percent change from the previous period:
head(fit_pc$apc)
#> time group apc lwr upr
#> 1 1999 Black or African American 0.0000000 0.000000 0.000000
#> 2 2000 Black or African American -2.3088216 -9.956259 4.977248
#> 3 2001 Black or African American 1.1973859 -5.539544 8.755380
#> 4 2002 Black or African American 0.6118236 -5.786168 7.659835
#> 5 2003 Black or African American -1.0879812 -7.232153 5.725208
#> 6 2004 Black or African American -0.1091961 -6.457924 6.946290
Those estimates typically have high uncertainty.
The second data frame contains estimates of cumulative percent change (since the first observed period):
head(fit_pc$cpc)
#> time group cpc lwr upr
#> 1 1999 Black or African American 0.0000000 0.000000 0.000000
#> 2 2000 Black or African American -2.3088216 -9.956259 4.977248
#> 3 2001 Black or African American -1.1774982 -9.780237 7.097287
#> 4 2002 Black or African American -0.6232887 -9.218717 8.221068
#> 5 2003 Black or African American -1.7539902 -10.146712 7.639804
#> 6 2004 Black or African American -1.9176055 -10.307615 7.332490
Each value in the `cpc` column is an estimate of the difference in incidence rates between that year and the first year (in this case, 1999), expressed as a percent of the first year’s rate. The `lwr` and `upr` columns are the lower and upper bounds of the 95% credible intervals for the estimates.
This information can also be plotted:
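A sketch, assuming the plot method for apc results takes a cumulative argument to select the cumulative series:

# `cumulative = TRUE` is an assumption here; check the package
# documentation for the plot method's actual interface
plot(fit_pc, cumulative = TRUE)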
If desired, the average annual percent change from the first period can be calculated by dividing the cumulative percent change (CPC) by the appropriate number of periods. For example, the CPC from 1999 to 2017 for whites is about −31, for an average annual percent change of about −31/18 = −1.72. The credible intervals for the average annual percent change can also be obtained from the CPC table using the same method. For this example, the correct denominator is 2017 − 1999 = 18 (generally: the last year minus the first year).
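That arithmetic can be done directly on the cpc table; a sketch, using the "White" group label from the output above:

# average annual percent change: divide the cumulative percent change
# and its credible interval bounds by the number of elapsed years
cpc_df <- fit_pc$cpc
n_years <- max(cpc_df$time) - min(cpc_df$time)   # 2017 - 1999 = 18
white_last <- subset(cpc_df, group == "White" & time == max(time))
white_last[, c("cpc", "lwr", "upr")] / n_years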
If you do not see any warnings printed to the R console at the end of the model fitting process, then you can simply move forward with the analysis. If there is a warning, it may say that the effective sample size is low or that the R-hat values are large. For a crash course on MCMC analysis with surveil, including MCMC diagnostics, see the vignette on the topic: `vignette("surveil-mcmc")`.
A quick and dirty summary is that you want to watch two key diagnostics: the effective MCMC sample size (ESS) and R-hat. For all your parameters of interest, ESS should be at least 400 or so. If you want to increase the ESS, use the `iter` argument to draw more samples. The R-hat values should all be near one, such as within the range 1 ± .02. For the simple models we are using here, large R-hat values can often be fixed just by drawing more samples.
You can find these diagnostics by printing the results with `print(fit$samples)` and inspecting the columns `n_eff` (for ESS) and `Rhat`.
The MCMC sampling is generally fast and without trouble when the numbers are not too small (as in this example model). When the incidence rates are based on small numbers (like counts less than 20), the sampling often proceeds more slowly and a higher `iter` value may be needed.
In most applications, the base model specification described above will be entirely sufficient. However, surveil provides an option for users to add a correlation structure to the model when multiple groups are modeled together. For correlated trends, this can increase the precision of estimates.
The log-rates for $k$ populations, $\phi_t$, are assigned a multivariate normal distribution (Brandt and Williams 2007):

$$\phi_t \sim \text{Gau}(\phi_{t-1}, \Sigma),$$

where $\Sigma$ is a $k \times k$ covariance matrix.
The covariance matrix can be decomposed into a diagonal matrix containing scale parameters for each variable, $\Delta = \text{diag}(\tau_1, \dots, \tau_k)$, and a symmetric correlation matrix, $\Omega$ (Stan Development Team 2021):

$$\Sigma = \Delta \Omega \Delta$$

When the correlation structure is added to the model, a prior distribution is also required for the correlation matrix. surveil uses the LKJ model, which has a single shape parameter, $\eta$ (Stan Development Team 2021). If $\eta = 1$, the LKJ model places uniform prior probability on any $k \times k$ correlation matrix; as $\eta$ increases from one, it expresses ever greater skepticism towards large correlations. When $\eta < 1$, the LKJ model becomes ‘concave’, expressing skepticism towards correlations of zero.
If we wanted to add a correlation structure to the model, we would add `cor = TRUE` to `stan_rw`, as in:
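# same data and settings as before, now with correlated trends across groups
fit_cor <- stan_rw(msa2, time = Year, group = Race,
                   cor = TRUE, iter = 1e3, refresh = 0)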