This vignette demonstrates basic usage of surveil for public health research. The package is designed for routine analysis of time trends in disease incidence or mortality rates. The models were built using the Stan modeling language for Bayesian inference with Markov chain Monte Carlo (MCMC), but users only need to be familiar with the R language.
The package also contains special methods for age-standardization, for printing and plotting model results, and for measuring and visualizing health inequalities. For age-standardization, see vignette("age-standardization"). For discussion and a demonstration analysis, see Donegan, Hughes, and Lee (2022).
To use the models provided by surveil, the surveillance data must minimally contain case counts, population-at-risk estimates, and a discrete time-period variable. The data may also include one or more grouping variables, such as race-ethnicity. Time periods should consist of equally spaced intervals.
This vignette analyzes colorectal cancer incidence data by race-ethnicity, year, and Texas MSA for ages 50-79 (data obtained from CDC Wonder). The race-ethnicity grouping includes (non-Hispanic) black, (non-Hispanic) white, and Hispanic, and the MSAs include those centered on the cities of Austin, Dallas, Houston, and San Antonio.
head(msa) |>
  kable(booktabs = TRUE,
        caption = "Glimpse of colorectal cancer incidence data (CDC Wonder)")
| Year | Race | MSA | Count | Population |
|---|---|---|---|---|
| 1999 | Black or African American | Austin-Round Rock, TX | 28 | 14421 |
| 2000 | Black or African American | Austin-Round Rock, TX | 16 | 15215 |
| 2001 | Black or African American | Austin-Round Rock, TX | 22 | 16000 |
| 2002 | Black or African American | Austin-Round Rock, TX | 24 | 16694 |
| 2003 | Black or African American | Austin-Round Rock, TX | 34 | 17513 |
| 2004 | Black or African American | Austin-Round Rock, TX | 26 | 18429 |
The primary function in surveil is stan_rw, which fits random walk models to surveillance data. The function expects the user to provide a data.frame with specific column names: there must be one column named Count containing case counts, and another named Population containing the sizes of the populations at risk. The user must provide the name of the column containing the time period (the default is time = Year, to match CDC Wonder data). Optionally, one can provide a grouping factor. For the MSA data printed above, the grouping column is Race and the time column is Year.
We will demonstrate using aggregated CRC cases across Texas’s top four MSAs. The msa data from CDC Wonder already has the necessary format (column names and contents), but these data are disaggregated by MSA. So for this analysis, we first group the data by year and race, and then combine cases across MSAs.
The following code chunk aggregates the data by year and race-ethnicity:
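# a minimal sketch of the aggregation step, assuming dplyr
# (column names as in the table above)
library(dplyr)

msa2 <- msa |>
  group_by(Year, Race) |>
  summarise(Count = sum(Count),
            Population = sum(Population),
            .groups = "drop")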
The following code provides a glimpse of the aggregated data:
head(msa2) |>
  kable(booktabs = TRUE,
        caption = "Glimpse of aggregated Texas metropolitan CRC cases, by race and year")
| Year | Race | Count | Population |
|---|---|---|---|
| 1999 | Black or African American | 471 | 270430 |
| 2000 | Black or African American | 455 | 283280 |
| 2001 | Black or African American | 505 | 298287 |
| 2002 | Black or African American | 539 | 313133 |
| 2003 | Black or African American | 546 | 329481 |
| 2004 | Black or African American | 602 | 346886 |
The base surveil model is specified as follows. The Poisson model is used as the likelihood: the probability of observing a given number of cases, $y_t$, conditional on a given level of risk, $e^{\phi_t}$, and known population at risk, $p_t$, is

$$y_t \sim \text{Pois}(p_t \cdot e^{\phi_t})$$

where $t$ indexes the time period.

Next, we need a model for the log-rates, $\phi_t$. The first-difference prior states that our expectation for the log-rate at any time is its previous value, and we assign a normal probability distribution to deviations from the previous value (Clayton 1996). This is also known as the random-walk prior:

$$\phi_t \sim \text{Gau}(\phi_{t-1}, \tau^2)$$

This places higher probability on a smooth trend through time, specifically implying that underlying disease risk tends to have less variation than crude incidence.

The log-risk for time $t = 1$ has no previous value to anchor its expectation; thus, we assign a prior probability distribution directly to $\phi_1$. For this prior, surveil uses a normal distribution. The scale parameter, $\tau$, also requires a prior distribution, and again surveil uses a normal model which is diffuse relative to the log incidence rates.
In addition to the Poisson model, the binomial model is also available:

$$y_t \sim \text{Binom}(p_t, \, g^{-1}(\phi_t))$$

where $g$ is the logit function and $g^{-1}(x) = \frac{\exp(x)}{1 + \exp(x)}$ (the inverse-logit function). If the binomial model is used, the rest of the model remains the same as stated above. The Poisson model is typically preferred for rare events (such as rates below 0.01); otherwise, the binomial model is usually more appropriate. The remainder of this vignette will proceed using the Poisson model only.
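A minimal sketch of selecting the binomial model, assuming the likelihood is chosen through a family argument to stan_rw (this call is not demonstrated in this vignette):

# hypothetical: select the binomial likelihood via a family argument
fit_binom <- stan_rw(msa2, time = Year, group = Race, family = binomial())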
The time series model is fit by passing surveillance data to the stan_rw function. Here, Year and Race indicate the appropriate time and grouping columns in the msa2 data frame.
fit <- stan_rw(msa2, time = Year, group = Race, iter = 1e3)
#> Distribution: normal
#> Distribution: normal
#> [1] "Setting normal prior(s) for eta_1: "
#> location scale
#> -6 5
#> [1] "\nSetting half-normal prior for sigma: "
#> location scale
#> 0 1
#>
#> SAMPLING FOR MODEL 'RW' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 1.8e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.18 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 1: Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.244 seconds (Warm-up)
#> Chain 1: 0.148 seconds (Sampling)
#> Chain 1: 0.392 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL 'RW' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 1.2e-05 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 2: Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 2: Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 2:
#> Chain 2: Elapsed Time: 0.179 seconds (Warm-up)
#> Chain 2: 0.142 seconds (Sampling)
#> Chain 2: 0.321 seconds (Total)
#> Chain 2:
#>
#> SAMPLING FOR MODEL 'RW' NOW (CHAIN 3).
#> Chain 3:
#> Chain 3: Gradient evaluation took 1.3e-05 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3:
#> Chain 3:
#> Chain 3: Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 3: Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 3: Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 3:
#> Chain 3: Elapsed Time: 0.238 seconds (Warm-up)
#> Chain 3: 0.151 seconds (Sampling)
#> Chain 3: 0.389 seconds (Total)
#> Chain 3:
#>
#> SAMPLING FOR MODEL 'RW' NOW (CHAIN 4).
#> Chain 4:
#> Chain 4: Gradient evaluation took 1.3e-05 seconds
#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
#> Chain 4: Adjust your expectations accordingly!
#> Chain 4:
#> Chain 4:
#> Chain 4: Iteration: 1 / 1000 [ 0%] (Warmup)
#> Chain 4: Iteration: 501 / 1000 [ 50%] (Sampling)
#> Chain 4: Iteration: 1000 / 1000 [100%] (Sampling)
#> Chain 4:
#> Chain 4: Elapsed Time: 0.23 seconds (Warm-up)
#> Chain 4: 0.147 seconds (Sampling)
#> Chain 4: 0.377 seconds (Total)
#> Chain 4:
The iter = 1e3 argument controls how long the MCMC sampling continues (in this case, 1,000 iterations per chain: 500 warmup, then 500 more for inference). The default is 3,000, which is more than sufficient for this example model. By default, there are four independent MCMC chains; here each contributes 500 post-warmup samples, for a total of 2,000 MCMC samples used for the estimates.
To speed things up, we could take advantage of parallel processing using the cores argument (e.g., add cores = 4) to run four chains simultaneously. You can suppress the messages seen above by adding refresh = 0.
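For example:

# run the four chains in parallel and suppress the sampler messages
fit <- stan_rw(msa2, time = Year, group = Race, iter = 1e3,
               cores = 4, refresh = 0)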
The print method will print the estimates with 95% credible intervals to the console; adding scale = 100e3 will display rates per 100,000:
print(fit, scale = 100e3)
#> Summary of surveil model results
#> Time periods: 19
#> Grouping variable: Race
#> Correlation matrix: FALSE
#> time Race mean lwr_2.5 upr_97.5
#> 1 1999 Black or African American 170.26676 159.05719 182.65431
#> 2 2000 Black or African American 166.67698 156.09374 177.11090
#> 3 2001 Black or African American 168.53196 158.95582 178.33056
#> 4 2002 Black or African American 169.26216 160.01883 179.13748
#> 5 2003 Black or African American 167.06029 157.79870 176.46654
#> 6 2004 Black or African American 166.78747 157.56047 176.75800
#> 7 2005 Black or African American 159.13270 149.99109 168.44421
#> 8 2006 Black or African American 154.63870 146.16250 163.14546
#> 9 2007 Black or African American 152.56866 144.55584 160.67305
#> 10 2008 Black or African American 149.58332 142.09718 157.78554
#> 11 2009 Black or African American 143.43951 135.72175 151.29488
#> 12 2010 Black or African American 138.79265 131.51318 146.30744
#> 13 2011 Black or African American 131.00589 123.66220 138.45438
#> 14 2012 Black or African American 125.34946 117.93502 132.33165
#> 15 2013 Black or African American 124.93224 118.70358 131.06733
#> 16 2014 Black or African American 126.18695 120.06034 132.81516
#> 17 2015 Black or African American 122.40840 115.86679 128.82619
#> 18 2016 Black or African American 121.84843 115.40250 128.15501
#> 19 2017 Black or African American 122.47092 116.27476 129.30151
#> 20 1999 Hispanic 101.48307 94.48172 108.50749
#> 21 2000 Hispanic 104.16201 98.23244 110.93826
#> 22 2001 Hispanic 102.53307 97.10501 108.20712
#> 23 2002 Hispanic 101.48689 96.11798 107.10017
#> 24 2003 Hispanic 100.48534 95.35374 105.85383
#> 25 2004 Hispanic 100.81772 95.89644 106.23758
#> 26 2005 Hispanic 98.52844 93.79392 103.52898
#> 27 2006 Hispanic 96.83265 92.20527 102.26217
#> 28 2007 Hispanic 94.08457 89.74434 98.98279
#> 29 2008 Hispanic 90.47411 85.81656 94.79601
#> 30 2009 Hispanic 88.76308 83.98257 93.13734
#> 31 2010 Hispanic 87.92103 83.58804 92.27727
#> 32 2011 Hispanic 87.36662 83.27820 91.34813
#> 33 2012 Hispanic 87.72414 84.02251 91.77388
#> 34 2013 Hispanic 87.27915 83.50333 91.23312
#> 35 2014 Hispanic 85.80091 82.11733 89.51517
#> 36 2015 Hispanic 85.84688 82.10998 89.61861
#> 37 2016 Hispanic 84.85911 80.89671 88.59965
#> 38 2017 Hispanic 85.92903 81.48484 90.52700
#> 39 1999 White 135.18132 130.33681 140.23303
#> 40 2000 White 136.58594 131.78648 141.33743
#> 41 2001 White 134.18454 129.65225 139.10978
#> 42 2002 White 130.20109 125.77211 134.66834
#> 43 2003 White 127.43923 123.01588 132.14947
#> 44 2004 White 120.06518 115.95168 124.05546
#> 45 2005 White 115.08939 111.34309 118.88814
#> 46 2006 White 109.66415 105.93979 113.43037
#> 47 2007 White 108.23398 104.65193 111.99186
#> 48 2008 White 103.44483 99.87286 106.99818
#> 49 2009 White 98.90509 95.43183 102.50306
#> 50 2010 White 96.24194 92.73942 99.83672
#> 51 2011 White 93.66670 90.55288 96.84659
#> 52 2012 White 91.13508 87.93858 94.37102
#> 53 2013 White 90.13578 87.00839 93.28901
#> 54 2014 White 91.45708 88.55699 94.39348
#> 55 2015 White 93.05731 89.84734 96.28302
#> 56 2016 White 89.05793 85.78573 92.12122
#> 57 2017 White 92.32501 88.97061 95.76867
This information is also stored in a data frame, fit$summary:
head(fit$summary)
#> time mean lwr_2.5 upr_97.5 Race Year Count
#> 1 1999 0.001702668 0.001590572 0.001826543 Black or African American 1999 471
#> 2 2000 0.001666770 0.001560937 0.001771109 Black or African American 2000 455
#> 3 2001 0.001685320 0.001589558 0.001783306 Black or African American 2001 505
#> 4 2002 0.001692622 0.001600188 0.001791375 Black or African American 2002 539
#> 5 2003 0.001670603 0.001577987 0.001764665 Black or African American 2003 546
#> 6 2004 0.001667875 0.001575605 0.001767580 Black or African American 2004 602
#> Population Crude
#> 1 270430 0.001741671
#> 2 283280 0.001606185
#> 3 298287 0.001693000
#> 4 313133 0.001721313
#> 5 329481 0.001657152
#> 6 346886 0.001735440
The fit$summary object can be used to create custom plots and tables.
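For example, a minimal sketch of a custom trend plot with ggplot2, assuming only the column names shown above:

library(ggplot2)

# posterior mean rates per 100,000 by group, with shaded 95% credible intervals
ggplot(fit$summary, aes(Year, mean * 1e5)) +
  geom_ribbon(aes(ymin = lwr_2.5 * 1e5, ymax = upr_97.5 * 1e5, fill = Race),
              alpha = 0.3) +
  geom_line(aes(color = Race)) +
  labs(y = "CRC incidence per 100,000")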
If we call plot on a fitted surveil model, we get a ggplot object depicting risk estimates with 95% credible intervals:
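# e.g. (figure not shown):
plot(fit, scale = 100e3)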
The crude incidence rates (observed values) are also plotted here as
points.
The plot method has a number of optional arguments that control its appearance. For example, the base_size argument controls the size of labels. The size of the points for the crude rates can be adjusted using size, and size = 0 removes them altogether. We can also use ggplot to add custom modifications:
fig <- plot(fit, scale = 100e3, base_size = 11, size = 0)
#> Plotted rates are per 100,000
fig +
  theme(legend.position = "right") +
  labs(title = "CRC incidence per 100,000",
       subtitle = "Texas MSAs, 50-79 y.o.")
The plot method has a style argument that controls how uncertainty is represented. The default, style = "mean_qi", shows the mean of the posterior distribution as the estimate and adds shading to depict the 95% credible interval (as above). The alternative, style = "lines", plots MCMC samples from the joint probability distribution for the estimates:
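# e.g. (figure not shown):
plot(fit, scale = 100e3, style = "lines")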
By default, M = 250 samples are plotted. The style option is available for all of the surveil plot methods. This style is sometimes helpful for visualizing multiple groups when their credible intervals overlap.
The apc method calculates percent change by period and cumulatively over time:
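# percent change estimates (the fit_pc name matches the output shown below)
fit_pc <- apc(fit)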
The object returned by apc contains two data frames. The first contains estimates of percent change from the previous period:
head(fit_pc$apc)
#> time group apc lwr upr
#> 1 1999 Black or African American 0.0000000 0.000000 0.000000
#> 2 2000 Black or African American -2.0286711 -9.220709 4.920823
#> 3 2001 Black or African American 1.1841054 -5.260767 8.454766
#> 4 2002 Black or African American 0.4872353 -5.650314 7.348711
#> 5 2003 Black or African American -1.2474414 -7.324993 5.317912
#> 6 2004 Black or African American -0.1105692 -6.469704 6.471695
Those estimates typically have high uncertainty.
The second data frame contains estimates of cumulative percent change (since the first observed period):
head(fit_pc$cpc)
#> time group cpc lwr upr
#> 1 1999 Black or African American 0.0000000 0.000000 0.000000
#> 2 2000 Black or African American -2.0286711 -9.220709 4.920823
#> 3 2001 Black or African American -0.9075888 -8.884128 7.050956
#> 4 2002 Black or African American -0.4677430 -8.623916 9.191096
#> 5 2003 Black or African American -1.7583879 -10.296909 7.406061
#> 6 2004 Black or African American -1.9157764 -10.165109 7.570540
Each value in the cpc column is an estimate of the difference in incidence rates between that year and the first year (in this case, 1999), expressed as a percent of the first year’s rate. The lwr and upr columns are the lower and upper bounds of the 95% credible intervals for the estimates.
This information can also be plotted:
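A sketch of the call (figure not shown; it is assumed here that the plot method accepts apc results without further arguments):

plot(fit_pc)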
If desired, the average annual percent change from the first period can be calculated by dividing the cumulative percent change (CPC) by the appropriate number of periods. For example, the CPC from 1999 to 2017 for whites is about $-31$, for an average annual percent change of about $-31/18 \approx -1.72$. The credible intervals for the average annual percent change can also be obtained from the CPC table using the same method. For this example, the correct denominator is $2017 - 1999 = 18$ (generally: the last year minus the first year).
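A hypothetical sketch of that calculation, using dplyr on the cpc table (column names as shown above):

library(dplyr)

# divide the cumulative percent change (and its interval bounds) by the
# number of elapsed years to get the average annual percent change
fit_pc$cpc |>
  filter(group == "White", time == 2017) |>
  mutate(across(c(cpc, lwr, upr), ~ .x / (2017 - 1999)))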
If you do not see any warnings printed to the R console at the end of the model fitting process, then you can simply move forward with the analysis. If there is a warning, it may say that the effective sample size is low or that the R-hat values are large. For a crash course on MCMC analysis with surveil, including MCMC diagnostics, see vignette("surveil-mcmc").
A quick and dirty summary is that you want to watch two key diagnostics: effective MCMC sample size (ESS) and R-hat. For all your parameters of interest, ESS should be at least 400 or so. If you want to increase the ESS, use the iter argument to draw more samples. The R-hat values should all be pretty near to one, such as within the range 1 ± .02. For the simple models we are using here, large R-hat values can often be fixed just by drawing more samples.
You can find these diagnostics by printing the results with print(fit$samples) and inspecting the columns n_eff (for ESS) and Rhat.
The MCMC sampling is generally fast and without trouble when the numbers are not too small (as in this example model). When the incidence rates are based on small numbers (like counts less than 20), the sampling often proceeds more slowly and a higher iter value may be needed.
In most applications, the base model specification described above will be entirely sufficient. However, surveil provides an option for users to add a correlation structure to the model when multiple groups are modeled together. For correlated trends, this can increase the precision of estimates.
The log-rates for $k$ populations, $\phi_t$, are assigned a multivariate normal distribution (Brandt and Williams 2007):

$$\phi_t \sim \text{Gau}(\phi_{t-1}, \Sigma),$$

where $\Sigma$ is a $k \times k$ covariance matrix.

The covariance matrix can be decomposed into a diagonal matrix containing scale parameters for each variable, $\Delta = \text{diag}(\tau_1, \dots, \tau_k)$, and a symmetric correlation matrix, $\Omega$ (Stan Development Team 2021):

$$\Sigma = \Delta \Omega \Delta$$

When the correlation structure is added to the model, a prior distribution is also required for the correlation matrix. surveil uses the LKJ model, which has a single shape parameter, $\eta$ (Stan Development Team 2021). If $\eta = 1$, the LKJ model places uniform prior probability on any $k \times k$ correlation matrix; as $\eta$ increases from one, it expresses ever greater skepticism towards large correlations. When $\eta < 1$, the LKJ model becomes ‘concave’, expressing skepticism towards correlations of zero.
If we wanted to add a correlation structure to the model, we would add cor = TRUE to stan_rw, as in:
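# the fit_cor object name is illustrative
fit_cor <- stan_rw(msa2, time = Year, group = Race, cor = TRUE)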