Calculate an estimate of the true prevalence from the apparent prevalence and uncertain estimates of test sensitivity and specificity, using one of three methods.
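
The "rogan-gladen" method is based on the classical Rogan-Gladen correction, which adjusts the apparent prevalence for imperfect test sensitivity and specificity. A minimal base-R sketch of the point estimate only (the package additionally propagates the uncertainty in sensitivity and specificity):

apparent <- 20 / 200                  # apparent prevalence: positives / observations
sens     <- 1 - 25 / 75               # point estimate from the confirmed-disease group
spec     <- 1 - 2 / 800               # point estimate from the disease-free control group
true_hat <- (apparent + spec - 1) / (sens + spec - 1)
true_hat <- min(max(true_hat, 0), 1)  # truncate to the unit interval
true_hat                              # roughly 0.147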

Usage

true_prevalence(
  pos_obs,
  n_obs,
  false_pos_controls = NULL,
  n_controls = NULL,
  false_neg_diseased = NULL,
  n_diseased = NULL,
  confint = 0.95,
  method = c("lang-reiczigel", "rogan-gladen", "bayes"),
  ...,
  spec = NULL,
  sens = NULL
)

Arguments

pos_obs

the number of positive observations for a given test

n_obs

the number of observations for a given test

false_pos_controls

the number of positives observed in the disease-free control group used to estimate specificity. These are by definition false positives, i.e. (1 - specificity) * n_controls.

n_controls

the number of subjects in the disease-free control group used to estimate specificity.

false_neg_diseased

the number of negatives observed in the confirmed-disease group used to estimate sensitivity. These are by definition false negatives, i.e. (1 - sensitivity) * n_diseased.

n_diseased

the number of confirmed disease cases in the group used to estimate sensitivity.

confint

the confidence level for the interval estimates (e.g. 0.95 for a 95% interval)

method

one of "lang-reiczigel" (the default), "rogan-gladen" or "bayes"

...

Arguments passed on to uncertain_rogan_gladen

samples

the number of random draws of sensitivity and specificity

fmt

a sprintf formatting string accepting 3 numbers

seed

set seed for reproducibility

spec

the prior specificity of the test as a beta_dist.

sens

the prior sensitivity of the test as a beta_dist.
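
The count-based arguments and the beta_dist priors both describe uncertainty in the test characteristics. A hedged base-R sketch of the underlying beta-binomial relationship (the package's own beta_dist constructor and its exact parameterisation are assumptions to check against its documentation):

# Under a uniform Beta(1, 1) prior, the specificity control counts imply a
# Beta(true negatives + 1, false positives + 1) posterior for specificity.
n_controls <- 800
false_pos_controls <- 2
spec_draws <- rbeta(10000,
                    shape1 = n_controls - false_pos_controls + 1,
                    shape2 = false_pos_controls + 1)
quantile(spec_draws, c(0.025, 0.5, 0.975))  # specificity estimate with uncertainty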

Value

A dataframe containing the following columns:

  • test (character) - the name of the test or panel

  • prevalence.lower (numeric) - the lower estimate

  • prevalence.median (numeric) - the median estimate

  • prevalence.upper (numeric) - the upper estimate

  • prevalence.method (character) - the method of estimation

  • prevalence.label (character) - a formatted label of the true prevalence estimate with its confidence interval

Ungrouped.

No default value.

Examples

true_prevalence(c(1:50), 200, 2, 800, 25, 75)
#> # A tibble: 50 × 15
#>    prevalence.lower prevalence.median prevalence.upper prevalence.method
#>               <dbl>             <dbl>            <dbl> <chr>            
#>  1          0                 0.00373           0.0420 lang-reiczigel   
#>  2          0                 0.0112            0.0534 lang-reiczigel   
#>  3          0                 0.0188            0.0644 lang-reiczigel   
#>  4          0.00313           0.0263            0.0751 lang-reiczigel   
#>  5          0.00787           0.0338            0.0856 lang-reiczigel   
#>  6          0.0128            0.0413            0.0959 lang-reiczigel   
#>  7          0.0179            0.0488            0.106  lang-reiczigel   
#>  8          0.0231            0.0564            0.116  lang-reiczigel   
#>  9          0.0284            0.0639            0.126  lang-reiczigel   
#> 10          0.0338            0.0714            0.136  lang-reiczigel   
#> # ℹ 40 more rows
#> # ℹ 11 more variables: prevalence.label <chr>, spec.median <dbl>,
#> #   spec.lower <dbl>, spec.upper <dbl>, spec.label <chr>, sens.median <dbl>,
#> #   sens.lower <dbl>, sens.upper <dbl>, sens.label <chr>, pos_obs <int>,
#> #   n_obs <dbl>
true_prevalence(c(1:10)*2, 200, 25, 800, 1, 6, method="rogan-gladen")
#> # A tibble: 10 × 15
#>    prevalence.median prevalence.lower prevalence.upper prevalence.label      
#>                <dbl>            <dbl>            <dbl> <chr>                 
#>  1            0               0                 0      0.00% [0.00% — 0.00%] 
#>  2            0               0                 0      0.00% [0.00% — 0.00%] 
#>  3            0               0                 0.0127 0.00% [0.00% — 1.27%] 
#>  4            0.0117          0                 0.0288 1.17% [0.00% — 2.88%] 
#>  5            0.0244          0.00755           0.0498 2.44% [0.75% — 4.98%] 
#>  6            0.0371          0.0220            0.0677 3.71% [2.20% — 6.77%] 
#>  7            0.0495          0.0304            0.0930 4.95% [3.04% — 9.30%] 
#>  8            0.0624          0.0433            0.105  6.24% [4.33% — 10.50%]
#>  9            0.0747          0.0562            0.131  7.47% [5.62% — 13.09%]
#> 10            0.0881          0.0659            0.157  8.81% [6.59% — 15.73%]
#> # ℹ 11 more variables: spec.median <dbl>, spec.lower <dbl>, spec.upper <dbl>,
#> #   spec.label <chr>, sens.median <dbl>, sens.lower <dbl>, sens.upper <dbl>,
#> #   sens.label <chr>, prevalence.method <chr>, pos_obs <dbl>, n_obs <dbl>
true_prevalence(c(1:10)*2, 200, 5, 800, 1, 6, method="bayes")
#> 
#> SAMPLING FOR MODEL 'component-logit' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 1.4e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 0.088169 seconds (Warm-up)
#> Chain 1:                0.083953 seconds (Sampling)
#> Chain 1:                0.172122 seconds (Total)
#> Chain 1: 
#> 
#> SAMPLING FOR MODEL 'component-logit' NOW (CHAIN 2).
#> Chain 2: 
#> Chain 2: Gradient evaluation took 9e-06 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2: 
#> Chain 2: 
#> Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 2: 
#> Chain 2:  Elapsed Time: 0.078706 seconds (Warm-up)
#> Chain 2:                0.078158 seconds (Sampling)
#> Chain 2:                0.156864 seconds (Total)
#> Chain 2: 
#> 
#> SAMPLING FOR MODEL 'component-logit' NOW (CHAIN 3).
#> Chain 3: 
#> Chain 3: Gradient evaluation took 9e-06 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3: 
#> Chain 3: 
#> Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 3: 
#> Chain 3:  Elapsed Time: 0.076819 seconds (Warm-up)
#> Chain 3:                0.080375 seconds (Sampling)
#> Chain 3:                0.157194 seconds (Total)
#> Chain 3: 
#> 
#> SAMPLING FOR MODEL 'component-logit' NOW (CHAIN 4).
#> Chain 4: 
#> Chain 4: Gradient evaluation took 1e-05 seconds
#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
#> Chain 4: Adjust your expectations accordingly!
#> Chain 4: 
#> Chain 4: 
#> Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 4: 
#> Chain 4:  Elapsed Time: 0.082852 seconds (Warm-up)
#> Chain 4:                0.077246 seconds (Sampling)
#> Chain 4:                0.160098 seconds (Total)
#> Chain 4: 
#> # A tibble: 10 × 15
#>    prevalence.median prevalence.lower prevalence.upper prevalence.label       
#>                <dbl>            <dbl>            <dbl> <chr>                  
#>  1           0.00217        0.0000214           0.0210 0.22% [0.00% — 2.10%]  
#>  2           0.0128         0.000546            0.0439 1.28% [0.05% — 4.39%]  
#>  3           0.0250         0.00408             0.0657 2.50% [0.41% — 6.57%]  
#>  4           0.0373         0.0117              0.0846 3.73% [1.17% — 8.46%]  
#>  5           0.0497         0.0208              0.104  4.97% [2.08% — 10.43%] 
#>  6           0.0625         0.0292              0.124  6.25% [2.92% — 12.35%] 
#>  7           0.0751         0.0378              0.144  7.51% [3.78% — 14.40%] 
#>  8           0.0873         0.0488              0.161  8.73% [4.88% — 16.10%] 
#>  9           0.0982         0.0574              0.173  9.82% [5.74% — 17.28%] 
#> 10           0.110          0.0651              0.192  11.02% [6.51% — 19.20%]
#> # ℹ 11 more variables: prevalence.method <chr>, sens.median <dbl>,
#> #   sens.lower <dbl>, sens.upper <dbl>, sens.label <chr>, spec.median <dbl>,
#> #   spec.lower <dbl>, spec.upper <dbl>, spec.label <chr>, pos_obs <dbl>,
#> #   n_obs <dbl>
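
Because the function returns one row per element of pos_obs, the result can be passed straight to further plotting or summarising. A hedged usage sketch (assumes ggplot2 is installed; column names as documented under Value):

library(ggplot2)
est <- true_prevalence(1:50, 200, 2, 800, 25, 75)
ggplot(est, aes(x = pos_obs, y = prevalence.median)) +
  geom_ribbon(aes(ymin = prevalence.lower, ymax = prevalence.upper), alpha = 0.2) +
  geom_line() +
  labs(x = "observed positives", y = "estimated true prevalence")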