```
dist_fit(
values = NULL,
samples = NULL,
cores = 1,
chains = 2,
dist = "exp",
verbose = FALSE
)
```

- values
Numeric vector of values

- samples
Numeric, number of posterior samples to take

- cores
Numeric, defaults to 1. Number of CPU cores to use (no effect if greater than the number of chains).

- chains
Numeric, defaults to 2. Number of MCMC chains to use; more is better, with two as the minimum.

- dist
Character string, which distribution to fit. Defaults to exponential (`"exp"`), but gamma (`"gamma"`) and lognormal (`"lognormal"`) are also supported.

- verbose
Logical, defaults to FALSE. Should verbose progress messages be printed.

A `stan` fit of an interval-censored distribution
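The returned object is a standard `rstan` `stanfit`, so posterior draws can be inspected with the usual `rstan` tools. A minimal sketch (parameter names such as `lambda` follow the fitted model, as in the exponential example below):

```r
# fit an exponential distribution to simulated delay data
fit <- dist_fit(rexp(100, 2), samples = 1000, dist = "exp")

# extract posterior draws for the rate parameter
lambda_draws <- rstan::extract(fit)$lambda
mean(lambda_draws) # posterior mean of the rate

# or summarise all parameters at once
rstan::summary(fit)$summary
```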

```
# integer adjusted exponential model
dist_fit(rexp(1:100, 2),
samples = 1000, dist = "exp",
cores = ifelse(interactive(), 4, 1), verbose = TRUE
)
#>
#> SAMPLING FOR MODEL 'exp' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 3.1e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.31 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 50 / 2000 [ 2%] (Warmup)
#> Chain 1: Iteration: 100 / 2000 [ 5%] (Warmup)
#> Chain 1: Iteration: 150 / 2000 [ 7%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 250 / 2000 [ 12%] (Warmup)
#> Chain 1: Iteration: 300 / 2000 [ 15%] (Warmup)
#> Chain 1: Iteration: 350 / 2000 [ 17%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 450 / 2000 [ 22%] (Warmup)
#> Chain 1: Iteration: 500 / 2000 [ 25%] (Warmup)
#> Chain 1: Iteration: 550 / 2000 [ 27%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 650 / 2000 [ 32%] (Warmup)
#> Chain 1: Iteration: 700 / 2000 [ 35%] (Warmup)
#> Chain 1: Iteration: 750 / 2000 [ 37%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 850 / 2000 [ 42%] (Warmup)
#> Chain 1: Iteration: 900 / 2000 [ 45%] (Warmup)
#> Chain 1: Iteration: 950 / 2000 [ 47%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1050 / 2000 [ 52%] (Sampling)
#> Chain 1: Iteration: 1100 / 2000 [ 55%] (Sampling)
#> Chain 1: Iteration: 1150 / 2000 [ 57%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1250 / 2000 [ 62%] (Sampling)
#> Chain 1: Iteration: 1300 / 2000 [ 65%] (Sampling)
#> Chain 1: Iteration: 1350 / 2000 [ 67%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1450 / 2000 [ 72%] (Sampling)
#> Chain 1: Iteration: 1500 / 2000 [ 75%] (Sampling)
#> Chain 1: Iteration: 1550 / 2000 [ 77%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1650 / 2000 [ 82%] (Sampling)
#> Chain 1: Iteration: 1700 / 2000 [ 85%] (Sampling)
#> Chain 1: Iteration: 1750 / 2000 [ 87%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 1850 / 2000 [ 92%] (Sampling)
#> Chain 1: Iteration: 1900 / 2000 [ 95%] (Sampling)
#> Chain 1: Iteration: 1950 / 2000 [ 97%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.084308 seconds (Warm-up)
#> Chain 1: 0.09373 seconds (Sampling)
#> Chain 1: 0.178038 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL 'exp' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 2.5e-05 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.25 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 2: Iteration: 50 / 2000 [ 2%] (Warmup)
#> Chain 2: Iteration: 100 / 2000 [ 5%] (Warmup)
#> Chain 2: Iteration: 150 / 2000 [ 7%] (Warmup)
#> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 2: Iteration: 250 / 2000 [ 12%] (Warmup)
#> Chain 2: Iteration: 300 / 2000 [ 15%] (Warmup)
#> Chain 2: Iteration: 350 / 2000 [ 17%] (Warmup)
#> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 2: Iteration: 450 / 2000 [ 22%] (Warmup)
#> Chain 2: Iteration: 500 / 2000 [ 25%] (Warmup)
#> Chain 2: Iteration: 550 / 2000 [ 27%] (Warmup)
#> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 2: Iteration: 650 / 2000 [ 32%] (Warmup)
#> Chain 2: Iteration: 700 / 2000 [ 35%] (Warmup)
#> Chain 2: Iteration: 750 / 2000 [ 37%] (Warmup)
#> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 2: Iteration: 850 / 2000 [ 42%] (Warmup)
#> Chain 2: Iteration: 900 / 2000 [ 45%] (Warmup)
#> Chain 2: Iteration: 950 / 2000 [ 47%] (Warmup)
#> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 2: Iteration: 1050 / 2000 [ 52%] (Sampling)
#> Chain 2: Iteration: 1100 / 2000 [ 55%] (Sampling)
#> Chain 2: Iteration: 1150 / 2000 [ 57%] (Sampling)
#> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 2: Iteration: 1250 / 2000 [ 62%] (Sampling)
#> Chain 2: Iteration: 1300 / 2000 [ 65%] (Sampling)
#> Chain 2: Iteration: 1350 / 2000 [ 67%] (Sampling)
#> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 2: Iteration: 1450 / 2000 [ 72%] (Sampling)
#> Chain 2: Iteration: 1500 / 2000 [ 75%] (Sampling)
#> Chain 2: Iteration: 1550 / 2000 [ 77%] (Sampling)
#> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 2: Iteration: 1650 / 2000 [ 82%] (Sampling)
#> Chain 2: Iteration: 1700 / 2000 [ 85%] (Sampling)
#> Chain 2: Iteration: 1750 / 2000 [ 87%] (Sampling)
#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 2: Iteration: 1850 / 2000 [ 92%] (Sampling)
#> Chain 2: Iteration: 1900 / 2000 [ 95%] (Sampling)
#> Chain 2: Iteration: 1950 / 2000 [ 97%] (Sampling)
#> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 2:
#> Chain 2: Elapsed Time: 0.081272 seconds (Warm-up)
#> Chain 2: 0.095962 seconds (Sampling)
#> Chain 2: 0.177234 seconds (Total)
#> Chain 2:
#> Inference for Stan model: exp.
#> 2 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=2000.
#>
#>          mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
#> lambda   2.33    0.01 0.33   1.77   2.09   2.32   2.53   3.08   718 1.00
#> lp__   -19.74    0.03 0.65 -21.65 -19.85 -19.51 -19.35 -19.30   621 1.01
#>
#> Samples were drawn using NUTS(diag_e) at Mon Mar 28 01:56:06 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
# integer adjusted gamma model
dist_fit(rgamma(1:100, 5, 5),
samples = 1000, dist = "gamma",
cores = ifelse(interactive(), 4, 1), verbose = TRUE
)
#>
#> SAMPLING FOR MODEL 'gamma' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0.000259 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2.59 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 50 / 2000 [ 2%] (Warmup)
#> Chain 1: Iteration: 100 / 2000 [ 5%] (Warmup)
#> Chain 1: Iteration: 150 / 2000 [ 7%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 250 / 2000 [ 12%] (Warmup)
#> Chain 1: Iteration: 300 / 2000 [ 15%] (Warmup)
#> Chain 1: Iteration: 350 / 2000 [ 17%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 450 / 2000 [ 22%] (Warmup)
#> Chain 1: Iteration: 500 / 2000 [ 25%] (Warmup)
#> Chain 1: Iteration: 550 / 2000 [ 27%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 650 / 2000 [ 32%] (Warmup)
#> Chain 1: Iteration: 700 / 2000 [ 35%] (Warmup)
#> Chain 1: Iteration: 750 / 2000 [ 37%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 850 / 2000 [ 42%] (Warmup)
#> Chain 1: Iteration: 900 / 2000 [ 45%] (Warmup)
#> Chain 1: Iteration: 950 / 2000 [ 47%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1050 / 2000 [ 52%] (Sampling)
#> Chain 1: Iteration: 1100 / 2000 [ 55%] (Sampling)
#> Chain 1: Iteration: 1150 / 2000 [ 57%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1250 / 2000 [ 62%] (Sampling)
#> Chain 1: Iteration: 1300 / 2000 [ 65%] (Sampling)
#> Chain 1: Iteration: 1350 / 2000 [ 67%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1450 / 2000 [ 72%] (Sampling)
#> Chain 1: Iteration: 1500 / 2000 [ 75%] (Sampling)
#> Chain 1: Iteration: 1550 / 2000 [ 77%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1650 / 2000 [ 82%] (Sampling)
#> Chain 1: Iteration: 1700 / 2000 [ 85%] (Sampling)
#> Chain 1: Iteration: 1750 / 2000 [ 87%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 1850 / 2000 [ 92%] (Sampling)
#> Chain 1: Iteration: 1900 / 2000 [ 95%] (Sampling)
#> Chain 1: Iteration: 1950 / 2000 [ 97%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 1.59849 seconds (Warm-up)
#> Chain 1: 1.43623 seconds (Sampling)
#> Chain 1: 3.03472 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL 'gamma' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 0.000184 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.84 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 2: Iteration: 50 / 2000 [ 2%] (Warmup)
#> Chain 2: Iteration: 100 / 2000 [ 5%] (Warmup)
#> Chain 2: Iteration: 150 / 2000 [ 7%] (Warmup)
#> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 2: Iteration: 250 / 2000 [ 12%] (Warmup)
#> Chain 2: Iteration: 300 / 2000 [ 15%] (Warmup)
#> Chain 2: Iteration: 350 / 2000 [ 17%] (Warmup)
#> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 2: Iteration: 450 / 2000 [ 22%] (Warmup)
#> Chain 2: Iteration: 500 / 2000 [ 25%] (Warmup)
#> Chain 2: Iteration: 550 / 2000 [ 27%] (Warmup)
#> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 2: Iteration: 650 / 2000 [ 32%] (Warmup)
#> Chain 2: Iteration: 700 / 2000 [ 35%] (Warmup)
#> Chain 2: Iteration: 750 / 2000 [ 37%] (Warmup)
#> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 2: Iteration: 850 / 2000 [ 42%] (Warmup)
#> Chain 2: Iteration: 900 / 2000 [ 45%] (Warmup)
#> Chain 2: Iteration: 950 / 2000 [ 47%] (Warmup)
#> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 2: Iteration: 1050 / 2000 [ 52%] (Sampling)
#> Chain 2: Iteration: 1100 / 2000 [ 55%] (Sampling)
#> Chain 2: Iteration: 1150 / 2000 [ 57%] (Sampling)
#> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 2: Iteration: 1250 / 2000 [ 62%] (Sampling)
#> Chain 2: Iteration: 1300 / 2000 [ 65%] (Sampling)
#> Chain 2: Iteration: 1350 / 2000 [ 67%] (Sampling)
#> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 2: Iteration: 1450 / 2000 [ 72%] (Sampling)
#> Chain 2: Iteration: 1500 / 2000 [ 75%] (Sampling)
#> Chain 2: Iteration: 1550 / 2000 [ 77%] (Sampling)
#> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 2: Iteration: 1650 / 2000 [ 82%] (Sampling)
#> Chain 2: Iteration: 1700 / 2000 [ 85%] (Sampling)
#> Chain 2: Iteration: 1750 / 2000 [ 87%] (Sampling)
#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 2: Iteration: 1850 / 2000 [ 92%] (Sampling)
#> Chain 2: Iteration: 1900 / 2000 [ 95%] (Sampling)
#> Chain 2: Iteration: 1950 / 2000 [ 97%] (Sampling)
#> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 2:
#> Chain 2: Elapsed Time: 1.53288 seconds (Warm-up)
#> Chain 2: 1.71865 seconds (Sampling)
#> Chain 2: 3.25153 seconds (Total)
#> Chain 2:
#> Inference for Stan model: gamma.
#> 2 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=2000.
#>
#>            mean se_mean   sd   2.5%    25%   50%   75% 97.5% n_eff Rhat
#> alpha_raw  0.87    0.02 0.50   0.07   0.49  0.83  1.20  1.97   436    1
#> beta_raw   0.90    0.02 0.53   0.07   0.49  0.85  1.24  2.04   527    1
#> alpha      7.03    0.02 0.50   6.23   6.66  6.99  7.37  8.13   436    1
#> beta       7.43    0.02 0.53   6.61   7.02  7.39  7.78  8.58   527    1
#> mu         0.95    0.00 0.07   0.82   0.90  0.95  1.00  1.08  1079    1
#> sigma      0.36    0.00 0.02   0.32   0.34  0.36  0.37  0.40   887    1
#> lp__      -9.99    0.08 1.33 -13.47 -10.52 -9.58 -9.08 -8.71   285    1
#>
#> Samples were drawn using NUTS(diag_e) at Mon Mar 28 01:56:12 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
# integer adjusted lognormal model
dist_fit(rlnorm(1:100, log(5), 0.2),
samples = 1000, dist = "lognormal",
cores = ifelse(interactive(), 4, 1), verbose = TRUE
)
#>
#> SAMPLING FOR MODEL 'lnorm' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 3.8e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.38 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 50 / 2000 [ 2%] (Warmup)
#> Chain 1: Iteration: 100 / 2000 [ 5%] (Warmup)
#> Chain 1: Iteration: 150 / 2000 [ 7%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 250 / 2000 [ 12%] (Warmup)
#> Chain 1: Iteration: 300 / 2000 [ 15%] (Warmup)
#> Chain 1: Iteration: 350 / 2000 [ 17%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 450 / 2000 [ 22%] (Warmup)
#> Chain 1: Iteration: 500 / 2000 [ 25%] (Warmup)
#> Chain 1: Iteration: 550 / 2000 [ 27%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 650 / 2000 [ 32%] (Warmup)
#> Chain 1: Iteration: 700 / 2000 [ 35%] (Warmup)
#> Chain 1: Iteration: 750 / 2000 [ 37%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 850 / 2000 [ 42%] (Warmup)
#> Chain 1: Iteration: 900 / 2000 [ 45%] (Warmup)
#> Chain 1: Iteration: 950 / 2000 [ 47%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1050 / 2000 [ 52%] (Sampling)
#> Chain 1: Iteration: 1100 / 2000 [ 55%] (Sampling)
#> Chain 1: Iteration: 1150 / 2000 [ 57%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1250 / 2000 [ 62%] (Sampling)
#> Chain 1: Iteration: 1300 / 2000 [ 65%] (Sampling)
#> Chain 1: Iteration: 1350 / 2000 [ 67%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1450 / 2000 [ 72%] (Sampling)
#> Chain 1: Iteration: 1500 / 2000 [ 75%] (Sampling)
#> Chain 1: Iteration: 1550 / 2000 [ 77%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1650 / 2000 [ 82%] (Sampling)
#> Chain 1: Iteration: 1700 / 2000 [ 85%] (Sampling)
#> Chain 1: Iteration: 1750 / 2000 [ 87%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 1850 / 2000 [ 92%] (Sampling)
#> Chain 1: Iteration: 1900 / 2000 [ 95%] (Sampling)
#> Chain 1: Iteration: 1950 / 2000 [ 97%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.19194 seconds (Warm-up)
#> Chain 1: 0.176327 seconds (Sampling)
#> Chain 1: 0.368267 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL 'lnorm' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 3.6e-05 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.36 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 2: Iteration: 50 / 2000 [ 2%] (Warmup)
#> Chain 2: Iteration: 100 / 2000 [ 5%] (Warmup)
#> Chain 2: Iteration: 150 / 2000 [ 7%] (Warmup)
#> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 2: Iteration: 250 / 2000 [ 12%] (Warmup)
#> Chain 2: Iteration: 300 / 2000 [ 15%] (Warmup)
#> Chain 2: Iteration: 350 / 2000 [ 17%] (Warmup)
#> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 2: Iteration: 450 / 2000 [ 22%] (Warmup)
#> Chain 2: Iteration: 500 / 2000 [ 25%] (Warmup)
#> Chain 2: Iteration: 550 / 2000 [ 27%] (Warmup)
#> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 2: Iteration: 650 / 2000 [ 32%] (Warmup)
#> Chain 2: Iteration: 700 / 2000 [ 35%] (Warmup)
#> Chain 2: Iteration: 750 / 2000 [ 37%] (Warmup)
#> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 2: Iteration: 850 / 2000 [ 42%] (Warmup)
#> Chain 2: Iteration: 900 / 2000 [ 45%] (Warmup)
#> Chain 2: Iteration: 950 / 2000 [ 47%] (Warmup)
#> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 2: Iteration: 1050 / 2000 [ 52%] (Sampling)
#> Chain 2: Iteration: 1100 / 2000 [ 55%] (Sampling)
#> Chain 2: Iteration: 1150 / 2000 [ 57%] (Sampling)
#> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 2: Iteration: 1250 / 2000 [ 62%] (Sampling)
#> Chain 2: Iteration: 1300 / 2000 [ 65%] (Sampling)
#> Chain 2: Iteration: 1350 / 2000 [ 67%] (Sampling)
#> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 2: Iteration: 1450 / 2000 [ 72%] (Sampling)
#> Chain 2: Iteration: 1500 / 2000 [ 75%] (Sampling)
#> Chain 2: Iteration: 1550 / 2000 [ 77%] (Sampling)
#> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 2: Iteration: 1650 / 2000 [ 82%] (Sampling)
#> Chain 2: Iteration: 1700 / 2000 [ 85%] (Sampling)
#> Chain 2: Iteration: 1750 / 2000 [ 87%] (Sampling)
#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 2: Iteration: 1850 / 2000 [ 92%] (Sampling)
#> Chain 2: Iteration: 1900 / 2000 [ 95%] (Sampling)
#> Chain 2: Iteration: 1950 / 2000 [ 97%] (Sampling)
#> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 2:
#> Chain 2: Elapsed Time: 0.18961 seconds (Warm-up)
#> Chain 2: 0.195892 seconds (Sampling)
#> Chain 2: 0.385502 seconds (Total)
#> Chain 2:
#> Inference for Stan model: lnorm.
#> 2 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=2000.
#>
#>         mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
#> mu      1.63    0.00 0.02   1.59   1.62   1.63   1.64   1.67  1510    1
#> sigma   0.16    0.00 0.02   0.13   0.15   0.16   0.18   0.20  1117    1
#> lp__  -75.96    0.03 0.95 -78.47 -76.34 -75.67 -75.29 -75.04   927    1
#>
#> Samples were drawn using NUTS(diag_e) at Mon Mar 28 01:56:13 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
```