Last updated: 2019-02-24

workflowr checks:
  • R Markdown file: up-to-date

    Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

  • Environment: empty

    Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

  • Seed: set.seed(20190115)

    The command set.seed(20190115) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

  • Session information: recorded

    Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

  • Repository version: ad53680

    Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

    Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
    
    Ignored files:
        Ignored:    .DS_Store
        Ignored:    .Rhistory
        Ignored:    .Rproj.user/
        Ignored:    .sos/
        Ignored:    data/.DS_Store
        Ignored:    output/.DS_Store
    
    Untracked files:
        Untracked:  docs/figure/test.Rmd/
        Untracked:  output/dscoutProblem475.rds
        Untracked:  output/dscoutProblem75.rds
        Untracked:  output/finemap_compare_random_data_null_dscout.rds
        Untracked:  output/finemap_compare_random_data_signal_dscout.rds
        Untracked:  output/finemap_compare_small_data_signal_dscout.rds
        Untracked:  output/finemap_compare_small_data_signal_dscout_RE8.rds
        Untracked:  output/random_data_100_sim_gaussian_null_1_get_sumstats_1_finemap_1.rds
        Untracked:  output/random_data_76.rds
        Untracked:  output/random_data_76_sim_gaussian_8.rds
        Untracked:  output/random_data_76_sim_gaussian_8_get_sumstats_1.rds
        Untracked:  output/small_data_42_sim_gaussian_36_get_sumstats_2_susie_z_2.rds
        Untracked:  output/small_data_92_sim_gaussian_30_get_sumstats_2_susie_z_2.rds
    
    Unstaged changes:
        Modified:   analysis/SusieZPerformanceRE3.Rmd
        Modified:   analysis/SusieZPerformanceRE8.Rmd
        Modified:   output/dsc_susie_z_v_output.rds
    
    
    Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
Past versions:
    File Version Author Date Message
    Rmd ad53680 zouyuxin 2019-02-24 wflow_publish(“analysis/SusieZPerformance.Rmd”)
    html 772c0ea zouyuxin 2019-02-21 Build site.
    Rmd ffb9923 zouyuxin 2019-02-21 wflow_publish(“analysis/SusieZPerformance.Rmd”)
    html e5b64af zouyuxin 2019-02-21 Build site.
    Rmd 36f019f zouyuxin 2019-02-21 wflow_publish(“analysis/SusieZPerformance.Rmd”)
    html ac3569d zouyuxin 2019-02-21 Build site.
    Rmd ab29e7c zouyuxin 2019-02-21 wflow_publish(“analysis/SusieZPerformance.Rmd”)


The credible-set output from SuSiE and DAP is directly comparable: each credible set contains a causal variable with at least 95% posterior probability. FINEMAP, however, does not report credible sets for individual signals, so we construct a credible set of causal signals ourselves: the union of the variables appearing in the smallest set of causal configurations whose cumulative posterior probability reaches 95%.
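The construction in the last sentence can be sketched in base R; the configurations and posterior probabilities below are hypothetical toy values (in practice they would come from FINEMAP output):

```r
# Toy causal configurations (sets of variable indices) and their
# posterior probabilities; hypothetical values for illustration.
configs <- list(c(3), c(3, 7), c(7), c(3, 7, 12), c(12))
probs   <- c(0.50, 0.25, 0.15, 0.06, 0.04)

# Take the smallest set of top-probability configurations whose cumulative
# posterior reaches 95%, then union their variables to form the credible set.
ord  <- order(probs, decreasing = TRUE)
keep <- ord[seq_len(which(cumsum(probs[ord]) >= 0.95)[1])]
cs   <- sort(unique(unlist(configs[keep])))
cs  # variables 3, 7, 12
```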

We randomly generated a 1200 × 1000 matrix X, with each entry drawn independently from N(0,1).

Simulation under the null

We randomly generate 100 null responses y, independent of X.
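A minimal sketch of this setup in base R (object names are my own; the actual DSC modules may differ):

```r
set.seed(20190115)  # the seed used by this analysis
n <- 1200; p <- 1000
X <- matrix(rnorm(n * p), n, p)       # entries drawn iid from N(0, 1)
Y <- matrix(rnorm(n * 100), n, 100)   # 100 null responses, independent of X
```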

library(dscrutils)
dscout = dscquery('output/finemap_compare_random_data_null',
                  target = paste('method score_susie.converged score.total',
                                 'score.valid score.size score_susie.purity',
                                 'score_dap.avgr2'),
                  group = c("score: score_susie score_finemap score_dap",
                            "method: susie_z susie_z_init finemap dap_z"))
colnames(dscout) = c('DSC', 'method', 'output.file', 'score', 'total', 'valid', 'size', 'converged', 'purity', 'avgr2')
library(dplyr)

library(knitr)
library(kableExtra)
library(susieR)
dscout = readRDS('output/finemap_compare_random_data_null_dscout.rds')
dscout.susie = dscout[dscout$method == 'susie_z',]
dscout.susie.init = dscout[dscout$method == 'susie_z_init',]
dscout.finemap = dscout[dscout$method == 'finemap',]
dscout.dap = dscout[dscout$method == 'dap_z',]
total = aggregate(total ~ method, dscout, sum)
size = aggregate(size ~ method, dscout, sum)
res = merge(total, size)
res %>% kable() %>% kable_styling()
method total size
dap_z 0 0e+00
finemap 100 1e+05
susie_z 0 0e+00
susie_z_init 0 0e+00

There are no false discoveries for SuSiE z or DAP. FINEMAP, however, assigns no posterior probability to the zero-causal configuration; every configuration carries only a tiny posterior probability, so the 95% credible set ends up containing all variables. This raises the question:

How should FINEMAP results be summarized?
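A toy illustration of why the null-case credible set blows up: when the posterior mass is spread nearly uniformly over configurations, the 95% rule unions almost every variable (base R, hypothetical flat weights):

```r
p <- 1000
# Hypothetical near-flat posterior over single-variable configurations.
w <- rep(1 / p, p)
n_kept <- which(cumsum(sort(w, decreasing = TRUE)) >= 0.95)[1]
n_kept  # roughly 950 of the 1000 variables end up in the credible set
```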

Simulation with signals

We simulate a Gaussian y while varying the number of causal variables, the total percentage of variance explained (PVE), and whether the signals have equal effects. We control the effect sizes because, if they were drawn at random, some signals would by chance have large effects and would therefore account for a disproportionate share of the PVE.

We fit SuSiE with L = 5 and FINEMAP with at most 5 causal variables.
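A sketch of one cell of this simulation in base R (my own simplification of the sim_gaussian module; names and details are assumptions): choose n_signal causal columns with equal effects, then scale the residual variance so the signals explain the target PVE.

```r
set.seed(1)
n <- 1200; p <- 1000
X <- matrix(rnorm(n * p), n, p)

n_signal <- 5; pve <- 0.2
idx  <- sample(p, n_signal)
beta <- numeric(p)
beta[idx] <- 1 / n_signal                  # equal effect sizes
mu <- as.vector(X %*% beta)

# Choose sigma^2 so that var(mu) / (var(mu) + sigma^2) = pve.
sigma2 <- var(mu) * (1 - pve) / pve
y <- mu + rnorm(n, sd = sqrt(sigma2))
```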

library(dscrutils)
dscout = dscquery('output/finemap_compare_random_data_signal',
                  target = paste('method sim_gaussian.pve sim_gaussian.n_signal',
                                 'sim_gaussian.effect_weight score_susie.objective',
                                 'score_susie.converged score.total score.valid',
                                 'score.size score.signal_pip score_susie.purity',
                                 'score_dap.avgr2 score_susie.top score_dap.top',
                                 'score_susie.overlap score_dap.overlap'),
                  group = c("score: score_susie score_finemap score_dap",
                            "method: susie_z susie_z_init finemap dap_z"))

colnames(dscout) = c('DSC', 'method', 'output.file', 'pve', 'n_signal', 'effect_weight', 'score', 'total', 'valid', 'size', 'signal_pip', 'objective', 'converged', 'purity', 'top', 'overlap', 'avgr2')
dscout$effect_weight[which(dscout$effect_weight == 'rep(1/n_signal, n_signal)')] = 'equal'
dscout$effect_weight[which(dscout$effect_weight != 'equal')] = 'notequal'
library(dplyr)
library(knitr)
library(kableExtra)
library(susieR)
dscout = readRDS('output/finemap_compare_random_data_signal_dscout.rds')
dscout.susie = dscout[dscout$method == 'susie_z',]
dscout.susie.init = dscout[dscout$method == 'susie_z_init',]
dscout.finemap = dscout[dscout$method == 'finemap',]
dscout.dap = dscout[dscout$method == 'dap_z',]
  • Size of CS:

SuSiE and DAP perform similarly; FINEMAP's credible sets are often far larger.

size.susie = aggregate(size~effect_weight+n_signal+pve, dscout.susie, mean)
colnames(size.susie)[colnames(size.susie) == 'size'] <- 'size.susie'
size.susie.init = aggregate(size~effect_weight+n_signal+pve, dscout.susie.init, mean)
colnames(size.susie.init)[colnames(size.susie.init) == 'size'] <- 'size.susie.init'
size.finemap = aggregate(size~effect_weight+n_signal+pve, dscout.finemap, mean)
colnames(size.finemap)[colnames(size.finemap) == 'size'] <- 'size.finemap'
size.dap = aggregate(size~pve+n_signal+effect_weight, dscout.dap, mean)
colnames(size.dap)[colnames(size.dap) == 'size'] <- 'size.dap'

size = Reduce(function(...) merge(...),
       list(size.susie, size.susie.init, size.dap, size.finemap))
size %>% kable() %>% kable_styling(bootstrap_options = c("striped", "condensed", "responsive"), full_width = F) 
effect_weight n_signal pve size.susie size.susie.init size.dap size.finemap
equal 1 0.05 1.00 1.00 1.00 1000.00
equal 1 0.10 1.00 1.00 1.00 1000.00
equal 1 0.20 1.00 1.00 1.00 998.30
equal 1 0.60 1.00 1.00 1.00 907.18
equal 1 0.80 1.00 1.00 1.00 941.55
equal 10 0.05 0.13 0.13 0.06 1000.00
equal 10 0.10 0.62 0.65 0.53 1000.00
equal 10 0.20 1.00 1.00 1.00 569.36
equal 10 0.60 1.00 1.00 1.00 7.57
equal 10 0.80 1.00 1.00 1.00 7.19
equal 3 0.05 0.76 0.75 0.73 1000.00
equal 3 0.10 0.99 0.99 0.99 1000.00
equal 3 0.20 1.00 1.00 1.00 999.34
equal 3 0.60 1.00 1.00 1.00 908.26
equal 3 0.80 1.00 1.00 1.00 885.09
equal 5 0.05 0.51 0.51 0.40 1000.00
equal 5 0.10 0.97 0.96 0.99 977.18
equal 5 0.20 1.00 1.00 1.00 43.75
equal 5 0.60 1.00 1.00 1.00 5.00
equal 5 0.80 1.00 1.00 1.00 5.00
notequal 1 0.05 1.00 1.00 1.00 1000.00
notequal 1 0.10 1.00 1.00 1.00 1000.00
notequal 1 0.20 1.00 1.00 1.00 998.47
notequal 1 0.60 1.00 1.00 1.00 907.11
notequal 1 0.80 1.00 1.00 1.00 941.62
notequal 10 0.05 0.99 0.99 0.98 1000.00
notequal 10 0.10 1.00 1.00 1.00 1000.00
notequal 10 0.20 1.00 1.00 1.00 1000.00
notequal 10 0.60 1.00 1.00 1.00 1000.00
notequal 10 0.80 1.00 1.00 1.00 1000.00
notequal 3 0.05 0.99 0.99 0.98 1000.00
notequal 3 0.10 1.00 1.00 1.00 1000.00
notequal 3 0.20 1.00 1.00 1.00 1000.00
notequal 3 0.60 1.00 1.00 1.00 907.02
notequal 3 0.80 1.00 1.00 1.00 890.33
notequal 5 0.05 1.00 1.00 0.99 1000.00
notequal 5 0.10 1.00 1.00 1.00 1000.00
notequal 5 0.20 1.00 1.00 1.00 1000.00
notequal 5 0.60 1.00 1.00 1.00 846.83
notequal 5 0.80 1.00 1.00 1.00 101.83
  • Purity of CS:
purity.susie = aggregate(purity~effect_weight+n_signal+pve, dscout.susie, mean)
colnames(purity.susie)[colnames(purity.susie) == 'purity'] <- 'purity.susie'
purity.susie.init = aggregate(purity~effect_weight+n_signal+pve, dscout.susie.init, mean)
colnames(purity.susie.init)[colnames(purity.susie.init) == 'purity'] <- 'purity.susie.init'
purity.dap = aggregate(avgr2~effect_weight+n_signal+pve, dscout.dap, mean)
colnames(purity.dap)[colnames(purity.dap) == 'avgr2'] <- 'avgr2.dap'

purity = Reduce(function(...) merge(...),
       list(purity.susie, purity.susie.init, purity.dap))
purity %>% kable() %>% kable_styling(bootstrap_options = c("striped", "condensed", "responsive"), full_width = F) 
effect_weight n_signal pve purity.susie purity.susie.init avgr2.dap
equal 1 0.05 1.00 1.00 1.00
equal 1 0.10 1.00 1.00 1.00
equal 1 0.20 1.00 1.00 1.00
equal 1 0.60 1.00 1.00 1.00
equal 1 0.80 1.00 1.00 1.00
equal 10 0.05 0.13 0.13 0.06
equal 10 0.10 0.62 0.65 0.53
equal 10 0.20 1.00 1.00 1.00
equal 10 0.60 1.00 1.00 1.00
equal 10 0.80 1.00 1.00 1.00
equal 3 0.05 0.76 0.75 0.73
equal 3 0.10 0.99 0.99 0.99
equal 3 0.20 1.00 1.00 1.00
equal 3 0.60 1.00 1.00 1.00
equal 3 0.80 1.00 1.00 1.00
equal 5 0.05 0.51 0.51 0.40
equal 5 0.10 0.97 0.96 0.99
equal 5 0.20 1.00 1.00 1.00
equal 5 0.60 1.00 1.00 1.00
equal 5 0.80 1.00 1.00 1.00
notequal 1 0.05 1.00 1.00 1.00
notequal 1 0.10 1.00 1.00 1.00
notequal 1 0.20 1.00 1.00 1.00
notequal 1 0.60 1.00 1.00 1.00
notequal 1 0.80 1.00 1.00 1.00
notequal 10 0.05 0.99 0.99 0.98
notequal 10 0.10 1.00 1.00 1.00
notequal 10 0.20 1.00 1.00 1.00
notequal 10 0.60 1.00 1.00 1.00
notequal 10 0.80 1.00 1.00 1.00
notequal 3 0.05 0.99 0.99 0.98
notequal 3 0.10 1.00 1.00 1.00
notequal 3 0.20 1.00 1.00 1.00
notequal 3 0.60 1.00 1.00 1.00
notequal 3 0.80 1.00 1.00 1.00
notequal 5 0.05 1.00 1.00 0.99
notequal 5 0.10 1.00 1.00 1.00
notequal 5 0.20 1.00 1.00 1.00
notequal 5 0.60 1.00 1.00 1.00
notequal 5 0.80 1.00 1.00 1.00
  • Power:
valid = aggregate(valid ~ effect_weight + n_signal + pve, dscout.susie, sum)
total = aggregate(DSC~ effect_weight + n_signal + pve, dscout.susie, length)
total$total_true = total$DSC * total$n_signal
power.susie = merge(valid, total)
power.susie$power.susie = round(power.susie$valid/(power.susie$total_true), 3)
colnames(power.susie)[colnames(power.susie) == 'valid'] <- 'valid.susie'

valid = aggregate(valid ~ effect_weight + n_signal + pve, dscout.susie.init, sum)
total = aggregate(DSC~ effect_weight + n_signal + pve, dscout.susie.init, length)
total$total_true = total$DSC * total$n_signal
power.susie.init = merge(valid, total)
power.susie.init$power.susie.init = round(power.susie.init$valid/(power.susie.init$total_true), 3)
colnames(power.susie.init)[colnames(power.susie.init) == 'valid'] <- 'valid.susie.init'

valid = aggregate(valid ~ effect_weight + n_signal + pve, dscout.dap, sum)
total = aggregate(DSC~ effect_weight + n_signal + pve, dscout.dap, length)
total$total_true = total$DSC * total$n_signal
power.dap = merge(valid, total)
power.dap$power.dap = round(power.dap$valid/(power.dap$total_true), 3)
colnames(power.dap)[colnames(power.dap) == 'valid'] <- 'valid.dap'

valid = aggregate(valid ~ effect_weight + n_signal + pve, dscout.finemap, sum)
total = aggregate(DSC ~ effect_weight + n_signal + pve, dscout.finemap, length)
total$total_true = total$DSC * total$n_signal
power.finemap = merge(valid, total)
power.finemap$power.finemap = round(power.finemap$valid/(power.finemap$total_true),3)
colnames(power.finemap)[colnames(power.finemap) == 'valid'] <- 'valid.finemap'

power = Reduce(function(...) merge(...),
       list(power.susie, power.susie.init, power.dap, power.finemap))
power = power[,-4]
power %>% kable() %>% kable_styling(bootstrap_options = c("striped", "condensed", "responsive","bordered"), full_width = F) %>% add_header_above(c(" ", " ", " "," ", "SuSiE z" = 2, "SuSiE z init" = 2,"DAP" = 2, "FINEMAP" = 2)) %>% column_spec(c(6, 8, 10, 12), bold = T)
effect_weight n_signal pve total_true valid.susie power.susie valid.susie.init power.susie.init valid.dap power.dap valid.finemap power.finemap
equal 1 0.05 100 100 1.000 100 1.000 100 1.000 100 1.000
equal 1 0.10 100 100 1.000 100 1.000 100 1.000 100 1.000
equal 1 0.20 100 100 1.000 100 1.000 100 1.000 100 1.000
equal 1 0.60 100 100 1.000 100 1.000 100 1.000 100 1.000
equal 1 0.80 100 100 1.000 100 1.000 100 1.000 100 1.000
equal 10 0.05 1000 13 0.013 13 0.013 6 0.006 1000 1.000
equal 10 0.10 1000 89 0.089 93 0.093 87 0.087 1000 1.000
equal 10 0.20 1000 361 0.361 716 0.716 639 0.639 989 0.989
equal 10 0.60 1000 423 0.423 1000 1.000 1000 1.000 757 0.757
equal 10 0.80 1000 439 0.439 1000 1.000 1000 1.000 719 0.719
equal 3 0.05 300 103 0.343 101 0.337 110 0.367 300 1.000
equal 3 0.10 300 284 0.947 284 0.947 279 0.930 300 1.000
equal 3 0.20 300 300 1.000 300 1.000 300 1.000 300 1.000
equal 3 0.60 300 300 1.000 300 1.000 300 1.000 300 1.000
equal 3 0.80 300 300 1.000 300 1.000 300 1.000 300 1.000
equal 5 0.05 500 68 0.136 68 0.136 54 0.108 500 1.000
equal 5 0.10 500 302 0.604 300 0.600 299 0.598 500 1.000
equal 5 0.20 500 496 0.992 496 0.992 495 0.990 500 1.000
equal 5 0.60 500 500 1.000 500 1.000 500 1.000 500 1.000
equal 5 0.80 500 500 1.000 500 1.000 500 1.000 500 1.000
notequal 1 0.05 100 100 1.000 100 1.000 100 1.000 100 1.000
notequal 1 0.10 100 100 1.000 100 1.000 100 1.000 100 1.000
notequal 1 0.20 100 100 1.000 100 1.000 100 1.000 100 1.000
notequal 1 0.60 100 100 1.000 100 1.000 100 1.000 100 1.000
notequal 1 0.80 100 100 1.000 100 1.000 100 1.000 100 1.000
notequal 10 0.05 1000 99 0.099 99 0.099 98 0.098 1000 1.000
notequal 10 0.10 1000 100 0.100 100 0.100 100 0.100 1000 1.000
notequal 10 0.20 1000 106 0.106 106 0.106 100 0.100 1000 1.000
notequal 10 0.60 1000 117 0.117 116 0.116 151 0.151 1000 1.000
notequal 10 0.80 1000 100 0.100 100 0.100 326 0.326 1000 1.000
notequal 3 0.05 300 99 0.330 99 0.330 98 0.327 300 1.000
notequal 3 0.10 300 106 0.353 106 0.353 101 0.337 300 1.000
notequal 3 0.20 300 187 0.623 186 0.620 152 0.507 300 1.000
notequal 3 0.60 300 296 0.987 298 0.993 300 1.000 300 1.000
notequal 3 0.80 300 267 0.890 267 0.890 300 1.000 300 1.000
notequal 5 0.05 500 100 0.200 100 0.200 99 0.198 500 1.000
notequal 5 0.10 500 104 0.208 104 0.208 101 0.202 500 1.000
notequal 5 0.20 500 131 0.262 131 0.262 112 0.224 500 1.000
notequal 5 0.60 500 310 0.620 315 0.630 404 0.808 500 1.000
notequal 5 0.80 500 127 0.254 126 0.252 483 0.966 500 1.000
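The four near-identical power computations above (SuSiE z, SuSiE z init, DAP, FINEMAP) could be factored into a single helper. A sketch in base R on a toy data frame; the helper name and toy values are mine, with column names following the dscout layout:

```r
# Hypothetical helper: per-(effect_weight, n_signal, pve) power summary
# for one method's dscout subset.
power_table <- function(d, label) {
  valid <- aggregate(valid ~ effect_weight + n_signal + pve, d, sum)
  total <- aggregate(DSC   ~ effect_weight + n_signal + pve, d, length)
  total$total_true <- total$DSC * total$n_signal
  out <- merge(valid, total)
  out$power <- round(out$valid / out$total_true, 3)
  names(out)[names(out) %in% c("valid", "power")] <-
    paste(c("valid", "power"), label, sep = ".")
  out
}

# Toy subset: 2 replicates of an equal-effect, 3-signal, pve = 0.2 scenario.
toy <- data.frame(DSC = 1:2, effect_weight = "equal", n_signal = 3,
                  pve = 0.2, valid = c(3, 2))
power_table(toy, "susie")  # valid.susie = 5, power.susie = 5/6 = 0.833
```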
  • FDR:
valid = aggregate(valid ~ effect_weight + n_signal + pve, dscout.susie, sum)
total = aggregate(total~ effect_weight + n_signal + pve, dscout.susie, sum)
fdr.susie = merge(valid, total)
fdr.susie$fdr.susie = round((fdr.susie$total - fdr.susie$valid)/fdr.susie$total, 4)
colnames(fdr.susie)[colnames(fdr.susie) == 'valid'] <- 'valid.susie'
fdr.susie = fdr.susie[,-5]

valid = aggregate(valid ~ effect_weight + n_signal + pve, dscout.susie.init, sum)
total = aggregate(total~ effect_weight + n_signal + pve, dscout.susie.init, sum)
fdr.susie.init = merge(valid, total)
fdr.susie.init$fdr.susie.init = round((fdr.susie.init$total - fdr.susie.init$valid)/fdr.susie.init$total, 4)
colnames(fdr.susie.init)[colnames(fdr.susie.init) == 'valid'] <- 'valid.susie.init'
fdr.susie.init = fdr.susie.init[,-5]

valid = aggregate(valid ~ effect_weight + n_signal + pve, dscout.dap, sum)
total = aggregate(total ~ effect_weight + n_signal + pve, dscout.dap, sum)
fdr.dap = merge(valid, total)
fdr.dap$fdr.dap = round((fdr.dap$total - fdr.dap$valid)/fdr.dap$total, 4)
colnames(fdr.dap)[colnames(fdr.dap) == 'valid'] <- 'valid.dap'
fdr.dap = fdr.dap[,-5]

valid = aggregate(valid ~ effect_weight + n_signal + pve, dscout.finemap, sum)
total = aggregate(size ~ effect_weight + n_signal + pve, dscout.finemap, sum)
fdr.finemap = merge(valid, total)
fdr.finemap$fdr.finemap = round((fdr.finemap$size - fdr.finemap$valid)/fdr.finemap$size, 4)
colnames(fdr.finemap)[colnames(fdr.finemap) == 'valid'] <- 'valid.finemap'
fdr.finemap = fdr.finemap[,-5]

fdr = Reduce(function(...) merge(...),
       list(fdr.susie, fdr.susie.init, fdr.dap, fdr.finemap))

fdr %>% kable() %>% kable_styling(bootstrap_options = c("striped", "condensed", "responsive","bordered"), full_width = F) %>% add_header_above(c(" ", " ", " ", "SuSiE z" = 2, "SuSiE z init" = 2,"DAP" = 2, "FINEMAP" = 2)) %>% column_spec(c(5, 7, 9, 11), bold = T)
effect_weight n_signal pve valid.susie fdr.susie valid.susie.init fdr.susie.init valid.dap fdr.dap valid.finemap fdr.finemap
equal 1 0.05 100 0 100 0 100 0.0000 100 0.9990
equal 1 0.10 100 0 100 0 100 0.0000 100 0.9990
equal 1 0.20 100 0 100 0 100 0.0000 100 0.9990
equal 1 0.60 100 0 100 0 100 0.0000 100 0.9989
equal 1 0.80 100 0 100 0 100 0.0385 100 0.9989
equal 10 0.05 13 0 13 0 6 0.0000 1000 0.9900
equal 10 0.10 89 0 93 0 87 0.0000 1000 0.9900
equal 10 0.20 361 0 716 0 639 0.0000 989 0.9826
equal 10 0.60 423 0 1000 0 1000 0.0000 757 0.0000
equal 10 0.80 439 0 1000 0 1000 0.0000 719 0.0000
equal 3 0.05 103 0 101 0 110 0.0000 300 0.9970
equal 3 0.10 284 0 284 0 279 0.0000 300 0.9970
equal 3 0.20 300 0 300 0 300 0.0000 300 0.9970
equal 3 0.60 300 0 300 0 300 0.0000 300 0.9967
equal 3 0.80 300 0 300 0 300 0.0000 300 0.9966
equal 5 0.05 68 0 68 0 54 0.0000 500 0.9950
equal 5 0.10 302 0 300 0 299 0.0000 500 0.9949
equal 5 0.20 496 0 496 0 495 0.0000 500 0.8857
equal 5 0.60 500 0 500 0 500 0.0000 500 0.0000
equal 5 0.80 500 0 500 0 500 0.0000 500 0.0000
notequal 1 0.05 100 0 100 0 100 0.0000 100 0.9990
notequal 1 0.10 100 0 100 0 100 0.0000 100 0.9990
notequal 1 0.20 100 0 100 0 100 0.0000 100 0.9990
notequal 1 0.60 100 0 100 0 100 0.0000 100 0.9989
notequal 1 0.80 100 0 100 0 100 0.0385 100 0.9989
notequal 10 0.05 99 0 99 0 98 0.0000 1000 0.9900
notequal 10 0.10 100 0 100 0 100 0.0000 1000 0.9900
notequal 10 0.20 106 0 106 0 100 0.0000 1000 0.9900
notequal 10 0.60 117 0 116 0 151 0.0000 1000 0.9900
notequal 10 0.80 100 0 100 0 326 0.0000 1000 0.9900
notequal 3 0.05 99 0 99 0 98 0.0000 300 0.9970
notequal 3 0.10 106 0 106 0 101 0.0000 300 0.9970
notequal 3 0.20 187 0 186 0 152 0.0000 300 0.9970
notequal 3 0.60 296 0 298 0 300 0.0000 300 0.9967
notequal 3 0.80 267 0 267 0 300 0.0000 300 0.9966
notequal 5 0.05 100 0 100 0 99 0.0000 500 0.9950
notequal 5 0.10 104 0 104 0 101 0.0000 500 0.9950
notequal 5 0.20 131 0 131 0 112 0.0000 500 0.9950
notequal 5 0.60 310 0 315 0 404 0.0000 500 0.9941
notequal 5 0.80 127 0 126 0 483 0.0000 500 0.9509

Session information

sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS  10.14.3

Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] susieR_0.6.4.0438 kableExtra_1.0.1  knitr_1.20        dplyr_0.7.8      

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.0        highr_0.7         compiler_3.5.1   
 [4] pillar_1.3.1      git2r_0.24.0      workflowr_1.1.1  
 [7] bindr_0.1.1       R.methodsS3_1.7.1 R.utils_2.7.0    
[10] tools_3.5.1       digest_0.6.18     lattice_0.20-38  
[13] evaluate_0.12     tibble_2.0.1      viridisLite_0.3.0
[16] pkgconfig_2.0.2   rlang_0.3.1       Matrix_1.2-15    
[19] rstudioapi_0.9.0  yaml_2.2.0        bindrcpp_0.2.2   
[22] stringr_1.3.1     httr_1.4.0        xml2_1.2.0       
[25] hms_0.4.2         grid_3.5.1        webshot_0.5.1    
[28] rprojroot_1.3-2   tidyselect_0.2.5  glue_1.3.0       
[31] R6_2.3.0          rmarkdown_1.11    purrr_0.2.5      
[34] readr_1.3.1       magrittr_1.5      whisker_0.3-2    
[37] backports_1.1.3   scales_1.0.0      htmltools_0.3.6  
[40] assertthat_0.2.0  rvest_0.3.2       colorspace_1.4-0 
[43] stringi_1.2.4     munsell_0.5.0     crayon_1.3.4     
[46] R.oo_1.22.0      

This reproducible R Markdown analysis was created with workflowr 1.1.1