Last updated: 2019-05-06


Overview

In this final comparison vignette between the two languages we will look at the additional functions that they come with. Slightly different approaches have been taken to provide the user with the ability to calculate block averages from the detection outputs, as well as to categorise the ‘extremeness’ of events. We will compare these different workflows and outputs below.

library(tidyverse)
library(ggpubr)
library(heatwaveR)

# Compare the shared numeric columns between the R and Python event outputs:
# the correlation (r) shows how consistently similar the values are, and the
# difference of the sums (sum) shows whether they are exactly the same
compare_event <- function(res_event_R, res_event_Python){
  # Remove non-numeric columns
  res_event_num <- res_event_R %>% 
    select_if(is.numeric)
  # Loop over the numeric columns, comparing those present in both outputs
  res_event <- data.frame()
  for(i in seq_along(colnames(res_event_num))){
    if(colnames(res_event_num)[i] %in% colnames(res_event_Python)){
      x1 <- res_event_R[colnames(res_event_R) == colnames(res_event_num)[i]]
      x2 <- res_event_Python[colnames(res_event_Python) == colnames(res_event_num)[i]]
      x <- data.frame(r = round(cor(x1, x2, use = "complete.obs"), 4),
                      sum = round(sum(x1, na.rm = TRUE) - sum(x2, na.rm = TRUE), 4),
                      var = colnames(res_event_num)[i])
      colnames(x)[1] <- "r"
      rownames(x) <- NULL
    } else {
      x <- data.frame(r = NA, sum = NA, var = colnames(res_event_num)[i])
    }
    res_event <- rbind(res_event, x)
  }
  return(res_event)
}
# Activate the conda environment that contains the Python packages
library(reticulate)
use_condaenv("py27")
import numpy as np
from datetime import date
import pandas as pd
import marineHeatWaves as mhw
# The date values
t = np.arange(date(1982,1,1).toordinal(),date(2014,12,31).toordinal()+1)
# The temperature values
sst = np.loadtxt(open("data/sst_WA.csv", "r"), delimiter = ',', skiprows = 1)
# The event metrics
mhws, clim = mhw.detect(t, sst)

block_average() comparisons

First up we take a peek at the block_average() functions and the outputs they produce.

# Calculate the block averages in Python and write them to csv
mhwBlock = mhw.blockAverage(t, mhws, clim)
mhwBlock_df = pd.DataFrame.from_dict(mhwBlock)
mhwBlock_df.to_csv('data/mhwBlock.csv', sep = ',', index = False)
# Calculate the default events and block averages in R, then load the Python results
default_r <- detect_event(ts2clm(data = sst_WA, climatologyPeriod = c("1982-01-01", "2014-12-31")))
block_r <- block_average(default_r)
block_py <- read_csv("data/mhwBlock.csv")

Overlapping column names

Initially we want to see how well the naming conventions for the columns hold up.

cols_r <- colnames(block_r)[!(colnames(block_r) %in% colnames(block_py))]
cols_r
cols_py <- colnames(block_py)[!(colnames(block_py) %in% colnames(block_r))]
cols_py

The two columns that the R code outputs but the Python code lacks are year and duration_max. Conversely, the Python code creates years_centre, years_end, and years_start. The reason for this is that the Python version is set up to allow block averages to be calculated for units of time other than single years, whereas the R version only outputs single annual means. I don’t know that this needs to be expanded upon in the R code. I don’t see this function as terribly useful, to be honest. Surely if a user has gotten to this point, they can calculate block averages to their own preferences, as sketched below.
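For example, a minimal sketch of a hand-rolled annual block average using the tidyverse tools already loaded. This assumes the default_r output created above, and groups events by the year in which they peaked, which may differ slightly from how block_average() assigns events that span year boundaries.

# A hand-rolled annual block average from the R event output;
# a sketch only, not a replacement for block_average()
block_manual <- default_r$event %>% 
  mutate(year = lubridate::year(date_peak)) %>% 
  group_by(year) %>% 
  summarise(count = n(),
            duration_mean = mean(duration),
            intensity_max_mean = mean(intensity_max))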

The variable duration_max shows the maximum duration of an event detected in a given year, and was added in heatwaveR v0.3.5 at the request of a user. There is talk of including it in the Python package, too.

Comparison of outputs

Up next we look at how well the outputs correlate and sum up. As we saw in the previous vignette, correlation values are useful for showing how consistently similar the results are, but not for showing whether they are exact. For that we will compare the sums as well.

compare_event(block_r, block_py)

As expected, the results correlate well and sum to nearly identical values, with the small mismatches being a product of rounding differences between the languages.

Trend comparisons

There is no comparable R function that performs these calculations. One could be created if the desire exists; a sketch follows the Python code below.

mean, trend, dtrend = mhw.meanTrend(mhwBlock)
#print(mean)
#print(trend)
#print(dtrend)
# Print out results in R as desired
#py$mean
#py$trend
#py$dtrend
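For reference, a minimal sketch of what a meanTrend-style function could look like in R, assuming the block_r output created above. For each numeric annual metric it reports the mean and the slope of a simple linear fit against year; the error estimate that the Python dtrend output provides is omitted for brevity.

# A sketch of a meanTrend-style calculation: for each numeric annual
# metric, report the mean and the slope of a linear fit against year
mean_trend <- function(block) {
  block %>% 
    select_if(is.numeric) %>% 
    gather(key = "var", value = "val", -year) %>% 
    group_by(var) %>% 
    summarise(mean = mean(val, na.rm = TRUE),
              trend = coef(lm(val ~ year))["year"])
}
mean_trend(block_r)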

Category comparisons

The final bit of extra functionality that must be compared between the two languages is the newest addition for both: the calculation of categories for MHWs, as seen in Hobday et al. (2018). The two languages go about the calculation of these categories in rather different ways and produce different outputs. This is, however, intentional, and so it must be decided whether this should be made more consistent or left as it is.
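As a brief refresher before looking at the outputs, the Hobday et al. (2018) scheme assigns a category based on how many multiples of the local climatology-to-threshold distance the observed temperature sits above the climatology. The following is a rough sketch of the idea, not the code that either package actually uses.

# Category is the number of multiples of the climatology-to-threshold
# distance that the temperature sits above the climatology; values below
# the threshold (mult < 1) are not heatwave days and return NA
category_sketch <- function(temp, seas, thresh) {
  mult <- (temp - seas) / (thresh - seas)
  cut(mult, breaks = c(1, 2, 3, 4, Inf),
      labels = c("I Moderate", "II Strong", "III Severe", "IV Extreme"))
}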

# Load Python category results
category_py <- read_csv("data/mhws_py.csv") %>% 
  select(date_peak, category, intensity_max, duration, 
         duration_moderate, duration_strong, duration_severe, duration_extreme)
category_py

# Calculate categories in R
category_r <- category(default_r, name = "WA")
category_r

I won’t compare the outputs of these functions as I did for the others because, as stated above, they show different information. The R output is specifically tailored to match the information style of Table 2 in Hobday et al. (2018). This is one final point on which a decision must be made: should the extra functionality of the two languages be brought to be exactly the same, or allowed to differ? I think it is fine the way it is now.

Conclusion

Two questions still beg an answer:

  1. Do we want to have a built-in trend detecting function in the R code, à la meanTrend in the Python package? I don’t think so. We have posted a vignette on the heatwaveR site showing users how to perform this calculation themselves, so I don’t think it needs to be included as a function. It would be simple to do should the desire exist, as shown in the trend comparison sketch above.
  2. The output of the category information differs considerably between the two languages. The Python code provides, as part of the base detect output, the days spent within each category, as well as the maximum category reached. The R output instead provides the proportion of time spent in each category, as well as the maximum category, with additional columns added to better match Table 2 in Hobday et al. (2018). I think this is useful, as a user would probably want a summary output of the category information, but I could be convinced that it is the days, rather than the proportions, that should be provided without the additional columns; a sketch of this conversion follows the list.
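To illustrate the second point, a hedged sketch of converting the R proportions back into days. This assumes the category_r output from above contains a duration column along with the proportion columns p_moderate through p_extreme; check names(category_r) first if unsure.

# Multiplying each proportion by the event duration recovers approximate
# days-per-category, closer in spirit to the Python output
category_r_days <- category_r %>% 
  mutate(duration_moderate = round(duration * p_moderate),
         duration_strong = round(duration * p_strong),
         duration_severe = round(duration * p_severe),
         duration_extreme = round(duration * p_extreme))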

That wraps up the comparisons of the languages. It can be said that, while small rounding differences persist between the languages, the base outputs are comparable and the languages may be used interchangeably. The extra functionality also matches up, minus a couple of issues that probably don’t need to be addressed. There are some differences, but these are stylistic and it is not clear that they should be changed.

One persistent problem is the impact that missing data have on the calculation of the 90th percentile threshold. This must still be investigated in depth in the source code of both languages to nail down exactly where they handle missing data differently. One way to begin that investigation is sketched below.
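As a starting point, one could knock holes in the time series and compare the resulting thresholds directly. The following is a rough sketch only; the knockout proportion and seed are arbitrary choices.

# Knock out a random 10% of the WA time series, rebuild the climatology,
# and compare the 90th percentile thresholds of the two versions
set.seed(666)
sst_WA_miss <- sst_WA
sst_WA_miss$temp[sample(nrow(sst_WA_miss), floor(nrow(sst_WA_miss) * 0.1))] <- NA
clim_full <- ts2clm(sst_WA, climatologyPeriod = c("1982-01-01", "2014-12-31"))
clim_miss <- ts2clm(sst_WA_miss, climatologyPeriod = c("1982-01-01", "2014-12-31"))
round(cor(clim_full$thresh, clim_miss$thresh, use = "complete.obs"), 4)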

It is now time to get started on looking at how to go about consistently and reliably detecting thermal events in time series given a range of potentially sub-optimal conditions.

References

Hobday, Alistair J., Eric C. J. Oliver, Alex Sen Gupta, Jessica A. Benthuysen, Michael T. Burrows, Markus G. Donat, Neil J. Holbrook, et al. 2018. “Categorizing and naming marine heatwaves.” Oceanography 31 (2). https://doi.org/10.5670/oceanog.2018.205.

Session information

sessionInfo()
