Overview

In this final comparison vignette we look at the remaining functionality offered by the two languages. Each has taken a slightly different approach to calculating block averages from the detection outputs and to categorising the ‘extremeness’ of events. We compare these workflows and their outputs below.

library(tidyverse)
library(ggpubr)
library(heatwaveR)
compare_event <- function(res_event_R, res_event_Python){
  # Remove non-numeric columns
  res_event_num <- res_event_R %>% 
    select_if(is.numeric)
  # Loop over the numeric columns shared by both outputs
  res_event <- data.frame()
  for(i in seq_along(colnames(res_event_num))){
    if(colnames(res_event_num)[i] %in% colnames(res_event_Python)){
      x1 <- res_event_R[colnames(res_event_R) == colnames(res_event_num)[i]]
      x2 <- res_event_Python[colnames(res_event_Python) == colnames(res_event_num)[i]]
      x <- data.frame(r = cor(x1, x2, use = "complete.obs"),
                      sum = sum(x1, na.rm = TRUE) - sum(x2, na.rm = TRUE),
                      var = colnames(res_event_num)[i])
      colnames(x)[1] <- "r"
      rownames(x) <- NULL
    } else {
      x <- data.frame(r = NA, sum = NA, var = colnames(res_event_num)[i])
    }
    res_event <- rbind(res_event, x)
  }
  return(res_event)
}
library(reticulate)
use_condaenv("py27")
import numpy as np
from datetime import date
import pandas as pd
import marineHeatWaves as mhw
# The date values
t = np.arange(date(1982,1,1).toordinal(),date(2014,12,31).toordinal()+1)
# The temperature values
sst = np.loadtxt(open("data/sst_WA.csv", "r"), delimiter = ',', skiprows = 1)
# The event metrics
mhws, clim = mhw.detect(t, sst)

block_average() comparisons

First up we take a peek at the block_average() functions and the outputs they produce.

mhwBlock = mhw.blockAverage(t, mhws, clim)
mhwsBlock_df = pd.DataFrame.from_dict(mhwBlock)
mhwsBlock_df.to_csv('data/mhwBlock.csv', sep = ',', index = False)
default_r <- detect_event(ts2clm(data = sst_WA, climatologyPeriod = c("1982-01-01", "2014-12-31")))
block_r <- block_average(default_r)
block_py <- read_csv("data/mhwBlock.csv")

Overlapping column names

Initially we want to see how well the naming conventions for the columns hold up.

cols_r <- colnames(block_r)[!(colnames(block_r) %in% colnames(block_py))]
cols_r
## [1] "year"
cols_py <- colnames(block_py)[!(colnames(block_py) %in% colnames(block_r))]
cols_py
## [1] "years_centre" "years_end"    "years_start"

The one column that the R code outputs and the Python code lacks is year, while the Python code instead creates years_centre, years_end, and years_start. The reason for this difference is that the Python version is set up to calculate block averages over units of time other than single years, whereas the R version only outputs annual means. I don’t know that this needs to be expanded upon in the R code, as I don’t find this functionality terribly useful. If a user has gotten to this point, they can calculate block averages themselves to suit their own preferences, as sketched below.
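For example, here is a minimal sketch (not part of either package) of how a user could compute multi-year block averages directly from the detect_event() output; the three-year block width and the choice of summary metrics are arbitrary assumptions for illustration.

# Hypothetical example: three-year block averages from the R event output
block_manual <- default_r$event %>% 
  mutate(year = as.integer(format(date_start, "%Y")),
         year_block = floor(year / 3) * 3) %>% 
  group_by(year_block) %>% 
  summarise(count = n(),
            duration = mean(duration),
            intensity_mean = mean(intensity_mean))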

Comparison of outputs

Up next we look at how well the outputs correlate and how closely they sum. As we saw in the previous vignette, correlation values are useful for showing how consistently similar the results are, but not whether they are exact. For that we compare the sums of each column as well.

compare_event(block_r, block_py)
##            r           sum                            var
## 1         NA            NA                           year
## 2  1.0000000  0.000000e+00                          count
## 3  1.0000000  0.000000e+00                       duration
## 4  0.9999991  3.184609e-03                 intensity_mean
## 5  0.9999995  3.797761e-03                  intensity_max
## 6  0.9999997  6.187193e-03              intensity_max_max
## 7  0.9992295  2.350205e-01                  intensity_var
## 8  1.0000000  5.286055e-02           intensity_cumulative
## 9  0.9999986  4.120563e-03       intensity_mean_relThresh
## 10 0.9999996  4.575269e-03        intensity_max_relThresh
## 11 0.9991691  2.341655e-01        intensity_var_relThresh
## 12 0.9999999  1.092280e-01 intensity_cumulative_relThresh
## 13 1.0000000  0.000000e+00             intensity_mean_abs
## 14 0.9998088  3.368333e-01              intensity_max_abs
## 15 0.9998059  2.489592e-01              intensity_var_abs
## 16 1.0000000  0.000000e+00       intensity_cumulative_abs
## 17 1.0000000 -3.203514e-05                     rate_onset
## 18 0.9999997  2.186699e-04                   rate_decline
## 19 0.9683879  0.000000e+00                     total_days
## 20 0.9600661  1.573880e-01                     total_icum

As expected, the results correlate well and sum to nearly identical values, with the small mismatches being a product of rounding differences between the languages.

Trend comparisons

There is no comparable R function that performs these calculations, though one could be created if the desire exists; a minimal sketch of what that might look like follows the Python code below.

mean, trend, dtrend = mhw.meanTrend(mhwBlock)
#print(mean)
#print(trend)
#print(dtrend)
# Print out results in R as desired
#py$mean
#py$trend
#py$dtrend
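
As a rough illustration, the following sketch (not part of heatwaveR; the function name mean_trend and its arguments are hypothetical) shows how the mean and linear trend of an annual block-averaged metric could be calculated in R from the block_average() output created above.

# Hypothetical helper: mean and linear trend of one block-averaged metric
mean_trend <- function(block_df, metric = "duration"){
  fit <- lm(block_df[[metric]] ~ block_df$year)
  data.frame(metric = metric,
             mean = mean(block_df[[metric]], na.rm = TRUE),
             trend = unname(coef(fit)[2])) # slope in units per year
}
mean_trend(block_r, metric = "intensity_max")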

Category comparisons

The final bit of extra functionality to be compared between the two languages is the newest addition for both: the calculation of categories for MHWs as seen in Hobday et al. (2018). The two languages go about the calculation of these events in rather different ways and produce different outputs. This is, however, intentional, and so it must be decided whether this should be made more consistent or left as it is.

# Load Python category results
category_py <- read_csv("data/mhws_py.csv") %>% 
  select(date_peak, category, intensity_max, duration, 
         duration_moderate, duration_strong, duration_severe, duration_extreme)
category_py
## # A tibble: 60 x 8
##    date_peak  category intensity_max duration duration_moderate
##    <date>     <chr>            <dbl>    <int>             <int>
##  1 1984-06-05 Moderate          1.98       12                12
##  2 1984-06-19 Moderate          2.13        6                 6
##  3 1984-07-10 Moderate          2.24       19                19
##  4 1984-10-06 Moderate          1.29        5                 5
##  5 1984-10-23 Moderate          1.83        7                 7
##  6 1984-10-30 Moderate          1.50       20                17
##  7 1985-07-17 Moderate          2.21        7                 7
##  8 1987-10-04 Moderate          1.12        5                 5
##  9 1988-06-11 Moderate          1.69        6                 6
## 10 1988-06-28 Moderate          1.99        6                 6
## # ... with 50 more rows, and 3 more variables: duration_strong <int>,
## #   duration_severe <int>, duration_extreme <int>
# Calculate categories in R
category_r <- category(default_r, name = "WA")
category_r
## # A tibble: 60 x 11
##    event_no event_name peak_date  category   i_max duration p_moderate
##       <int> <fct>      <date>     <chr>      <dbl>    <int>      <dbl>
##  1        1 <NA>       1984-06-05 I Moderate  1.98       12        100
##  2        2 <NA>       1984-06-19 I Moderate  2.13        6        100
##  3        3 <NA>       1984-07-10 I Moderate  2.24       19        100
##  4        4 <NA>       1984-10-06 I Moderate  1.29        5        100
##  5        5 <NA>       1984-10-23 I Moderate  1.83        7        100
##  6        7 <NA>       1985-07-17 I Moderate  2.21        7        100
##  7        8 <NA>       1987-10-04 I Moderate  1.12        5        100
##  8        9 <NA>       1988-06-11 I Moderate  1.69        6        100
##  9       10 <NA>       1988-06-28 I Moderate  1.99        6        100
## 10       11 <NA>       1988-10-25 I Moderate  1.55       11        100
## # ... with 50 more rows, and 4 more variables: p_strong <dbl>,
## #   p_severe <dbl>, p_extreme <dbl>, season <chr>

I won’t compare the outputs of these functions as I have for the other functions because, as stated above, they show different information. The R output is specifically tailored to match the information style of Table 2 in Hobday et al. (2018). This is one final point on which a decision must be made about whether the extra functionality of the two languages should be made identical or allowed to differ. I think it is fine the way it is now.

Conclusion

Two questions still beg an answer:

  1. Do we want to have a built-in trend-detection function in the R code, à la meanTrend in the Python package? I don’t think so. We could write a vignette on the heatwaveR site showing users how to perform this calculation themselves, but I don’t think it needs to be included as a function. It would be simple to do should the desire exist.
  2. The output of the category information between the two languages is quite different. The Python code provides, as part of the base detect output, the days spent within each category as well as the maximum category reached. The R output instead provides the proportion of time spent in each category, along with the maximum category, and has additional columns added to better match Table 2 in Hobday et al. (2018). I think this is useful, as a user would probably want a summary output of the category information, but I could be convinced that it is the days, rather than the proportions, that should be provided, without the additional columns (see the sketch after this list).
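
For reference, here is a minimal sketch (an assumption on my part, not an existing function in either package) of how the R proportions could be converted back into the per-category day counts that the Python output reports, using simple rounding on the category() output loaded above.

# Hypothetical conversion: proportions per category back to days per category
category_r_days <- category_r %>% 
  mutate(duration_moderate = round(duration * p_moderate / 100),
         duration_strong = round(duration * p_strong / 100),
         duration_severe = round(duration * p_severe / 100),
         duration_extreme = round(duration * p_extreme / 100)) %>% 
  select(peak_date, category, i_max, duration, 
         duration_moderate, duration_strong, duration_severe, duration_extreme)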

That wraps up the comparison of the languages. Although small rounding differences persist between them, the base outputs are comparable and the languages may be used interchangeably. The extra functionality also largely matches up, apart from the couple of differences noted above; these are stylistic and it is not clear that they need to be changed.

It is now time to get started on looking at how to go about consistently and reliably detecting thermal events in time series under a range of potentially sub-optimal conditions.