Real-world processes produce observable outputs characterized as signals. These can be discrete or continuous in nature, noise-free or subject to uncertainty in both the measurements and the explanatory model, and may come from a stationary or non-stationary source, among many other variations. These signals are modeled to allow for both theoretical descriptions and practical applications. The model itself can be deterministic or stochastic, in which case the signal is characterized as a parametric random process whose parameters can be estimated in a well-defined manner.
Many real-world signals exhibit significant autocorrelation, and an extensive literature exists on different means to characterize and model its various forms. One of the simplest and most intuitive is the higher-order Markov process, which extends the “memory” of a standard Markov process beyond the single previous observation. The higher-order Markov process, unfortunately, is not as analytically tractable as its standard version and poses difficulties for statistical inference. A more parsimonious approach assumes that the observed sequence is a noisy observation of an underlying hidden process represented as a first-order Markov chain. In other words, long-range dependencies between observations are mediated by latent variables. It is important to note that the Markov property is only assumed for the hidden states, and the observations are assumed conditionally independent given these latent states. While the observations may not exhibit any Markov behavior, the simple Markovian structure of the hidden states is sufficient to allow easy inference.
Model specification
HMMs involve two interconnected models. The state model consists of a discrete-time, discrete-state first-order Markov chain \(z_t \in \{1, \dots, K\}\) that transitions according to \(p(z_t | z_{t-1})\). In turn, the observation model is governed by \(p(\mat{y}_t | z_t)\), where \(\mat{y}_t\) are the observations, also called emissions or outputs. The corresponding joint distribution is
\[
p(\mat{z}_{1:T}, \mat{y}_{1:T})
= p(\mat{z}_{1:T}) p(\mat{y}_{1:T} | \mat{z}_{1:T})
= \left[ p(z_1) \prod_{t=2}^{T}{p(z_t | z_{t-1})} \right] \left[ \prod_{t=1}^{T}{p(\mat{y}_t | z_{t})} \right].
\]
This is a specific instance of the state space model family in which the latent variables are discrete. Each single time slice corresponds to a mixture distribution with component densities given by \(p(\mat{y}_t | z_t)\), thus an HMM may be interpreted as an extension of a mixture model in which the choice of component for each observation is not selected independently but depends on the choice of component for the previous observation. In the special case of a simple mixture model for an independent and identically distributed sample, the entries within each column of the transition matrix are all equal, so that the conditional distribution \(p(z_t | z_{t-1})\) is independent of \(z_{t-1}\).
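To make the reduction to a mixture concrete, the following R snippet (illustrative only) builds a transition matrix whose columns are constant, i.e. whose rows are identical, so that the next state does not depend on the current one:

# Every row is the same simplex (0.2, 0.5, 0.3), so
# p(z_t | z_{t-1}) does not depend on z_{t-1} and the HMM
# collapses to an iid mixture model with these weights.
A_mix <- matrix(c(0.2, 0.5, 0.3), nrow = 3, ncol = 3, byrow = TRUE)
A_mix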
When the output is discrete, the observation model commonly takes the form of an observation matrix
\[
p(\mat{y}_t | z_t = k, \mat{\theta}) = \text{Categorical}(\mat{y}_t | \mat{\theta}_k).
\]
Alternatively, if the output is continuous, the observation model is frequently a conditional Gaussian \[
p(\mat{y}_t | z_t = k, \mat{\theta}) = \mathcal{N}(\mat{y}_t | \mat{\mu}_k, \mat{\Sigma}_k).
\]
The latter is equivalent to a Gaussian mixture model with cluster membership ruled by Markovian dynamics, also known as Markov Switching Models (MSM). In this context, multiple sequential observations tend to share the same location until they suddenly jump into a new cluster.
The non-stochastic quantities of the model are the length of the observed sequence \(T\) and the number of hidden states \(K\). The observed sequence \(\mat{y}_{1:T}\) is a known stochastic quantity. The parameters of the model are \(\mat{\theta} = (\mat{\pi}_1, \mat{\theta}_h, \mat{\theta}_o)\), where \(\mat{\pi}_1\) is the initial state distribution, \(\mat{\theta}_h\) are the parameters of the hidden model and \(\mat{\theta}_o\) are the parameters of the state-conditional density function \(p(\mat{y}_t | z_t)\). The form of \(\mat{\theta}_h\) and \(\mat{\theta}_o\) depends on the specification of each model. In the case under study, state transitions are characterized by the \(K \times K\) transition matrix with simplex rows \(\mat{A} = \{a_{ij}\}\), where \(a_{ij} = p(z_t = j | z_{t-1} = i)\).
The following Stan code illustrates the case of continuous observations where emissions are modeled as sampled from the Gaussian distribution with parameters \(\mu_k\) and \(\sigma_k\) for \(k \in \{1, \dots, K\}\). Adaptation for categorical observations should follow the guidelines outlined in the manual (Stan Development Team 2017c, sec. 10.6).
data {
  int<lower=1> T;                 // number of observations (length)
  int<lower=1> K;                 // number of hidden states
  real y[T];                      // observations
}

parameters {
  // Discrete state model
  simplex[K] pi1;                 // initial state probabilities
  simplex[K] A[K];                // transition probabilities
                                  // A[i][j] = p(z_t = j | z_{t-1} = i)

  // Continuous observation model
  ordered[K] mu;                  // observation means
  real<lower=0> sigma[K];         // observation standard deviations
}
The generative model
We write a routine in the R programming language for our generative model. Broadly speaking, this involves three steps:
- The generation of parameters according to the priors \(\mat{\theta}^{(0)} \sim p(\mat{\theta})\).
- The generation of the hidden path \(\mat{z}_{1:T}^{(0)}\) according to the transition model parameters.
- The generation of the observed quantities based on the sampling distribution \(\mat{y}_t^{(0)} \sim p(\mat{y}_t | \mat{z}_{1:T}^{(0)}, \mat{\theta}^{(0)})\).
We break the description of our code down into these three steps.
runif_simplex <- function(T) {
  x <- -log(runif(T))
  x / sum(x)
}
hmm_generate <- function(K, T) {
  # 1. Parameters
  pi1 <- runif_simplex(K)
  A <- t(replicate(K, runif_simplex(K)))
  mu <- sort(rnorm(K, 10 * 1:K, 1))
  sigma <- abs(rnorm(K))

  # 2. Hidden path
  z <- vector("numeric", T)
  z[1] <- sample(1:K, size = 1, prob = pi1)
  for (t in 2:T)
    z[t] <- sample(1:K, size = 1, prob = A[z[t - 1], ])

  # 3. Observations
  y <- vector("numeric", T)
  for (t in 1:T)
    y[t] <- rnorm(1, mu[z[t]], sigma[z[t]])

  list(y = y, z = z,
       theta = list(pi1 = pi1, A = A,
                    mu = mu, sigma = sigma))
}
Generating parameters from the priors
The parameters to be generated include the \(K\)-sized initial state distribution vector \(\mat{\pi}_1\) and the \(K \times K\) transition matrix \(\mat{A}\). There are \((K-1)(K+1)\) free parameters, since the vector and each row of the matrix are simplexes: \(K - 1\) free parameters in \(\mat{\pi}_1\) plus \(K(K - 1)\) in \(\mat{A}\).
We set up uniform priors for \(\mat{\pi}_1\) and \(\mat{A}\), a weakly informative Gaussian for the location parameter \(\mu_k\) and a weakly informative half-Gaussian that ensures positivity for the scale parameters \(\sigma_k\). An ordinal constraint is imposed on the location parameter to restrict the exploration of the symmetric, degenerate mixture posterior surface to a single ordering of the parameters, thus solving the non-identifiability issues inherent to the model density (Betancourt 2017). In the simulation routine, the location parameters are adjusted to ensure that the observations are well-separated. We refer the reader to the Stan Development Team’s Prior Choice Recommendations Wiki article for advice on selecting priors which are simultaneously computationally efficient and statistically reasonable. Given the fixed quantity \(K\), we draw one sample from the prior distributions \(\mat{\theta}^{(0)} \sim p(\mat{\theta})\).
Generating the hidden path
The initial hidden state is drawn from a multinomial distribution with one trial and event probabilities given by the initial state probability vector \(\mat{\pi}_1^{(0)}\). Given the fixed quantity \(T\), each subsequent hidden state \(z_t^{(0)}\), \(t \in \{2, \dots, T\}\), is then drawn from a multinomial distribution with one trial and event probabilities given by the \(i\)-th row of the transition matrix \(\mat{A}^{(0)}\), where \(i = z_{t-1}^{(0)}\) is the state at the previous time step.
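As an aside, a single-trial multinomial draw is equivalent to R's sample() with the same event probabilities, which is what hmm_generate() uses; a minimal illustration with made-up probabilities:

pi1 <- c(0.5, 0.3, 0.2)  # illustrative initial state probabilities
# One multinomial trial returns an indicator vector;
# which() maps it back to the state index.
z1 <- which(rmultinom(1, size = 1, prob = pi1) == 1)
# Equivalent categorical draw, as used in hmm_generate():
z1 <- sample(1:3, size = 1, prob = pi1)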
Generating data from the sampling distribution
The observations conditioned on the hidden states are drawn from a univariate Gaussian density with parameters \(\mu_k^{(0)}\) and \(\sigma_k^{(0)}\).
Inference
There are several quantities of interest that can be inferred via different algorithms. Our code contains the implementation of the most relevant methods for unsupervised data: forward, forward-backward and Viterbi decoding algorithms. We acknowledge the authors of the Stan Manual for the thorough illustrations and code snippets, some of which served as a starting point for our own code. As estimation is treated later, we assume that model parameters \(\mat{\theta}\) are known.
Summary of the hidden quantities and their corresponding inference algorithms.

| Name | Hidden quantity | Availability at | Algorithm | Complexity |
|---|---|---|---|---|
| Filtering | \(p(z_t \mid \mat{y}_{1:t})\) | \(t\) (online) | Forward | \(O(K^2T)\); \(O(KT)\) if left-to-right |
| Smoothing | \(p(z_t \mid \mat{y}_{1:T})\) | \(T\) (offline) | Forward-backward | \(O(K^2T)\); \(O(KT)\) if left-to-right |
| Fixed-lag smoothing | \(p(z_{t - \ell} \mid \mat{y}_{1:t})\), \(\ell \ge 1\) | \(t + \ell\) (lagged) | Forward-backward | \(O(K^2T)\); \(O(KT)\) if left-to-right |
| State prediction | \(p(z_{t+h} \mid \mat{y}_{1:t})\), \(h \ge 1\) | \(t\) | | |
| Observation prediction | \(p(y_{t+h} \mid \mat{y}_{1:t})\), \(h \ge 1\) | \(t\) | | |
| MAP estimation | \(\argmax_{\mat{z}_{1:T}} p(\mat{z}_{1:T} \mid \mat{y}_{1:T})\) | \(T\) | Viterbi decoding | \(O(K^2T)\) |
| Log likelihood | \(p(\mat{y}_{1:T})\) | \(T\) | Forward | \(O(K^2T)\); \(O(KT)\) if left-to-right |
Filtering
A filter infers the posterior distribution of the hidden states at a given step \(t\) based on all the information available up to that point, \(p(z_t | \mat{y}_{1:t})\). It achieves better noise reduction than estimating the hidden state from the current observation alone, \(p(z_t | \mat{y}_{t})\). The filtering process can be run online, or recursively, as new data streams in.
Filtered marginals can be computed recursively using the forward algorithm (Baum and Eagon 1967). Let \(\psi_t(j) = p(\mat{y}_t | z_t = j)\) be the local evidence at step \(t\) and \(\Psi(i, j) = p(z_t = j | z_{t-1} = i)\) be the transition probability. First, the one-step-ahead predictive density is computed
\[
p(z_t = j | \mat{y}_{1:t-1}) = \sum_{i}{\Psi(i, j) p(z_{t-1} = i | \mat{y}_{1:t-1})}.
\]
Acting as prior information, this quantity is updated with the observed data at step \(t\) using Bayes' rule,
\[\begin{align*}
\label{eq:filtered-beliefstate}
\alpha_t(j)
& \triangleq p(z_t = j | \mat{y}_{1:t}) \\
&= p(z_t = j | \mat{y}_{t}, \mat{y}_{1:t-1}) \\
&= Z_t^{-1} \psi_t(j) p(z_t = j | \mat{y}_{1:t-1})
\end{align*}\]
where the normalization constant is given by
\[
Z_t
\triangleq p(\mat{y}_t | \mat{y}_{1:t-1})
= \sum_{l=1}^{K}{p(\mat{y}_{t} | z_t = l) p(z_t = l | \mat{y}_{1:t-1})}
= \sum_{l=1}^{K}{\psi_t(l) p(z_t = l | \mat{y}_{1:t-1})}.
\]
This predict-update cycle results in the filtered belief states at step \(t\). As this algorithm only requires the evaluation of the quantities \(\psi_t(j)\) for each value of \(z_t\) for every \(t\) and fixed \(\mat{y}_t\), the posterior distribution of the latent states is independent of the form of the observation density or indeed of whether the observed variables are continuous or discrete (Jordan 2003). In other words, \(\alpha_t(j)\) does not depend on the complete form of the density function \(p(\mat{y} | \mat{z})\) but only on the point values \(p(\mat{y}_t | z_t = j)\) for every \(\mat{y}_t\) and \(z_t\).
Let \(\mat{\alpha}_t\) be a \(K\)-sized vector with the filtered belief states at step \(t\), \(\mat{\psi}_t\) be the \(K\)-sized vector of local evidence at step \(t\), \(\mat{\Psi}\) be the transition matrix and \(\mat{u} \odot \mat{v}\) be the Hadamard product, representing element-wise vector multiplication. Then, the Bayesian updating procedure can be expressed in matrix notation as
\[
\mat{\alpha}_t \propto \mat{\psi}_t \odot (\mat{\Psi}^T \mat{\alpha}_{t-1}).
\]
In addition to computing the hidden states, the algorithm yields the log likelihood
\[
\LL = \log p(\mat{y}_{1:T} | \mat{\theta}) = \sum_{t=1}^{T}{\log p(\mat{y}_{t} | \mat{y}_{1:t-1})} = \sum_{t=1}^{T}{\log Z_t}.
\]
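The predict-update recursion in matrix form translates almost line by line into R. The following is a minimal sketch, not part of the model code (the function and variable names are ours), that filters a sequence under known parameters and accumulates \(\log Z_t\) to return the log likelihood:

hmm_forward <- function(y, pi1, A, mu, sigma) {
  T_len <- length(y)
  K <- length(pi1)
  alpha <- matrix(0, T_len, K)       # filtered belief states
  loglik <- 0                        # accumulates log Z_t

  a <- pi1 * dnorm(y[1], mu, sigma)  # prior times local evidence at t = 1
  loglik <- loglik + log(sum(a))
  alpha[1, ] <- a / sum(a)

  for (t in 2:T_len) {
    psi <- dnorm(y[t], mu, sigma)    # local evidence at t
    a <- psi * as.vector(t(A) %*% alpha[t - 1, ])  # predict, then update
    loglik <- loglik + log(sum(a))   # Z_t = p(y_t | y_{1:t-1})
    alpha[t, ] <- a / sum(a)
  }

  list(alpha = alpha, loglik = loglik)
}

The Stan implementation below carries out the same recursion entirely in the log domain.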
transformed parameters {
  vector[K] logalpha[T];

  { // Forward algorithm: logalpha[t, j] = log p(z_t = j, y_{1:t})
    real accumulator[K];

    for (j in 1:K)
      logalpha[1, j] = log(pi1[j]) + normal_lpdf(y[1] | mu[j], sigma[j]);

    for (t in 2:T) {
      for (j in 1:K) {     // j = current (t)
        for (i in 1:K) {   // i = previous (t-1)
          // Murphy (2012) p. 609 eq. 17.48
          // belief state + transition prob + local evidence at t
          accumulator[i] = logalpha[t-1, i] + log(A[i, j]) + normal_lpdf(y[t] | mu[j], sigma[j]);
        }
        logalpha[t, j] = log_sum_exp(accumulator);
      }
    }
  } // Forward
}
The Stan code makes evident that the time complexity of the algorithm is \(O(K^2T)\): there are \(K \times K\) iterations within each of the \(T\) iterations of the outer loop. Brute-forcing through all \(K^T\) possible hidden state sequences would be prohibitive for realistic problems, as time complexity increases exponentially with sequence length, \(O(K^T T)\).
The implementation mirrors the matrix notation in Murphy (2012 eq. 17.48). The accumulator variable carries the element-wise operations for all possible previous states, which are later combined as indicated by the matrix multiplication. Since the log domain is preferred to avoid numerical underflow, multiplications are translated into sums of logs. Furthermore, we use Stan's implementation of linear sums on the log scale to prevent underflow and overflow in the exponentiation (Stan Development Team 2017c, 192). Consequently, logalpha represents the forward quantity on the log scale and needs to be exponentiated and normalized for interpretability.
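For reference, the log-sum-exp trick amounts to factoring out the maximum before exponentiating; a two-line R sketch of the computation behind Stan's log_sum_exp:

log_sum_exp <- function(x) {
  m <- max(x)                   # factor out the maximum so that
  m + log(sum(exp(x - m)))      # no exponential can overflow
}
log_sum_exp(c(-1000, -1001))    # about -999.69; naive log(sum(exp(x))) underflows to -Inf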
generated quantities {
  vector[K] alpha[T];

  { // Forward algorithm
    for (t in 1:T)
      alpha[t] = softmax(logalpha[t]);
  } // Forward
}
Since the unnormalized forward probability is sufficient to compute the posterior log density and estimate the parameters, it should be part of either the model or the transformed parameters block. We chose the latter to keep track of the estimates. We expand on estimation afterward.
Smoothing
A smoother infers the posterior distribution of the hidden states at a given state based on all the observations or evidence \(p(z_t | \mat{y}_{1:T})\). Although noise and uncertainty are significantly reduced as a result of conditioning on past and future data, the smoothing process can only be run offline.
Inference can be done by means of the forward-backward algorithm, which also plays an important role in the Baum-Welch algorithm for learning model parameters (Baum and Eagon 1967; Baum et al. 1970). Let \(\gamma_t(j)\) be the desired smoothed posterior marginal,
\[
\gamma_t(j)
\triangleq p(z_t = j | \mat{y}_{1:T}),
\]
\(\alpha_t(j)\) be the filtered belief state at the step \(t\) as defined previously, and \(\beta_t(j)\) be the conditional likelihood of future evidence given that the hidden state at step \(t\) is \(j\),
\[
\beta_t(j)
\triangleq p(\mat{y}_{t+1:T} | z_t = j).
\]
Then, the chain of smoothed marginals can be segregated into the past and the future components by conditioning on the belief state \(z_t\),
\[
\gamma_t(j)
= \frac{\alpha_t(j) \beta_t(j)}{p(\mat{y}_{1:T})}
\propto \alpha_t(j) \beta_t(j).
\]
The future component can be computed recursively from right to left:
\[\begin{align*}
\beta_{t-1}(i)
&= p(\mat{y}_{t:T} | z_{t-1} = i) \\
&= \sum_{j=1}^{K}{p(z_t =j, \mat{y}_{t}, \mat{y}_{t+1:T} | z_{t-1} = i)} \\
&= \sum_{j=1}^{K}{p(\mat{y}_{t+1:T} | z_t = j)p(z_t = j, \mat{y}_{t} | z_{t-1} = i)} \\
&= \sum_{j=1}^{K}{p(\mat{y}_{t+1:T} | z_t = j)p(\mat{y}_t | z_t = j)p(z_t = j | z_{t-1} = i)} \\
&= \sum_{j=1}^{K}{\beta_t(j) \psi_t(j) \Psi(i, j)}
\end{align*}\]
Let \(\mat{\beta}_t\) be a \(K\)-sized vector with the conditional likelihood of future evidence given the hidden state at step \(t\). Then, the backward procedure can be expressed in matrix notation as
\[
\mat{\beta}_{t-1} \propto \mat{\Psi} (\mat{\psi}_t \odot \mat{\beta}_{t}).
\]
At the last step, the base case is given by \[
\beta_{T}(i)
= p(\mat{y}_{T+1:T} | z_{T} = i) = p(\varnothing | z_{T} = i) = 1.
\]
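Mirroring the filtering sketch given earlier, the backward recursion and the smoothed marginals take only a few lines of R; again a sketch under our own naming, taking the matrix of normalized filtered probabilities alpha as input:

hmm_backward_smooth <- function(y, A, mu, sigma, alpha) {
  T_len <- length(y)
  K <- nrow(A)
  beta <- matrix(0, T_len, K)
  beta[T_len, ] <- 1                  # base case: beta_T(i) = 1

  for (t in T_len:2) {
    psi <- dnorm(y[t], mu, sigma)     # local evidence at t
    b <- as.vector(A %*% (psi * beta[t, ]))
    beta[t - 1, ] <- b / sum(b)       # normalize to avoid underflow
  }

  gamma <- alpha * beta               # combine past and future components
  gamma / rowSums(gamma)              # smoothed marginals, one row per step
}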
Intuitively, the forward-backward algorithm passes information from left to right and then from right to left, combining the two at each node. A straightforward implementation of the algorithm runs in \(O(K^2 T)\) time because of the \(K \times K\) matrix multiplication at each step. Notwithstanding the frequent description as two subsequent passes, these procedures are not inherently sequential and share no information with each other. As a result, they could be implemented in parallel.
There is a significant reduction in complexity if the transition matrix is sparse. Inference for a left-to-right (upper triangular) transition matrix, a model where the state index increases or stays the same as time passes, runs in \(O(TK)\) time (Bakis 1976; Jelinek 1976). Additional assumptions about the form of the transition matrix may reduce complexity further, for example to \(O(TK \log K)\) if \(\Psi(i, j) \propto \exp(-\sigma^2 |\mat{z}_i - \mat{z}_j|)\). Finally, ad-hoc data pre-processing strategies may help control complexity, for example by pruning nodes with a low conditional probability of occurrence.
functions {
  vector normalize(vector x) {
    return x / sum(x);
  }
}
generated quantities {
  vector[K] alpha[T];
  vector[K] logbeta[T];
  vector[K] loggamma[T];
  vector[K] beta[T];
  vector[K] gamma[T];

  { // Forward algorithm
    for (t in 1:T)
      alpha[t] = softmax(logalpha[t]);
  } // Forward

  { // Backward algorithm: logbeta[t, j] = log p(y_{t+1:T} | z_t = j)
    real accumulator[K];

    for (j in 1:K)
      logbeta[T, j] = 0;              // log(1), base case beta_T(j) = 1

    for (tforward in 0:(T-2)) {
      int t;
      t = T - tforward;

      for (j in 1:K) {     // j = previous (t-1)
        for (i in 1:K) {   // i = next (t)
          // Murphy (2012) Eq. 17.58
          // backwards t + transition prob + local evidence at t
          accumulator[i] = logbeta[t, i] + log(A[j, i]) + normal_lpdf(y[t] | mu[i], sigma[i]);
        }
        logbeta[t-1, j] = log_sum_exp(accumulator);
      }
    }

    for (t in 1:T)
      beta[t] = softmax(logbeta[t]);
  } // Backward

  { // Forward-backward: gamma[t, j] = p(z_t = j | y_{1:T})
    for (t in 1:T)
      loggamma[t] = alpha[t] .* beta[t]; // despite the name, already on the probability scale

    for (t in 1:T)
      gamma[t] = normalize(loggamma[t]);
  } // Forward-backward
}
The reader should not be deceived by the similarity to the code shown in the filtering section. Note that the indices in the log transition matrix are inverted and the local evidence is now computed for the next state. We need to invert the time index, as backward ranges are not available in Stan.
The forward-backward algorithm was designed to exploit via recursion the conditional independencies in the HMM. First, the posterior marginal probability of the latent states at a given time step is broken down into two quantities: the past and the future components. Second, taking advantage of the Markov properties, each of the two are further broken down into simpler pieces via conditioning and marginalizing, thus creating an efficient predict-update cycle.
This strategy makes otherwise unfeasible calculations possible. Consider, for example, a time series with \(T = 100\) observations and \(K = 5\) hidden states. Summing the joint probability over all possible state sequences would involve \(2 \times 100 \times 5^{100} \approx 10^{72}\) computations, while the forward and backward passes take only a few thousand operations each (\(K^2 T = 2{,}500\) multiply-add steps). Moreover, one pass may be avoided depending on the goal of the analysis: summing the forward probabilities at the last time step is enough to compute the likelihood, and the backward recursion is only needed if the posterior probabilities of the states are also required.
MAP: Viterbi
It is also of interest to compute the most probable state sequence or path,
\[
\mat{z}^* = \argmax_{\mat{z}_{1:T}} p(\mat{z}_{1:T} | \mat{y}_{1:T}).
\]
The jointly most probable sequence of states can be inferred via maximum a posteriori (MAP) estimation. Note that the jointly most probable sequence is not necessarily the same as the sequence of marginally most probable states given by the maximizer of the posterior marginals (MPM),
\[
\mat{\hat{z}} = (\argmax_{z_1} p(z_1 | \mat{y}_{1:T}), \ \dots, \ \argmax_{z_T} p(z_T | \mat{y}_{1:T})),
\]
which maximizes the expected number of correct individual states.
The MAP estimate is always globally consistent: while locally a state may be most probable at a given step, the Viterbi or max-sum algorithm decodes the single most probable path (Viterbi 1967). Furthermore, the MPM sequence may have zero joint probability if it includes two successive states that, while individually the most probable, are connected by a zero entry in the transition matrix. On the other hand, MPM may be considered more robust, since the state at each step is estimated by averaging over its neighbors rather than conditioning on a specific value of them.
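For comparison with the MAP path computed below, the MPM sequence can be read off the smoothed marginals; a sketch using the gamma draws extracted in the worked example at the end of this section:

# gamma has dimensions draws x T x K after extract(fit, pars = 'gamma')
gamma_med <- apply(gamma, c(2, 3), median)   # posterior medians, T x K
z_mpm <- apply(gamma_med, 1, which.max)      # marginally most probable states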
The Viterbi algorithm applies max-sum in a forward fashion, followed by a traceback procedure to recover the most probable path. In simple terms, once the most probable state \(z_t\) is estimated, the procedure conditions the previous states on it. Let \(\delta_t(j)\) be the probability of arriving at state \(j\) at step \(t\), given that the most probable path was taken,
\[
\delta_t(j)
\triangleq \max_{z_1, \dots, z_{t-1}} p(\mat{z}_{1:t-1}, z_t = j | \mat{y}_{1:t}).
\]
The most probable path to state \(j\) at step \(t\) consists of the most probable path to some other state \(i\) at point \(t-1\), followed by a transition from \(i\) to \(j\),
\[
\delta_t(j)
= \max_{i} \delta_{t-1}(i) \ \Psi(i, j) \ \psi_t(j).
\]
Additionally, the most likely previous state on the most probable path to \(j\) at step \(t\) is given by \[
a_t(j)
= \argmax_{i} \delta_{t-1}(i) \ \Psi(i, j) \ \psi_t(j).
\]
By initializing with \(\delta_1(j) = \pi_{1j} \, \psi_1(j)\) and terminating with the most probable final state \(z_T^* = \argmax_{i} \delta_T(i)\), the most probable sequence of states is recovered with the traceback,
\[
z_t^* = a_{t+1}(z_{t+1}^*).
\]
It is advisable to work in the log domain to avoid numerical underflow, redefining
\[
\delta_t(j)
\triangleq \max_{\mat{z}_{1:t-1}} \log p(\mat{z}_{1:t-1}, z_t = j | \mat{y}_{1:t})
= \max_{i} \left[ \delta_{t-1}(i) + \log \Psi(i, j) \right] + \log \psi_t(j).
\]
As with the forward-backward algorithm, the time complexity of Viterbi is \(O(K^2T)\) and the space complexity is \(O(KT)\). If the transition matrix has the form \(\Psi(i, j) \propto \exp(-\sigma^2 ||\mat{z}_i - \mat{z}_j||^2)\), the implementation runs in \(O(TK)\) time.
generated quantities {
  int<lower=1, upper=K> zstar[T];
  real logp_zstar;

  { // Viterbi algorithm
    int bpointer[T, K];  // backpointer to the most likely previous state
                         // on the most probable path
    real delta[T, K];    // max prob for the sequence up to t
                         // that ends with an emission from state j

    // Initialization: delta_1(j) = pi_1j * psi_1(j), in the log domain
    for (j in 1:K)
      delta[1, j] = log(pi1[j]) + normal_lpdf(y[1] | mu[j], sigma[j]);

    for (t in 2:T) {
      for (j in 1:K) {   // j = current (t)
        delta[t, j] = negative_infinity();
        for (i in 1:K) { // i = previous (t-1)
          real logp;
          logp = delta[t-1, i] + log(A[i, j]) + normal_lpdf(y[t] | mu[j], sigma[j]);
          if (logp > delta[t, j]) {
            bpointer[t, j] = i;
            delta[t, j] = logp;
          }
        }
      }
    }

    logp_zstar = max(delta[T]);

    for (j in 1:K)
      if (delta[T, j] == logp_zstar)
        zstar[T] = j;

    for (t in 1:(T - 1)) {
      zstar[T - t] = bpointer[T - t + 1, zstar[T - t + 1]];
    }
  }
}
The variable delta is a straightforward implementation of the corresponding equation. bpointer is the traceback needed to compute the most probable sequence of states after the most probable final state zstar[T] is computed.
Parameter estimation
The model likelihood can be derived from the definition of the quantity \(\gamma_t(j)\): given that its sum over all possible values of the latent variable must equal one, the likelihood at any time index \(t\) becomes
\[
p(\mat{y}_{1:T}) = \sum_{i = 1}^{K}{\alpha_t(i) \beta_{t}(i)},
\]
where \(\alpha_t(i)\) here denotes the unnormalized forward quantity \(p(z_t = i, \mat{y}_{1:t})\) computed by our Stan code.
The last step \(T\) has two convenient characteristics. First, the recurrent nature of the forward probability implies that the last iteration retains the information of all the intermediate state probabilities. Second, the base case for the backward quantity is \(\beta_{T}(i) = 1\). Consequently, the likelihood reduces to
\[
p(\mat{y}_{1:T}) = \sum_{i = 1}^{K}{\alpha_T(i)},
\]
and the log likelihood is the log-sum-exp of the forward quantities at the last step.
model {
  target += log_sum_exp(logalpha[T]); // Note: update based only on last logalpha
}
As we expect high multimodality in the posterior density, we use a clustering algorithm to feed the sampler with initialization values for the location and scale parameters. Although \(K\)-means is not an ideal choice for these data, since it ignores their time-dependent nature, it provides an educated guess that is sufficient for initialization.
hmm_init <- function(K, y) {
  clasif <- kmeans(y, K)
  init.mu <- by(y, clasif$cluster, mean)
  init.sigma <- by(y, clasif$cluster, sd)
  init.order <- order(init.mu)

  list(
    mu = init.mu[init.order],
    sigma = init.sigma[init.order]
  )
}

hmm_fit <- function(K, y) {
  rstan_options(auto_write = TRUE)
  options(mc.cores = parallel::detectCores())

  stan.model = 'stan/hmm_gaussian.stan'
  stan.data = list(
    T = length(y),
    K = K,
    y = y
  )

  stan(file = stan.model,
       data = stan.data, verbose = TRUE,
       iter = 400, warmup = 200,
       thin = 1, chains = 1,
       cores = 1, seed = 900,
       init = function(){ hmm_init(K, y) })
}
Worked example
We draw one sample of length \(T = 500\) from a data generating process with \(K = 3\) latent states. We fit the model using the Stan code and the initialization methodology introduced previously.
set.seed(900)
K <- 3
T_length <- 500
dataset <- hmm_generate(K, T_length)
fit <- hmm_fit(K, dataset$y)
##
## TRANSLATING MODEL 'hmm_gaussian' FROM Stan CODE TO C++ CODE NOW.
## successful in parsing the Stan model 'hmm_gaussian'.
##
## CHECKING DATA AND PREPROCESSING FOR MODEL 'hmm_gaussian' NOW.
##
## COMPILING MODEL 'hmm_gaussian' NOW.
##
## STARTING SAMPLER FOR MODEL 'hmm_gaussian' NOW.
##
## SAMPLING FOR MODEL 'hmm_gaussian' NOW (CHAIN 1).
##
## Gradient evaluation took 0.002 seconds
## 1000 transitions using 10 leapfrog steps per transition would take 20 seconds.
## Adjust your expectations accordingly!
##
##
## Iteration: 1 / 400 [ 0%] (Warmup)
## Iteration: 40 / 400 [ 10%] (Warmup)
## Iteration: 80 / 400 [ 20%] (Warmup)
## Iteration: 120 / 400 [ 30%] (Warmup)
## Iteration: 160 / 400 [ 40%] (Warmup)
## Iteration: 200 / 400 [ 50%] (Warmup)
## Iteration: 201 / 400 [ 50%] (Sampling)
## Iteration: 240 / 400 [ 60%] (Sampling)
## Iteration: 280 / 400 [ 70%] (Sampling)
## Iteration: 320 / 400 [ 80%] (Sampling)
## Iteration: 360 / 400 [ 90%] (Sampling)
## Iteration: 400 / 400 [100%] (Sampling)
##
## Elapsed Time: 28.452 seconds (Warm-up)
## 3.768 seconds (Sampling)
## 32.22 seconds (Total)
The estimates are extremely accurate, as expected when dealing with generated data. The Markov chains are well behaved, as diagnosed by the low Monte Carlo standard error, the high effective sample size and the near-one shrink factor of Gelman and Rubin (1992). Although not shown, further diagnostics confirm satisfactory mixing, convergence and the absence of divergences. Point estimates and posterior intervals are provided by rstan's summary function.
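For instance, a call along the following lines reproduces the posterior columns of the table below (the pars and probs arguments select the parameters and quantiles of interest):

print(fit, pars = c('pi1', 'A', 'mu', 'sigma'),
      probs = c(0.10, 0.50, 0.90))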
Estimated parameters and hidden quantities. MCSE = Monte Carlo Standard Error, SE = Standard Error, ESS = Effective Sample Size.

| Parameter | True | Mean | MCSE | SE | 10% | 50% | 90% | ESS | \(\hat{R}\) |
|---|---|---|---|---|---|---|---|---|---|
| pi1[1] | 0.14 | 0.35 | 0.02 | 0.24 | 0.05 | 0.31 | 0.69 | 200.00 | 1.00 |
| pi1[2] | 0.38 | 0.29 | 0.02 | 0.22 | 0.04 | 0.26 | 0.61 | 200.00 | 1.01 |
| pi1[3] | 0.47 | 0.36 | 0.02 | 0.22 | 0.08 | 0.35 | 0.66 | 200.00 | 1.00 |
| A[1,1] | 0.03 | 0.02 | 0.00 | 0.02 | 0.00 | 0.02 | 0.05 | 198.89 | 1.00 |
| A[1,2] | 0.54 | 0.53 | 0.00 | 0.05 | 0.47 | 0.53 | 0.59 | 200.00 | 1.00 |
| A[1,3] | 0.43 | 0.45 | 0.00 | 0.05 | 0.38 | 0.45 | 0.50 | 200.00 | 1.00 |
| A[2,1] | 0.56 | 0.57 | 0.00 | 0.04 | 0.52 | 0.57 | 0.63 | 200.00 | 1.00 |
| A[2,2] | 0.31 | 0.30 | 0.00 | 0.04 | 0.24 | 0.30 | 0.35 | 200.00 | 1.00 |
| A[2,3] | 0.13 | 0.13 | 0.00 | 0.03 | 0.10 | 0.13 | 0.17 | 200.00 | 1.00 |
| A[3,1] | 0.20 | 0.17 | 0.00 | 0.05 | 0.12 | 0.17 | 0.24 | 200.00 | 1.00 |
| A[3,2] | 0.72 | 0.79 | 0.00 | 0.05 | 0.72 | 0.80 | 0.85 | 123.20 | 1.00 |
| A[3,3] | 0.07 | 0.03 | 0.00 | 0.02 | 0.01 | 0.03 | 0.07 | 78.75 | 1.00 |
| mu[1] | 8.94 | 9.16 | 0.01 | 0.14 | 8.98 | 9.15 | 9.34 | 200.00 | 1.00 |
| mu[2] | 18.73 | 18.64 | 0.03 | 0.29 | 18.29 | 18.63 | 18.99 | 107.77 | 1.00 |
| mu[3] | 29.23 | 29.52 | 0.01 | 0.17 | 29.30 | 29.52 | 29.76 | 200.00 | 1.00 |
| sigma[1] | 0.19 | 1.80 | 0.01 | 0.12 | 1.65 | 1.79 | 1.97 | 200.00 | 1.01 |
| sigma[2] | 3.65 | 3.98 | 0.02 | 0.24 | 3.67 | 3.97 | 4.30 | 100.08 | 1.01 |
| sigma[3] | 1.69 | 1.75 | 0.01 | 0.15 | 1.57 | 1.74 | 1.94 | 200.00 | 1.00 |
We extract the samples for some quantities of interest, namely the filtered probability vector \(\mat{\alpha}_t\), the smoothed probability vector \(\mat{\gamma}_t\) and the most probable hidden path \(\mat{z}^*\). As an informal assessment that our software recovers the hidden states correctly, we observe that the filtered probabilities, the smoothed probabilities and the most likely path are all reasonably close to the true values. As expected, because we used an ordering constraint to break symmetry, the MAP estimate successfully recovers the (simulated) hidden path without label switching.
alpha <- extract(fit, pars = 'alpha')[[1]]
gamma <- extract(fit, pars = 'gamma')[[1]]
alpha_med <- apply(alpha, c(2, 3), function(x) { quantile(x, c(0.50)) })
alpha_hard <- apply(alpha_med, 1, which.max)
table(true = dataset$z, estimated = alpha_hard)
## estimated
## true 1 2 3
## 1 155 0 0
## 2 6 226 1
## 3 0 5 107
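The confusion matrix can be summarized into an overall agreement rate; from the table above, \((155 + 226 + 107) / 500 = 97.6\%\) of the hard filtered assignments match the true states:

mean(alpha_hard == dataset$z)
## [1] 0.976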
zstar <- extract(fit, pars = 'zstar')[[1]]
plot_statepath(zstar, dataset$z)
(Figure: the jointly most probable path \(\mat{z}^*\) plotted against the true simulated hidden states.)
table(true = dataset$z, estimated = apply(zstar, 2, median))
## estimated
## true 1 2 3
## 1 154 1 0
## 2 4 227 2
## 3 0 5 107
Finally, we plot the observed series colored according to the jointly most likely state. We identify a negligible number of misclassifications, a product of the stochastic nature of the simulated data and the estimation.
plot_outputvit(x = dataset$y,
               z = dataset$z,
               zstar = zstar,
               main = "Most probable path")
(Figure: the observed series colored according to the jointly most probable state, titled "Most probable path".)