Updates on Hidden Markov Models in 2023 part7 (Machine Learning)
<p>Abstract : In this paper, we introduce a maximum entropy estimator based on the 2-dimensional empirical distribution of the observation sequence of a hidden Markov model when the sample size is large: in that case, computing the maximum likelihood estimator via the classical Baum-Welch EM algorithm is too time-consuming. We prove the consistency and asymptotic normality of the maximum entropy estimator in a quite general framework, where the asymptotic covariance matrix is explicitly estimated in terms of the 2-dimensional Fisher information. To complement this, the 2-dimensional relative entropy is used to study the hypothesis testing problem. Furthermore, we propose a 2-dimensional maximum entropy algorithm for finding the maximum entropy estimator, which works for very large observation datasets and large hidden state sets. Some numerical examples are furnished and commented on to illustrate our theoretical results.</p>
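<p>As a rough illustration (not the authors' algorithm), the 2-dimensional empirical distribution the abstract builds on is just the normalized counts of consecutive observation pairs (y_t, y_{t+1}), computable in a single O(n) pass over the sequence; this is what makes it attractive when n is too large for Baum-Welch. The sketch below assumes a finite observation alphabet encoded as integers 0..K-1:</p>
<pre>
import numpy as np

def pairwise_empirical_distribution(obs, n_symbols):
    """Estimate the 2-dimensional empirical distribution of
    consecutive observation pairs (y_t, y_{t+1}) in one pass."""
    counts = np.zeros((n_symbols, n_symbols))
    for a, b in zip(obs[:-1], obs[1:]):
        counts[a, b] += 1
    return counts / counts.sum()

# Example: a long sequence over 3 observable symbols
obs = np.random.default_rng(0).integers(0, 3, size=100_000)
p_hat = pairwise_empirical_distribution(obs, n_symbols=3)
print(p_hat)  # entry [a, b] estimates P(y_t = a, y_{t+1} = b)
</pre>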
<p>2. Improving the Runtime of Algorithmic Polarization of Hidden Markov Models (arXiv)</p>
<p>Author : <a href="https://arxiv.org/search/?searchtype=author&query=Bian%2C+V" rel="noopener ugc nofollow" target="_blank">Vincent Bian</a>, <a href="https://arxiv.org/search/?searchtype=author&query=Madhukara%2C+R" rel="noopener ugc nofollow" target="_blank">Rachana Madhukara</a></p>
<p>Abstract : We improve the runtime of the linear compression scheme for hidden Markov sources presented in a 2018 paper of Guruswami, Nakkiran, and Sudan. Under the previous scheme, compressing a message of length n takes O(n log n) runtime, and decompressing takes O(n<sup>1+δ</sup>) runtime for any fixed δ&gt;0. We show how to improve the runtime of the decoding scheme to O(n log n) by caching intermediate results to avoid repeated computation.</p>
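<p>The caching idea is the standard memoization pattern from dynamic programming: store each intermediate quantity the first time it is computed so that later steps reuse it instead of recomputing it. A minimal sketch of that pattern is below; the recurrence is hypothetical and stands in for the decoder's actual intermediate results, which the paper defines:</p>
<pre>
from functools import lru_cache

@lru_cache(maxsize=None)
def intermediate(i):
    # Stand-in for an expensive per-index quantity in the decoder
    # (hypothetical recurrence, purely for illustration).
    if i == 0:
        return 1
    return (31 * intermediate(i - 1) + 7) % 1_000_003

# The first sweep fills the cache; every later query is O(1)
for k in range(1000):
    intermediate(k)
print(intermediate(999))  # served from the cache, no recomputation
</pre>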
<p><a href="https://medium.com/@monocosmo77/updates-on-hidden-markov-models-in-2023-part7-machine-learning-c58fb24cddde"><strong>Read More</strong></a></p>