
SP Module 10 Connected Speech & HMM Training

Posted: 2023-02-18 16:38:43

From subword units to n-grams: hierarchy of models

Defining a hierarchy of models: subword (e.g. phone) HMMs can be compiled together into word models, and word models in turn into models of whole utterances.
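As a minimal sketch of this compilation step, the function below chains left-to-right subword HMMs into one utterance-level transition matrix. The `(A, exit)` representation, where `exit[i]` is the probability of leaving the model from state `i`, is an assumption made for illustration, not the course's notation.

```python
import numpy as np

def concat_hmms(models):
    """Compile left-to-right subword HMMs into one utterance HMM (sketch).

    Each model is (A, exit): A is the within-model transition matrix,
    exit[i] the probability of leaving the model from state i. The
    compiled model chains them: leaving model k enters the first state
    of model k+1. (This representation is an illustrative assumption.)
    """
    total = sum(A.shape[0] for A, _ in models)
    big = np.zeros((total, total))
    offset = 0
    for k, (A, exit_p) in enumerate(models):
        n = A.shape[0]
        big[offset:offset + n, offset:offset + n] = A
        if k + 1 < len(models):
            # exit arcs feed the entry state of the next subword model
            big[offset:offset + n, offset + n] = exit_p
        offset += n
    return big
```

The same idea extends upward: word HMMs built this way can themselves be chained (with an n-gram language model supplying the connecting probabilities) into utterance models.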

We can prune the search, discarding low-probability tokens as decoding proceeds, to reduce computational cost (heuristics such as a beam threshold can also help here).
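A minimal sketch of such pruning, assuming a Viterbi-style search with a simple beam threshold (the `beam` parameter and function name are illustrative, not from the course): states whose best-path probability falls too far below the current best are zeroed out, so they pass no tokens forward.

```python
import numpy as np

def viterbi_beam(A, B, pi, obs, beam=1e-3):
    """Viterbi decoding with beam pruning (sketch).

    A:  (N, N) transitions, B: (N, M) discrete emissions,
    pi: (N,) initial distribution, obs: observed symbol indices.
    At each frame, states whose best path probability is below
    beam * (current best) are pruned.
    """
    delta = pi * B[:, obs[0]]
    for o in obs[1:]:
        delta = np.where(delta >= beam * delta.max(), delta, 0.0)  # prune
        delta = (delta[:, None] * A).max(axis=0) * B[:, o]
    return delta.max()
```

With a tight beam this trades a small risk of search error for a large reduction in the number of active tokens.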

Conditional independence and the forward algorithm

We use the Markov property of HMMs (i.e. conditional independence assumptions) to make computing the probability of an observation sequence tractable.
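The forward algorithm is the standard illustration of this point: because of conditional independence, each frame's probabilities depend only on the previous frame's, so the sum over all state sequences collapses to a recursion. A sketch for a discrete-emission HMM (real speech systems use continuous emission densities):

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward algorithm: P(observation sequence | HMM).

    A:  (N, N) transitions, A[i, j] = P(state j | state i)
    B:  (N, M) emissions,   B[i, k] = P(symbol k | state i)
    pi: (N,)   initial state distribution
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]      # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        # Markov property: only the previous alpha vector is needed,
        # so the cost is O(T * N^2) rather than O(N^T).
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()             # P(O) = sum_i alpha_T(i)
```

In practice the recursion is run in log space (or with scaling) to avoid numerical underflow on long sequences.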

HMM training with the Baum-Welch algorithm

HMM training uses the Baum-Welch algorithm. This module gives a high-level overview of forward and backward probability computation on HMMs, and of Expectation-Maximisation (EM) as a way to optimise model parameters. The maths is in the readings (but not examinable).
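To make the overview concrete, here is a sketch of one Baum-Welch (EM) re-estimation step for a discrete-emission HMM; the function name is illustrative, and real speech systems re-estimate Gaussian emission parameters instead. The E-step combines forward (`alpha`) and backward (`beta`) probabilities into state occupancies (`gamma`) and transition occupancies (`xi`); the M-step turns these expected counts into new parameters.

```python
import numpy as np

def baum_welch_step(A, B, pi, obs):
    """One Baum-Welch (EM) re-estimation step for a discrete HMM (sketch)."""
    T, N = len(obs), A.shape[0]
    # E-step: forward and backward probabilities
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    p_obs = alpha[-1].sum()                       # P(O | model)
    gamma = alpha * beta / p_obs                  # P(state i at t | O)
    # xi[t, i, j] = P(state i at t, state j at t+1 | O)
    xi = (alpha[:-1, :, None] * A[None] *
          (B[:, obs[1:]].T * beta[1:])[:, None, :]) / p_obs
    # M-step: normalised expected counts
    new_pi = gamma[0]
    new_A = xi.sum(0) / gamma[:-1].sum(0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(obs) == k].sum(0)
    new_B /= gamma.sum(0)[:, None]
    return new_pi, new_A, new_B, p_obs
```

Each such step is guaranteed not to decrease the likelihood of the training data, which is the key property of EM the module alludes to.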

Origin: Module 10 – Speech Recognition – Connected Speech & HMM Training. Translated and edited by YangSier (Homepage).