SP Module 10: Connected Speech & HMM Training
2023-02-18 16:38:43
From subword units to n-grams: hierarchy of models
Defining a hierarchy of models: we can compile HMMs of subword units (such as phones) into word models, and word models into models of whole utterances.
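To illustrate the idea, here is a minimal sketch of compiling an utterance model from subword units via a pronunciation lexicon. The lexicon entries and phone names are hypothetical, not taken from the course; in a real recogniser each phone name would index an actual HMM whose states are chained together.

```python
# A toy pronunciation lexicon (hypothetical phone set, for illustration only).
LEXICON = {
    "the": ["dh", "ax"],
    "cat": ["k", "ae", "t"],
    "sat": ["s", "ae", "t"],
}

def compile_utterance(words, lexicon):
    """Compile an utterance model by concatenating, for each word, the
    sequence of subword (phone) HMMs given by the pronunciation lexicon.
    Here we just return the flat phone sequence standing in for the
    concatenated HMMs."""
    phones = []
    for word in words:
        phones.extend(lexicon[word])
    return phones

utterance_model = compile_utterance(["the", "cat", "sat"], LEXICON)
```

The same lookup-and-concatenate step is what lets one set of phone models cover any utterance the lexicon can spell out.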
During decoding we can prune: tokens whose probability falls too far below the current best are discarded as the search proceeds, which reduces computational cost. (Heuristics, such as a beam width, can also be helpful here.)
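The pruning idea can be sketched as a simple beam over tokens in log-probability space. This is an illustrative fragment with a made-up set of tokens, not the course's decoder; real decoders typically also cap the total number of surviving tokens per frame.

```python
def prune_beam(tokens, beam_width):
    """Beam pruning for token passing: discard any token whose log
    probability falls more than `beam_width` below the current best.
    `tokens` maps a state (or partial path) to its log probability."""
    best = max(tokens.values())
    return {state: lp for state, lp in tokens.items()
            if lp >= best - beam_width}

# After one frame of decoding we might hold these (hypothetical) tokens:
tokens = {"s1": -1.0, "s2": -5.0, "s3": -20.0}
survivors = prune_beam(tokens, beam_width=10.0)  # "s3" falls outside the beam
```

A narrow beam saves more computation but risks pruning the path that would eventually have won; the beam width trades speed against search error.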
Conditional independence and the forward algorithm
We use the Markov property of HMMs (i.e. their conditional independence assumptions) to make computing the probability of an observation sequence tractable: the forward algorithm shares partial sums instead of enumerating every state sequence.
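A minimal sketch of the forward algorithm for a discrete-observation HMM, with a toy two-state model whose numbers are chosen for illustration (they are not from the course):

```python
def forward(obs, pi, A, B):
    """P(obs | HMM) via the forward algorithm.

    pi[i]  : initial probability of state i
    A[i][j]: transition probability from state i to state j
    B[i][o]: probability of emitting symbol o in state i
    """
    N = len(pi)
    # Initialise: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    for t in range(1, len(obs)):
        # Recursion: alpha_t(j) = [sum_i alpha_{t-1}(i) * a_ij] * b_j(o_t)
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][obs[t]]
                 for j in range(N)]
    # Terminate: P(obs) = sum_i alpha_T(i)
    return sum(alpha)

# A toy 2-state HMM over a binary observation alphabet {0, 1}.
pi = [1.0, 0.0]
A = [[0.5, 0.5], [0.0, 1.0]]
B = [[0.9, 0.1], [0.2, 0.8]]
p = forward([0, 1], pi, A, B)  # 0.405 for this toy model
```

The conditional independence assumptions are what allow `alpha` at time t to summarise everything about the past: cost is O(N²T) rather than O(Nᵀ).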
HMM training with the Baum-Welch algorithm
This module gives a high-level overview of forward and backward probability computation on HMMs, and of Expectation-Maximisation (EM) as a way to optimise model parameters. The mathematics is in the readings (but is not examinable).
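The forward and backward passes can be sketched together, along with the state-occupancy probabilities (gamma) that form the E-step of Baum-Welch. This is an illustrative sketch with a toy discrete HMM, not the full re-estimation procedure:

```python
def forward_probs(obs, pi, A, B):
    """Forward lattice: alphas[t][i] = P(o_1..o_t, state_t = i)."""
    N = len(pi)
    alphas = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for t in range(1, len(obs)):
        prev = alphas[-1]
        alphas.append([sum(prev[i] * A[i][j] for i in range(N)) * B[j][obs[t]]
                       for j in range(N)])
    return alphas

def backward_probs(obs, A, B):
    """Backward lattice: betas[t][i] = P(o_{t+1}..o_T | state_t = i)."""
    N = len(A)
    betas = [[1.0] * N]  # beta_T(i) = 1 by definition
    for t in range(len(obs) - 2, -1, -1):
        nxt = betas[0]
        betas.insert(0, [sum(A[i][j] * B[j][obs[t + 1]] * nxt[j]
                             for j in range(N)) for i in range(N)])
    return betas

def state_occupancy(obs, pi, A, B):
    """E-step quantity gamma_t(i) = P(state_t = i | obs): the expected
    state occupancies that Baum-Welch uses to re-estimate parameters."""
    alphas = forward_probs(obs, pi, A, B)
    betas = backward_probs(obs, A, B)
    p_obs = sum(alphas[-1])  # total probability of the observations
    return [[a * b / p_obs for a, b in zip(al, be)]
            for al, be in zip(alphas, betas)]

# Same toy 2-state HMM as before (illustrative values only).
pi = [1.0, 0.0]
A = [[0.5, 0.5], [0.0, 1.0]]
B = [[0.9, 0.1], [0.2, 0.8]]
gamma = state_occupancy([0, 1], pi, A, B)
```

The M-step then re-estimates transition and emission probabilities from these expected counts, and iterating the two steps is guaranteed not to decrease the likelihood.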
Origin: Module 10 – Speech Recognition – Connected speech & HMM training. Translated and edited by YangSier (Homepage).