- CF
- 부스트캠프 AI Tech
- AI
- Recommender System
- RecSys
- word2vec
- Dilated convolution
- Tacotron2
- 백준
- Negative Sampling
- TTS
- Collaborative Filtering
- Skip-gram
- NEG
- BOJ
- SGNS
- Noise Contrastive Estimation
- ALS
- Implicit feedback
- 논문리뷰
- FastSpeech
- FastSpeech2
- 추천시스템
- matrix factorization
- Tacotron
- CV
- Item2Vec
- Neural Collaborative Filtering
- ANNOY
- wavenet
FastSpeech post list (2)
devmoon

https://arxiv.org/abs/2006.04558
FastSpeech 2: Fast and High-Quality End-to-End Text to Speech (arxiv.org)
"Non-autoregressive text to speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duratio.."
TTS papers reviewed while working on a Korean speech synthesis project ..

https://arxiv.org/abs/1905.09263
FastSpeech: Fast, Robust and Controllable Text to Speech (arxiv.org)
"Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. Prominent methods (e.g., Tacotron 2) usually first generate mel-spectrogram from text, and then synthesize speech from the mel-spectrogram us.."
Abstract: The TTS models that appeared before FastSpeech commonly ..
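
The teaser above describes the common two-stage TTS pipeline: an acoustic model maps text to a mel-spectrogram, and a vocoder turns that spectrogram into a waveform. Below is a minimal sketch of that flow only; AcousticModel and Vocoder are hypothetical placeholders with toy shapes, not the real Tacotron 2, FastSpeech, or WaveNet APIs.

```python
# Sketch of the two-stage TTS pipeline (text -> mel-spectrogram -> waveform).
# All class/function names are hypothetical placeholders for illustration.
import numpy as np

class AcousticModel:
    """Stage 1: phoneme sequence -> mel-spectrogram.
    Tacotron 2 generates frames autoregressively; FastSpeech predicts
    all frames in parallel (non-autoregressive), which is why it is faster."""
    def __call__(self, phoneme_ids: list[int]) -> np.ndarray:
        n_frames = 4 * len(phoneme_ids)        # toy duration expansion
        return np.zeros((n_frames, 80))        # 80 mel bins per frame

class Vocoder:
    """Stage 2: mel-spectrogram -> raw waveform (e.g., WaveNet in the
    original Tacotron 2 setup)."""
    def __call__(self, mel: np.ndarray, hop_length: int = 256) -> np.ndarray:
        return np.zeros(mel.shape[0] * hop_length)  # one hop of samples per frame

phonemes = [12, 7, 33, 5]            # toy input sequence
mel = AcousticModel()(phonemes)      # shape: (frames, 80)
audio = Vocoder()(mel)               # shape: (frames * hop_length,)
print(mel.shape, audio.shape)
```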