Post list: FastSpeech (2)
devmoon
https://arxiv.org/abs/2006.04558 FastSpeech 2: Fast and High-Quality End-to-End Text to Speech (arxiv.org)
TTS papers I studied while preparing a Korean speech synthesis project ..
https://arxiv.org/abs/1905.09263 FastSpeech: Fast, Robust and Controllable Text to Speech (arxiv.org)
Abstract: The TTS models that appeared before FastSpeech commonly ..