Posts tagged "FastSpeech" (2)

devmoon
FastSpeech 2: Fast and High-Quality End-to-End Text to Speech
https://arxiv.org/abs/2006.04558
> "Non-autoregressive text to speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duratio.." (arxiv.org)

TTS papers I reviewed while working on a Korean speech-synthesis project ..
FastSpeech: Fast, Robust and Controllable Text to Speech
https://arxiv.org/abs/1905.09263
> "Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. Prominent methods (e.g., Tacotron 2) usually first generate mel-spectrogram from text, and then synthesize speech from the mel-spectrogram us.." (arxiv.org)

Abstract: The TTS models that appeared before FastSpeech commonly ..