So will eSpeak NG support other synth front ends?
Reece H. Dunn wrote:
On Wed, Oct 11, 2017 at 12:55 pm, Josh Kennedy wrote:
Neural network based speech synthesis is trained on a given language, so it is harder to get it to properly articulate sounds from another language. eSpeak supports around 100 languages, some of which have sounds that are hard to get neural network synthesizers to pronounce.
Oh yes, I agree. eSpeak put through some sort of neural net would be quite good.
That said, some of my longer-term goals are to support different voices at the phoneme data level, to provide support for better quality voice data (including klatt-based voices), and to provide better tools to develop and experiment with espeak-based voices.
This includes supporting the mbrola voices at the phoneme data level, making it easier to use those voices in a different language (e.g. the German voices speaking English, or the Spanish voice speaking Italian). I also want to use this to provide higher-quality voices.
Right now, the espeak code makes this complex to do, as it makes several assumptions about how the voices are structured in order to be as compact as possible. As a result, big changes like this will take time.
On 10/11/2017 3:32 PM, Devin Prater wrote:
eSpeak put through a neural net would be pretty good, I think.
Assistive Technology instructor in training, JAWS Certified.
On Oct 11, 2017, at 11:22 AM, Sarah k Alawami <marrie12@...
I don't think this would be possible. Now, when Lyrebird or whatever that is comes out, maybe then we can add our own. I don't want to hear my own voice, but someone else might.
Sent with Mozilla Thunderbird