Pull Request Updated #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

[espeak-ng/espeak-ng] Pull request updated by valdisvi:

#319 Documentation fix, improvements for Latvian language


Pull Request Opened #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

[espeak-ng/espeak-ng] Pull request opened by valdisvi:

#319 Documentation fix, improvements for Latvian language

[espeak-ng:master] reported: making bg dictionary Segmentation fault #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

[espeak-ng:master] New Comment on Issue #318 making bg dictionary Segmentation fault
By rhdunn:

I cannot reproduce this locally. The espeak program will try to load the old version of the dictionary when compiling the new one. I made changes on 2017-07-26 to fix writing and reading the emoji dictionary entries, which will affect reading existing dictionary files written using the old, broken code.

Is this with a clean build? Can you use git bisect to identify the problem commit?

Does rm -f espeak-ng-data/bg_dict && make bg work?

[espeak-ng:master] new issue: making bg dictionary Segmentation fault #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

[espeak-ng:master] New Issue Created by davidweenink:
#318 making bg dictionary Segmentation fault

On my Linux Ubuntu 16.04 system I recently get a segmentation fault:

$ make dictionaries
  DICT     espeak-ng-data/bg_dict
/bin/bash: line 1:  1893 Segmentation fault      (core dumped) ESPEAK_DATA_PATH=/home/david/projects/espeak-ng LD_LIBRARY_PATH=../src: ../src/espeak-ng --compile=`echo espeak-ng-data/bg_dict | sed -e 's,espeak-ng-data/,,g' -e 's,_dict,,g'`
Makefile:2596: recipe for target 'espeak-ng-data/bg_dict' failed
make: *** [espeak-ng-data/bg_dict] Error 139

This occurs only for bg; making other dictionaries, like fi, is OK:

$ make fi
  DICT     espeak-ng-data/fi_dict
Using phonemetable: 'fi'
Compiling: 'fi_list'    336 entries
Compiling: 'fi_emoji'   1568 entries
Compiling: 'fi_extra'   0 entries
Compiling: 'fi_rules'   130 rules, 29 groups (0)

This error was introduced very recently (less than 3 weeks ago).

a question

Karl Eick
 

Hi to everyone,

I have a question regarding espeak. Would it be possible to have the romanizer for non-Latin-character languages use the language of the operating system when the standard language is one that uses Latin characters? If not, does one of you know where the language of the romanizer is specified?

Thanks in advance,

Karl Eick

[espeak-ng:master] reported: Add a custom voice #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

[espeak-ng:master] New Comment on Issue #316 Add a custom voice
By SadaleNet:

I'm also interested in doing that. It seems to be possible, but rather difficult, and I haven't tried this idea out myself.

The espeakedit program can be used to edit the vowels/diphones. To add a custom voice, I'd first record the sound of all vowels and diphones of a language, then use the program to find the formants of those vowels/diphones.

Since espeak uses a sinusoidal synthesis algorithm (see any list of popular speech synthesis algorithms), it cannot synthesize most types of consonants well (except the nasal ones). So you'd also need to prepare a sound sample for each of those consonants, in .wav format IIRC; espeak will concatenate it with the vowels.

I'm in no way a professional on espeak; it's just that I'm also interested in doing something similar, and these are my findings after extensive research on the topic. Please do notify me or post something here when you manage to do that. I'd like to see whether this idea is feasible and what the quality of the result is. Thank you very much! :)

Re: [espeak-ng:master] new issue: How to use espeak in our Linux applications? #github

Willem van der Walt
 

The easiest way would be to simply call the espeak program from your code.
If you need to use things like indexing, you need to include espeak/speak_lib.h
HTH, Willem

On Mon, 16 Oct 2017, espeak-ng@groups.io wrote:



[espeak-ng:master] New Issue Created by rezaee ( https://github.com/rezaee ) :
#317 How to use espeak in our Linux applications? ( https://github.com/espeak-ng/espeak-ng/issues/317 )

Hi, I'd like to know how I can call espeak from my Linux program. Which header files should be included in my program? Which function can I use to pass a string to espeak?
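A minimal C sketch of the library route Willem mentions. This assumes libespeak (or espeak-ng's compatible library) is installed; the header path is espeak/speak_lib.h in classic espeak, and espeak-ng ships a compatible copy.

```c
#include <string.h>
#include <espeak/speak_lib.h>

int main(void)
{
    const char *text = "Hello from espeak.";

    /* Initialize for direct audio playback; returns the sample rate,
     * or a negative value on failure. */
    if (espeak_Initialize(AUDIO_OUTPUT_PLAYBACK, 0, NULL, 0) < 0)
        return 1;

    espeak_SetVoiceByName("en"); /* optional: pick a voice/language */

    /* Pass a string to espeak; size must include the terminating NUL. */
    espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
                 espeakCHARS_AUTO, NULL, NULL);

    espeak_Synchronize(); /* wait until speech has finished */
    espeak_Terminate();
    return 0;
}
```

Compile with something like `gcc example.c -lespeak` (or `-lespeak-ng`, depending on which library is installed).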


[espeak-ng:master] new issue: How to use espeak in our Linux applications? #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

[espeak-ng:master] New Issue Created by rezaee:
#317 How to use espeak in our Linux applications?

Hi, I'd like to know how I can call espeak from my Linux program. Which header files should be included in my program? Which function can I use to pass a string to espeak?

[espeak-ng:master] reported: Rhythm Types, Trochaic, Iambic #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

[espeak-ng:master] New Comment on Issue #132 Rhythm Types, Trochaic, Iambic
By jaacoppi:

A very simple proof of concept is in the attached file. I think some discussion is needed before a pull request.

What this code does:
1) Implement a new option -r for choosing a rhythm: espeak-ng -r 1 chooses trochaic, -r 2 chooses iambic. SetParameter is used to make the choice globally known.
2) In SetWordStress, overwrite all language stress rules if -r is used. I used stress rule #9 (stress all) as a base. Trochaic stresses every other syllable starting from the first one (= odd syllables); iambic stresses every other syllable starting from the second one (= even syllables).

TODO:
1) Clean up the code.
2) Possibly make the command line argument take "trochaic" and "iambic" instead of 1 and 2.
3) Overwrite stressing of the last word of a clause.
4) Test this in many ways.

edit: 5) (For poetry purposes) take the last syllable of the previous word into account. How is this done?

edit2: The answer to the original question: src/libespeak-ng/tr_languages.c contains the stress rules for a specific language, and void SetWordStress() in src/libespeak-ng/dictionary.c contains the implementations of the rules.

My solution overwrites these language-specific rules. This is necessary for reading poetry or other text that has a rhythm in a certain language.

rhythm.txt


Re: [espeak-ng:master] new issue: Add a custom voice #github

Luis Carlos Gonzáles Moráles
 

So will eSpeak NG support other synthesis front ends?

Reece H. Dunn wrote:

On Wed, Oct 11, 2017 at 12:55 pm, Josh Kennedy wrote:

oh yes I agree. ESpeak put through some sort of neural net would be quite good.

Neural network based speech synthesis is trained around a given language, so it is harder to get it to properly articulate sounds from another language. ESpeak supports around 100 languages, some of which have sounds that are hard to get the neural network synthesizers to pronounce.

That said, some of my longer-term goals are to support different voices at the phoneme data level, to provide support for better quality voice data (including klatt-based voices), and to provide better tools to develop and experiment with espeak-based voices. This includes supporting the mbrola voices at the phoneme data level, making it easier to use those voices in a different language (e.g. the German voices speaking English, or the Spanish voice speaking Italian). I also want to use this to provide higher-quality voices.

Right now the espeak code makes this complex, as it makes several assumptions about how the voices are structured in order to be as compact as possible. As a result, big changes like this will take time.

Kind regards,
Reece

 On 10/11/2017 3:32 PM, Devin Prater wrote:

eSpeak put through a neural net would be pretty good, I think. 

Devin Prater
Assistive Technology instructor in training, JAWS Certified.

On Oct 11, 2017, at 11:22 AM, Sarah k Alawami <marrie12@...> wrote:
I don't think this would be possible. Now, when Lyrebird or whatever that is comes out, maybe then we can add our own. I don't want to hear my own voice, but someone else might.

On Oct 11, 2017, at 6:10 AM, espeak-ng@groups.io Integration <espeak-ng@groups.io> wrote:

[espeak-ng:master] New Issue Created by matteke-games:
#316 Add a custom voice
 

Hi,

is it possible to add custom voices? Our own for example?

Best regards

 

-- 
sent with mozilla thunderbird

Github push to espeak-ng:espeak-ng #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

2 New Commits:

[espeak-ng:master] By Reece H. Dunn <msclrhd@...>:
c59c9633de42: Fix -Wuninitialized warnings.

Modified: configure.ac
Modified: src/libespeak-ng/compiledata.c


[espeak-ng:master] By Reece H. Dunn <msclrhd@...>:
22bbd28d2c02: Add -Wimplicit warning checks.

Modified: configure.ac

Github push to espeak-ng:espeak-ng #github

espeak-ng@groups.io Integration <espeak-ng@...>
 

2 New Commits:

[espeak-ng:master] By Reece H. Dunn <msclrhd@...>:
6a735f19f2ff: ieee80.c: Fix -Wmissing-prototypes warnings (create an ieee80.h header file).

Added: src/libespeak-ng/ieee80.h
Modified: README.md
Modified: src/libespeak-ng/ieee80.c
Modified: src/libespeak-ng/spect.c


[espeak-ng:master] By Reece H. Dunn <msclrhd@...>:
f4248fd72845: tests: Fix -Wmissing-prototypes warnings.

Modified: configure.ac
Modified: tests/api.c
Modified: tests/encoding.c
Modified: tests/readclause.c

Re: [espeak-ng:master] new issue: Add a custom voice #github

Reece H. Dunn
 

On Wed, Oct 11, 2017 at 12:55 pm, Josh Kennedy wrote:

oh yes I agree. ESpeak put through some sort of neural net would be quite good.

Neural network based speech synthesis is trained around a given language, so it is harder to get it to properly articulate sounds from another language. ESpeak supports around 100 languages, some of which have sounds that are hard to get the neural network synthesizers to pronounce.

That said, some of my longer-term goals are to support different voices at the phoneme data level, to provide support for better quality voice data (including klatt-based voices), and to provide better tools to develop and experiment with espeak-based voices. This includes supporting the mbrola voices at the phoneme data level, making it easier to use those voices in a different language (e.g. the German voices speaking English, or the Spanish voice speaking Italian). I also want to use this to provide higher-quality voices.

Right now the espeak code makes this complex, as it makes several assumptions about how the voices are structured in order to be as compact as possible. As a result, big changes like this will take time.

Kind regards,
Reece

 On 10/11/2017 3:32 PM, Devin Prater wrote:

eSpeak put through a neural net would be pretty good, I think. 

Devin Prater
Assistive Technology instructor in training, JAWS Certified.

On Oct 11, 2017, at 11:22 AM, Sarah k Alawami <marrie12@...> wrote:
I don't think this would be possible. Now, when Lyrebird or whatever that is comes out, maybe then we can add our own. I don't want to hear my own voice, but someone else might.

On Oct 11, 2017, at 6:10 AM, espeak-ng@groups.io Integration <espeak-ng@groups.io> wrote:

[espeak-ng:master] New Issue Created by matteke-games:
#316 Add a custom voice
 

Hi,

is it possible to add custom voices? Our own for example?

Best regards

 

-- 
sent with mozilla thunderbird

Re: [espeak-ng:master] new issue: Add a custom voice #github

Josh Kennedy <joshknnd1982@...>
 

oh yes I agree. ESpeak put through some sort of neural net would be quite good.



On 10/11/2017 3:32 PM, Devin Prater wrote:
eSpeak put through a neural net would be pretty good, I think. 

Devin Prater
Assistive Technology instructor in training, JAWS Certified.

On Oct 11, 2017, at 11:22 AM, Sarah k Alawami <marrie12@...> wrote:

I don't think this would be possible. Now, when Lyrebird or whatever that is comes out, maybe then we can add our own. I don't want to hear my own voice, but someone else might.

On Oct 11, 2017, at 6:10 AM, espeak-ng@groups.io Integration <espeak-ng@groups.io> wrote:

[espeak-ng:master] New Issue Created by matteke-games:
#316 Add a custom voice

Hi,

is it possible to add custom voices? Our own for example?

Best regards




-- 
sent with mozilla thunderbird

Re: [espeak-ng:master] new issue: Add a custom voice #github

Devin Prater
 

eSpeak put through a neural net would be pretty good, I think. 

Devin Prater
Assistive Technology instructor in training, JAWS Certified.

On Oct 11, 2017, at 11:22 AM, Sarah k Alawami <marrie12@...> wrote:

I don't think this would be possible. Now, when Lyrebird or whatever that is comes out, maybe then we can add our own. I don't want to hear my own voice, but someone else might.

On Oct 11, 2017, at 6:10 AM, espeak-ng@groups.io Integration <espeak-ng@groups.io> wrote:

[espeak-ng:master] New Issue Created by matteke-games:
#316 Add a custom voice

Hi,

is it possible to add custom voices? Our own for example?

Best regards



Re: [espeak-ng:master] new issue: Add a custom voice #github

Sarah k Alawami
 

I don't think this would be possible. Now, when Lyrebird or whatever that is comes out, maybe then we can add our own. I don't want to hear my own voice, but someone else might.

On Oct 11, 2017, at 6:10 AM, espeak-ng@groups.io Integration <espeak-ng@groups.io> wrote:

[espeak-ng:master] New Issue Created by matteke-games:
#316 Add a custom voice

Hi,

is it possible to add custom voices? Our own for example?

Best regards