Question about synthDriver.speak

Ben Mustill-Rose
 

Hi all

I’m looking for information about the synthDriver.speak method.

For some background, I’m currently taking a stab at modifying an
add-on that lets NVDA interface with a product called SAM (Synthesizer
Access Manager) to work under Python 3 and the new speech system. I'm
a software engineer by day and reasonably familiar with Python but
haven't written anything for NVDA before.

I’ve managed to get it talking, but not everything is being spoken. If I
open the Run dialog, for example, I hear "Run dialog, Type the name of a
program, folder, document, or Internet resource, and Windows will open
it for you", but not the subsequent speech telling me what's in the edit
area. Examining the list that's being sent to speak suggests that the
missing items never actually arrive, even though the debug output
suggests that they are being sent.

Clearly I'm not doing something correctly, but I'm not sure what. I can
share my code if people would find it useful, but I'm not really looking
for someone to fix it for me; I'm more after information about what
might have changed internally to cause this kind of behaviour. I
originally assumed I wasn't handling some of the strings correctly, but
as per the above, inspecting the list that gets passed to the speak
method seems to point to large chunks of the speech just not being sent
to it.
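
For illustration, the kind of inspection I mean looks roughly like this (a minimal sketch, assuming NVDA's logHandler module and speech.commands.IndexCommand; the SAM-specific handling is left out):

from logHandler import log
from speech.commands import IndexCommand

def speak(self, speechSequence):
    # Dump each item of the incoming speech sequence so it can be
    # compared against what eventually reaches the synthesizer.
    for item in speechSequence:
        if isinstance(item, str):
            log.debug("text: %r" % item)
        elif isinstance(item, IndexCommand):
            log.debug("index command: %d" % item.index)
        else:
            log.debug("other command: %r" % item)
        # ...forward the item to SAM here...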

Any pointers would be amazing - hoping for something simple, albeit
non-obvious.

Cheers,
Ben.

Reef Turner
 

I can only guess, but based on what you are describing it seems possible that the speech indexes aren't being reported back to the speech system. When a synth reaches an index, it is expected to call notify on the synthIndexReached action. When the synth has no more queued speech, it is expected to call notify on synthDoneSpeaking.

These are defined in source/synthDriverHandler.py; see synthIndexReached and synthDoneSpeaking.

For an example, inspect synthDrivers.espeak.SynthDriver._onIndexReached.

Hope this helps.
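
In other words, a driver's index handling ends up looking something like the sketch below. This is only a rough outline modelled on how the eSpeak driver does it; the SAM-side callbacks and helper names are made up:

from synthDriverHandler import SynthDriver, synthIndexReached, synthDoneSpeaking
from speech.commands import IndexCommand

class SamSynthDriver(SynthDriver):
    # Hypothetical driver class, for illustration only.
    name = "sam"
    description = "Synthesizer Access Manager (sketch)"

    def speak(self, speechSequence):
        for item in speechSequence:
            if isinstance(item, str):
                self._sendTextToSam(item)  # hypothetical helper
            elif isinstance(item, IndexCommand):
                # Queue a marker so SAM can tell us when the audio
                # actually reaches this point.
                self._queueIndexMarker(item.index)  # hypothetical helper

    def _onSamReachedMarker(self, index):
        # Called from SAM's callback when a marker is reached.
        synthIndexReached.notify(synth=self, index=index)

    def _onSamFinished(self):
        # Called when SAM has no more queued speech.
        synthDoneSpeaking.notify(synth=self)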

Ben Mustill-Rose
 

That did it! Many thanks for your help and have a great day.

Brian's Mail list account
 

SAM is a free piece of software from Dolphin that allows other software to access synths, and indeed other things these days as well. Maybe the synth you are accessing via SAM is either not handshaking properly, or SAM is not passing everything on. That may well be for rights reasons, of course, if the synths are also sold as standalone products.
Brian
