
Re: Passing keys or entering text

Sam Byrne
 

Thanks Tyler. I'm going to need a bit more background though, as I currently get:

NameError: name 'KeyboardInputGesture' is not defined


I'm still very new to this as explained earlier.
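
For anyone else hitting this, the NameError just means the class hasn't been imported; in current NVDA it lives in the keyboardHandler module, so something along these lines at the top of the plugin or app module should be enough:

from keyboardHandler import KeyboardInputGesture

# then, inside a script:
KeyboardInputGesture.fromName("shift+insert").send()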

On 28/05/2020 5:38 pm, Tyler Spivey wrote:
KeyboardInputGesture.fromName("shift+insert").send()

On 5/28/2020 12:25 AM, Sam Byrne wrote:
Thanks Julien,


Yes I knew of api.getClipData but how to actually get the info from NVDA
memory to being entered on screen was where I got stuck. I was also
unsure of how to write the actual keys that I want to pass when using
passKey.


Cheers,

Sam

On 28/05/2020 5:14 pm, Julien Cochuyt wrote:
Hi,

Another commonly used standard paste gesture often supported in
terminals is shift+insert
(Insert might need to be doubled to be passed through depending on
your NVDA configuration)

To access the clipboard content programmatically with NVDA, you can
use api.getClipData()

Best regards,

Julien Cochuyt
Accessolutions

Le jeu. 28 mai 2020 à 09:01, Sam Byrne <sam.byrne.90@...
<mailto:sam.byrne.90@...>> a écrit :

Hi All,


I need some direction, please, on where to start to be able to perform
standard keystrokes as part of the gestures or scripts I'm trying to
write for a PuTTY app module. I.e., I need to be able to paste the
clipboard contents into a PuTTY window, but as it is emulating a 3270
mainframe window, the Control+V keystroke isn't accepted. I've tried
write(), but that can only be used for adding content to files, not
windows or screens, it seems.


How else could I write, type or import the clipboard text at the
current cursor position? Please provide some basic example code if
possible, as I really have no idea.


I've done some googling, but anything using keyboard and standard
Python appears overly complicated for what I'm trying to achieve.


Thanks,


Sam




Re: Passing keys or entering text

Bill Dengler
 

Tangentially related: how would I type arbitrary text with NVDA? For example, I'd like to create a script to type – and —, as they're not easy to type on my keyboard layout...

Bill
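
One rough way to do that is to put the character on the clipboard and then emulate a paste; a sketch assuming a global plugin (the key binding is only an example, and this does clobber the clipboard):

import api
import globalPluginHandler
from keyboardHandler import KeyboardInputGesture
from scriptHandler import script

class GlobalPlugin(globalPluginHandler.GlobalPlugin):
    @script(gesture="kb:NVDA+shift+8")  # example binding only
    def script_typeEmDash(self, gesture):
        # Copy the character to the clipboard, then emulate control+v.
        api.copyToClip("\u2014")  # em dash
        KeyboardInputGesture.fromName("control+v").send()

Typing the character directly (without touching the clipboard) should also be possible, but I'd have to check which helper NVDA exposes for that.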

-----Original Message-----
From: nvda-devel@groups.io <nvda-devel@groups.io> On Behalf Of Tyler Spivey
Sent: 28 May 2020 03:39
To: nvda-devel@groups.io
Subject: Re: [nvda-devel] Passing keys or entering text

KeyboardInputGesture.fromName("shift+insert").send()

On 5/28/2020 12:25 AM, Sam Byrne wrote:
Thanks Julien,


Yes I knew of api.getClipData but how to actually get the info from
NVDA memory to being entered on screen was where I got stuck. I was
also unsure of how to write the actual keys that I want to pass when
using passKey.


Cheers,

Sam

On 28/05/2020 5:14 pm, Julien Cochuyt wrote:
Hi,

Another commonly used standard paste gesture often supported in
terminals is shift+insert (Insert might need to be doubled to be
passed through depending on your NVDA configuration)

To access the clipboard content programmatically with NVDA, you can
use api.getClipData()

Best regards,

Julien Cochuyt
Accessolutions

Le jeu. 28 mai 2020 à 09:01, Sam Byrne <sam.byrne.90@...
<mailto:sam.byrne.90@...>> a écrit :

Hi All,


I need some direction, please, on where to start to be able to perform
standard keystrokes as part of the gestures or scripts I'm trying to
write for a PuTTY app module. I.e., I need to be able to paste the
clipboard contents into a PuTTY window, but as it is emulating a 3270
mainframe window, the Control+V keystroke isn't accepted. I've tried
write(), but that can only be used for adding content to files, not
windows or screens, it seems.


How else could I write, type or import the clipboard text at the
current cursor position? Please provide some basic example code if
possible, as I really have no idea.


I've done some googling, but anything using keyboard and standard
Python appears overly complicated for what I'm trying to achieve.


Thanks,


Sam




Re: Passing keys or entering text

Tyler Spivey
 

KeyboardInputGesture.fromName("shift+insert").send()
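
For the PuTTY case, a rough sketch of where that line could live in an app module (the module layout and the key binding are just illustrative):

import appModuleHandler
from keyboardHandler import KeyboardInputGesture
from scriptHandler import script

class AppModule(appModuleHandler.AppModule):
    @script(gesture="kb:NVDA+v")  # example binding only
    def script_pasteClipboard(self, gesture):
        # The 3270 emulation ignores control+v, but terminals generally
        # honour shift+insert, so emulate that keystroke instead.
        KeyboardInputGesture.fromName("shift+insert").send()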

On 5/28/2020 12:25 AM, Sam Byrne wrote:
Thanks Julien,


Yes I knew of api.getClipData but how to actually get the info from NVDA
memory to being entered on screen was where I got stuck. I was also
unsure of how to write the actual keys that I want to pass when using
passKey.


Cheers,

Sam

On 28/05/2020 5:14 pm, Julien Cochuyt wrote:
Hi,

Another commonly used standard paste gesture often supported in
terminals is shift+insert
(Insert might need to be doubled to be passed through depending on
your NVDA configuration)

To access the clipboard content programmatically with NVDA, you can
use api.getClipData()

Best regards,

Julien Cochuyt
Accessolutions

Le jeu. 28 mai 2020 à 09:01, Sam Byrne <sam.byrne.90@...
<mailto:sam.byrne.90@...>> a écrit :

Hi All,


I need some direction, please, on where to start to be able to perform
standard keystrokes as part of the gestures or scripts I'm trying to
write for a PuTTY app module. I.e., I need to be able to paste the
clipboard contents into a PuTTY window, but as it is emulating a 3270
mainframe window, the Control+V keystroke isn't accepted. I've tried
write(), but that can only be used for adding content to files, not
windows or screens, it seems.


How else could I write, type or import the clipboard text at the
current cursor position? Please provide some basic example code if
possible, as I really have no idea.


I've done some googling, but anything using keyboard and standard
Python appears overly complicated for what I'm trying to achieve.


Thanks,


Sam




Re: Passing keys or entering text

Sam Byrne
 

Thanks Julien,


Yes, I knew of api.getClipData, but how to actually get the info from NVDA's memory to being entered on screen was where I got stuck. I was also unsure how to write the actual keys that I want to pass when using passKey.


Cheers,

Sam

On 28/05/2020 5:14 pm, Julien Cochuyt wrote:
Hi,

Another commonly used standard paste gesture often supported in terminals is shift+insert
(Insert might need to be doubled to be passed through depending on your NVDA configuration)

To access the clipboard content programmatically with NVDA, you can use api.getClipData()

Best regards,

Julien Cochuyt
Accessolutions

Le jeu. 28 mai 2020 à 09:01, Sam Byrne <sam.byrne.90@...> a écrit :
Hi All,


I need some direction, please, on where to start to be able to perform
standard keystrokes as part of the gestures or scripts I'm trying to
write for a PuTTY app module. I.e., I need to be able to paste the
clipboard contents into a PuTTY window, but as it is emulating a 3270
mainframe window, the Control+V keystroke isn't accepted. I've tried
write(), but that can only be used for adding content to files, not
windows or screens, it seems.


How else could I write, type or import the clipboard text at the
current cursor position? Please provide some basic example code if
possible, as I really have no idea.


I've done some googling, but anything using keyboard and standard
Python appears overly complicated for what I'm trying to achieve.


Thanks,


Sam





Re: Passing keys or entering text

Julien Cochuyt
 

Hi,

Another standard paste gesture commonly supported in terminals is shift+insert.
(Insert might need to be doubled to be passed through, depending on your NVDA configuration.)

To access the clipboard content programmatically with NVDA, you can use api.getClipData()

Best regards,

Julien Cochuyt
Accessolutions
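
As a small illustration, api.getClipData() can raise if there is no text on the clipboard, so it is usually wrapped:

import api
import ui

try:
    text = api.getClipData()
except Exception:
    text = ""
if text:
    ui.message(text)  # e.g. announce what would be pasted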


Le jeu. 28 mai 2020 à 09:01, Sam Byrne <sam.byrne.90@...> a écrit :
Hi All,


I need some direction, please, on where to start to be able to perform
standard keystrokes as part of the gestures or scripts I'm trying to
write for a PuTTY app module. I.e., I need to be able to paste the
clipboard contents into a PuTTY window, but as it is emulating a 3270
mainframe window, the Control+V keystroke isn't accepted. I've tried
write(), but that can only be used for adding content to files, not
windows or screens, it seems.


How else could I write, type or import the clipboard text at the
current cursor position? Please provide some basic example code if
possible, as I really have no idea.


I've done some googling, but anything using keyboard and standard
Python appears overly complicated for what I'm trying to achieve.


Thanks,


Sam





Passing keys or entering text

Sam Byrne
 

Hi All,


I need some direction, please, on where to start to be able to perform standard keystrokes as part of the gestures or scripts I'm trying to write for a PuTTY app module. I.e., I need to be able to paste the clipboard contents into a PuTTY window, but as it is emulating a 3270 mainframe window, the Control+V keystroke isn't accepted. I've tried write(), but that can only be used for adding content to files, not windows or screens, it seems.


How else could I write, type or import the clipboard text at the current cursor position? Please provide some basic example code if possible, as I really have no idea.


I've done some googling, but anything using keyboard and standard Python appears overly complicated for what I'm trying to achieve.


Thanks,


Sam


Re: Performance logging

 

Hey Karl-Otto,


I'm currently investigating the very issues you're describing. You can find progress information, as well as a link to an NVDA try build, at the following URL: https://github.com/nvaccess/nvda/issues/11209

GitHub is also the place to report issues.


Regards,

Leonard

On 26/05/2020 21:41, Karl-Otto Rosenqvist wrote:
Hi!
I wonder how I can track or log performance the best way?
I experience quite some lag in Visual Studio 2019 in certain situations such as when hitting a breakpoint. It takes a long time until Visual Studio becomes responsive.

There are other situations too and it would be nice to be able to understand what NVDA is doing at that time to see if there’s something that could be optimized.


Kind regards

Karl-Otto
MAWINGU
0701-75 98 56
https://mawingu.se
Orgnr: 750804-3937



Re: Where do I report bugs?

James Scholes
 

On 26/05/2020 at 2:36 pm, Karl-Otto Rosenqvist wrote:
Hi!
I'm sorry for this question, but I'm unsure where to report bugs.
There are some issues with the visual highlighting in some applications where it's drawn incorrectly, probably because it doesn't detect edges correctly.
Kind regards
Karl-Otto
MAWINGU
0701-75 98 56
https://mawingu.se
Orgnr: 750804-3937


Performance logging

Karl-Otto Rosenqvist
 

Hi!
I wonder what the best way is to track or log performance?
I experience quite a bit of lag in Visual Studio 2019 in certain situations, such as when hitting a breakpoint. It takes a long time until Visual Studio becomes responsive.

There are other situations too, and it would be nice to be able to understand what NVDA is doing at that time, to see if there's something that could be optimized.


Kind regards

Karl-Otto
MAWINGU
0701-75 98 56
https://mawingu.se
Orgnr: 750804-3937
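
When the slow path is in code you can edit (a script or add-on), one low-tech way to see where the time goes is to time the suspect section and write the result to the NVDA log, roughly like this:

import time
from logHandler import log

start = time.perf_counter()
# ... the operation being investigated ...
log.debug("operation took %.3f s" % (time.perf_counter() - start))

With the logging level set to debug in NVDA's general settings, the timings then show up in the log viewer (NVDA+F1).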


Where do I report bugs?

Karl-Otto Rosenqvist
 

Hi!
I'm sorry for this question, but I'm unsure where to report bugs.

There are some issues with the visual highlighting in some applications where it's drawn incorrectly, probably because it doesn't detect edges correctly.


Kind regards

Karl-Otto
MAWINGU
0701-75 98 56
https://mawingu.se
Orgnr: 750804-3937


Re: NVDA 2020 release schedule

Bill Dengler
 

Hi Reef et al,
Any updates on the 2020.2 release? In particular:
I'd consider #11132 (Say-all repeats some lines) to be a blocking issue. Maybe consider reverting cancellable speech for 2020.2, then restoring it for 2020.3 if a fix can't be found, so as not to block the release?
I'd like to see #11206 (Browse mode with auto focus focusable elements disabled: Focus focusable ancestor at caret when forcing focus mode with NVDA+space) make it into the release, assuming it's stable. This way, the new fast browse mode can be tested in 2020.2 (as a non-default option) and #11190 (which enables fast browse mode by default) can be pushed to 2020.3, giving users lots of time to test and report issues, as this new default represents a major change to NVDA functionality.
Windows Terminal has now made it to version 1.0 (a stable release), and NVDA 2020.2 makes this application accessible (it currently isn't with previous NVDA builds). We should release this functionality to users as soon as practical.

Thanks,
Bill

-----Original Message-----
From: nvda-devel@groups.io <nvda-devel@groups.io> On Behalf Of Noelia Ruiz
Sent: 29 April 2020 10:28
To: nvda-devel@groups.io
Subject: Re: [nvda-devel] NVDA 2020 release schedule

IMO this could be documented if you find it useful, since the article about the release process was last updated more than a year ago.
https://github.com/nvaccess/nvda/wiki/ReleaseProcess
Kind regards

2020-04-29 14:59 GMT+02:00, Reef Turner <reef@...>:
In the past we have aimed for 4 releases a year. The new approach will
likely result in a variable number of releases per year. The releases
will be smaller but more often. The release process takes a minimum of
6 weeks from first beta until release. That said, releases may be
delayed by high-priority issues. Realistically, I don't expect the
number of releases to change drastically.

-----Original Message-----
From: nvda-devel@groups.io <nvda-devel@groups.io> On Behalf Of Akash
Kakkar
Sent: Tuesday, 21 April 2020 8:37 PM
To: nvda-devel@groups.io
Subject: Re: [nvda-devel] NVDA 2020 release schedule

Hey Reef,
You said:
--We are aiming for a shorter release cycle, so we will likely start
the 2020.2 beta as soon as the 2020.1 release is made.-- So, will this
be the case permanently, or only this time? And if it is permanent,
how many releases can we expect in a year?
Either way, I'm interested in having more releases throughout the year.


On 4/21/20, Reef Turner <reef@...> wrote:
The 2020.1 RC is currently blocked waiting for:
https://github.com/nvaccess/nvda/pull/11040
We are aiming for a shorter release cycle, so we will likely start
the
2020.2 beta as soon as the 2020.1 release is made.










Re: Question about synthDriver.speak

Brian's Mail list account
 

SAM is a free piece of software from Dolphin that allows other software to access synths, and indeed other things these days as well. Maybe, then, the synth you are accessing via SAM is either not handshaking properly, or SAM does not pass it on. This may well be for rights reasons of course, if the synths are also sold as stand-alone products.
Brian

bglists@...
Sent via blueyonder.
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.
Newsgroup monitored: alt.comp.blind-users

----- Original Message -----
From: "Reef Turner" <reef@...>
To: <nvda-devel@groups.io>
Sent: Wednesday, May 20, 2020 2:34 PM
Subject: Re: [nvda-devel] Question about synthDriver.speak


I can only guess, but based on what you are describing it seems possible that the speech indexes aren't calling back to the speech system. When a synth reaches an index it is expected to call notify on the synthIndexReached action. When the synth has no more queued speech it is expected to call notify on synthDoneSpeaking.

These are defined in source/synthDriverHandler.py see synthIndexReached, synthDoneSpeaking

For an example inspect synthDrivers.espeak.SynthDriver._onIndexReached

Hope this helps.


Decomposing gestures

Andy B.
 

Hi,

 

I am working on an add-on that toggles between different numpad modes, assuming the user has a numpad to use. In one of my numpad modes, it is necessary to decompose the provided gesture and determine if it is an NVDA command, or at least has an NVDA key present in the gesture's modifiers list. Can someone advise on how to determine if a gesture is a true NVDA gesture, and how to determine if the NVDA key was pressed as part of that same gesture? I can decompose the gesture down to key names, so that part of the code is not needed in any samples.

 

Thanks for the time and help.

 

 

 

 

Sent from Mail for Windows 10
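
One possible starting point, sketched on the assumption that the incoming gesture is a keyboardHandler.KeyboardInputGesture and that its modifierNames set and inherited script property behave as they do in current NVDA source:

from keyboardHandler import KeyboardInputGesture

def analyseGesture(gesture):
    # Only keyboard gestures carry keyboard modifiers.
    if not isinstance(gesture, KeyboardInputGesture):
        return False, False
    # modifierNames is expected to contain "nvda" when the configured
    # NVDA modifier key (insert, numpad insert or caps lock) is held.
    hasNVDAKey = "nvda" in gesture.modifierNames
    # InputGesture.script resolves to the script the gesture is bound
    # to, if any; a non-None value means it maps to an NVDA command.
    isNVDACommand = gesture.script is not None
    return hasNVDAKey, isNVDACommand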

 


Re: Question about synthDriver.speak

Ben Mustill-Rose
 

That did it! Many thanks for your help and have a great day.

On 5/20/20, Reef Turner <reef@...> wrote:
I can only guess, but based on what you are describing it seems possible
that the speech indexes aren't calling back to the speech system. When a
synth reaches an index it is expected to call notify on the
synthIndexReached action. When the synth has no more queued speech it is
expected to call notify on synthDoneSpeaking.

These are defined in source/synthDriverHandler.py see synthIndexReached,
synthDoneSpeaking

For an example inspect synthDrivers.espeak.SynthDriver._onIndexReached

Hope this helps.




Re: Question about synthDriver.speak

Reef Turner
 

I can only guess, but based on what you are describing it seems possible that the speech indexes aren't calling back to the speech system. When a synth reaches an index it is expected to call notify on the synthIndexReached action. When the synth has no more queued speech it is expected to call notify on synthDoneSpeaking.

These are defined in source/synthDriverHandler.py see synthIndexReached, synthDoneSpeaking

For an example inspect synthDrivers.espeak.SynthDriver._onIndexReached

Hope this helps.
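
In outline, based on the eSpeak driver's _onIndexReached, the pattern looks something like this at the point where the engine reports progress (a sketch only):

from synthDriverHandler import synthIndexReached, synthDoneSpeaking

# Inside a SynthDriver subclass, called back by the engine:
def _onIndexReached(self, index):
    if index is not None:
        # An index embedded in the speech sequence was reached.
        synthIndexReached.notify(synth=self, index=index)
    else:
        # Nothing left in the queue: tell NVDA speech has finished.
        synthDoneSpeaking.notify(synth=self)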


Question about synthDriver.speak

Ben Mustill-Rose
 

Hi all

I’m looking for information about the synthDriver.speak method.

For some background, I’m currently taking a stab at modifying an
add-on that lets NVDA interface with a product called SAM (Synthesizer
Access Manager) to work under Python 3 and the new speech system. I'm
a software engineer by day and reasonably familiar with Python but
haven't written anything for NVDA before.

I've managed to get it talking but not everything is being spoken. If
I open the Run dialog, for example, I hear "Run dialog, Type the name
of a program, folder, document, or Internet resource, and Windows will
open it for you", but not the subsequent speech telling me what's in
the edit area. Examining the list that's being sent to speak suggests
that the missing items never actually reach it, even though the debug
output suggests that they do.

Clearly I'm not doing something correctly but I'm not sure what. I can
share my code if people would find it useful but I'm not really
looking for someone to fix it for me, more information around what
might have changed internally to cause this kind of behaviour. I
originally assumed I wasn't handling some of the strings correctly but
as per above, inspecting the list that gets passed to the speak method
seems to point to large chunks of the speech just not being sent to
it.

Any pointers would be amazing - hoping for something simple, albeit
non-obvious.

Cheers,
Ben.
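
For reference, the list passed to speak mixes plain strings with command objects; indexes arrive as IndexCommand items, which is what the synthIndexReached notifications mentioned in the replies above refer to. A bare-bones sketch of the expected shape, assuming the speech.commands module of the current speech system:

from speech.commands import IndexCommand

# Inside the SynthDriver subclass:
def speak(self, speechSequence):
    for item in speechSequence:
        if isinstance(item, str):
            pass  # queue the text with the engine
        elif isinstance(item, IndexCommand):
            pass  # queue a bookmark so the engine reports item.index back
        # other commands (pitch, language, etc.) can be ignored at first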


Re: Running win10 without a monitor causes window size to become small?

Brian's Mail list account
 

Actually, the main problem for me in updating to 10 is that it screws up special software: Windows 10 removes it and trashes all the data. It then needs manual configuration again, which is a pain in the rear. I personally feel that you should be able to tell Windows 10's twice-yearly updates to leave some software alone, no matter what it is, even if it is a modified version of an older Microsoft product like Word, Outlook Express, or a legacy bespoke sound editor for talking newspapers. Yes, all of these can run in 10, but it never lets you decide to keep them.
Now that we have Edge for 7, this is how it stays until something big happens. Instead, I may buy a Win 10 machine for those times when I need it.
Brian

bglists@...
Sent via blueyonder.
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.
Newsgroup monitored: alt.comp.blind-users

----- Original Message -----
From: "Luke Davis" <luke@...>
To: <nvda-devel@groups.io>
Sent: Tuesday, May 19, 2020 3:42 PM
Subject: Re: [nvda-devel] Running win10 without a monitor causes window size to become small?


On Tue, 19 May 2020, Rich Caloggero wrote:

I will be using my Win7 laptop going forward, but I also know that it's a bit risky since Win7 is no longer being updated.
Out of curiosity: why not update it? I have been told that upgrading to Win 10 for free is still possible. I can't confirm that, but at least in the days right after the end-of-support date for Win 7, I know it was.

Luke



Re: Running win10 without a monitor causes window size to become small?

Brian's Mail list account
 

I hope not, since I never have a monitor on my machines - it's just clutter.
I'd have thought one could override it somehow in settings? I'm using 7 too.
Brian

bglists@...
Sent via blueyonder.
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.
Newsgroup monitored: alt.comp.blind-users

----- Original Message -----
From: "Rich Caloggero" <richcaloggero@...>
To: <nvda-devel@groups.io>
Sent: Tuesday, May 19, 2020 2:50 PM
Subject: [nvda-devel] Running win10 without a monitor causes window size to become small?


I'm a website evaluator for my day job. I was testing on my Win10 machine and noticed that the site I was testing thought I was on a small screen. I tried three different browsers with the same results, so it seems that Windows 10 is telling the browser that I have a very small window size, causing the site to switch into mobile mode.

Can anyone else confirm this? If this is indeed what is happening, is there a solution? I have no space for a monitor, so I cannot use one on this machine. I will be using my Win7 laptop going forward, but I also know that it's a bit risky since Win7 is no longer being updated.

Thanx for any info.
-- Rich

--
-- Rich


Re: Running win10 without a monitor causes window size to become small?

David Csercsics
 

Yes, this probably has to do with the fact that, when it doesn't have a monitor connected, Windows 10 will set your video card to the maximum resolution it supports. At least that seems to be the case here, since the recommended resolution for this machine is 2048x1536. If I understand things, that means you'd have really small icons, windows and other things, but a lot of them. You'd probably have to check that and set it manually. The scaling options may have an effect as well; I know next to nothing about graphics, so I have no idea what you could set it to, or whether this is even a problem. I get quite a few things that tell me they're off screen when I use object navigation, and I don't know whether that is related to having no monitor connected as well.