Re: sounds on WSL command line

UMIT ERDEM Yigitoglu
 

Thank you all, it works. Do you have any suggestions/advice for a beginner command-line user? Is there a good resource for learning more, or anything I should be aware of?
Thank you again. 
best regards


Collection of very useful resources for developing NVDA

J.G
 

Hello,

I found a huge collection of libraries, frameworks, etc. in/for Python, which may help developers working on NVDA core and/or add-ons:
https://github.com/vinta/awesome-python
Some of these resources are already used by NVDA, though.

I hope it is helpful.

best regards,
Jožef


Enhanced Touch Gestures notice: global touch support toggle and touch gesture passthrough features to be removed in a future release

 

Hello NVDA developers and add-ons community,

 

I’m delighted to announce that two major features from the Enhanced Touch Gestures add-on are being considered for inclusion in a future NVDA release (or rather, one is already here and the other is being looked at). As such, the following features will be removed from the add-on in a future release:

 

  • Enable/disable touch support completely: the ability to enable or disable touch support (in a profile-aware way) is now part of the latest NVDA alpha snapshots.
  • Command to enable/disable touch support and touch passthrough: until now it was possible to enable or disable touch support through a command that comes with the add-on, and to suspend touch support for up to ten seconds (touch passthrough). There is an NVDA pull request (PR 11297) to bring this to NVDA itself (not written by me); if approved, touch passthrough (automatic and manual) will be removed from the add-on, as it will no longer be needed.

 

Because many people are using NVDA stable releases (or trying out betas), these changes will not happen overnight; they will happen in a future version of Enhanced Touch Gestures. However, in order to make this process as smooth as possible, I’m laying the foundation in version 20.07 of the add-on (July). Specifically:

 

  • Last tested version will be set to 2020.3 (at least in the add-on manifest). This is to remind me that I need to conduct more tests in the coming days.
  • If touch support is disabled globally, an appropriate debug log message will be printed.
  • The add-on comes with an extension point action handler for configuration profile switches (see the sketch after this list). If an NVDA version with the ability to toggle touch support is in use, the action handler will ignore config profile switches altogether, as that is now handled by NVDA.
  • Automatic touch passthrough (where touch support can be suspended for up to ten seconds) will be deprecated. The only passthrough mode will be manual, i.e. touch support can be toggled from everywhere rather than suspended for a set time. Automatic touch passthrough will be removed in a fall release, along with the user interface for configuring it; the only thing left in the “Enhanced Touch Gestures” settings panel will be the enable/disable touch support checkbox, meant for older NVDA releases.
  • As part of toggling touch support from everywhere, a keyboard command is proposed (see the mentioned PR). If this comes to NVDA, Enhanced Touch Gestures will add the same keyboard command to toggle manual touch passthrough mode for ease of transition.
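
For add-on authors curious about the extension point wiring mentioned above, here is a minimal, untested sketch of how a global plugin can register for profile switches; the version threshold and helper function are illustrative assumptions, not the add-on’s actual code:

import buildVersion
import config
import globalPluginHandler

def _applyAddonTouchSetting():
    # Hypothetical helper: re-apply the add-on's own touch on/off flag.
    pass

class GlobalPlugin(globalPluginHandler.GlobalPlugin):
    def __init__(self):
        super().__init__()
        # NVDA notifies this action after every configuration profile switch.
        config.post_configProfileSwitch.register(self._onProfileSwitch)

    def terminate(self):
        config.post_configProfileSwitch.unregister(self._onProfileSwitch)
        super().terminate()

    def _onProfileSwitch(self, **kwargs):
        # If this NVDA build manages touch support natively (the 2020.3
        # threshold is an assumption), ignore profile switches entirely.
        if (buildVersion.version_year, buildVersion.version_major) >= (2020, 3):
            return
        _applyAddonTouchSetting()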

 

The result of this work is that the Enhanced Touch Gestures add-on will be left with the following features:

 

  • Global object mode gestures (read title, read status bar, for instance)
  • Additional NVDA command gestures (four finger double tap to toggle input help, for instance)
  • Web and synth settings touch modes
  • Touch keyboard enhancements

 

P.S. The coordinate announcement beep feature will be removed too (that feature is beyond the scope of this post).

Thanks.

Cheers,

Joseph


Re: sounds on WSL command line

Sean
 

Errors on the command line come in many forms, such as syntax errors and access errors. NVDA has no special option for these; that output is entirely up to Windows.

Detecting them would require parsing the console output, which is challenging for this console. For now, there is no option other than examining the errors that occur with the review cursor.
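
That said, here is a rough bash sketch of the parsing idea (an illustration, not an existing tool): wrap a command, capture its stderr, and ring the terminal bell whenever the command wrote any error output. Different sounds for different kinds of errors would need smarter matching on the captured text.

run() {
    local errfile status
    errfile=$(mktemp)
    "$@" 2>"$errfile"                  # run the command, diverting stderr to a file
    status=$?
    cat "$errfile" >&2                 # still show the error text to the user
    [ -s "$errfile" ] && printf '\a'   # terminal bell if stderr was non-empty
    rm -f "$errfile"
    return $status
}

Usage: run ls /nonexistent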

On 22/06/2020 21:25, UMIT ERDEM Yigitoglu wrote:
Hello, 
I have just started to learn about the Unix command line with Windows Subsystem for Linux. I use the command line with a braille display and want to silence NVDA. However, I want to be able to know whether my command executed correctly without using the review cursor all the time. I was wondering if there is a tool that plays different tones for different kinds of output and different kinds of error messages. If such a program is possible and has yet to be written, I would love to work on it later when I advance in programming; I believe it would be a very important tool for blind programmers. Any ideas?
--

Sean

👨‍🦯 I’m a student and programmer. I often write Python, sometimes Go, and rarely C++.


Re: Custom screen access

Travis Siegel
 

That does make a lot more sense.  But NVDA already has read-word/sentence/line capabilities; I'm sure you could call those functions when the app needs to read the corresponding pieces of text.  You wouldn't even need to write any new functionality, just tie into the existing function calls when they are requested.  That should save you considerable time.
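
As a hedged illustration of "tie into the existing function calls" (the script name below exists in NVDA's globalCommands module, but calling it with None instead of a real gesture is an assumption, not a documented contract):

from globalCommands import commands

def readCurrentReviewLine():
    # Reuse NVDA's own "read current line at the review cursor" script
    # rather than re-implementing line reading from scratch.
    commands.script_review_currentLine(None)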


On 6/22/2020 8:35 PM, Christian Comaschi wrote:

Ok, here are the details of what I'm trying to achieve. I have almost made my decision, so if you don't have the time to read this lengthy message, just ignore it and consider my problem solved.
Quoted message redacted at the request of Christian Comaschi.


Re: sounds on WSL command line

Luke Davis
 

I second all of what Tage said, except the things to paste in .bashrc.

All you really need is to paste the following:

PROMPT_COMMAND='echo -n "($?) "'

This gives the same effect, without the obscurity of the function call or the reassignment of $PS1.
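
For example (prompt appearance will vary with your PS1, and the exit code depends on the command):

$ ls /nonexistent
ls: cannot access '/nonexistent': No such file or directory
(2) user@host:~$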

One note on placement: .bashrc is the right file for this, since it is read by every interactive shell; .profile is only read by login shells.

Luke

On Mon, 22 Jun 2020, Tage Johansson wrote:

Hello,
I'm glad that you are trying out a Unix shell. I've used a Linux command line for three years now and I can say that it is very fun. I use fewer and fewer graphical programs and use the command line for more and more. Today I only use a mail program (Thunderbird) and a web browser (Firefox) outside the terminal. Command line applications are much more understandable, accessible and customizable than graphical programs. They take some more time to learn, but you learn what you are really doing rather than how to use a complicated GUI. WSL is also really awesome.
Anyway, on to your question. You want to know, quickly, whether the last command succeeded or not. In Unix, the exit status of a command is denoted by a return code. If the return code is 0, the command was successful; otherwise it failed. The simplest solution is to customize your prompt string so that it contains the return code of the last command.
I don't know what shell you are using. Most probably you are using bash, but you can check by issuing the command `echo $SHELL`. If the output is something similar to /bin/bash or /usr/bin/bash, you are good to go.
Now you should edit your .bashrc file, located right under your home directory. Paste the following text at the bottom of the file:
__old_ps1=$PS1
__prompt_command() {
    PS1="($?)$__old_ps1"
}
PROMPT_COMMAND=__prompt_command
Save the file and restart your shell. The prompt should now contain the return code in parentheses at the start of the line.
Feel free to experiment with that code and if you have any questions feel free to ask.
Best regards,
Tage
On 6/22/2020 8:25 PM, UMIT ERDEM Yigitoglu wrote:
Hello, 
I have just started to learn about the Unix command line with Windows Subsystem for Linux. I use the command line with a braille display and want to silence NVDA. However, I want to be able to know whether my command executed correctly without using the review cursor all the time. I was wondering if there is a tool that plays different tones for different kinds of output and different kinds of error messages. If such a program is possible and has yet to be written, I would love to work on it later when I advance in programming; I believe it would be a very important tool for blind programmers. Any ideas?


Re: sounds on WSL command line

Tage Johansson <frans.tage@...>
 

Hello,

I'm glad that you are trying out a Unix shell. I've used a Linux command line for three years now and I can say that it is very fun. I use fewer and fewer graphical programs and use the command line for more and more. Today I only use a mail program (Thunderbird) and a web browser (Firefox) outside the terminal. Command line applications are much more understandable, accessible and customizable than graphical programs. They take some more time to learn, but you learn what you are really doing rather than how to use a complicated GUI. WSL is also really awesome.


Anyway, on to your question. You want to know, quickly, whether the last command succeeded or not. In Unix, the exit status of a command is denoted by a return code. If the return code is 0, the command was successful; otherwise it failed. The simplest solution is to customize your prompt string so that it contains the return code of the last command.
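
For example, you can see return codes directly with the built-in commands true and false:

$ true; echo $?
0
$ false; echo $?
1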


I don't know what shell you are using. Most probably you are using bash, but you can check by issuing the command `echo $SHELL`. If the output is something similar to /bin/bash or /usr/bin/bash, you are good to go.


Now you should edit your .bashrc file, located right under your home directory. Paste the following text at the bottom of the file:


# Remember the original prompt string.
__old_ps1=$PS1
# Runs before each prompt is printed; at this point $? still holds the
# exit status of the last command, so prepend it to the prompt.
__prompt_command() {
    PS1="($?)$__old_ps1"
}
PROMPT_COMMAND=__prompt_command


Save the file and restart your shell. The prompt should now contain the return code in parentheses at the start of the line.


Feel free to experiment with that code and if you have any questions feel free to ask.
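
Since the original question asked about tones rather than text, here is a hedged variant of the same idea (untested; it assumes WSL, where Windows' powershell.exe is on the PATH, and uses .NET's [console]::beep for the sound). It beeps at different pitches depending on the last exit status; the frequencies are arbitrary, and the trailing & keeps PowerShell's slow start-up from delaying the prompt:

__status_beep() {
    local status=$?
    # High short beep on success, low longer beep on failure.
    if [ "$status" -eq 0 ]; then
        powershell.exe -NoProfile -Command "[console]::beep(880,80)" &
    else
        powershell.exe -NoProfile -Command "[console]::beep(220,250)" &
    fi
}
PROMPT_COMMAND=__status_beep

Note that PROMPT_COMMAND holds a single command string in traditional bash, so to combine this with the prompt-string approach above, call both function bodies from one function.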


Best regards,

Tage


On 6/22/2020 8:25 PM, UMIT ERDEM Yigitoglu wrote:
Hello, 
I have just started to learn about the Unix command line with Windows Subsystem for Linux. I use the command line with a braille display and want to silence NVDA. However, I want to be able to know whether my command executed correctly without using the review cursor all the time. I was wondering if there is a tool that plays different tones for different kinds of output and different kinds of error messages. If such a program is possible and has yet to be written, I would love to work on it later when I advance in programming; I believe it would be a very important tool for blind programmers. Any ideas?


sounds on WSL command line

UMIT ERDEM Yigitoglu
 

Hello, 
I have just started to learn about the Unix command line with Windows Subsystem for Linux. I use the command line with a braille display and want to silence NVDA. However, I want to be able to know whether my command executed correctly without using the review cursor all the time. I was wondering if there is a tool that plays different tones for different kinds of output and different kinds of error messages. If such a program is possible and has yet to be written, I would love to work on it later when I advance in programming; I believe it would be a very important tool for blind programmers. Any ideas?


Re: Custom screen access

James Scholes
 

Maybe this will help: https://github.com/dictationbridge

Regards,

James Scholes

On 21/06/2020 at 3:32 pm, Christian Comaschi wrote:
Thanks for the explanation, but unfortunately I’m working on a steno application that can be thought of more as a driver than as a normal application. In fact, its main goal is to translate steno keyboard key combinations into normal key presses and send them to other apps, e.g. text editing apps.
So I don’t have to make its windows accessible; they already are. But I do have to do what I wrote in my previous mail: read the text and caret position of the most common applications in a screen-reader-like manner.
I could explain in more detail why I have to do that, but it would take pages!

On 21 June 2020, at 20:47, Travis Siegel <tsiegel@...> wrote:

In general, if you have the source code for an application (and with open source, you do), there's no need to fiddle with screen reader built-in functions at all; just rewrite the actual application to use standard Windows API calls (instead of custom functionality) such as GUI elements, buttons, and the like. This will automatically translate to better functionality in screen readers, because they're already built to watch the regular APIs for information.
For example, if the app is written in Java, instead of drawing your text onto a canvas like so many apps do, simply use a standard text control. It may take more work to make it look the way you want (which is why some folks use the graphical canvas), but it will automatically become more accessible without you having to do anything at all with regard to screen readers. Other languages have similar issues. In general, using a non-graphical method to get the text to the screen, properly labeling graphical elements, and using standard Windows controls instead of creating your own from scratch will make applications completely accessible, with very few tweaks necessary to fix any accessibility issues that remain.
In general, the more custom GUI elements you use, the less accessible your application becomes. Obviously, there are ways to get around this, but few (if any) developers know enough about accessibility out of the box to make the required modifications to custom elements so they work with screen readers. I don't know specifically what you're trying to fix, I've never heard of the application, nor do I know what language it's written in; but most of the time, making an application more accessible doesn't require writing scripts or screen reader modules. Simply make the application use standard Windows controls at the source level, and most of those things will solve themselves.

On Sun, 21 Jun 2020, Christian Comaschi wrote:

Hello,
I'm asking a question that might be a little off topic because I'm not planning to develop anything for NVDA at the moment; but I'm working on an accessibility project and I'd like to know more about screen reader internals, and I think that someone here can help me find the info I need.
I'm writing custom code to improve the accessibility of an open source application (Plover), because common screen reader scripts and app modules alone don't allow me to bring it to the needed accessibility requirements.
The problem is that at some point I need to read text from editable controls of any application in a "screen reader"-like manner, so I would like to know how screen readers can get the caret position and read the text of an editable control, and the different approaches of JAWS and NVDA.
I'm asking about the details of this functionality because I am trying to figure out whether it is a viable solution to read text from the screen in a "screen reader"-like manner with an approach that is valid for almost every application, or whether it's too complex because it would require re-inventing a screen driver from scratch or re-inventing scripts for common applications. In the latter case, I would consider a less stand-alone approach and make the application work in tandem with JAWS or NVDA.

After some analysis I have come to a conclusion and I would like to know if it's right:
- NVDA has no generic way to "read" the text given a screen position, but there are scripts for the most common applications that provide this information to the main module using the most appropriate technique for each application (Win32 API, MSAA, UIA or other means);
- JAWS seems to have generic functions such as "SayLine", "SayRow" or "SaySentence" that work for most applications because of its video intercept driver.

As a first try, I wrote some small scripts using just UIA to read the text and caret position inside Notepad or Winword, but it didn't work; I also tried Microsoft's Inspect tool, meant to analyze the windows of any application for accessibility info, but even that tool wasn't able to get the caret position inside the edit windows.
Am I missing something or is it really that complex?
Thanks in advance
Best,
Christian



Re: Custom screen access

Travis Siegel
 

Not sure why you need to read anything if you're just translating keys; simply hook the keyboard functions, then do a simple replace on the required keystrokes, with no screen reading necessary.  By doing that, the new key combinations would automatically be placed in the field they were targeted for in the first place, and your apps don't even need to know anything changed.  That would be the simplest method.  Now, working out how to get Windows to send you all the key information could certainly benefit from studying screen reader functionality, but I see no need to bother with reading screens at all in your particular case.
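
As a hedged sketch of the hook-and-replace idea, using the third-party keyboard package (pip install keyboard) — an illustration only, not Plover's or NVDA's actual mechanism, and the key names are arbitrary:

import keyboard

# Install a global hook: whenever the source combination is pressed,
# suppress it and emit the replacement key press instead.
keyboard.remap_hotkey("ctrl+alt+k", "enter")

keyboard.wait()  # keep the process (and therefore the hook) alive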

On 6/21/2020 4:32 PM, Christian Comaschi wrote:
Thanks for the explanation, but unfortunately I’m working on a steno application that can be thought of more as a driver than as a normal application. In fact, its main goal is to translate steno keyboard key combinations into normal key presses and send them to other apps, e.g. text editing apps.
So I don’t have to make its windows accessible; they already are. But I do have to do what I wrote in my previous mail: read the text and caret position of the most common applications in a screen-reader-like manner.
I could explain in more detail why I have to do that, but it would take pages!


On 21 June 2020, at 20:47, Travis Siegel <tsiegel@...> wrote:

In general, if you have the source code for an application (and with open source, you do), there's no need to fiddle with screen reader built-in functions at all; just rewrite the actual application to use standard Windows API calls (instead of custom functionality) such as GUI elements, buttons, and the like. This will automatically translate to better functionality in screen readers, because they're already built to watch the regular APIs for information.
For example, if the app is written in Java, instead of drawing your text onto a canvas like so many apps do, simply use a standard text control. It may take more work to make it look the way you want (which is why some folks use the graphical canvas), but it will automatically become more accessible without you having to do anything at all with regard to screen readers. Other languages have similar issues. In general, using a non-graphical method to get the text to the screen, properly labeling graphical elements, and using standard Windows controls instead of creating your own from scratch will make applications completely accessible, with very few tweaks necessary to fix any accessibility issues that remain.
In general, the more custom GUI elements you use, the less accessible your application becomes. Obviously, there are ways to get around this, but few (if any) developers know enough about accessibility out of the box to make the required modifications to custom elements so they work with screen readers. I don't know specifically what you're trying to fix, I've never heard of the application, nor do I know what language it's written in; but most of the time, making an application more accessible doesn't require writing scripts or screen reader modules. Simply make the application use standard Windows controls at the source level, and most of those things will solve themselves.

On Sun, 21 Jun 2020, Christian Comaschi wrote:

Hello,
I'm asking a question that might be a little off topic because I'm not planning to develop anything for NVDA at the moment; but I'm working on an accessibility project and I'd like to know more about screen reader internals, and I think that someone here can help me find the info I need.
I'm writing custom code to improve the accessibility of an open source application (Plover), because common screen reader scripts and app modules alone don't allow me to bring it to the needed accessibility requirements.
The problem is that at some point I need to read text from editable controls of any application in a "screen reader"-like manner, so I would like to know how screen readers can get the caret position and read the text of an editable control, and the different approaches of JAWS and NVDA.
I'm asking about the details of this functionality because I am trying to figure out whether it is a viable solution to read text from the screen in a "screen reader"-like manner with an approach that is valid for almost every application, or whether it's too complex because it would require re-inventing a screen driver from scratch or re-inventing scripts for common applications. In the latter case, I would consider a less stand-alone approach and make the application work in tandem with JAWS or NVDA.

After some analysis I have come to a conclusion and I would like to know if it's right:
- NVDA has no generic way to "read" the text given a screen position, but there are scripts for the most common applications that provide this information to the main module using the most appropriate technique for each application (Win32 API, MSAA, UIA or other means);
- JAWS seems to have generic functions such as "SayLine", "SayRow" or "SaySentence" that work for most applications because of its video intercept driver.

As a first try, I wrote some small scripts using just UIA to read the text and caret position inside Notepad or Winword, but it didn't work; I also tried Microsoft's Inspect tool, meant to analyze the windows of any application for accessibility info, but even that tool wasn't able to get the caret position inside the edit windows.
Am I missing something or is it really that complex?
Thanks in advance
Best,
Christian


Re: The "Microsoft Sound Mapper" does not appear correctly in the list of audio output devices in NVDA

Ralf Kefferpuetz
 

Same here with English and German languages…

 

From: nvda-devel@groups.io <nvda-devel@groups.io> On Behalf Of Kostadin Kolev
Sent: Montag, 22. Juni 2020 11:34
To: NVDA screen reader development <nvda-devel@groups.io>
Subject: [nvda-devel] The "Microsoft Sound Mapper" does not appear correctly in the list of audio output devices in NVDA

 

Hello all,

Before filing a bug for this, I want to gather some opinions from other people's experience.

I think this strange problem started occurring after the Windows 10 May 2020 update. The "Microsoft Sound Mapper" does not show correctly in the list of audio output devices in NVDA's dialog for selecting a synthesizer. The device can be selected and used, but instead of a name, there is only a blank item in the list. On my machine, Windows 10 is in Bulgarian with all locale settings set to "Bulgarian". But just for the test, I changed the Windows display language to English and the problem remained. I haven't changed the other locale settings (e.g. the format of time, date, currency and similar things) during that testing. Should I? Could the locale settings affect how that device's name is displayed?

I can reproduce this on at least 3 machines. All of them are running the Windows 10 May 2020 update. On 2 of them I'm running the latest alpha snapshot of NVDA; on the 3rd one, NVDA 2020.1. All of the machines have their locale settings set to "Bulgarian".

For now, I'm not filing a bug against NVDA for one more reason: I can reproduce similar (if not the same) behavior in Audacity. If I open Audacity and press Shift+O to invoke the dialog for choosing an audio output device, the "Microsoft Sound Mapper" entry, which is also selectable in Audacity, is displayed only as " - output"; I think it should be displayed as "Microsoft Sound Mapper - output".

Can someone else reproduce this in NVDA and/or Audacity? If so, and it is not an NVDA issue, how should this be reported to Microsoft so they can easily reproduce it? It is not as if the "Microsoft Sound Mapper" is easy to encounter and check out in Windows.

Thanks much in advance!

______
Best wishes,
Kostadin Kolev


Re: The "Microsoft Sound Mapper" does not appear correctly in the list of audio output devices in NVDA

Rui Fontes
 

Same here...

NVDA 2020.2 Beta 1 and the latest alpha (20354)

Windows 10 2004

All language settings in Portuguese.


Rui Fontes


At 10:33 on 22/06/2020, Kostadin Kolev wrote:

Hello all,

Before filing a bug for this, I want to gather some opinions from other people's experience.

I think this strange problem started occurring after the Windows 10 May 2020 update. The "Microsoft Sound Mapper" does not show correctly in the list of audio output devices in NVDA's dialog for selecting a synthesizer. The device can be selected and used, but instead of a name, there is only a blank item in the list. On my machine, Windows 10 is in Bulgarian with all locale settings set to "Bulgarian". But just for the test, I changed the Windows display language to English and the problem remained. I haven't changed the other locale settings (e.g. the format of time, date, currency and similar things) during that testing. Should I? Could the locale settings affect how that device's name is displayed?

I can reproduce this on at least 3 machines. All of them are running the Windows 10 May 2020 update. On 2 of them I'm running the latest alpha snapshot of NVDA; on the 3rd one, NVDA 2020.1. All of the machines have their locale settings set to "Bulgarian".

For now, I'm not filing a bug against NVDA for one more reason: I can reproduce similar (if not the same) behavior in Audacity. If I open Audacity and press Shift+O to invoke the dialog for choosing an audio output device, the "Microsoft Sound Mapper" entry, which is also selectable in Audacity, is displayed only as " - output"; I think it should be displayed as "Microsoft Sound Mapper - output".

Can someone else reproduce this in NVDA and/or Audacity? If so, and it is not an NVDA issue, how should this be reported to Microsoft so they can easily reproduce it? It is not as if the "Microsoft Sound Mapper" is easy to encounter and check out in Windows.

Thanks much in advance!

______
Best wishes,
Kostadin Kolev


The "Microsoft Sound Mapper" does not appear correctly in the list of audio output devices in NVDA

 

Hello all,

Before filing a bug for this, I want to gather some opinions from other people's experience.

I think this strange problem started occurring after the Windows 10 May 2020 update. The "Microsoft Sound Mapper" does not show correctly in the list of audio output devices in NVDA's dialog for selecting a synthesizer. The device can be selected and used, but instead of a name, there is only a blank item in the list. On my machine, Windows 10 is in Bulgarian with all locale settings set to "Bulgarian". But just for the test, I changed the Windows display language to English and the problem remained. I haven't changed the other locale settings (e.g. the format of time, date, currency and similar things) during that testing. Should I? Could the locale settings affect how that device's name is displayed?

I can reproduce this on at least 3 machines. All of them are running the Windows 10 May 2020 update. On 2 of them I'm running the latest alpha snapshot of NVDA; on the 3rd one, NVDA 2020.1. All of the machines have their locale settings set to "Bulgarian".

For now, I'm not filing a bug against NVDA for one more reason: I can reproduce similar (if not the same) behavior in Audacity. If I open Audacity and press Shift+O to invoke the dialog for choosing an audio output device, the "Microsoft Sound Mapper" entry, which is also selectable in Audacity, is displayed only as " - output"; I think it should be displayed as "Microsoft Sound Mapper - output".

Can someone else reproduce this in NVDA and/or Audacity? If so, and it is not an NVDA issue, how should this be reported to Microsoft so they can easily reproduce it? It is not as if the "Microsoft Sound Mapper" is easy to encounter and check out in Windows.

Thanks much in advance!

______
Best wishes,
Kostadin Kolev


Re: Problems during translation

Rui Fontes
 

Thanks!


I went and read the issue, and it works as expected!


Rui Fontes

NVDA Portuguese team


At 22:03 on 21/06/2020, Cyrille via groups.io wrote:

Hello,

These messages are not announced anymore when pressing Shift+Numpad7, Shift+Numpad9, Shift+Numpad1 and Shift+Numpad3 (desktop layout).

However, they are still announced when pressing Numpad7, Numpad9, Numpad1 and Numpad3 if the review cursor is respectively at the top line, bottom line, leftmost character or rightmost character; this is intended.

Cheers,

Cyrille


On 21/06/2020 at 13:58, Rui Fontes wrote:
Hello!


While translating NVDA 2020.2, I found the following problems:


1 - In changes.t2t I have found this:

- Removed "top" and "bottom" messages when moving the review cursor to the first or last line of the current navigator object. (#9551)
- Removed "left" and "right" messages when moving the review cursor to the first or last character of the line for the current navigator object. (#9551)

but my NVDA 2020.2 Beta1 still announces those messages...


2 - Also in changes.t2t, found this:

- NVDA no longer freezes when you open the context menu for 1Password in the system notification area. (#11017)

- The tool-tips of the icons in the system tray are no longer reported upon keyboard navigation if their text is equal to the name of the icons, to avoid a double announcement. (#6656)


Should we use "system notification area" or "system tray"?


In my opinion, we should use the first...


3 - In NVDA / Preferences / Settings / Advanced, the items in the "Enabled logging categories" list are not translatable...



Rui Fontes

NVDA Portuguese team


Re: Problems during translation

Cyrille
 

Hello,

These messages are not announced anymore when pressing Shift+Numpad7, Shift+Numpad9, Shift+Numpad1 and Shift+Numpad3 (desktop layout).

However, they are still announced when pressing Numpad7, Numpad9, Numpad1 and Numpad3 if the review cursor is respectively at the top line, bottom line, leftmost character or rightmost character; this is intended.

Cheers,

Cyrille

On 21/06/2020 at 13:58, Rui Fontes wrote:
Hello!


While translating NVDA 2020.2, I found the following problems:


1 - In changes.t2t I have found this:

- Removed "top" and "bottom" messages when moving the review cursor to the first or last line of the current navigator object. (#9551)
- Removed "left" and "right" messages when moving the review cursor to the first or last character of the line for the current navigator object. (#9551)

but my NVDA 2020.2 Beta1 still announces those messages...


2 - Also in changes.t2t, found this:

- NVDA no longer freezes when you open the context menu for 1Password in the system notification area. (#11017)

- The tool-tips of the icons in the system tray are no longer reported upon keyboard navigation if their text is equal to the name of the icons, to avoid a double announcement. (#6656)


Should we use "system notification area" or "system tray"?


In my opinion, we should use the first...


3 - In NVDA / Preferences / Settings / Advanced, the items in the "Enabled logging categories" list are not translatable...



Rui Fontes

NVDA Portuguese team


Re: Custom screen access

Christian Comaschi
 

Thanks for the explanation, but unfortunately I’m working on a steno application that can be thought of more as a driver than as a normal application. In fact, its main goal is to translate steno keyboard key combinations into normal key presses and send them to other apps, e.g. text editing apps.
So I don’t have to make its windows accessible; they already are. But I do have to do what I wrote in my previous mail: read the text and caret position of the most common applications in a screen-reader-like manner.
I could explain in more detail why I have to do that, but it would take pages!

On 21 June 2020, at 20:47, Travis Siegel <tsiegel@...> wrote:

In general, if you have the source code for an application (and with open source, you do), there's no need to fiddle with screen reader built-in functions at all; just rewrite the actual application to use standard Windows API calls (instead of custom functionality) such as GUI elements, buttons, and the like. This will automatically translate to better functionality in screen readers, because they're already built to watch the regular APIs for information.
For example, if the app is written in Java, instead of drawing your text onto a canvas like so many apps do, simply use a standard text control. It may take more work to make it look the way you want (which is why some folks use the graphical canvas), but it will automatically become more accessible without you having to do anything at all with regard to screen readers. Other languages have similar issues. In general, using a non-graphical method to get the text to the screen, properly labeling graphical elements, and using standard Windows controls instead of creating your own from scratch will make applications completely accessible, with very few tweaks necessary to fix any accessibility issues that remain.
In general, the more custom GUI elements you use, the less accessible your application becomes. Obviously, there are ways to get around this, but few (if any) developers know enough about accessibility out of the box to make the required modifications to custom elements so they work with screen readers. I don't know specifically what you're trying to fix, I've never heard of the application, nor do I know what language it's written in; but most of the time, making an application more accessible doesn't require writing scripts or screen reader modules. Simply make the application use standard Windows controls at the source level, and most of those things will solve themselves.

On Sun, 21 Jun 2020, Christian Comaschi wrote:

Hello,
I'm asking a question that might be a little off topic because I'm not planning to develop anything for NVDA at the moment; but I'm working on an accessibility project and I'd like to know more about screen reader internals, and I think that someone here can help me find the info I need.
I'm writing custom code to improve the accessibility of an open source application (Plover), because common screen reader scripts and app modules alone don't allow me to bring it to the needed accessibility requirements.
The problem is that at some point I need to read text from editable controls of any application in a "screen reader"-like manner, so I would like to know how screen readers can get the caret position and read the text of an editable control, and the different approaches of JAWS and NVDA.
I'm asking about the details of this functionality because I am trying to figure out whether it is a viable solution to read text from the screen in a "screen reader"-like manner with an approach that is valid for almost every application, or whether it's too complex because it would require re-inventing a screen driver from scratch or re-inventing scripts for common applications. In the latter case, I would consider a less stand-alone approach and make the application work in tandem with JAWS or NVDA.

After some analysis I have come to a conclusion and I would like to know if it's right:
- NVDA has no generic way to "read" the text given a screen position, but there are scripts for the most common applications that provide this information to the main module using the most appropriate technique for each application (Win32 API, MSAA, UIA or other means);
- JAWS seems to have generic functions such as "SayLine", "SayRow" or "SaySentence" that work for most applications because of its video intercept driver.

As a first try, I wrote some small scripts using just UIA to read the text and caret position inside Notepad or Winword, but it didn't work; I also tried Microsoft's Inspect tool, meant to analyze the windows of any application for accessibility info, but even that tool wasn't able to get the caret position inside the edit windows.
Am I missing something or is it really that complex?
Thanks in advance
Best,
Christian


Re: Custom screen access

Travis Siegel
 

In general, if you have the source code for an application (and with open source, you do), there's no need to fiddle with screen reader built-in functions at all; just rewrite the actual application to use standard Windows API calls (instead of custom functionality) such as GUI elements, buttons, and the like. This will automatically translate to better functionality in screen readers, because they're already built to watch the regular APIs for information.
For example, if the app is written in Java, instead of drawing your text onto a canvas like so many apps do, simply use a standard text control. It may take more work to make it look the way you want (which is why some folks use the graphical canvas), but it will automatically become more accessible without you having to do anything at all with regard to screen readers. Other languages have similar issues. In general, using a non-graphical method to get the text to the screen, properly labeling graphical elements, and using standard Windows controls instead of creating your own from scratch will make applications completely accessible, with very few tweaks necessary to fix any accessibility issues that remain.
In general, the more custom GUI elements you use, the less accessible your application becomes. Obviously, there are ways to get around this, but few (if any) developers know enough about accessibility out of the box to make the required modifications to custom elements so they work with screen readers. I don't know specifically what you're trying to fix, I've never heard of the application, nor do I know what language it's written in; but most of the time, making an application more accessible doesn't require writing scripts or screen reader modules. Simply make the application use standard Windows controls at the source level, and most of those things will solve themselves.

On Sun, 21 Jun 2020, Christian Comaschi wrote:

Hello,
I'm asking a question that might be a little off topic because I'm not planning to develop anything for NVDA at the moment; but I'm working on an accessibility project and I'd like to know more about screen reader internals, and I think that someone here can help me find the info I need.
I'm writing custom code to improve the accessibility of an open source application (Plover), because common screen reader scripts and app modules alone don't allow me to bring it to the needed accessibility requirements.
The problem is that at some point I need to read text from editable controls of any application in a "screen reader"-like manner, so I would like to know how screen readers can get the caret position and read the text of an editable control, and the different approaches of JAWS and NVDA.
I'm asking about the details of this functionality because I am trying to figure out whether it is a viable solution to read text from the screen in a "screen reader"-like manner with an approach that is valid for almost every application, or whether it's too complex because it would require re-inventing a screen driver from scratch or re-inventing scripts for common applications. In the latter case, I would consider a less stand-alone approach and make the application work in tandem with JAWS or NVDA.

After some analysis I have come to a conclusion and I would like to know if it's right:
- NVDA has no generic way to "read" the text given a screen position, but there are scripts for the most common applications that provide this information to the main module using the most appropriate technique for each application (Win32 API, MSAA, UIA or other means);
- JAWS seems to have generic functions such as "SayLine", "SayRow" or "SaySentence" that work for most applications because of its video intercept driver.

As a first try, I wrote some small scripts using just UIA to read the text and caret position inside Notepad or Winword, but it didn't work; I also tried Microsoft's Inspect tool, meant to analyze the windows of any application for accessibility info, but even that tool wasn't able to get the caret position inside the edit windows.
Am I missing something or is it really that complex?
Thanks in advance
Best,
Christian


Custom screen access

Christian Comaschi
 

Hello,
I'm asking a question that might be a little off topic because I'm not planning to develop anything for NVDA at the moment; but I'm working on an accessibility project and I'd like to know more about screen reader internals, and I think that someone here can help me find the info I need.
I'm writing custom code to improve the accessibility of an open source application (Plover), because common screen reader scripts and app modules alone don't allow me to bring it to the needed accessibility requirements.
The problem is that at some point I need to read text from editable controls of any application in a "screen reader"-like manner, so I would like to know how screen readers can get the caret position and read the text of an editable control, and the different approaches of JAWS and NVDA.
I'm asking about the details of this functionality because I am trying to figure out whether it is a viable solution to read text from the screen in a "screen reader"-like manner with an approach that is valid for almost every application, or whether it's too complex because it would require re-inventing a screen driver from scratch or re-inventing scripts for common applications. In the latter case, I would consider a less stand-alone approach and make the application work in tandem with JAWS or NVDA.

After some analysis I have come to a conclusion and I would like to know if it's right:
- NVDA has no generic way to "read" the text given a screen position, but there are scripts for the most common applications that provide this information to the main module using the most appropriate technique for each application (Win32 API, MSAA, UIA or other means);
- JAWS seems to have generic functions such as "SayLine", "SayRow" or "SaySentence" that work for most applications because of its video intercept driver.

As a first try, I wrote some small scripts using just UIA to read the text and caret position inside Notepad or Winword, but it didn't work; I also tried Microsoft's Inspect tool, meant to analyze the windows of any application for accessibility info, but even that tool wasn't able to get the caret position inside the edit windows.
Am I missing something or is it really that complex?
Thanks in advance
Best,
Christian
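
For what it's worth, here is a minimal sketch of the UIA route, using the third-party comtypes package (an assumption; NVDA's own implementation works differently). It asks the focused element for its text pattern and reads the degenerate selection range at the caret. Many edit controls simply don't implement this pattern, which would be consistent with the failures in Notepad and Inspect:

import comtypes.client

# Generate and import COM wrappers for UI Automation.
comtypes.client.GetModule("UIAutomationCore.dll")
from comtypes.gen.UIAutomationClient import (
    CUIAutomation,
    IUIAutomation,
    IUIAutomationTextPattern,
    UIA_TextPatternId,
)

uia = comtypes.client.CreateObject(CUIAutomation, interface=IUIAutomation)
element = uia.GetFocusedElement()
pattern = element.GetCurrentPattern(UIA_TextPatternId)
if not pattern:
    print("This control does not expose a UIA text pattern.")
else:
    textPattern = pattern.QueryInterface(IUIAutomationTextPattern)
    selection = textPattern.GetSelection()
    if selection.Length:
        # With nothing selected, this is an empty (degenerate) range
        # positioned at the caret.
        caretRange = selection.GetElement(0)
        print(repr(caretRange.GetText(-1)))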


Problems during translation

Rui Fontes
 

Hello!


While translating NVDA 2020.2, I found the following problems:


1 - In changes.t2t I have found this:

- Removed "top" and "bottom" messages when moving the review cursor to the first or last line of the current navigator object. (#9551)
- Removed "left" and "right" messages when moving the review cursor to the first or last character of the line for the current navigator object. (#9551)

but my NVDA 2020.2 Beta1 still announces those messages...


2 - Also in changes.t2t, found this:

- NVDA no longer freezes when you open the context menu for 1Password in the system notification area. (#11017)

- The tool-tips of the icons in the system tray are no longer reported upon keyboard navigation if their text is equal to the name of the icons, to avoid a double announcement. (#6656)


Should we use "system notification area" or "system tray"?


In my opinion, we should use the first...


3 - In NVDA / Preferences / Settings / Advanced, the items in the "Enabled logging categories" list are not translatable...



Rui Fontes

NVDA Portuguese team


Re: Sending keystrokes

Karl-Otto Rosenqvist
 

Absolutely, that could be one solution. My thought here is to catch the key-down event and announce the function key; if the key-up event then comes from the same function key alone, i.e. with no key-down event for an adjacent function key immediately after, it could be counted as a press of that function key. One could experiment with the timing.

The problem here is that I don’t have the necessary hardware and I’d like to test some kind of working solution before spending all that money.


Regards

Karl-Otto
MAWINGU
0701-75 98 56
https://mawingu.se
Orgnr: 750804-3937

On 20 June 2020 at 00:13, Bill Dengler <codeofdusk@...> wrote:

I’ve actually thought about an alternative solution for this: how about writing a global plugin that intercepts the function keys, announces them on first press, then on second press sends them through? This would make the Touch Bar at least partially accessible, as well as machines with capacitive media keys…

Bill
On Jun 19, 2020, at 18:07, Karl-Otto Rosenqvist <Karl-otto@...> wrote:

Hi!
I wonder if anyone could point me in the right direction. I’d like to create a global plugin that converts some keystrokes to others. How can I send new key events from the Python code? It’s easy to bind the gestures to functions, but how the heck do I generate a key press?

The goal is to use it on a MacBook Pro with Touch Bar running Windows. The MacBooks with Touch Bar lack the physical function keys, and your only option is to use the touch screen where the function keys are drawn.

I want to test using Ctrl + Win + 1 for F1, Ctrl + Win + 2 for F2 and so on.

The MacBook Pros have a very bright screen, which I benefit from, and I’d like to test this to see whether it’s a good enough solution before I buy one; they are quite expensive...


Kind regards

Karl-Otto
MAWINGU
0701-75 98 56
https://mawingu.se
Orgnr: 750804-3937
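
To answer the "how do I generate a key press" part: NVDA can synthesize key presses itself via KeyboardInputGesture. A minimal, untested global plugin sketch (the Ctrl+Win+digit mapping mirrors the proposal above; the names are otherwise illustrative):

import globalPluginHandler
from keyboardHandler import KeyboardInputGesture

class GlobalPlugin(globalPluginHandler.GlobalPlugin):
    def __init__(self):
        super().__init__()
        # Bind Ctrl+Win+1 through Ctrl+Win+9 to one shared script.
        for digit in "123456789":
            self.bindGesture("kb:control+windows+%s" % digit, "sendFunctionKey")

    def script_sendFunctionKey(self, gesture):
        # mainKeyName is the non-modifier part of the pressed gesture,
        # e.g. "1" for Ctrl+Win+1; send the matching function key.
        KeyboardInputGesture.fromName("f" + gesture.mainKeyName).send()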