Re: Introducing our Google Summer of Code 2020 student: Shubham Jain
Marlon Brandão de Sousa
I would focus instead on recognition of common interface elements, for example buttons, checkboxes, label associations, and so on.
Although less glamorous for the final user, in my view screen readers will have to use this approach sooner or later, because nobody can keep up with the pace of technology. Accessibility will keep breaking more and more: the interval between one technology becoming mature in terms of accessibility and being replaced by a newer, immature technology will keep shrinking, while the time needed for new technology to become accessibility-mature will stay the same or grow, given that more resources tend to be allocated to developing new stuff than to maturing current stuff.
This is a market tendency and there is nothing we can do about it. Think about how accessibility and usability as a whole have decreased in Apple's systems because the marketing pressure to release new features keeps growing.
Today Microsoft is spending lots of resources on accessibility. This has made the lives of screen readers for Windows easier than ever, but who knows how long this will last. It might be forever, or it might be six months before the company redirects its efforts to other priorities. What if a rival company arises and starts imposing pressure on Microsoft for new stuff in Windows matters, just as marketing is moving faster and faster in the mobile arena?
The fact of life is that the only thing we can assume will be maintained are the visual interfaces for sighted people. These will never become inaccessible to the sighted, for obvious reasons, and my understanding is that they are standardized enough to be recognizable: a button looks roughly the same in Qt, GTK, Win32, Windows Forms, or exposed through a remote desktop screen, because people recognize it as a button and, when clicked, it behaves like a button. If sighted people can recognize it as a button, then so should image-recognition AI, because unless screen readers start to use an AI approach they won't be able to survive in the long run.
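To give an idea of what visual widget recognition can look like at its simplest, here is a rough sketch (my own illustration, not anything a current screen reader ships): classic normalized cross-correlation template matching, which can locate a known widget such as a standard button inside a screenshot with no accessibility API at all. A real system would use a trained detector rather than a fixed template, but the principle of working purely from pixels is the same.

```python
import numpy as np

def match_template(screen, template):
    """Find the best match of `template` inside `screen` using
    normalized cross-correlation; returns ((row, col), score).
    A pre-deep-learning stand-in for recognizing a known widget."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(screen.shape[0] - th + 1):
        for c in range(screen.shape[1] - tw + 1):
            patch = screen[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tnorm
            score = (p * t).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Synthetic "screenshot": flat background with a crude button-like
# block (darker border, lighter face) pasted at row 10, column 25.
screen = np.zeros((40, 60))
button = np.ones((8, 20))
button[1:-1, 1:-1] = 0.5
screen[10:18, 25:45] = button

pos, score = match_template(screen, button)
print(pos, round(score, 3))  # best match at (10, 25) with score 1.0
```

A production approach would replace the hand-rolled loop with an object-detection model trained on screenshots of many toolkits, so that the same "button" class is found regardless of theme; but even this toy version shows that location and identity of a widget can be recovered from the visuals alone.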
Of course this doesn't solve every possible problem: system focus, context information, OS events and the like wouldn't be covered. But at least one could focus more on scripts that correlate information than on querying apps to extract descriptions of visual elements, which ultimately depends on developers who, history shows, are usually unable to keep up in time, whether because they lack knowledge, resources, or will.
On 05/05/2020 15:49, shubhamdjain7@... wrote:
Thank you for the introduction Reef!