Hi,
Now that you have a bit of an overview of the add-on, let's dive into the core components: the SPL app module and support modules. In this article, we'll go through the life of the SPL Studio app module: from start to finish and everything in between. We'll cover more specific features of the app module in the next few Internals articles.
Note: In response to requests, I'll include some portions of the source code (mostly pseudocode) to help you better understand how things work. Certain things will also require explaining how NVDA Core (the screen reader itself) works, so you'll learn several things at once.
SPL Studio app module: design and code overview
As noted previously, the SPL Studio app module (splstudio/__init__.py) consists of several sections. These include (from top to bottom):
* Imports: Many modules from Python packages and from the NVDA screen reader are imported here, including IAccessible controls support, the configuration manager and so on.
* Layer command wrapper: I talked about how layer commands work in a previous article, and the "finally" function at the top is the one that makes this possible.
* A few helper functions and checks: This includes a flag specifying the minimum version of Studio needed, the cached value of the Studio window handle (SPLWin) and placeholders for threads such as the microphone alarm timer (more on this in the threads article). This section also includes helper functions such as "messageSound" (displays a message on a braille display and plays a wave file).
* Track item overlay classes: Two classes are provided to support Playlist Viewer items in Studio 5.0x and 5.10, respectively. We'll come back to these objects later.
* App module class: This is the core of not only the app module, but the entire add-on package. The app module class (appModules.splstudio.AppModule) is further divided into sections as described in the add-on design article.
Let's now tour the lifecycle of the app module object in question.
Before birth: NVDA's app module import routines
Before we go any further, it is important for you to understand how NVDA loads various app modules. This routine, available from source/appModuleHandler.py (NVDA Core), can be summarized as follows:
1. When a new process (program) runs, NVDA will try to obtain the process ID (PID) of the newly loaded process.
2. Next, NVDA will look for an app module matching the name of the executable for the newly created process. It looks in various places, including source/appModules, userConfigDirectory/appModules and addonname/appModules, resorting to the default app module if no app module with the given name is found.
3. Next, NVDA will attempt to use Python's built-in __import__ function to load the app module, raising errors if necessary. No errors means the app module is ready for use.
4. Once the newly loaded module is ready, NVDA will instantiate the appModule.AppModule class. If a constructor (__init__ method) is defined, Python (not NVDA) will call the app module constructor (more on this below).
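To make this concrete, here is a heavily simplified sketch of the lookup, assuming a bare-bones fetch function; the real logic in source/appModuleHandler.py also handles caching, add-on paths and error reporting:

def fetchAppModule(processID, appName):
    # Step 3: try to import an app module named after the executable.
    try:
        mod = __import__("appModules.%s" % appName, globals(), locals(), ("appModules",)).AppModule
    except ImportError:
        # Fallback from step 2: no matching module was found, so use the default app module.
        import appModuleHandler
        mod = appModuleHandler.AppModule
    # Step 4: instantiate the class; Python calls __init__ here if one is defined.
    return mod(processID, appName)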
If the app module's AppModule class has a constructor defined, Python will follow the directions specified in the constructor. Just prior to performing app module specific constructor routines, it is important to call the constructor of the default app module first, as in the following code:
def __init__(self, *args, **kwargs):
    super(AppModule, self).__init__(*args, **kwargs)
This is a must because the default app module constructor performs important activities, including:
1. The default app module constructor calls another base constructor (this time, baseObject.ScriptableObject, which contains gesture support among other important properties).
2. It initializes various properties, such as the PID (process ID), the app module name (if defined), the application name and the handle to the app in question via kernel32.dll's OpenProcess function (XP/Server 2003 and Vista/Server 2008 and later require different arguments).
3. Lastly, the constructor initializes the process injection handle and the helper binding handle in case such routines are required.
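To illustrate step 2, here is roughly how a process handle can be obtained through ctypes. The access-right constants are standard Windows API values, but the surrounding logic is a simplification of what NVDA actually does:

import ctypes

# Desired access rights differ by Windows version, as noted above.
PROCESS_QUERY_INFORMATION = 0x0400  # XP/Server 2003 and later
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000  # Vista/Server 2008 and later
PROCESS_VM_READ = 0x0010

pid = 1234  # hypothetical process ID obtained when the program started
handle = ctypes.windll.kernel32.OpenProcess(
    PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)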
Birth: SPL Studio app module construction
Certain add-ons ship with an app module that has a constructor defined, and SPL Studio is one of them. After calling the base constructor as described above, the SPL app module's constructor (the __init__ method that runs when the app module starts) does the following:
1. Checks whether a supported version of Studio is running, and if not, raises a RuntimeError exception, preventing you from using the app module while an unsupported version of Studio is in use (as of add-on 5.3/6.0, you need to use Studio 5.00 or later).
2. NVDA announces, "Using SPL Studio version 5.01" if Studio 5.01 is in use (of course, NVDA will say 5.10 when Studio 5.10 is in use). This is done via the ui.message function (part of NVDA Core), which lets you hear spoken messages or read the message on a braille display. In reality, ui.message calls two functions serially (one after the other): speech.speakMessage (speaking something via a synthesizer) and braille.handler.message (brailling messages on a braille display if connected).
3. Next, add-on settings are initialized by calling splconfig.initConfig(). This is done as follows:
A. Loads a predefined configuration file named userConfigPath/splstudio.ini. In add-on 6.0 and later, this is known as the "normal profile". This is done by calling the splconfig.unlockConfig() function, which handles configuration validation via ConfigObj and Validator.
B. For add-on 6.0 and later, loads broadcast profiles from addonDir/profiles folder. These are .ini files and are processed just like the normal profile.
C. Each profile is then appended to splconfig.SPLConfigPool, a list of profiles in use. The active profile is then set, and splconfig.SPLConfig (the user configuration map) is set to the first profile in the configuration pool (the normal profile; for add-on 5.x and earlier, there is just one profile, so the append step is skipped).
D. If errors were found, NVDA either displays an error dialog (5.x and earlier) or a status dialog (6.0 and later) detailing the error in question and what NVDA has done to faulty profiles. This can range from applying default values to some settings to resetting everything to defaults.
4. Starting with NVDA 2015.3, it became possible for an app module to request that NVDA monitor certain events for certain controls even if the app is in the background. This is done by calling the eventHandler.requestEvents function with three arguments: the process ID, the window class of the control in question and the event to be monitored. For earlier versions of NVDA (checked via the built-in hasattr function), this step is skipped, and the background status monitor flag is set accordingly. We'll come back to event handling in a future installment.
5. Next, the GUI subsystem is initialized (NVDA uses wxPython). This routine adds an entry to NVDA's preferences menu entitled "SPL Studio Settings", which opens the add-on configuration dialog.
6. Lastly, as described in the previous article on the SPL Studio handle, the app module will look for the window handle of the Studio app (a sketch of the whole constructor follows this list).
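Putting these steps together, here is a hedged sketch of how such a constructor might look. This is not the actual add-on source: SPLMinVersion is an assumed name for the minimum version flag mentioned earlier, the window class shown (TStatusBar) is illustrative, and the module-level imports (ui, eventHandler, splconfig) are those described in the imports section:

def __init__(self, *args, **kwargs):
    super(AppModule, self).__init__(*args, **kwargs)
    # Step 1: refuse to run on unsupported Studio releases.
    if self.productVersion < SPLMinVersion:  # SPLMinVersion: assumed name
        raise RuntimeError("Unsupported version of Studio is running")
    # Step 2: announce the Studio version via speech and braille.
    ui.message("Using SPL Studio version %s" % self.productVersion)
    # Step 3: load the normal profile and any broadcast profiles.
    splconfig.initConfig()
    # Step 4: on NVDA 2015.3 and later, ask NVDA to monitor status changes in the background.
    if hasattr(eventHandler, "requestEvents"):
        eventHandler.requestEvents(eventName="nameChange", processId=self.processID, windowClassName="TStatusBar")
    # Steps 5 and 6 (GUI initialization and the window handle search) follow.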
Life of the app module: events, commands and output
Once the Studio app module is ready, you can then move to Studio window and perform activities such as:
* Press commands, and NVDA will respond by either opening a dialog or speaking what it did.
* Announce status changes such as microphone status.
* Find tracks.
* Examine information in columns via Track Dial.
* Listen to progress of a library scan in the background.
* Perform SPL Assistant gestures.
* For 6.0 and later, manage broadcast profiles (we'll talk about broadcast profiles in the configuration management article).
Death: termination routines
While using the Studio add-on, you can stop using it in various ways, including exiting or restarting NVDA, turning off your computer, logging off or closing Studio. Just like the initialization routines, the Studio app module has specific directions to follow when the add-on is closed.
Here is a list of steps Studio app module performs when it is about to leave this world:
1. The "terminate" method is called. Just like the startup (constructor) routine, this method first calls the terminate method defined in the default app module, which closes handles and performs other closing routines.
2. Calls the splconfig.saveConfig() function to save add-on settings. This function goes through the following steps in add-on 6.0:
A. Any global settings used by the active profile are first copied to the normal profile.
B. Profile-specific settings are then saved to disk.
C. Finally, the normal profile is saved, and various flags, the active profile and the config pool are cleared.
D. For add-on 5.x and earlier, there is only one broadcast profile to worry about, and this profile is saved at this point.
3. NVDA then attempts to remove the SPL Studio Settings entry from NVDA's preferences menu, and various maps used by the Studio add-on (such as the Cart Explorer map) are cleared.
4. As the app module is laid to rest, the window handle value for the Studio window is cleared. This is a must, as the handle will be different the next time Studio runs. At this point, NVDA removes splstudio (the Studio app module) from the list of app modules in use.
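As with the constructor, these steps can be summarized in a simplified sketch of the terminate method; names follow the conventions used earlier, and details are omitted:

def terminate(self):
    # Step 1: let the default app module close handles and perform other cleanup first.
    super(AppModule, self).terminate()
    # Step 2: save the normal profile, profile-specific settings and broadcast profiles.
    splconfig.saveConfig()
    # Step 3: the SPL Studio Settings menu entry and cached maps are removed here.
    # Step 4: clear the cached window handle; it will be different next time Studio runs.
    global SPLWin
    SPLWin = None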
Conclusion
By this point, you should have a better understanding of the life of an app module such as SPL Studio. Many add-ons, especially those that ship with app modules, go through a lifecycle similar to the one described above.
Now that we know how the Studio app module is born and dies, it is time for us to look at what happens while the Studio add-on is alive, and we'll start with how the Studio add-on announces time, works with alarms and uses basic settings.
References:
1. OpenProcess (kernel32.dll) reference (Windows API): https://msdn.microsoft.com/en-us/library/windows/desktop/ms684320(v=vs.85).aspx
2. wxPython online docs: http://www.wxpython.org/onlinedocs.php
Saturday, August 15, 2015
Friday, August 14, 2015
StationPlaylist Add-on Internals: Source code layout, overall design and development philosophy, layer commands, Studio API and Studio window handle
Hi,
In the previous installment, you learned what StationPlaylist Studio is and a brief history behind this add-on. Starting with this installment, we'll tour the real internals of this add-on, starting with overall design and a few important notes. But before we get into that, there are some things we need to go over such as programming background, user experience and a few definitions.
A place to start: reader questions and definitions
I'm sure some readers might ask, "doesn't writing a series on internals require programming knowledge?" Yes and no. Yes, as you may need some basic exposure to programming such as what a variable is, conditional execution and so forth. On the flip side, you don't have to be a programmer to write about internal workings of an add-on (the basic requirement is passion for teaching and a hope for users to learn something new). Same could be said about reading this series: you may need some exposure to programming, but you don't have to be a programmer to follow along.
Another question might be, "will this series teach me all there is to know about writing an add-on of my own?" Yes and no. Yes, as you'll learn how add-on writers think when it comes to taking care of their add-ons and get a glimpse into add-on development processes. On the other side of the coin is the scope of this series - it does not serve as a definitive guide on add-on writing (there is documentation, linked at the end of this article, that'll give you a basic overview). If you are familiar with add-on development and/or NVDA screen reader development and source code, you'll have a slightly easier time understanding this series of articles. I tried my best to make it easy for users to understand (although I do have to include some technical details).
Some definitions:
* Add-on: An add-on is a module for a program that adds additional features or changes the behavior of a program (1).
* API: Application Programming Interface, a set of specifications for programmers for using services offered by a program, such as modules, functions and documentation (2). One of the most well-known examples is Python and its API documentation (3).
With some basics out of the way, let's dive into SPL add-on internals (you should download the add-on source code, which can be found at http://bitbucket.org/nvdaaddonteam/stationplaylist).
Overall design and source code layout
StationPlaylist Studio add-on for NVDA consists of two app modules and a global plugin. Because Studio comes with Track Tool for managing tracks, the add-on includes an app module for Track Tool in addition to the main app module for Studio.
The overall design is that of a partnership between the main Studio app module and the Studio Utilities (SPLStudioUtils) global plugin. Studio app module performs things expected from scripts such as responding to key presses, announcing status information, configuration management and so forth, while the global plugin is responsible for running Studio commands from anywhere and for encoder support (the add-on supports SAM and SPL encoders). In reality, the global plugin is subordinate to the app module, as the app module controls overall functionality of the add-on and because the global plugin requires Studio to be running to unlock some features (here, unlock means using layer commands and encoder support).
The source code consists of:
* appModules: This folder contains the main splstudio (app module) package and the app module for Track Tool.
* The SPL Studio package consists of various modules, which include __init__ (main app module and track item classes), configuration manager (splconfig) and miscellaneous services (splmisc) as well as support modules and various wave files used by the add-on.
* The main app module file is divided into sections. First, the overlay classes for track items are defined, then comes the app module, further divided into four sections: fundamental methods (constructor, events and others), time commands (end of track, broadcaster time, etc.), other commands (Track Finder and others) and the SPL Assistant layer. This allows me to identify where a bug is coming from and to add features in appropriate sections.
* globalPlugins: This folder contains SPLStudioUtils package, which consists of __init__ (main plugin and SPL Controller layer) and encoder support module.
Design philosophy
When I set out to write the add-on in 2013, I put forth certain things the add-on should adhere to, including:
* Consistency: The add-on should have a consistent interface and command structure. Interface includes various GUI's such as add-on configuration dialog. For layer commands, I tried using native Studio command assignments.
* Extensibility: The add-on should be organized and written in such a way that permits easy extensibility, hence the app module and the global plugin were divided into submodules, with each of them being a specialist of some kind (such as configuration management).
* Separation of concerns: Coupled with extensibility, this allowed me to provide just needed commands at the right time, which resulted in two layer command sets (explained below).
* Easy to follow source code: Although some may say excessive documentation is noise, I believe it is important for a developer to understand how a function or a module came about. Also, I have used and read user guides for other screen reader scripts to better understand how a feature worked and come up with some enhancements, to the point where I found some major bugs in JAWS scripts (one of them, which I hope Brian has patched by now, is the microphone alarm going off even though the microphone was turned off before the alarm timeout expired).
* Unique feature labels: One way to stand out was to give features interesting names. For instance, during add-on 3.0 development, I decided to give cart learn mode a name that better reflects what the feature does: Cart Explorer, to explore cart assignments. The same could be said about NVDA's implementation of enhanced arrow keys (called Track Dial, as the feature is similar to flipping a dial on a remote control).
* Extensive collaboration and feedback cycle between users and developers: I believed that the real stars of the show were not the add-on code files, but broadcasters who'll use various add-on features. Because of this, I worked with users early on, and their continued feedback shapes future add-on releases. This collaboration and feedback cycle also helped me (the add-on author) understand how the add-on was used and to plan future features to meet the needs of broadcasters who may use this add-on in various scenarios (a good example is broadcast profiles, as you'll see in add-on configuration article).
Why two layer sets?
When I first sat down to design the add-on, I knew I had to write both an app module and a global plugin (to perform Studio commands from anywhere), which led to defining two layer command sets for specific purposes:
* SPL Assistant: This layer command set is available in the app module and is intended to obtain status information and to manage app module features. I called this Assistant because this layer serves as an assistant to a broadcaster in reading various status information. More details can be found in a future installment on SPL Assistant layer commands.
* SPL Controller: This layer is for the global plugin and performs Studio commands from anywhere. I called this "controller" because it controls various functions of Studio from other programs. More details will be provided in a future installment.
In the early days, I enforced this separation, but in add-on 6.0, it will be possible to invoke SPL Assistant layer by pressing the command used to invoke SPL Controller.
The "magic" behind layer commands
In order for layer commands to work, I borrowed code from another add-on: Toggle and ToggleX by Tyler Spivey. Toggle/ToggleX allows one to toggle various formatting announcement settings via a layer command set. It works like this:
* Dynamic command:script binding and removal: It is possible to bind gestures dynamically via the bindGesture/bindGestures methods of an app module or a global plugin (bindGesture binds a single command to a script, whereas bindGestures binds commands to scripts from a gestures map or another container). To remove bindings dynamically, the combined main/layer gesture map is cleared, then the main gestures are rebound.
* Use of two gesture maps in the app module/global plugin: Normally, an app module or a global plugin that accepts keyboard input uses a single gestures map (called __gestures; a map is another term for a dictionary or associative array, where a value is tied to a key). But in order for layers to work, a second gestures map was provided to store layer commands (a command and its bound script, of the form "command":"script").
* Wrapped functions: Tyler used the "wraps" decorator from functools to wrap how the "finally" function is called from within the layer set (this was needed to remove bindings for layer commands after they are done). Also, a custom implementation of the getScript function (app module/global plugin) was used to return either the main script or the layer version depending on context.
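To give you an idea of how this wrapper works, here is a sketch of the pattern (modeled on the Toggle/ToggleX approach described above; the actual add-on code may differ in details):

from functools import wraps

def finally_(func, final):
    # Run the layer script, then always call the cleanup routine, even if the script raises an error.
    @wraps(func)
    def new(*args, **kwargs):
        try:
            func(*args, **kwargs)
        finally:
            final()
    return new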
A typical layer command execution is as follows:
1. First, assign a command to a layer (entry) command (add-on 2.0 and later; add-on 1.x used NVDA+Grave for SPL Controller and Control+NVDA+Grave for the Assistant layer, removed in 2.0 to prevent conflicts with language-specific gestures).
2. You press the layer entry command. This causes the app module/global plugin to perform the following:
A. Layer conditions are checked: for the app module, making sure that you are in the Playlist Viewer, and for the global plugin, checking whether Studio is running.
B. A flag is set telling NVDA that the Assistant/Controller layer is active.
C. Gestures for the layer set are added to the main gestures map via the bindGestures method.
3. You press a command in the layer set (such as A in the Assistant layer to hear automation status, or A to turn automation on if using the SPL Controller layer). Depending on how the layer script is implemented, it either calls the Studio API (for the SPL Controller layer and for some Assistant commands) or simulates object navigation to fetch the needed information (Assistant layer). In the app module, for performance reasons, the object is cached. More details on the mechanics of this procedure in subsequent articles.
4. After the layer command is done, it calls the "finish" function (app module/global plugin) to perform cleanup actions such as:
A. Clears layer flags.
B. Removes the "current" gestures (main gestures plus layer commands) and rebinds the main gestures map (this is dynamic binding removal).
C. Performs additional actions depending on context (for example, if Cart Explorer was in use).
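A simplified sketch of the entry script and the "finish" routine described above might look as follows; bindGestures and clearGestureBindings are real NVDA methods, while the flag and gesture map names are assumptions for illustration:

def script_SPLAssistantToggle(self, gesture):
    # Step 2: check the layer flag, set it and bind the layer gesture set.
    if self.SPLAssistant:
        return  # already in the layer; the real add-on signals an error here
    self.SPLAssistant = True
    self.bindGestures(self.__SPLAssistantGestures)  # assumed name for the layer gestures map

def finish(self):
    # Step 4: clear the layer flag, drop all bindings and rebind the main gestures only.
    self.SPLAssistant = False
    self.clearGestureBindings()
    self.bindGestures(self.__gestures)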
The importance of Studio window handle and Studio API
In order to use services offered by Studio, one has to use the Studio API, which in turn requires one to keep an eye on the window handle to Studio (in the Windows API, a window handle (just called a handle) is a reference to something, such as a window, a file, connection routines and so on). This is important if one wishes to perform Studio commands from other programs (Studio communicates with the outside program in question via messages sent through user32.dll's SendMessage function).
In add-on 6.0 (the Git master branch in the add-on source code), one of the activities the app module performs when started (besides announcing the version of Studio you are using) is to look for the handle to Studio's main window until it is found (this is done via a thread that calls user32.dll's FindWindowA (not FindWindowW) function every second); once found, the app module caches this information for later use. A similar check is performed by the SPL Controller command, as without this handle, SPL Controller is useless (as noted earlier). Because of the prominence of the Studio API and the window handle, one of the first things I do when a new version of Studio is released is to ask for the latest Studio API and modify the app module and/or global plugin accordingly.
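For illustration, here is a minimal sketch of this background search using ctypes and a thread; the window class name "SPLStudio" is an assumption for this sketch, and the actual add-on wraps this logic differently:

import ctypes
import threading
import time

user32 = ctypes.windll.user32
SPLWin = 0  # cached handle to Studio's main window

def findSPLWindow():
    # Poll once a second until Studio's main window appears, then cache the handle.
    global SPLWin
    while not SPLWin:
        SPLWin = user32.FindWindowA("SPLStudio", None)
        time.sleep(1)

threading.Thread(target=findSPLWindow).start()
# Once cached, Studio API calls go through SendMessage with codes from the Studio API documentation:
# result = user32.SendMessageW(SPLWin, messageCode, wParam, lParam)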
Conclusion
I hope this provided a good overview of how the add-on is organized and let you learn at least some basics of how the add-on operates. Starting with the next installment, we'll dive deeper into the add-on internals, starting with the app module's lifecycle (from birth to death and all activities in between). Along the way we'll learn more about how various commands, alarms and dialogs are implemented, culminating in a tour of the global plugin and encoder support.
//JL
References:
1. Plug-in (Wikipedia): https://en.wikipedia.org/wiki/Plug-in_(computing)
2. Application Programming Interface (Wikipedia): https://en.wikipedia.org/wiki/Application_programming_interface
3. Python 2.7.10 documentation overview (Python Software Foundation): https://docs.python.org/2/
4. Handle (Wikipedia): https://en.wikipedia.org/wiki/Handle_(computing)
5. What is a Windows handle (Stack Overflow): http://stackoverflow.com/questions/902967/what-is-a-windows-handle
6. FindWindow (user32.dll) reference (Windows API): https://msdn.microsoft.com/en-us/library/windows/desktop/ms633499(v=vs.85).aspx
7. SendMessage (user32.dll) reference (Windows API): https://msdn.microsoft.com/en-us/library/windows/desktop/ms644950(v=vs.85).aspx
8. NVDA Developer Guide (NV Access): http://www.nvaccess.org/files/nvda/documentation/developerGuide.html
StationPlaylist Add-on Internals: Introduction and history
Hi,
If you are a radio broadcaster, you might be accustomed to activities involved when producing a show. This may include playlist selection, scheduling break notes, responding to requests, monitoring listener count and encoding status and so on. To assist a broadcaster, a broadcast automation program is used, and one of the popular apps is called StationPlaylist Studio.
In this first installment of NVDA Add-on Internals: StationPlaylist Studio, we'll learn about what Studio is and how the NVDA add-on was born. You don't have to install or use the NVDA add-on to understand the ins and outs of this powerful add-on (using the add-on might help you better appreciate the depth of this material; for a fuller experience, it is handy to have the add-on source code in front of you as you navigate this series). So let's get started by learning more about SPL Studio.
Introducing StationPlaylist Studio and the NVDA add-on
StationPlaylist Studio (www.stationplaylist.com) is broadcast automation software that helps broadcasters schedule tracks, play jingles and more. It includes support for break notes, hourly playlists and track tagging, and comes with tools to manage track playback, such as setting track intros. In Studio 5.00 and later, it includes its own stream encoder.
Is Studio accessible? Surprisingly, yes. It is possible to use Studio features without using screen reader scripts and add-ons. However, there are times when a broadcaster would use scripts, such as announcing status changes, monitoring track intros and endings, enhanced support for encoders and so on, and NVDA add-on for StationPlaylist Studio (usually referred to as SPL or just Studio) accomplishes this well.
Studio add-on: a history
In 2011, Geoff Shang, a seasoned blind broadcaster, started working on the SPL Studio add-on. This early version (numbered 0.01) was developed to let NVDA announce various status changes, such as automation toggles. This initial version, co-developed by James Teh (one of the lead developers of the NVDA screen reader), was considered a quick project, and further development ceased until 2013.
In 2013, I received several emails regarding NVDA's support for SPL Studio, with a request for someone to write an add-on for it. As I was still new to add-on development then (this was after I developed Control Usage Assistant and GoldWave), I decided to take on this challenge in order to learn more Python and to practice what I learned in computer science labs at UC Riverside. I first downloaded the existing add-on (0.01) and installed Studio 5.01 on my computer to learn more about this program and to gather suggestions from SPL users. After a little over a month of development and preview releases, I released Studio add-on 1.0 in January 2014.
Most of the early versions (1.x, 2.x, 3.x, released throughout 2014) were quick projects that bridged the gap between NVDA and other screen readers (Brian Hartgen's JAWS scripts were my inspiration, and I studied the documentation for Jeff Bishop's Window-Eyes scripts). These early versions, supporting Studio 4.33 and later, were also used to fix bugs encountered by Studio users - for instance, a broadcaster posted a YouTube video explaining how NVDA was not reading edit fields, which was fixed early on. Later releases (4.x, 5.x, released throughout 2015) further bridged the gap with other screen readers and introduced unique features (for instance, add-on 5.0 introduced a configuration dialog). As of this writing (August 2015), add-on 6.0 is in development, which adds new features (some of which will be discussed in this series).
Highlights of past major releases and subsequent maintenance releases include:
* 1.x: Initial release, added end of track alarm and other features.
* 2.x: Track Finder and better routines to recognize Studio versions.
* 3.x: First long term support (LTS) release, Cart Explorer, support for SAM Encoder and no need to stay on the encoder window during connection attempts. This was the last version to support Studio 4.33.
* 4.x: Library scan, support for SPL Encoder and Studio 5.10.
* 5.x: Track Dial, dedicated configuration dialog.
In the next few installments, you'll get a chance to see how the add-on works, its design philosophy and how the add-on is being developed, with glimpses into the past and future. My hope is that this add-on internals series will be a valuable reference for users and developers - for users to see the inner workings of this add-on, and for developers to use this add-on as an example of how an add-on is planned, implemented, tested, released and maintained.
To download the add-on, go to http://addons.nvda-project.org/addons/StationPlaylist.en.html.
//JL
References:
1. JAWS scripts for StationPlaylist Studio (Hartgen Consultancy): http://www.hartgen.org/studio.html
2. Window-Eyes app for StationPlaylist (Jeff Bishop/AI Squared): https://www.gwmicro.com/App_Central/Apps/App_Details/index.php?scriptid=1268&readMore&media=print
A warm welcome to NVDA Add-on Internals
Hi,
One of the most influential books I read was Windows Internals (sixth edition) by Alex Ionescu, David Solomon and Mark Russinovich. This book covers everything a programmer, IT professional and power users need to know about Windows 7 and Server 2008 R2, including startup and shutdown sequence, running programs, user management and so on.
After reading that book, I thought, "why not write a series of articles describing internals of NVDA screen reader and some NVDA add-ons?" As a developer and a student interested in good quality documentation, I found that some of the NVDA add-ons were poorly documented, and some had source code layout that puzzled me at first (but I figured out how the add-on worked in the end). Thus this series was born: NVDA Add-on Internals (perhaps we should talk about NVDA Core internals at a later time, which would be a volume if written (I imagine)).
There were several reasons (besides the above ones) that prompted me to start writing this series. For add-on users (especially power users), I think letting you see the heart of the add-on(s) you are using will help you appreciate the work required of add-on writers and better understand what the add-on does. For add-on developers (mostly blind NVDA users and developers), I think the Internals series will become a handy reference for future add-on development, and eventually help you assist NVDA screen reader development. For sighted computer users, programmers, IT professionals, students and teachers (mostly those studying computer science, communication studies and related fields), I hope this series will help you gain insight into how blind people write amazing code and let you glimpse what good documentation looks and feels like.
Our first add-on is one that is being used by many blind broadcasters: StationPlaylist Studio, developed by Geoff Shang, James Teh, me (Joseph Lee) and others (currently I maintain this add-on). The style I employed (mostly from the user's perspective with some technical details thrown into the mix) and the material I'll cover (literally everything about this add-on) will set the stage for other add-ons in the future. So sit back and enjoy a detailed tour of the internals of the StationPlaylist Studio add-on for NVDA.
P.S. I should also mention that one of my "documentation heroes" was David Pogue (New York Times), author of such books as iPhone: The Missing Manual and others (O'Reilly). Thank you David for teaching me how to write good documentation from users' perspectives through your books (I'm glad to see that you did mention VoiceOver in your iPhone books, and many blind people have benefited from your insight).
//JL
Wednesday, July 8, 2015
Windows 10: Advisories from screen reader developers and final thoughts
Update (July 29): The official build number is 10240 (1 kilobyte times 10). Also, NV Access and Freedom Scientific issued statements on Windows 10 - JAWS and MAGic are compatible as of the latest updates, while NVDA 2015.3 includes fixes for Windows 10.
Update (July 13): AI Squared has published more information on Windows 10 support and says Window-Eyes 9.2 or later will be compatible with Windows 10. NV Access has merged Microsoft Edge support into latest next (alpha) snapshots for testing purposes.
Hi,
In the last article, I talked about the history of recent Windows releases, updates on accessibility and upgrade paths. In this article, I'll take you on a behind-the-scenes tour of what screen reader developers are doing to support Windows 10, along with my thoughts on when to upgrade and some predictions.
Windows 10: Still not fully accessible
When we look at the progress of accessibility in Windows 10, we see some remarkable improvements. From being unable to navigate Start menu items to searching for anything via the search box, we've come a long way.
However, just like any man-made structure, Windows 10 has flaws and room for improvement. The biggest stumbling block is Microsoft Edge, the new web browser from Microsoft, which embraces modern standards. Because it is built on top of a newer rendering engine, coupled with extensive use of UIA, screen reader developers found themselves spending a major part of their development time on supporting Edge and controls based on the new engine.
Another issue is the accessibility of universal apps. While some apps, such as Calendar, are usable, others, such as certain parts of Insider Hub, aren't. Coupled with extensive use of UIA, 100 percent support for universal apps may not materialize for a while.
Windows 10 and screen readers: what's new, what's available and what needs to be done
For some screen reader developers, their greatest fear is the release of a new Windows version. Not only do they have to support older Windows versions, they also have to deal with newer technologies introduced in the just-released version. Blind computer users using screen readers tasted this when Windows 8 was released: with the removal of older display technologies, screen reader vendors found themselves coding alternate ways of accessing screen content, to mixed success. This will become more prominent as more universal apps are released once Windows 10 goes live on July 29th.
Because of the display driver fiasco brought on by Windows 8, potential accessibility issues with Microsoft Edge and other technologies, and the introduction of the Windows Insider program, screen reader users and vendors such as Freedom Scientific, AI Squared, NV Access and others showed keen interest in Windows 10 from early on. For example, users of JAWS 16 installed JAWS on computers or virtual machines running Windows 10 previews, while some NVDA contributors wrote code to support Windows 10 features such as announcing search suggestions from Cortana. But despite early adoption and mitigation, Windows 10 is far from fully accessible.
As of July 2015, various screen reader vendors have published advisories on Windows 10 for users, or set up sites to explain more about Windows 10 support in their screen readers. For instance, Freedom Scientific released a build of JAWS 16 that will at least run on Windows 10, AI Squared announced that Window-Eyes 9.0 and onwards will support Windows 10, and NVDA 2015.2 recognizes Windows 10, with NVDA 2015.3 adding support for additional Windows 10 features, according to NV Access. In regards to Microsoft Edge, current screen reader releases do not support it, but vendors have promised support in a future release.
When to upgrade
Some users may stay up all night on July 28th in order to be among the first to upgrade to Windows 10. Although this is fine for some, the majority of users don't have to upgrade this year. In fact, they have until July 2016 to upgrade to Windows 10 without paying a single penny.
For screen reader users, it is better to wait until screen reader vendors declare support for Windows 10 before upgrading. This can be as early as August, when screen readers would be updated to provide at least basic support for Windows 10. However, it might be best to upgrade in early to mid-2016 (before July), when screen reader developers announce advisories on support for Microsoft Edge. This means you might want to wait for JAWS 17 or later, Window-Eyes 9.2 or later, NVDA 2015.3 or later or whatever version of your screen reader supports Windows 10.
Note that the above advisories are for Windows 10 PC editions (Home, Pro, Enterprise, Education). Due to underlying philosophy and API differences, Windows 10 Mobile series will not run third-party screen readers unless this changes in the future.
Predictions on Windows 10 and final impressions and thoughts
One of Microsoft's goals is to have a base of one billion Windows 10 users. Given that Windows 7 will be around until 2020 and Microsoft's track record on accessibility, I expect this goal to not be met for a while.
In regards to overall accessibility, Microsoft is finally coming to terms with power of collaboration: listening to feedback from consumers, working with developers and giving its best at attempts to improve accessibility. Certainly there are rooms for improvement, but we cannot forget the effort that users and developers put in to shaping Windows 10.
If I'm to give a grade to Windows 10, it would be a B- (B minus). If not for continued collaboration with screen reader developers and users, Windows 10 would have been a modern day Windows Vista with a grade of C- (C minus) to C. Windows 10 could have earned at least a B+ (B plus) if Microsoft provided better accessibility implementations in Microsoft Edge, or even a solid A if Narrator was substantially improved or comes up during clean install from start to finish. Time will tell if Windows 10 will become a threshold of improved accessibility.
Thanks. For those upgrading to Windows 10, good luck.
//JL
Update (July 13): AI Squared has published more information on Windows 10 support and says Window-Eyes 9.2 or later will be compatible with Windows 10. NV Access has merged Microsoft Edge support into latest next (alpha) snapshots for testing purposes.
Hi,
In the last article, I talked about the history of recent Windows releases, accessibility updates, and upgrade paths. In this article, I'll take you behind the scenes to look at what screen reader developers are doing to support Windows 10, and share my thoughts on when to upgrade along with some predictions.
Windows 10: Still not fully accessible
When we look at the progress of Windows 10 accessibility, we see some remarkable improvements. From being unable to navigate Start menu items to searching for anything via the search box, we've come a long way.
However, just like any man-made structure, Windows 10 has flaws and room for improvement. The biggest stumbling block is Microsoft Edge, Microsoft's new web browser that embraces modern standards. Because it is built on top of a newer rendering engine, coupled with extensive use of UIA, screen reader developers found themselves spending a major part of their development time supporting Edge and controls based on the new engine.
Another issue is the accessibility of universal apps. While some apps, such as Calendar, are usable, others, such as certain parts of Insider Hub, aren't. Coupled with extensive use of UIA, 100 percent support for universal apps may not materialize for a while.
Windows 10 and screen readers: what's new, what's available and what needs to be done
For some screen reader developers, their greatest fear is the release of a new Windows version. Not only do they have to support older Windows versions, they also have to deal with the newer technologies introduced in the just-released version. Blind computer users got a taste of this when Windows 8 was released: with the removal of older display technologies, screen reader vendors found themselves coding alternate ways of accessing screen content, with mixed success. This will become more prominent as more universal apps are released once Windows 10 goes live on July 29th.
Because of the display driver fiasco in Windows 8, potential accessibility issues with Microsoft Edge and other technologies, and the introduction of the Windows Insider program, screen reader users and vendors such as Freedom Scientific, AI Squared, NV Access and others showed keen interest in Windows 10 from early on. For example, users of JAWS 16 installed JAWS on computers or virtual machines running Windows 10 previews, while some NVDA contributors wrote code to support Windows 10 features such as announcing search suggestions from Cortana. But despite early adoption and mitigation, Windows 10 is far from fully accessible.
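To give a flavor of what such support code can look like, here is a minimal, hypothetical sketch of an NVDA app module that speaks suggestions as they appear. appModuleHandler, ui and the event_nameChange pattern are real NVDA APIs, but the target executable and the simple filtering here are my assumptions for illustration, not the actual contributed code.

# Minimal, hypothetical sketch (not the actual contributed code): an app
# module that speaks suggestion controls as their names change.
import appModuleHandler
import ui

class AppModule(appModuleHandler.AppModule):

    def event_nameChange(self, obj, nextHandler):
        # Speak the control's new name (the suggestion text), if any.
        if obj.name:
            ui.message(obj.name)
        # Always let NVDA's remaining event handlers run.
        nextHandler()

In a real add-on, this file would live under appModules and be named after the process hosting the search UI; the hard part in practice is filtering events so only genuine suggestion items are spoken.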
As of July 2015, various screen reader vendors have published Windows 10 advisories for users or set up sites explaining their screen readers' Windows 10 support. For instance, Freedom Scientific released a build of JAWS 16 that will at least run on Windows 10, AI Squared announced that Window-Eyes 9.0 and onwards will support Windows 10, and according to NV Access, NVDA 2015.2 recognizes Windows 10 and NVDA 2015.3 will add support for additional Windows 10 features. As for Microsoft Edge, current screen reader releases do not support it, but vendors have promised support in a future release.
When to upgrade
Some users may stay up all night on July 28th in order to be among the first to upgrade to Windows 10. Although this is fine for some, the majority of users don't have to upgrade this year. In fact, they have until July 2016 to upgrade to Windows 10 without paying a single penny.
For screen reader users, it is better to wait until screen reader vendors declare support for Windows 10 before upgrading. This could be as early as August, when screen readers should be updated to provide at least basic Windows 10 support. However, it might be best to wait until early to mid-2016 (before July), when screen reader developers will have published advisories on Microsoft Edge support. This means you might want to wait for JAWS 17 or later, Window-Eyes 9.2 or later, NVDA 2015.3 or later, or whatever version of your screen reader supports Windows 10.
Note that the above advisories are for Windows 10 PC editions (Home, Pro, Enterprise, Education). Due to underlying philosophy and API differences, the Windows 10 Mobile series will not run third-party screen readers unless this changes in the future.
Predictions on Windows 10 and final impressions and thoughts
One of Microsoft's goals is to reach a base of one billion Windows 10 users. Given that Windows 7 will be around until 2020 and Microsoft's track record on accessibility, I don't expect this goal to be met for a while.
In regards to overall accessibility, Microsoft is finally coming to terms with the power of collaboration: listening to feedback from consumers, working with developers, and giving its best attempts at improving accessibility. Certainly there is room for improvement, but we cannot forget the effort that users and developers put into shaping Windows 10.
If I were to give Windows 10 a grade, it would be a B- (B minus). If not for continued collaboration with screen reader developers and users, Windows 10 would have been a modern-day Windows Vista with a grade of C- (C minus) to C. Windows 10 could have earned at least a B+ (B plus) if Microsoft had provided better accessibility in Microsoft Edge, or even a solid A if Narrator were substantially improved or available during a clean install from start to finish. Time will tell if Windows 10 becomes a threshold of improved accessibility.
Thanks. For those upgrading to Windows 10, good luck.
//JL

Update (July 13): AI Squared has published more information on Windows 10 support and says Window-Eyes 9.2 or later will be compatible with Windows 10. NV Access has merged Microsoft Edge support into the latest next (alpha) snapshots for testing purposes.
Windows 10: How did we get here, accessibility progress report and editions and upgrade paths
Hi,
As someone who is excited about trying new things and experienced with the life cycle of a software product (from conception to development to release), I sometimes get nervous when waiting for the release of a much-anticipated product that I rely on. This is more evident if it is something I consider essential, such as iOS, screen readers, Windows and so on. In particular, with the supposed "most comprehensive version of Windows" just three weeks away, I'm even more nervous about Windows 10, particularly where accessibility is concerned. At the same time, I'm happy to report that Microsoft did one thing right: it listened to feedback from over five million souls who joined this nine-month cruise, as reflected in the current level of accessibility and the cooperation between Microsoft and screen reader developers.
As I continue to research ways of supporting Windows 10 features in NVDA and prepare to upgrade to JAWS 17, I'd like to present the status of Windows 10 accessibility as of build 10162. Along the way, I'll compare earlier and current builds and talk about some new features you may expect from Windows 10, particularly in newer builds. Lastly, I'll conclude with Windows 10 editions and upgrade paths. My thoughts on when to upgrade, a status report on screen readers and some predictions will follow in the next article. So sit back and enjoy a brief tour of Windows 10's life cycle and accessibility so far.
Windows 10: Building on the successes and failures of the past eight years
Before we talk about Windows 10, it is important to talk about how we got here in the first place, and what better place to start than the day Microsoft released a version of Windows that brought widespread disappointment.
Let's turn our clocks back eight years to January 30, 2007. For the previous six years, Microsoft had been working on a revolutionary operating system that, it claimed, would win over the hearts of billions of people. This operating system, codenamed "Longhorn", opened up new possibilities, including the ability to search for anything from the Start menu, tabbed browsing in Internet Explorer, a reorganized Control Panel and more. For a select few, Microsoft said, "we'll provide exclusive features such as themes, games and more".
Two years later, Microsoft found itself asking, "what have we done?" The reason: Windows Vista was virtually a flop, and Microsoft was scrambling to come up with a new version of Windows in hopes of restoring its reputation and helping consumers adapt to newer technologies and paradigms. Learning from its mistakes, Microsoft did come up with a version of Windows that not only improved its reputation somewhat, but now powers six out of ten computers worldwide (according to recent sources). We call it Windows 7.
Fast forward three years. With the East Coast of the United States scrambling to recover from the effects of Hurricane Sandy, a company from the West Coast proclaimed that a new era of computing had begun. Combining the touchscreen interface with PC hardware, this company claimed that its new software would revolutionize how we think about computing for years to come. But the product it shipped that day proved too ambitious for its time, and it took a year to correct those mistakes and come up with a product that consumers accepted. And this is the product we see now on PCs, tablets, hybrids and so on: Windows 8.1.
Fast forward to September 2014. Many online articles claimed that Microsoft would talk about a so-called Windows 9 on September 30th. They were partially correct: Microsoft did announce a new version of Windows, but it was named Windows 10. At the same time, Microsoft invited early adopters to serve as beta testers, and recent statistics published by Microsoft indicate more than five million souls have responded to the call to "arm for Windows 10". With millions of Insiders on its side, Microsoft boldly claimed that Windows 10 will be the last major version of Windows (this doesn't mean there won't be new minor versions), the most comprehensive Windows (partially true) and will let apps run on any device (partially true). And now, with three weeks to go before Windows 10 makes its appearance on PCs, tablets, smartphones and other devices, Microsoft is stepping up to ask consumers to upgrade, free of charge for a limited time (until July 29, 2016).
Windows 10: A progress report
I'm sure some readers might ask, "what's the progress report on Windows 10, particularly with accessibility?" I'm happy to report that, compared to initial builds, the latest builds are more stable and accessible. Back in October 2014, build 9841 was essentially a reworked Windows 8.1, so users accustomed to Windows 8.1 had no difficulty using that build. Subsequent builds added new features and broke others, particularly navigating the Start menu/screen hybrid, moving apps from one desktop to another, and continued problems with Microsoft Edge (Internet Explorer will still be there). Because I have covered the major features new to Windows 10 in earlier blog posts, I'll talk about what's changed since those posts.
Moving apps to virtual desktops
One of the most complained-about aspects of Windows 10 was the inability of blind users to move apps to different desktops via Task View. Previously, the list of desktops shown from the context menu wasn't accessible at all. This has been corrected in build 10158 and later. To move an app between desktops, open Task View (Windows+Tab), select the app, open the context menu, select "Move to" and choose the desktop you want the app to move to.
Narrator updates
One new thing in Windows 10's Narrator is command reassignments. Apart from this, Narrator uses the Microsoft David voice.
Cortana command
In older Windows 10 builds, you had several options for launching Cortana: the Start menu, the universal search box, the Search key (Windows+Q) and, in later builds, speech recognition (Windows+C). In build 10158 and later, Windows+C officially became the keyboard command to launch Cortana in voice dictation mode.
Microsoft Edge
As I'll mention in part 2, this new browser isn't fully accessible. Currently screen reader vendors are working on Edge support.
Universal apps
Some of them, such as Calendar and Windows Feedback, are accessible, while others need improvement (universal apps use UIA, or User Interface Automation).
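As an aside for developer readers: NVDA represents universal app controls as UIA objects, and an app module can single them out for custom behavior. Below is a minimal, hypothetical sketch; appModuleHandler, NVDAObjects.UIA and chooseNVDAObjectOverlayClasses are real NVDA APIs, while the overlay class and its name fallback are illustrative assumptions, not shipping code.

# Hypothetical sketch: tweaking UIA controls in a universal app.
import appModuleHandler
from NVDAObjects.UIA import UIA

class UniversalAppControl(UIA):

    def _get_name(self):
        # Assumed tweak: report a placeholder for controls with no name.
        name = super(UniversalAppControl, self).name
        return name if name else "unlabeled control"

class AppModule(appModuleHandler.AppModule):

    def chooseNVDAObjectOverlayClasses(self, obj, clsList):
        # Only UIA objects (universal app controls) receive the overlay class.
        if isinstance(obj, UIA):
            clsList.insert(0, UniversalAppControl)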
Windows 10 editions and upgrade paths
Windows 10 targets three device categories: PCs, mobile devices, and appliances and gadgets. They share the same basic technologies, differentiated by user interface, runtime and audience.
The version that comes closest to the kernel, or heart, of Windows 10 isn't Windows 10 Home. It is actually Windows 10 IoT (Internet of Things) Core. This version consists of the Windows 10 kernel plus specialized technologies, meant for appliances such as home automation systems and Internet of Things devices such as chips in fridges, toasters and so on.
Next up are Windows 10 Home and Mobile. Although they are powered by two different architectures (though some Mobile devices will be powered by x86 chips), they have similar interfaces and features (Home comes with the desktop interface, whereas devices running Mobile must be connected to an external display for a desktop-like interface to appear). Despite the similar interfaces, the underlying philosophy and certain technologies are different.
Building on top of Windows 10 Home, Windows 10 Pro adds business-oriented features, including the ability to delay Windows updates for a while, join domains and so on. Windows 10 Pro is also required to run Hyper-V-based virtual machines and to secure one's PC with BitLocker.
At the top of the pyramid for PCs and mobile devices are Windows 10 Enterprise/Education and Mobile Enterprise, respectively. These versions are suited for large organizations, and Windows 10 Enterprise allows one to create a bootable USB drive version of Windows 10. Apart from targeting students and faculty and being unable to opt into long-term servicing (LTS) updates, Windows 10 Education is the same as Windows 10 Enterprise.
The upgrade paths are as follows (see the small lookup-table sketch after the list):
* For users on Windows XP and Vista: upgrade to Windows 7 or 8.1 first, or perform a clean install.
* Windows 7 Starter, Home Basic, Home Premium, Windows 8.1: Windows 10 Home.
* Windows 7 Professional, Ultimate, Windows 8.1 Pro: Windows 10 Pro.
* Windows 7 Enterprise, Windows 8.1 Enterprise: Windows 10 Enterprise.
* For some Windows Phone 8.1 devices: Windows 10 Mobile.
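For those who prefer to see the mapping at a glance, here is the same table restated as a small, illustrative Python dictionary (the helper function and its fallback message are mine, not part of any official tool):

# The upgrade paths above as a lookup table (illustrative only).
UPGRADE_PATHS = {
    "Windows 7 Starter": "Windows 10 Home",
    "Windows 7 Home Basic": "Windows 10 Home",
    "Windows 7 Home Premium": "Windows 10 Home",
    "Windows 8.1": "Windows 10 Home",
    "Windows 7 Professional": "Windows 10 Pro",
    "Windows 7 Ultimate": "Windows 10 Pro",
    "Windows 8.1 Pro": "Windows 10 Pro",
    "Windows 7 Enterprise": "Windows 10 Enterprise",
    "Windows 8.1 Enterprise": "Windows 10 Enterprise",
    "Windows Phone 8.1": "Windows 10 Mobile",  # select devices only
}

def windows10EditionFor(currentEdition):
    # XP and Vista have no direct path: move to 7/8.1 first, or clean-install.
    return UPGRADE_PATHS.get(currentEdition, "upgrade to Windows 7/8.1 first, or clean install")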
In the next article, I'll talk about the status of screen reader support and when would be a good time to upgrade, along with screen reader requirements.
//JL
Tuesday, May 5, 2015
NVDA: In preparation for a milestone in April 2016: what is happening, what needs to be done, now and for years to come
Hi,
When it comes to being an apprentice and observing changes in an organization or new trends in the technology landscape, I'm often reminded of a quote I read in a book on apprenticeship: be sure to learn a lot in your early days. Another quote, one I heard recently, seems to summarize my experiences with NonVisual Desktop Access and other projects: you cannot call yourself an expert unless you have experience. But experience alone isn't enough: I believe we need visionaries to steer the discussion forward - people like Steve Jobs, who had a powerful vision for the computer industry, and now we're reaping what he sowed so many years ago.
Almost a decade ago, two young men came onto the screen reader stage, announcing an idea that was considered outrageous at the time and wasn't accepted by mainstream screen reader commentators. Nine years later, these two young men, whom I have come to appreciate as friends, have done something that commentators in 2006 didn't think was possible: widespread support for an open-source, community-driven project that has made an impact not only in blindness organizations, but also in mainstream society. These men, along with countless blind and sighted men and women around the globe, have set the stage for what may turn out to be the largest shift in the screen reader industry for years to come: community participation, learning from source code and appreciating collaboration between users, developers and supporters.
Despite these advances, there are still parts of this project that may haunt its leaders and contributors later. Increased deployment of NVDA will mean increased outcry over lack of support for some professional applications used in enterprise environments. We have some misconceptions about various parts of NVDA development and translations that need to be rectified now. Worse, we need a way for knowledge to be passed down to the next generation of potential developers so NVDA can truly become a testament to the power of community-driven projects for decades to come. And we can start working on these now, as this moment (a few months before we say "happy tenth birthday" to NVDA) is an excellent opportunity for developers, users and supporters of this influential project to get together and plan for the next ten years and beyond.
According to recent statistics published by SourceForge on NVDA downloads, about 70,000 or more blind users have downloaded recent versions of NVDA. When we examine raw download numbers (including multiple downloads from the same IP address), we can clearly see increases in the number of downloads, more so for recent versions such as 2015.1. This shows that, although NVDA might be developed by a handful of developers, the project has made an impact throughout the world, especially in developing countries.
One side effect of this trend has been a general consensus that NVDA lacks support for the more advanced features of some software or, in some cases, does not work well with professional applications used in enterprise environments. This is a growing concern, as adoption of NVDA means more enterprises will embrace this free screen reader, a viable alternative to commercial screen readers. While screen readers such as JAWS will continue to champion support for a vast number of professional applications, NVDA has something unmatched in the history of screen readers: open source code, use of a mainstream programming language for scripting, and the freedom to be tailored for specific needs.
But tailoring NVDA to work with enterprise applications doesn't happen magically. You cannot expect NVDA to read the contents of a professional app the day after you suggest support for it. Even if someone promises to write an add-on for the application in question, it will take at least several hours (up to months, and in some cases, years) to prepare a polished add-on that will indeed give you access to the advanced parts of the program you use at work. A good example is Skype, where it took NV Access several weeks to fix broken typing indicator announcements in Skype 7.2 and later.
But it isn't a good idea to blame just NV Access for lack of support for professional apps. In my previous article on Windows 10 and screen readers, I emphasized that third-party developers also hold the key to the accessibility of their applications, and professional and enterprise apps are no exception. At the end of the day, what will make a difference for the accessibility of professional applications is the willingness of app developers to learn best practices for supporting current and future screen readers, namely taking advantage of accessibility frameworks in their apps. And without that support, even the so-called "native screen reader in Windows 10" will not be able to claim that it supports professional apps, let alone third-party screen readers.
Some of us might wonder, "how can we make third-party app developers aware of screen readers and accessibility frameworks?" This question ties in with one of my biggest concerns in NVDA development: passing our knowledge to the next generation and rectifying misconceptions about the NVDA project, its development and its translations. As a contributor to the NVDA project, my main concern for a number of months has been, "what will happen when we depart, and who will be willing to step in and fill our shoes?" This naturally raises a follow-up question: how can we pass on our knowledge? A number of avenues have sprung up that may help, including a dedicated mailing list for blind people to learn Python, increased awareness of accessibility standards among sighted developers, and people's willingness to contribute source code patches for NVDA.
However, what concerns me the most is misconceptions about the NVDA project and its development, especially translations. In the past, the NVDA development community has received requests for translating NVDA into new languages, with mixed success. While we do have languages that are actively maintained, most are in a state where the only translation available is the version of NVDA that was first translated into that language (a good example is Amharic, for which NVDA 2012.3 is the first, only and latest full translation). Other languages are now considered "not up to date" and are in danger of not receiving the latest features in translated form.
Perhaps this came about due to a misconception about doing just enough work for the upcoming version. That is, a translator may think, "okay, because I've provided translations for the upcoming NVDA version, I've done my part." Translating a software package, especially one critical to a population of the world that has experienced an information blackout, means committing to that project, and it isn't a good idea to see the work come to a halt after a single version is released. In other words, a translation that doesn't show continued signs of perspiration will come back to haunt the translator, the language community, NVDA developers, users and third-party supporters in the future, which may tarnish the reputation of NVDA and hinder efforts to pass on our knowledge to the next generation of users and developers.
Another possible misconception is the notion that NVDA doesn't have flaws, or that NVDA might be portable across operating systems. Like any man-made structure (including software), NVDA does have flaws, some of them major, such as the lack of support for advanced features of some programs. Someone might argue, "if we have talented programmers, we can add support for such and such features easily". As someone who has watched assistive technologies such as screen readers and notetakers, and who has actual experience in software and the engineering field that powers it, I am confident that this isn't true in all cases, including screen reader development. This is even more true when you are tasked with writing support for advanced features of an app that is critical to the successful employment of someone halfway across the globe.
As for portability to different operating systems, this can be compared to demolishing a mansion and rebuilding it from the ground up. Anyone with experience of two or more operating systems, such as Windows and Linux, can confidently tell you that different operating systems work differently. Not only are the user interfaces different (although they have blurred somewhat), their internals, designs and goals are different. This is even more so when a program makes heavy use of one operating system's APIs (NVDA, like any screen reader, makes heavy use of the APIs provided by the host operating system). Because of this, do not expect NVDA to cooperate well with Wine, nor think NVDA will become a fully functional screen reader under ReactOS or replace Orca on Linux. Misconceptions like these, especially major ones like those described above, may become land mines in the end.
In conclusion, consider the words of Thomas Edison: genius is 1 percent inspiration and 99 percent perspiration. As we prepare to celebrate a major NVDA milestone next April, let us not forget how much sweat developers poured out to make NVDA what it is today. Let misconceptions, lack of support for professional apps and the absence of avenues to pass on our know-how to the next generation not become hindrances to the ultimate vision of NonVisual Desktop Access: a critical tool for information access for blind people around the world. Let us also start thinking about the long-term goals of this project so it can open new roads for people with disabilities for decades to come. Long live NVDA!