AT&T’s Approach to Accessibility

Their rules of engagement:

  • Document a11y requirements via an org benchmark
  • Every requirement must have a testable basis
  • The web accessibility “check” = program building block
  • Every “check” has an owner
  • Ownership is distributed

10-stage process for integrating accessibility into enterprise:

  1. Establish a benchmark, e.g. WCAG 2
  2. Break down the requirements of web accessibility. Establish accessibility checks
  3. Determine which checks are machine-detectable vs hand-review
  4. Identify the roles & responsibilities of each discipline within the site production chain
  5. Map ownership of checks to production disciplines: a matrix where each WCAG checkpoint is assigned to roles, e.g. 1.3.2 – dev, QA, IA
  6. Make room for accessibility information in project documentation, discipline deliverables and project artifacts.
  7. Create scripts for automated testing (a rough sketch of one such check follows this list)
  8. Hand review
  9. Train delivery teams (by discipline): AT&T created their own style guide where every design element was fleshed out and documented (a11y included)
  10. Train QA to test for all machine-detectable and hand-review errors.
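
Stage 7 is where this becomes concrete code. As a rough, hypothetical illustration of what a single machine-detectable check might look like (the check ID, WCAG mapping, and owner role below are placeholders, not AT&T's actual matrix or tooling), here is a minimal Python sketch that flags images without a text alternative in submitted markup:

```python
from html.parser import HTMLParser

# Hypothetical check definition: the ID, WCAG mapping and owning discipline
# are illustrative placeholders, not AT&T's actual matrix.
CHECK = {"id": "img-alt-present", "wcag": "1.1.1", "owner": "content"}

class ImgAltCheck(HTMLParser):
    """Flags <img> elements that have no alt attribute at all."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # approximate (line, column) in the source

def run_check(markup):
    parser = ImgAltCheck()
    parser.feed(markup)
    return [{"check": CHECK, "position": pos} for pos in parser.violations]

if __name__ == "__main__":
    sample = '<p><img src="logo.png"></p><img src="photo.jpg" alt="Team photo">'
    for v in run_check(sample):
        line, col = v["position"]
        print(f'{v["check"]["id"]} (owner: {v["check"]["owner"]}) at line {line}, col {col}')
```

A real program would run many such checks and route each violation to the discipline that owns it, which is exactly the ownership mapping from stage 5.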

Overall conclusion:

AT&T made sure to do role-based training & work. They also divided the accessibility work between teams so that there's no overlap; e.g. the QA team doesn't check the validity of alt text – that's not their job, the content writers do that.

One last important conclusion: 

This is a project-based process, so they run it with every new project; it is not an organization-wide process.

Helping Corporations Embed Accessibility in Their Culture

“Most corporations consider accessibility product by product. To be effective it needs to be embedded in their “business-as-usual” culture. Hear how to make that happen.”

  • Corporations have lots of pains when it comes to accessibility (a11y).
  • How most corporations do a11y: ignore it… then all hands on deck.
  • You need to fix the problem in the process, not the product, to prevent it from recurring.
  • When it comes to taking on the responsibility for accessibility, it seems like it's nobody's job (marketing: not my job; finance: would love to but I have no budget; developers: don't have the requirements; etc.).
  • We need to get everyone involved and share the responsibility between all stakeholders, not just the “accessibility superhero”.
  • In Norway speeding drivers pay the fine to the drivers going under the speed limit – the speed camera became your friend (and possibly your career). That's where accessibility needs to be.
  • WCAG 2 is good at getting you from point A to point B, turn left, turn right, etc. WCAG 2 is not good at: “Are we there yet?”
  • Make your product mould itself to one set of guidelines…
  • “Let’s build a better product… not just a compliant one” – good example from the OXO company designing inclusive products.

WCAG Best Practices: What About the Users?

This really hits home for me because we sometimes get bogged down in standards, guidelines & coding techniques and we tend to forget why we're doing all this accessibility work. So, without further ado… “What about the users?”

The speakers worked a lot of accessibility goodness into their web projects (ARIA, proper roles, landmarks, etc.) only to discover these features were not being used. They took steps towards creating vendor-independent user guides to both help & educate users.

Discrepancy between WebAIM's screen reader survey & their own survey (which they'll make public), especially on the topic of advanced users – very high in WebAIM's survey, approximately 65% (?).

Some of their findings:

  • most users have multiple screen readers
  • only 21% of users received formal training in screen readers
  • ONLY 1/3 familiar with landmarks
  • 60% are familiar with and use headings for navigation
  • about the same for table navigation

Website exploration strategies (based on their survey):

  • 31% explore by landmarks
  • 80% use tab and arrow keys; same for headings
  • 44% bring up a links list

Another interesting finding: a large percentage of screen reader (SR) users do not have any formal training on using SRs and do not know or use any advanced features for website exploration & navigation.

Their goal: create a simple online user manual that explains how SR users can effectively navigate websites.

Survey will be available at easi.cc on May 2.

Intelligent Wheelchair

To contact me while I live blog about this, use twitter @SinaBahram. Dean of the college computer science at CSUN is talking. One of the fastest growing comp sci programs in the nation. It’s remarkable, the solutions students have come up with in this space. The microphone is handed over to someone else. I’m not doing names, sorry. Maybe I can fill in later.

Challenges

  • terrain recognition: prevent it from going into mud, sand, etc.
  • Steps
  • tree roots, bushes
  • man-made obstacles like benches, trash cans, poles
  • Also moving obstacles, like bikes, cars, people, etc.

Today it's not able to deal with cars just yet, but we're working on that. (Got a laugh.)

Components

Laser range finder, GPS, camera, etc. The data is integrated into a local map and used as part of planning. The recognition of the environment then allows a plan for movement to be formed and executed by the computer. This is standard autonomous vehicles 101.

We need computer vision, cognition and planning, and a motion command generation method, e.g. EEG, facial signals, voice activation, etc.

Then of course GPS, and we also need local navigation, not just global navigation. So, if you have an obstacle, you have to make real-time detour decisions, etc.

Another microphone change

Changes to a wheelchair

Added a power system: additional battery, regulators, etc. Mechanically, the wheels got bigger. They are foam filled, not air filled, which optimized ground clearance, speed and comfort. The same was done with the casters on the back. Now another microphone change.

Software

Three types of terrain recognition. Using a camera that runs at 30fps. It has a max viewing angle of 66 degrees, has white balance, etc. It's mounted on top and points forward. Shows an image of a typical environment. So, identify grass, pavement, and dirt, and then draw those boundaries (virtually, that is), so the computer distinguishes between these terrains. First they do grass detection, using standard color detection, e.g. mixing color channels, going to grayscale, etc. Then they do dirt detection, and then they can overlay these results, of course, since you have grass and dirt.

Now, shadows cause issues, as a shadow might block a path. The resolution: they implemented a shadow filter. It simply detects the shadows in the source image and filters them out.

Once they have a grayscale image of the grass and dirt areas, with no shadows, they threshold that image into a binary image that indicates preferable vs. not preferable for path-finding purposes. Then they noise-filter it; it's just cleanup work. Then they take the image and re-project the view, so they have a top-down image view. Finally, the distance of the increments in the image is calculated. They then send this data into a component for processing. Another microphone change.
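
For readers who want a feel for this kind of pipeline, here is a very rough Python/OpenCV sketch of color-based terrain segmentation with shadow suppression, thresholding, noise filtering and a top-down re-projection. The HSV ranges, thresholds and homography points are made-up placeholders; their actual implementation is not described at this level of detail.

```python
import cv2
import numpy as np

def terrain_mask_top_down(frame_bgr):
    """Rough sketch: color-based grass/dirt detection, shadow suppression,
    binary thresholding, noise filtering, then re-projection to a top-down
    view. All numeric values are illustrative placeholders."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Color detection for two terrain classes (placeholder HSV ranges).
    grass = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    dirt = cv2.inRange(hsv, (10, 30, 60), (30, 180, 220))

    # Shadow filter: discard very dark pixels regardless of hue.
    _, not_shadow = cv2.threshold(hsv[:, :, 2], 60, 255, cv2.THRESH_BINARY)

    # Binary terrain map (which classes count as "preferable" for driving is
    # the team's policy call; grass and dirt are merged here purely for
    # illustration), followed by simple noise cleanup.
    mask = cv2.bitwise_and(cv2.bitwise_or(grass, dirt), not_shadow)
    mask = cv2.medianBlur(mask, 5)

    # Re-project the forward camera view to a top-down view with a fixed
    # homography (the trapezoid corners below are placeholders).
    h, w = mask.shape
    src = np.float32([[0, h], [w, h], [0.65 * w, 0.5 * h], [0.35 * w, 0.5 * h]])
    dst = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(mask, H, (w, h))
```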

Cognition system

How can you perceive and make decisions based on those perceptions? We have a laser range finder, camera, and GPS, so we need to use those to make decisions on where to go.

The laser range finder gets them depth, so they can use it to tell how far away things are. They are showing data from the finder overlaid with an actual image so the sighted folks in the audience can understand depth detection.

They convert it, and it essentially looks like a bird's-eye view. They have a polar histogram of this data for various edge-detection and other purposes, and they overlay distances on top of this for boundary detection, I believe, but I have some questions for him about that later on.

Two modes of operation. Hybrid mode takes commands from an EEG headset: left, right, stop, etc. It is active in the background to make sure there's no unsafe movement, etc. Autonomous mode just gets one command and then goes that way, avoiding obstacles, etc.

GPS can be used to get them coarse navigation data.

The radial polar histogram is what they developed, basically. They are trying to determine radial turning distances and directions, optimizing for path finding, obstacle avoidance, etc. Choosing a turning direction can be hard. You could just say “go in a direction” and then try, but these guys are trying to choose which curve to use to get there.

The layout of the cognition system: it takes measurements, creates this histogram, has the local map discussed earlier, and groups data to determine the desired block, i.e. path finding by any other name. The math in their algorithm determines boundaries, e.g. they have a virtual buffer around the wheelchair, so even if they are off a bit, you don't brush up against obstacles. The velocity and acceleration functions are smooth, so that's basic calculus, just smoothing it out for the user. So, as long as the range finder sees nothing in front, it keeps going, and it keeps assessing and sampling and making sure it's on the right path and that there's a way to get there. Now it approaches a corner and wants to turn, so it chooses the right turn radius and avoids the two walls in the hallway or whatever. He's showing a video of this, I believe, and then there's a narrower hallway with closed doors, and the chair handles that as well. Another microphone change.
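
To make the polar-histogram idea a bit more concrete, here is a minimal sketch in the spirit of vector-field-histogram style steering: bin the laser scan by angle, keep only directions whose nearest obstacle is beyond a safety buffer, and pick the free direction closest to the goal. The bin size, buffer distance and selection rule are my assumptions, not their published algorithm.

```python
import math

def choose_heading(scan, goal_angle, bin_deg=5, safe_dist=1.5):
    """scan: iterable of (angle_rad, range_m) pairs from the laser range finder.
    Returns a heading (radians) that is clear of nearby obstacles and closest
    to the desired goal direction, or None if no direction is clear.
    Parameters are illustrative placeholders."""
    n_bins = 360 // bin_deg
    nearest = [float("inf")] * n_bins

    # Polar histogram: nearest obstacle distance per angular bin.
    for angle, dist in scan:
        b = int(math.degrees(angle) % 360) // bin_deg
        nearest[b] = min(nearest[b], dist)

    # Among bins whose closest obstacle is beyond the safety buffer,
    # pick the one whose center is closest to the goal direction.
    best, best_err = None, float("inf")
    for b, dist in enumerate(nearest):
        if dist < safe_dist:
            continue
        center = math.radians(b * bin_deg + bin_deg / 2)
        err = abs(math.atan2(math.sin(center - goal_angle),
                             math.cos(center - goal_angle)))  # wrapped angular error
        if err < best_err:
            best, best_err = center, err
    return best
```

A real system would then choose a turn radius for that heading and smooth velocity and acceleration, as described above.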

User Commands

Originally they used an EEG headset, which she's wearing as she talks. There are four commands: forward, left, right, or stop. But she found it difficult to go past two commands, so she wanted alternative user commands. She's going to tell us about three different command types: EEG commands, speech recognition, and others. I'll cover each as I get there.

They are using the Emotiv headset. I'm familiar with it, and I'll post some URLs later on.

She's saying it takes in thoughts, but you should know it's just brain waves, not actual thoughts. The Emotiv has 14 sensors, or electrodes, that detect brain waves at a very coarse level. I believe they are using the idea called motor imagery. She's not calling it that, but it simply means: make the user think “left” really hard, and then the signal gets picked up and interpreted as left.

You can also blink or do other actions and it can recognize that. They are using the Cognitiv suite, the Expressiv suite, etc. So in other words, motor imagery, facial expressions, etc. So, the user smiles, and you can recognize that and take an action.

The displacement of the gyro in the headset is also used. Remember, it's a headset, so the wheelchair can interpret those head movements as commands. There's a GUI. If the user is sitting still, the red dot shows a neutral area, but if they tilt their head right, or even turn it right, it sends out a signal to turn right. They then take these commands and send them as keystroke commands. Not sure why they bother, since they could natively interpret them, no? *remember to ask if I ever catch my breath*
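
Out of curiosity about how little logic this part needs: a tiny sketch of mapping gyro displacement with a neutral dead zone onto discrete movement commands could look like the following. The axes, thresholds and command names are my placeholders; in their system the result is emitted as keystroke commands to the control software.

```python
def gyro_to_command(yaw, pitch, dead_zone=0.15):
    """Map head-motion gyro displacement to a discrete wheelchair command.
    Readings inside the dead zone count as neutral (the red dot in the GUI).
    Axis names, thresholds and commands are illustrative placeholders."""
    if abs(yaw) < dead_zone and abs(pitch) < dead_zone:
        return None                      # neutral: send nothing
    if abs(yaw) >= abs(pitch):
        return "RIGHT" if yaw > 0 else "LEFT"
    return "FORWARD" if pitch > 0 else "STOP"

# Example: a slight rightward head tilt beyond the dead zone.
print(gyro_to_command(0.4, 0.05))  # RIGHT
```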

Then they do speech recognition. They are just using a Windows computer, so they are using the built-in Microsoft speech recognition software. They are also using LabVIEW, just some software (URL later), and they can process the textual commands from the speech.

She shows a video of someone performing some of the actions we just discussed. Microphone change.

Navigation System

They have standard stuff: compass, GPS, accelerometer, etc. They use basic shortest-path algorithms to do pathfinding. During the algorithm's run, they recalculate the weights and pick optimized paths based on motion. Now he's explaining basic path finding and shortest path. It won't be explainable in this live blog, but tweet me later and I'll help explain, or point to relevant Wikipedia articles. It sounds complicated but is very powerful and straightforward. Now he's showing a video again of this in action, e.g. path finding. He points out some tough parts regarding recognition; you know, some computer vision bugs and performance issues that they've worked on. Video goes on. Basically they are showing path finding, obstacle avoidance, etc.
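
For anyone who wants the "basic shortest path" piece spelled out, it is essentially Dijkstra's algorithm over a weighted waypoint graph, something like the sketch below. The campus graph and its weights are made up, and the live re-weighting from sensor data that they mention is only hinted at in the comments.

```python
import heapq

def shortest_path(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, weight). Plain Dijkstra.
    In the wheelchair, edge weights would be recalculated from sensor data
    while it moves; that re-weighting is not shown here."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Example with made-up campus waypoints and edge weights (meters).
campus = {
    "lab":     [("quad", 40.0), ("parking", 90.0)],
    "quad":    [("library", 35.0), ("parking", 60.0)],
    "parking": [("library", 20.0)],
    "library": [],
}
print(shortest_path(campus, "lab", "library"))  # ['lab', 'quad', 'library']
```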

Question: can you go in reverse? Well, it can, but no sensors pointing back. In non-automated mode, it definitely can, in that you can give those commands.

Question: you showed it on a marked path, but what about in just a parking lot or open area? He responds that it's just shortest-pathing its way to the target, but if there are obstacles, it'll avoid them; of course, this requires an endpoint, because it needs a goal.

Question: are you just using GPS? Yes, just GPS for location, no dead reckoning.

Non-Visual Drawing with the HIPP Application

So what is it?

There's a force feedback device, a pen, that gives haptic feedback. There are several components, actually: for example, a sonic feedback component, and also speech feedback, which is different from sonic (non-speech audio) feedback. Changes, deletions, those sorts of things cause sound effects to occur. There's also screen reader output; currently using JAWS.
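
Just to make that component list concrete, a drawing event fanned out to all three feedback channels could look roughly like the sketch below. The functions, sound names and intensities are hypothetical placeholders, not HIPP's actual APIs.

```python
# Hypothetical stand-ins for the real haptic, audio and speech back-ends.
def haptic_pulse(strength):
    print(f"[haptic] pulse at strength {strength}")

def play_earcon(name):
    print(f"[sound]  {name}")

def speak(text):
    print(f"[speech] {text}")

EARCONS = {"draw": "pencil_scratch", "delete": "paper_crumple"}  # illustrative

def on_drawing_event(kind, description):
    """Send the same event to every channel: force feedback on the pen,
    a non-speech sound effect (sonification), and spoken output."""
    haptic_pulse(0.4 if kind == "draw" else 0.8)
    play_earcon(EARCONS.get(kind, "generic_blip"))
    speak(f"{kind}: {description}")

on_drawing_event("draw", "line from left to right")
on_drawing_event("delete", "circle removed")
```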

A Demo

So we hear speech as she draws and reviews the drawing. We also hear sound effects, sonification, indicating that something got drawn.

A question comes up about whether it can normalize lines, for example, can it take a drawn line and lengthen or shorten it for the user automatically? The answer is no for now.

We hear her go over more items on the screen, and we can hear things like the fact that if you draw a wave, you can then associate wave sounds like splashing water with that part of the image.

The application is written in C++ as well as Python. Various haptic and audio APIs are used.

Why did we do this?

Lots of inaccessible materials, especially in schools for children. Tactile material can be costly, and so that's another reason it doesn't get used a lot in schools. Haptics, on the other hand, via technology like the pen she's using, could be used to solve this problem.

Question: what do you see as typical projects that this would be used for? Answer: we've seen that it's used in different ways, but there's more later in the talk.

Free and Open Source?

It’s an open source project, which is fantastic. I’ll include URLs later on.

We took haptic devices, laptops, etc. and put them in schools with kids that we worked with. We talked with teachers, found out how and on what they wanted to use it, and then we saw what happened. Context is very important, e.g. using it for drawing or exploring graphics, but also for other reasons. Teachers were actually thinking that they could give the students materials and have them consume them. We tried that, and it took a long time for teachers to provide the exact material and so forth, so there's something missing there. So we then consulted pedagogical experts.

 

Some things we learned

Students need to learn how to draw. Even doodling, making dots, etc. That's a useful skill, and one that might not be as prevalent in blind children. So, kids draw. They make shapes and draw things, and they get positive feedback from parents and teachers, and this encourages them. Those individuals also give the children positive reinforcement and even provide context such as “oh, that's a wave” or “oh, that's a dolphin”, and so this process continues and there's feedback to move the kids from doodling to drawing actual things.

Question: did they ever ask the kids what they were trying to draw? Somewhat, but often the children start drawing something with an intention, such as “I want to draw a dolphin”, and then get some help drawing that … maybe with the body or the shape or whatnot.

Interesting query / future work: how can you draw in 3D? For example, there's a guy who wants to draw a mountain on the moon, and then a valley, etc.

One Takeaway: One child started with “I can’t draw at all” and ended with “when it comes to drawing, I’m the best”.

Current status: the project has ended, but the code and everything is available. Also, some of the kids who were in the project have kept it and are continuing to use it. We are looking for folks to help us with the project, to move it forward, do new things, etc. The application is in English, but the website and everything else is in Swedish.

http://hip.certec.lth.se

Question: how much spoken output is there? When you draw, it speaks; when you erase something, it speaks.

Question: have you thought about SVG? Yes, it's an SVG variant, but we didn't have a need for it just yet.

Question: have you looked at vibrotactile feedback vs. haptic? That way, no pen force feedback is required. Yes, we looked into this some, but nothing official.

Question: what is the major contribution that this project represents? General access to graphics, but also this is a bottom-up approach to thinking about drawing and exploring graphics. Drawing might not be the only goal here; for example, something interactive like a graphing calculator, or other possibilities.

Haptics In Pedagogical Practice is what HIPP stands for.

Choosing an Automated Accessibility Testing Tool

Primary Criteria:

  • (Should have started with) Is the tool testing the DOM?
  • Is the tool user friendly?
  • Quality, reliability of results
  • Is it web based?
  • Integration
  • Does it spider?
  • Does it test uploaded / submitted source?

More on integration: the tool should integrate with existing SDLC tools so that there isn’t a separate flow for accessibility.
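
The DOM criterion deserves emphasis: a tool that only parses uploaded source never sees markup that scripts inject or rewrite at runtime. As a rough illustration (my own, not something shown in the session), the same missing-alt check from the AT&T notes above could be run against the live DOM with Selenium; the URL below is a placeholder for the page under test.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Illustrative DOM-level check: images that still lack an alt attribute
# after scripts have run. The URL is a placeholder.
driver = webdriver.Firefox()
try:
    driver.get("https://example.com/")
    for img in driver.find_elements(By.TAG_NAME, "img"):
        has_alt = driver.execute_script("return arguments[0].hasAttribute('alt');", img)
        if not has_alt:
            print("img missing alt attribute:", img.get_attribute("src"))
finally:
    driver.quit()
```

Run against the raw source instead, the same page might pass simply because the offending images were added by a script.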

Session Description: successful use of an automated testing tool requires purchasing a tool that’s efficient, user-friendly, and reliable. We provide unbiased criteria for choosing such a tool.

Keynote Address

Keynote Address by Dr. Deana McDonagh – a faculty member of both Industrial Design in the School of Art + Design at the University of Illinois (Urbana-Champaign) and the Beckman Institute for Advanced Science and Technology.

  • This talk is about industrial design.
  • What is industrial design? Everything we interact with, everything we touch and manipulate.
  • “We gradually adapt and adjust to our material environments”.
  • “Normality of doing things differently”. We’re not designing for older people, we’re designing for the future.
  • Looking at industrial design as human-centered design.
  • A concept of “empathy horizon”. Designing for somebody outside of our “empathy horizon” leads to “crap design” that is also inaccessible. An example: braille signs posted 7 feet high on university walls.
  • “Empathic modelling”: putting the designers into the users' shoes, i.e. taking away their vision while using / interacting with a designed product.
  • “The only way to experience experience is to experience it”.
  • “Empathic modeling can be low tech or high tech and very helpful for ALL to learn how to best address the needs of ALL” via Twitter
  • An example of authentic human behaviour: speaker sticking post-it notes on her phone’s screen.
  • “Disability + Relevant Design” Book
  • Smithsonian Institution: empathic modelling with engineering students.

Live Blogging #CSUN13

Welcome to this “28th Annual International Technology and Persons with Disabilities Conference” at California State University, Northridge (CSUN).

I will attempt to live-blog the event sessions (Wednesday through Friday) for those of you at home and of course for posterity, information & education. I say attempt as I was told the WiFi / 3G connections can be spotty. Of course you can follow along with the twitter hashtag #CSUN13.

If you’d like to help out live-blogging the other (parallel) sessions please create an account  on the website, or get in contact and I’ll create the account for you. Of course, you can also send me off-line / post-sessions notes as well.

Thank you,
George “Good Wally” Zamfir
goodwally.ca
twitter.com/good_wally