When an automated tool replaces the consultant

Yes, it will happen. We consultants will be replaced by automated testing and automated tools. The progress is clear and a lot of people are working on it, so is it time for me to start looking for a new job?

Automated testing of accessibility has existed for a long time. When I first started looking into accessibility in 2002 we were assisted by a British policeman, Bobby. Names and tools have come and gone, some more helpful than others and some having done more harm than good, but the stream of tools and services has not subsided; it keeps growing.

At Funka we are sceptical of automated tools, or rather of the way they are being used. Automated tools are claimed to be able to do everything, but they can't. It is not possible to use an automated tool (or a collection of them) to tell whether a website is accessible. A tool can point at possible errors, sometimes even with high credibility, but it can never verify that a solution actually works for the users.

There are many examples:

  • A picture without an alternative text may be an error, if the picture is not purely decorative. The automated tool cannot decide this.
  • A picture with an alternative text may be an error, if the text does not describe the picture. The automated tool cannot decide this.
  • If a site is missing language information in the code, that is an error, and one that an automated tool can detect. But if the language is set dynamically with JavaScript, the automated tool cannot handle it.
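The limits in the list above can be made concrete with a minimal sketch (my own illustration, not any particular tool): a checker can reliably detect a *missing* alt attribute, but for an alt text that exists, or an empty alt on a possibly decorative image, the best it can honestly report is "needs human review".

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> elements the way an automated tool can:
    a missing alt attribute is a detectable possible error, but
    whether an existing alt text actually describes the picture
    is a judgment only a human can make."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src")
        if "alt" not in attrs:
            self.findings.append(("possible error: missing alt", src))
        elif attrs["alt"].strip() == "":
            # An empty alt is correct for decorative images -- the tool cannot know.
            self.findings.append(("human review: empty alt (decorative?)", src))
        else:
            self.findings.append(("human review: does alt describe the image?", src))

checker = AltTextChecker()
checker.feed('<img src="logo.png">'
             '<img src="line.png" alt="">'
             '<img src="cat.jpg" alt="A cat">')
for verdict, src in checker.findings:
    print(verdict, "->", src)
```

Note that two of the three findings still end in a human queue; that is exactly the point of the list above.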

Another big issue is that automated tools are very bad at handling dynamic content. Today there are websites consisting of a single HTML page where all content is dynamic. The automated tools may be able to handle this in the future, but as of today only very advanced tools, which have to be specially programmed for each test scenario, can manage it, and those tools have quite a limited number of users. They are used for testing digital systems with large-scale development processes and multimillion budgets.

How do we know they are bad? 

Funka has tested how automated tools perform in different contexts. Among other things we have compared and validated tools created for developers and web editors, all presented at the Funka Accessibility Days 2015. We also just finished an assignment for the European Commission, where we reviewed tools and methods used to measure web accessibility in the member states. The results are consistently poor.

But if automated tools are useless for deciding whether an interface is accessible today, how can I claim that this will also be true in the future?

Program and slides from Funka Accessibility Days 2015

Funka develops recommendations for a monitoring methodology on web accessibility for the European Commission

We have to rethink it

All current automated tools are developed according to the same principles. One or several developers sit down and ask themselves: what can be tested automatically? Then they build the same functions that have already been built in a hundred tools before theirs. This is why, year after year, the market treads water. The result is a tool without a clear user, marketed to developers, editors and website owners alike.

But bear with me, I have this crazy thought: why not approach this from another perspective, why not start with the users? Who will be using the tools? This makes things a lot easier:

  • Developers need tools that check the quality of code, for example validating HTML and CSS, and checking for correct use of structural elements like <nav> and <main> and WAI-ARIA roles such as search and complementary. Developers need to check style sheets and components.
  • Editors need to check things on a daily basis: that published pages have logical heading structures, that pictures have reasonable alternative texts, that the text seems accessible, that editorial tables are correct, and so on.
  • Auditors, people who check interfaces to determine whether they are accessible, need support in identifying potential issues. This is a combination of the above, but presented clearly, something that scans an entire website looking for deviations and oddities.
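To make the editor's perspective concrete, here is a small sketch of the kind of daily check an editor-oriented tool could run (my own illustration, not an existing product): verifying that heading levels on a published page do not skip, for example an h1 followed directly by an h3.

```python
from html.parser import HTMLParser

class HeadingStructureChecker(HTMLParser):
    """Editor-oriented check: reports heading levels that skip
    (e.g. h1 followed by h3 with no h2 in between), one of the
    things an editor can meaningfully act on every day."""

    def __init__(self):
        super().__init__()
        self.previous_level = 0
        self.problems = []

    def handle_starttag(self, tag, attrs):
        # Match h1..h6 and compare the level with the previous heading.
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.previous_level and level > self.previous_level + 1:
                self.problems.append(f"h{self.previous_level} followed by h{level}")
            self.previous_level = level

checker = HeadingStructureChecker()
checker.feed("<h1>Title</h1><h3>Skipped h2</h3><h4>Fine</h4>")
print(checker.problems)
```

Unlike the alt-text question, this is a check a machine can settle on its own, which is exactly why it belongs in the editor's toolbox rather than in a human audit.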

So there are at least three different perspectives, and each group needs its own set of tools. The problem today is that most tools don't know where they belong. They try to be everything to everyone and so fail everywhere. There are a few exceptions and promising approaches, but most of the time these are communicated in the wrong way and therefore end up being used by people who lack the knowledge to interpret what is in front of them.

The future

I think that a lot of what we are doing today will be replaced by automated tools within 10 years. I, as an accessibility expert, will no longer have to read code the way I do now; I will be able to focus on the doubtful cases and the tricky issues.

Facebook recently launched automatic picture descriptions for visually impaired users: it analyses a picture automatically and provides it with an alt text. If you connect a function like that to one that works through a website, identifying the pictures and trying to evaluate whether they have reasonable alt texts, you have the embryo of a smart test. I am guessing that the function is not yet much more than a gimmick. But that is the case with almost all new technology: it needs time to mature, but eventually it reaches a level that serves our purpose. The same can be done with most things, and that is why I believe that accessibility work will change radically within the next 10-15 years.

But an automated tool will never be able to determine whether an interface is accessible; it will only be able to make a qualified guess. To know how something works for real people, we have to test it with real people.

Andreas Cederbom