AI and UX: Part 1

25/07/2023 Andreas

Accessibility - current issues and what's next?

Photo by <a href="">Possessed Photography</a> on <a href="">Unsplash</a>

I thought it would be interesting to write a series of posts on the impact AI is going to have on user experience. For part 1 of the series, I chose accessibility as the topic, because it may not be obvious how AI fits in. AI doesn’t only provide powerful ways for content creation / hallucination – however you wish to phrase it – it is also going to enhance our online user experiences. How? Let’s see!

Accessibility now

Problem 1 = Poor UX

User experience is not great. Let’s face it: the current accessibility solutions are one-size-fits-all. They are certainly better than nothing, but far from perfect in a usability sense. We’ve essentially been translating a visual user interface into a form that a screen reader can work with, and that premise is exactly the big part of the poor user experience for the visually impaired. Every single user interface is designed for a device that uses some sort of screen. Phones are just one huge screen with no tactile feedback; it’s obvious who they’ve been designed for: most of us. This doesn’t only concern the visually impaired – people with motor impairments need separate devices for text input and navigation, because a screen doesn’t do it. As a personal side note, I also loathe virtual keyboards and unnecessary swipe buttons in appliances where a physical button would do the job so much better.

World of screens

Everything is basically designed to cater to the majority with no impairments. Most of us take visual UIs and our devices for granted, but imagine having to rely on a screen reader. I’ve been involved in company-wide accessibility fixes doing this exact translation, and used screen readers for testing, but I must say that the technology and its premise are wonky and don’t serve the impaired as well as they deserve. The root problem is a simple fact: a visual UI communicates a ton of information to the user instantly. Content can be deciphered very quickly by glancing, while people using screen readers have to rely on the hierarchical order of the content and on everything being labeled logically and clearly. They usually advance block by block, one by one, unless the UI provides shortcuts. Users are essentially relying on the manual work of designers and developers who mostly don’t have these impairments, even though they may know a lot about the subject and even consult subject matter experts. And there is absolutely an inconsistent level of usability quality between different services.

Solution: Finally a capable AI voice assistant as an option for visual UI?

We heard plenty of hype about non-visual user interfaces maybe 6-8 years ago. They were supposed to be based on artificial intelligence. This was the golden reign of chatbots – which undoubtedly sparked the hype and conversation further. However, that was back when the AI technology was poor, and we’ve seen that the service quality of chatbots has been questionable. The way they functioned resembled more a sophisticated wizard, scripted to follow certain branches of a tree and offering users a limited set of options. We all have experiences, good and bad… But now, AI is showing such great promise that a proper non-visual user interface could actually be built.

Large language models offer a whole new level of natural communication with AI. To achieve a natural level of interaction, it is crucial that the AI understands context and can actually follow a conversation. For example, if I start by discussing cars, change the subject to something else like apples, and then bring cars back into the conversation, the AI should know and remember what was originally discussed – a new topic should not always mean a clean slate. This is just a crude example, but human-to-human conversations are full of nuances like this one. Being able to bridge these gaps matters, so that the interaction feels as natural as possible. All of this is starting to look very possible, and if it is achieved, we could provide a much better user experience with non-visual UIs and solve many of the current problems. The way things are solved now with screen readers will feel archaic in the future.
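The car/apple example above boils down to conversation memory: the assistant must see the whole dialogue on every turn, not just the latest message. A minimal sketch of that idea, using a stub in place of a real language model (the function names and the stub are illustrative assumptions, not any particular API):

```python
# A minimal sketch of conversation memory: keep the full message
# history and pass it to the model on every turn, so an earlier
# topic (cars) is still available after a digression (apples).
def chat_turn(history, user_message, model):
    history.append({"role": "user", "content": user_message})
    reply = model(history)  # the model sees the whole history, not one message
    history.append({"role": "assistant", "content": reply})
    return reply

# Stub standing in for a real LLM: it "remembers" simply by
# scanning the accumulated history it was handed.
def stub_model(history):
    topics = [m["content"] for m in history if m["role"] == "user"]
    return "So far we have discussed: " + ", ".join(topics)

history = []
chat_turn(history, "cars", stub_model)
chat_turn(history, "apples", stub_model)
print(chat_turn(history, "back to cars", stub_model))
```

The design point is only that context lives in the accumulated history; a real assistant would also need to summarize or prune it as conversations grow long.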

Imagine unique ChatGPTs tailored to different business needs – an AI expert on insurance that can help users via voice or text, and the same for any other business: banking, telecom, eCommerce, anything online. I’m pretty sure this is one use case AI will be harnessed for: we are going to see specialized AIs serving users with specific issues. ChatGPT can provide information on a wide variety of topics, but for business use it makes no sense to let customers talk about topics that are irrelevant to the business. Google and Apple are certainly going to bring more general AI capabilities to their voice assistants, once the tech gets good enough!

Problem 2 = Images

Image content is actually going to become more accessible. The visually impaired have been relying on descriptive metadata that they can only hope is descriptive enough. I’ve found graphs and infographics especially tricky to translate for screen readers. There’s a reliance on the quality and effort of manual work.

Solution: already exists!

But AI can solve this issue: there already exist AI tools that can describe what is in an image in great detail. For example, Astica has a demo you can try for free:

It sounds simple, but it’s very powerful! It will definitely provide value for users and get rid of the manual work of entering descriptive metadata. It also unlocks a huge amount of digital content for the visually impaired!
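To make the idea concrete: once some model can produce a caption, wiring it into the page’s metadata is trivial. A sketch, where the captioner is a placeholder callable standing in for a real vision model or API (the function names here are illustrative assumptions):

```python
import html

def describe_image(image_path, captioner):
    """Build an accessible img tag whose alt text comes from an AI
    captioner. `captioner` is any callable mapping a file path to a
    text caption -- a stand-in here for a real image-to-text model."""
    caption = captioner(image_path)
    # Escape both attributes so quotes in the caption can't break the markup.
    return '<img src="{}" alt="{}">'.format(
        html.escape(image_path, quote=True),
        html.escape(caption, quote=True),
    )

# Stub captioner standing in for a real model.
def fake_captioner(path):
    return "A bar chart comparing quarterly revenue across regions"

print(describe_image("charts/revenue.png", fake_captioner))
```

The same pattern would work for graphs and infographics: the model’s description simply replaces the hand-written metadata the author would otherwise have to supply.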

Promising future

I’m very hopeful that AI will make the user experience a lot better for people with visual or motor impairments. Maybe we will also see new AI features that make things easier for people with cognitive impairments? I believe AI is going to provide many good solutions!