5 things to keep in mind when building an Alexa skill


A team of us at Red Badger (me, Marcel, Graham and Roman) had two weeks to play around with Amazon’s Alexa and build a sommelier skill that recommends wine pairings for your food. We’re writing a four-part series to take you through what we learned from our varied perspectives.

There have been many blog posts written about the rise of chatbots and the Voice User Interface (VUI), some even marking 2017 as the year of the bots.

This is new territory for both developers and designers. Developers are ramping up their skills to help machines take input and make better sense of it, while designers work on making the interaction feel less mechanical.

Here are some points that might come in handy when you’re designing the next big voice-activated app.

1. Speaking to objects in public is still awkward

Machines may take input and process it better than ever before, but that doesn’t mean they’re ready for public display. Despite the relatively advanced state of VUIs, most people still feel uncomfortable talking to their phones or to a hockey puck.

The slightly robotic, repetitive nature of the conversation, the high likelihood that you’d be the only one on the bus reciting your shopping list out loud, and concerns over privacy all give VUIs a high barrier to entry in public life.

Despite being awkward to use in public, Alexa works quite well in private environments. Especially when it’s literally placed in context (e.g. the kitchen or your car), Alexa can make your life a lot better. Wynn hotels have an Alexa in every room to meet all your needs; by doing that, they’ve practically given every room its own personal butler with rapid googling skills. Guests are in the privacy of their rooms when arranging a call, asking for champagne or looking up nearby entertainment.

2. The tools we’ve mastered for screens don’t really apply here 

Imagine all the accounts you have for apps that support your daily design tasks: tools like Sketch, Marvel, Photoshop, After Effects and many more. With screens and their interactions gone, the designs we need to work out live in the invisible world of context and speech.

The powerful niche tools we’ve developed over the years won’t work here, but the basics of design still apply. Human Centered Design (HCD) provides strong, invaluable tools and methodologies; they just need to be applied in a different medium.

We used storyboards and flowcharts describing conversations as our main "deliverables" over the two weeks. Of course, a lot of testing and tweaking happened over impromptu conversations. The initial vision of how (smart) the conversation would go changed a lot as we learned more about Alexa’s limitations and capabilities.

3. Most of it is about making the unpredictable less painful

We’re all familiar with big red error messages on our screens. Now it’s time to think about them in the context of VUIs.

Speech is mostly linear, and in the context of using apps or websites there’s always an end goal. The goal when handling errors on VUIs is to avoid them before they happen. We don’t want to go down a path where users simply have to turn the app off and on again.

  • Give an introduction to your app: explain what it’s good for and what users should expect to get out of it
  • Repeat the input back; it’s great for avoiding errors down the line. Identifying miscommunications clearly helps users recover early on and avoids ending up in a place where they can’t simply reply
  • Suggest the type of input users are expected to give, so they’re not left in the dark wondering about formatting
  • Just like any good design practice, allow users to recover from mistakes and errors easily (the sketch after this list shows one way to wire these points up)
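
To make these points concrete, here’s a minimal sketch of how they might translate into skill code, using the Python Alexa Skills Kit SDK (ask-sdk-core). The intent name "PairWineIntent" and the "dish" slot are hypothetical placeholders for a sommelier skill like ours, not the actual project code.

    # Hedged sketch: "PairWineIntent" and the "dish" slot are made-up
    # names for illustration; the pattern is what matters.
    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_request_type, is_intent_type

    class LaunchHandler(AbstractRequestHandler):
        # Introduce the skill and set expectations up front.
        def can_handle(self, handler_input):
            return is_request_type("LaunchRequest")(handler_input)

        def handle(self, handler_input):
            speech = ("Welcome to the sommelier. Tell me what you are eating "
                      "and I will suggest a wine to go with it.")
            # The reprompt suggests the expected input format.
            return (handler_input.response_builder
                    .speak(speech)
                    .ask("Try something like: what goes with grilled salmon?")
                    .response)

    class PairWineHandler(AbstractRequestHandler):
        # Repeat the input back and let users recover from misfires.
        def can_handle(self, handler_input):
            return is_intent_type("PairWineIntent")(handler_input)

        def handle(self, handler_input):
            slots = handler_input.request_envelope.request.intent.slots
            dish = slots["dish"].value if slots and slots.get("dish") else None
            if not dish:
                # Recover instead of failing: re-ask, with an example.
                return (handler_input.response_builder
                        .speak("Sorry, I did not catch the dish. "
                               "What are you eating?")
                        .ask("For example, say: roast chicken.")
                        .response)
            # Echo the input so any miscommunication surfaces immediately.
            speech = "For " + dish + ", I would pour a Pinot Noir. Want another option?"
            return (handler_input.response_builder
                    .speak(speech)
                    .ask("Say yes for another pairing, or name a different dish.")
                    .response)

    sb = SkillBuilder()
    sb.add_request_handler(LaunchHandler())
    sb.add_request_handler(PairWineHandler())
    handler = sb.lambda_handler()  # AWS Lambda entry point

The detail that matters is that every response ends with a question and a reprompt, so the conversation never hits a dead end the user has to back out of.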

4. Don’t get too friendly. Acknowledge machines for what they are

Every interaction, be it on a screen, with an AI or a human, depends on how the actual experience performs compared to the expectations that were set at the beginning.

Making your "intelligent machine" a little too human raises users’ expectations to the level of human interaction, which is very, very difficult to deliver. Alexa, Siri, and even more so Google Now (which doesn’t even have a human name) all have a non-human form and a slightly mechanical voice. They’ve moved past resembling a primitive robot, but they’re not quite human either.

Humans are complex and unpredictable; aspiring to make a machine act like one will probably end up in a weird hybrid that can’t deliver on any front and will constantly disappoint.

5. There’s a lot you can do, once you pass the 'Hello World' phase

Once the conversation evolves and no longer feels like talking to a non-English speaker with a hearing impairment, it gets to be quite fun. The more comfortable you are speaking to Alexa, the better it understands you and the more enjoyable the interaction becomes.

As you establish the limitations and set accurate expectations of what the smart little hockey puck can do, you’ll start appreciating the clever comebacks and seeing opportunities to make Alexa a handy companion, one that’s always ready and eager to help.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.

Sinem Erdemli, Service and User Experience Design, Red Badger.

