By JHROGERSII of iPadInsight.com
With the HomePod showing up on my doorstep next Friday, I’ve been doing some thinking about Siri lately. Why is the overall impression of Apple’s digital assistant so negative? There are recent surveys and tests showing it to be competitive with Alexa, Google Assistant, and Cortana in some areas. There is real evidence that many “normal” users aren’t as dissatisfied with it as we in the tech community and the “Apple bubble” are. So what is the problem? Where is the disconnect?
Consistency is Key
I think the problem with the general perception of Siri is twofold. First, I have been begging for Apple to unify Siri across its platforms and make its feature set consistent from device to device. Unfortunately, not only has that not happened, but now we have yet another unique Siri implementation on the way that will be specific to the HomePod.
Users shouldn’t have to remember that Siri on Apple TV can only handle media requests and HomeKit, or that Siri on the Mac can save a list of previous responses, but can’t talk to HomeKit devices. Why can’t we get the saved Siri results from the Mac at least on the iPad? Now we have an intelligent speaker that won’t work for a lot of common Siri queries that we can perform on the iPhone we will use to set it up. Why Apple? Why? None of this makes any sense at all. All it takes is Siri not coming through or confusing a user a few times for them to give up on it and move on.
One positive is that I’m certainly not the only one talking about this. I was very happy to hear Rene Ritchie of iMore also discussing making Siri consistent across all Apple platforms during Monday’s Vector podcast. He went further, advocating for Apple to make Siri a cloud-based service that works across all devices, which would be a very welcome addition. This could still be done while maintaining users’ privacy, so Apple shouldn’t try to hide behind that excuse anymore.
While many of us have been asking about this for a while now, the fact is that Mr. Ritchie has eyes and ears inside of Apple and may actually be able to exert some influence on the situation. If he is bringing it up, at least it is likely to be heard within the glass walls of Apple Park. I mean, the guy was able to get an Instagram pic with Tim Cook at a hockey game, right? That’s a lot closer than most of us will ever get.
Make Siri Great…For the First Time
Even as an Apple fan, I have no problem admitting that Siri has NEVER been great at anything. I, like most people, gave it a pass at release because it was new and different. However, when Apple didn’t improve it or truly move it forward after several years, most people lost their patience with it. I still use it often for basic tasks, such as reading messages, creating alarms, and placing phone calls. However, we are a long way down the road from those tasks being impressive.
In my opinion, for all of the things Siri does, the biggest problem is that Apple never focused on making it great at any of them. Some of its features, such as entering or reading off appointments or reminders, or setting timers, are very good and pretty consistent. The ability to ask Siri to remind me about a phone call, email, voicemail, or web page that is on the screen is also very useful (for those who know the feature exists).
However, I wouldn’t qualify any of the above features as “great,” because there are still times when they break down. For example, Siri will just stop recognizing the “Remind me about this” command on occasion, and ask me what I want to be reminded about. When this happens, I have to reboot my iPhone to get the feature back online. That just makes me shake my head, because this is a really useful feature that I take advantage of often. It is two years old now, so this really shouldn’t be happening anymore.
Unfortunately, these features are still the best that Apple has to offer with Siri, and they still have glaring issues. Then you get into the real problem areas. Dictation still comes and goes and struggles mightily with proper names and context. Asking Siri questions often just results in a web search that will quickly disappear from the screen. Trying to use context between actions will sometimes work and sometimes just break down. Combine the failures with the lack of consistency and the shortage of (and restrictions on) third party integrations, and you have too many pitfalls for users to fall into.
What is the difference?
So what’s the real difference between Apple on the one hand, and Google and Amazon on the other? Both of their assistants have legitimate issues and shortcomings, as well. Google doesn’t play much better with third parties than Apple, and in some cases, Assistant is actually harder for them to work with (although this year’s CES shows that Google is addressing this). As for Alexa, just try using it on a smartphone or other non-Amazon hardware. Amazon has the same issues as Apple with sub-par mics that aren’t set up to be used with a voice assistant.
While Amazon has given third party developers an open door, Alexa doesn’t allow for any contextual awareness with its “Skills.” Users have to memorize set commands and queries, and if they forget, their requests don’t work. I have heard Echo users who are otherwise very happy with Alexa curse it over this shortcoming. Even the most favored voice assistant of the moment has its issues if you get past the hype.
So, both of Apple’s primary competitors in voice assistants have legitimate shortcomings that users are very aware of. Why do they get a pass on them while Apple doesn’t? It is because both Assistant and Alexa are legitimately great at one or more things that users find very useful. If you ask Google Assistant questions, it will give you direct, correct answers very quickly. It will translate on the fly. It will search for, recognize, and digitize written text. Oh, and it has a very similar feature set across the board wherever it is available. Google handles this better than any other assistant by far, and frankly, no one else is even close right now.
As for Amazon, they doubled down on making the basics near perfect. The Echo devices have multiple beam-forming mics that do an impressive job of picking up your voice and accurately parsing your requests, even in the presence of background noise. The Alexa experience may have a steep drop-off on third party hardware, but most people are using it on Amazon’s devices because of how inexpensive and widely available Amazon has made them. That overall ease of use has made people comfortable using voice assistants. And again, like Google Assistant, Amazon’s Alexa feature set is very consistent, no matter what device you are using it on.
Along that same line, another key for Amazon (that Google wisely copied ahead of Apple) is that they made a device that put the voice assistant in a different context. Many people are still self-conscious about using Siri and other assistants in public, especially when using a headset or AirPods. While this has become more commonplace over the last decade, it can still look pretty odd watching someone “talk to themselves” while walking down the street. There are a lot of people who are too self-conscious to do that.
The beauty of the Echo is that it takes the voice assistant and makes it available throughout a room. You don’t have to carry a phone around and be subject to the limitations of its mics. “Hey Siri” works, but it is locked to a device that is meant to be with you, not across the room. The Watch is great if you have one, but it isn’t capable of making all of the same voice responses to your queries yet. The Echo let the genie out of the bottle by making a device that is dedicated to monitoring an entire space, and it is clear that users prefer this experience. Alexa was also set up in such a way as to make users feel less self-conscious about using it in the open. They are having a conversation with a device that responds aloud, so the experience feels natural and more “human.”
Another strength of Amazon’s Alexa is the third party ecosystem that has sprung up around it. While I mentioned the limitations of Alexa Skills as being a drawback, the fact that they exist is still a big strength. HomeKit may have been there first, but people have embraced Alexa because there is convenience in being able to link devices that they want to use together without headaches and restrictions. While the defined commands required to use Alexa Skills may cause some frustration, the amount of third party integrations available is still a strength that Amazon has over both Google and Apple.
Getting a pass
The bottom line is, Google’s Assistant and Amazon’s Alexa do get a pass on their shortcomings, but they get it for legitimate reasons. People don’t get as irritated over them because both of these assistants have aspects that are truly great. On the flip side, Apple doesn’t get a pass on Siri’s shortcomings because there isn’t a single feature it has or task it performs that is similarly great. There is no positive bubble or reality distortion field here. Without that, people will pile on the negative aspects and won’t give much credit for the things that are good.
Every time I hear Siri discussed on a tech podcast, even an Apple-centric podcast, this is what it comes down to. There are complaints and the typical “Siri sucks” comments. Then someone will usually mention a feature or two that is good and works well for them, and people will backpedal a bit and agree. Then there is usually a more reasonable discussion about all the things that don’t work as well. I hear the exact same thing in reverse in discussions of Assistant and Alexa, where the overall impression is positive. However, you will often hear the same backpedaling and admissions that certain features of those assistants don’t work so well. These overall positive and negative impressions come down to doing a few things very well, and the reactions to the three assistants are remarkably consistent across the tech world because of this.
We just heard a rumor this week that Apple is scaling back the planned features in iOS 12 to focus on software stability. I can only hope that Siri will be one of the items that gets attention over the course of this year as part of this. The fact that Craig Federighi was supposedly behind this move and that Siri is now under his jurisdiction is cause for some optimism that improvements will be made going forward into 2018. Even if Apple won’t say it, the moves they have made to bolster their AI and machine learning efforts over the last two years, as well as their downplaying of Siri as an intelligent assistant in the first HomePod, show me that they see the problems. However, the question remains: do they have the right answers to fix them?
If Apple can create a more consistent user experience for Siri across all of its platforms, it will help cut down on frustration and might actually encourage more Apple device owners to use it. However, to turn around the service’s tarnished reputation and get it seen in a favorable light, Apple needs to double down on one or two core features that they know users want improved. They need to take them, hammer everything out, and make them great, whatever that takes.
I’m talking bulletproof. Rock solid. The kind of great that no reviewer can deny. That is what it will take to turn heads at this point, so that’s what they have to do.
The current path of incremental upgrades and new feature additions isn’t improving the situation or users’ impressions of Siri. Apple needs something that its competitors already have. They need something great to hang Siri’s hat on going forward. Without this, the negative perception won’t change, even if Siri does improve incrementally over time.
What would you add to Siri’s feature list? Sound off in the comments below!