AI, Silver Bullets, And The Looming Backlash
Over the last few weeks several of our vendor clients have asked us to comment on the future of AI (Artificial Intelligence) and whether or not retailers are really buying it. The short answer to the second half of that question is yes, retailers are buying it, but for the most part they’re unaware that they’re really buying AI. For example, when you buy a personalization solution with machine learning at its core, you’re buying AI.
But one client this week mentioned that he’s seeing a distressing new tendency: you mention the word “AI” when the retailer asks what’s inside the black box of the solution, and the retailer immediately backs off: “Oh, it’s got AI? Okay. We’re good.” (That’s a bit of an over-simplification, but hopefully you get my point.)
One could argue that AI as a group of technologies belongs in the “Peak of Inflated Expectations” on Gartner’s hype cycle, which would then be followed by the “Trough of Disillusionment”. The way these two typically work, expectations for what the technology can do exceed what it is actually capable of, and buyers have to work through the growing gap and subsequent reality check before the technology can move on to the “Plateau of Productivity”, where it will live until the next innovation trigger starts the hype cycle all over again.
I would argue that this is NOT what we’re seeing with AI, and actually, the trend that is developing in the marketplace is more dangerous than a typical hype cycle. There are two developments that are coming together that are leading to this dangerous place.
One, what AI can do is actually developing faster than retailers’ expectations. You could argue that this places AI at the “Innovation Trigger” part of the hype cycle, but innovation isn’t causing the gap so much as retailers’ lack of understanding of how these innovations are developing. I love this stuff, but I’m not a PhD in machine learning or optimization, and I can barely keep up. Someone who is trying to figure out how to sell more stuff to consumers, no matter how smart they are, isn’t going to be able to keep up either. And AI is rapidly developing its own vocabulary that is increasingly distant from business use: “deep learning”, “neural networks”, and all kinds of phrases that don’t mean much to a marketer or a merchandiser.
Two, some retailers are getting desperate, because they have not yet figured out how to successfully compete against Amazon. We’ve been calling this the Amazon Effect for a while now. This desperation, as we’re seeing this year in our research, is leading retailers to what we at RSR have coined “Silver Bullet Syndrome”. They are looking for a single solution that will solve all of their problems. That’s how you get buyers who respond to “Our solution uses AI to come up with its recommendations” with “Oh, AI? Great. Cool.”
AI is starting to fall prey to Silver Bullet Syndrome.
Here’s how that will most likely play out. Retailers buy AI solutions because they think they will “solve all their problems”. The AI is contained in a “black box” within the solution – they don’t know how it works and, frankly, they don’t want to know. They just want it to help them get Amazon off their backs and their share price back to where it used to be.
They go down the road of implementation. The solutions work – and they do work. For some applications, AI is really just a hyped way of saying “optimization” or “predictive”. But within the retailer, no one understands how they work. And this is the beginning of the backlash.
What’s at issue, though, is not that retailers need to understand how the black box works. With machine learning in particular, the machine is learning – it’s finding things that humans don’t have time to see, because seeing them requires in-depth analysis no one has time to perform: hidden patterns, more granular ways of looking at the data. It’s part of the business case that makes AI so attractive: we’re not replacing people. We’re helping your people become more productive. We’re mining your long tail for value – the part of your products or customer behavior that your people never get to.
The problem is, the machine is learning – not the people. And that’s where the backlash will come from. Business users don’t really want to know how the black box works, but they do want to know what it learned from looking at their data. Okay, the personalization engine recommends this product at this price to this customer. Why? What did it see that made it think this recommendation was best?
It doesn’t matter whether AI drives proven value if people fear outright for their jobs, as in, “This solution is going to replace me”, or fear long-term obsolescence, as in, “This solution is going to know more about our customers than I do – I’ve lost touch.” As soon as people fear for their jobs, you can bet there will be a backlash against the technology – an internal revolt that pushes back against an executive sponsor who can’t really defend the technology, because they don’t know how it works any better than the people using it do.
There is, however, a way around this future. It’s not to try to explain the depths of AI theory to business executives. It’s to use the machine to teach humans what it learned. And it’s interesting to hear from solution providers that they are getting exactly this kind of pushback – business users are starting to push back against the black box of the solution, not to understand how it works, but to understand what it learned. Why it recommended something. And providers are starting to put more windows on their black boxes – more exposure of the whys that went into recommendations, so that humans can learn alongside the machine.
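To make that idea concrete, here is a minimal, purely illustrative sketch of what a “window on the black box” can look like. Every feature name and weight below is hypothetical, and real personalization engines are far more complex; the point is only the shape of the output – instead of returning just a recommendation score, the engine also reports how much each learned signal contributed to it, in terms a merchandiser can read.

```python
# Illustrative sketch (hypothetical features and weights): a scoring model
# that exposes per-signal contributions, so a business user can see WHY a
# product was recommended, not just THAT it was recommended.

def explain_recommendation(features, weights):
    """Return (score, contributions), where contributions maps each
    learned signal to its share of the total recommendation score."""
    contributions = {name: features[name] * weights[name] for name in features}
    score = sum(contributions.values())
    return score, contributions

# Hypothetical signals a personalization engine might have learned
# about one customer/product pair.
customer_signals = {
    "bought_in_category_last_30d": 1.0,
    "price_sensitivity_match": 0.8,
    "viewed_similar_items": 3.0,
}
learned_weights = {
    "bought_in_category_last_30d": 2.5,
    "price_sensitivity_match": 1.2,
    "viewed_similar_items": 0.6,
}

score, why = explain_recommendation(customer_signals, learned_weights)
# Show the strongest reasons first - the "window" on the black box.
for name, value in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:+.2f}")
print(f"total score: {score:.2f}")
```

The design choice that matters here is the second return value: humans learn alongside the machine only if the “why” is surfaced in business vocabulary, not model internals.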
Will it be enough to prevent Silver Bullet Syndrome? To prevent an end-user backlash against AI? My thinking is yes, it will be. We’re in for a rough patch, but AI as a group of technologies offers too much potential value to kick it to the curb out of fear. But technology vendors need to do their part, by spending more time thinking not about how to get the machine to think like people, but about how people react to having to work alongside the machine.