The Candid Voice in Retail Technology: Objective Insights, Pragmatic Advice

More On AI & Related Technologies

AI (artificial intelligence), ML (machine learning), and NLP (natural language processing) are coming up a lot in the news and in conversation lately. At RSR, both Nikki Baird and I have been commenting on various aspects of their use, and last week I received two separate calls from industry colleagues who wanted to discuss (1) whether any retailers are deploying AI-based solutions yet (answer: yes), and (2) whether consumers will “buy into” AI and related technologies (answer: they already are, whether they know it or not).

Just like so many other technologies, once AI is given an easy-to-use front end and a set of practical capabilities, people will find a reason to use it. One recent example is office supply retailer Staples’ Easy System, a B2B offering that allows businesses to order products online or via a Wi-Fi-enabled red “easy button” featured in online ad campaigns. But it’s not just consumer ease of use that drives adoption; it’s also the ease with which companies can deploy the new technology for consumers to use. When it comes to AI and NLP, that deployment is getting easier with each passing day. Leave it to Amazon to figure out how to “democratize” the technology. A couple of months ago, the company announced Amazon Lex, a toolset that enables companies to build AI-driven “conversation bots” for their Amazon Web Services-hosted websites. This is a potentially disruptive offering, since it does two things: first, it makes very advanced technologies available to companies that could not otherwise afford them (I’m thinking specifically of less-than-Tier-1 retailers), and second, it puts pressure on Tier-1 retailers that don’t use Amazon technologies to pay big bucks for their own proprietary AI and NLP enablement.
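
To make the “democratization” point concrete, here is a minimal sketch – my own illustration, not anything Amazon or a retailer has published – of what querying a Lex-built conversation bot from a retailer’s website backend might look like, using the AWS SDK for Python (boto3). The bot name (“RetailOrderBot”), alias, and region are hypothetical placeholders for whatever a retailer would actually configure in the Lex console.

    import boto3

    # Connect to the Lex runtime service; the region is a placeholder.
    lex = boto3.client("lex-runtime", region_name="us-east-1")

    def ask_bot(user_id, text):
        """Send one shopper utterance to the bot and return its text reply."""
        response = lex.post_text(
            botName="RetailOrderBot",   # hypothetical bot defined in the Lex console
            botAlias="prod",            # hypothetical published alias
            userId=user_id,             # lets Lex keep conversation state per shopper
            inputText=text,
        )
        # Lex returns the recognized intent, any captured slots, and a reply message.
        return response.get("message", "")

    # Example: a shopper typing into a chat widget on the retailer's site.
    print(ask_bot("shopper-123", "I need to reorder printer paper"))

The point isn’t the specific code; it’s that a conversational front end that once required a dedicated data science team is now a few dozen lines against a hosted service.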

Avoiding ‘Whoops’

What we can glean from all the recent buzz about AI, NLP, and similar technologies is that they have jumped onto the consumer technology adoption curve; that is, although early-stage experimentation and adoption of any new technology may take years, once a technology becomes easier for consumers to use than to ignore, mass adoption happens very quickly. We’ve seen it over and over again; perfect examples are the Internet itself and “smart” mobile devices. But the dark side of recent rapid consumer technology adoption is that it comes at a cost, and sometimes it takes far longer for us to understand that cost than it did to adopt the technologies themselves. In the cases of the Internet and mobile devices, the cost is our privacy and the security of information about us and our lives – and we’re still collectively trying to figure out whether that is a big problem or just a fair tradeoff.

A couple of weeks ago in Retail Paradox Weekly, I mentioned that Tesla and SpaceX CEO Elon Musk has said that artificial intelligence could be “our greatest existential threat”. That quote comes from an interview Musk gave while attending the MIT Aeronautics and Astronautics Centennial Symposium in 2014. He is not alone in his concerns; cosmologist Stephen Hawking and Microsoft co-founder Bill Gates have also warned that while the potential benefits of AI adoption are huge, uncontrolled adoption of self-teaching AI could make humans too dependent for our own good. In a 2014 article, Professor Hawking wrote, “We cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. … In the medium term… AI may transform our economy to bring both great wealth and great dislocation.”

Unfortunately, we humans aren’t so good at anticipating the downsides of technology adoption. “Whoops” is the most dangerous word in IT. Avoiding “whoops” moments is why software development teams usually have an independent testing group – developers instinctively avoid doing the things that would “break” their creations, but those are exactly the things that bring systems to their knees in production. In our personal lives, it’s pretty much the same – the GPS in our smartphones allows faceless entities (your mobile carrier, for example) to track our movements. Do we know what they’re doing with the information? Of course not! It’s easier to ignore the concern and enjoy the benefit of location tracking in our mobile maps app.

With this very human tendency to shoot first and ask questions later in mind, Elon Musk, Stephen Hawking, and futurist Ray Kurzweil, along with some 2,000 other signatories, agreed in February on a list of 23 guidelines known as the Asilomar AI Principles. Most of the principles might seem a little esoteric to retailers, although a couple of them have obvious applicability (for example, “Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications”, and “Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data”).

So What’s The Point?

The reason for bringing these things to your attention is that, first, AI and its related technologies are very real and ready for prime time, but second, they take businesses to places where we haven’t been before. The vast majority of the signatories to the Asilomar AI Principles are academics, and while I’m not suggesting that you dive right in and get your name on the document, I do suggest that if you are considering applying AI to your business capabilities, you keep the discussion of the ethical issues associated with its adoption on your radar.

Here’s a link to help you get started: https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/

 


Newsletter Articles | February 14, 2017