As conversation bots become more common, interesting ethical questions are emerging.

How much should a bot know about a user? And how should its makers disclose that information?

Case in point: Matt Sparks at Learning Solutions describes an exchange with an adaptive conversation bot that Sparks found pleasant … until the bot responded with comments that made it clear the bot was reading Sparks’s Evernote files. Sparks notes that the bot’s terms of service never mentioned its ability to access Evernote or other apps.

“I would have been fine with this if I’d been made aware,” he says. The surprise, however, ruined the experience for Sparks: “I haven’t opened the app since.”

Then there’s the question of the intelligence itself. Bots that learn from conversations run the risk of picking up behaviors that shock or alienate users. For instance, Microsoft’s machine-learning-enabled Twitter chatbot Tay was shut down in 2016 after it incorporated several bigoted phrases into its vocabulary, says Glenn McDonald at InfoWorld.

Even if they’re not presenting users with unpleasant surprises, chatbots’ alliance with big data is already changing the way humans experience certain events. For instance, tennis fans can connect with Wimbledon Messenger, which uses artificial intelligence to access IBM’s Wimbledon Information Service (WIS) databases and answer audience questions. “We’re finding the technology is getting better and better as it receives more queries from people,” says Simon Boyden, chief architect for IBM at Wimbledon.

Not only can these conversation bots understand human queries; human users are also demonstrating tolerance for the bots’ learning curves. “Questions that are awkward or annoying coming from a brand are socially acceptable and even welcome in chatbot interactions,” says Adelyn Zhou of ThinkGrowth.org.

The combination of increased usefulness and high user tolerance creates a social niche for conversation bots in which they have permission to ask a wide range of questions — and in which they are likely to receive answers.

Companies using these bots must then ask: just because our conversation bots can, does that mean they should?

Ethical Guidelines for Data Collection

Because talking to a conversation bot can be “like talking to a friend,” according to Daniel Vido at Chatbot News Daily, users don’t always consider the ethical dimensions of the conversation or the information they’re providing. “One big question that people should ask but often don’t ask is ‘does this bot serve me, or the service provider?’” says Amir Shevat, VP of developer experience at Twitch.

One place for service providers to start is with communication. “Building trust between humans and machines is just like building trust between humans,” says IBM’s Trips Reddy. It requires transparency, not only about the conversation bot’s existence as a bot, but also about how the human conversation partner’s data will be used and stored.

Although clear guidelines about data or intellectual property ownership have yet to be established, Reddy says, companies can set clear boundaries by ensuring their bots communicate the organization’s data-sharing policies.
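
In practice, that communication can be built into the conversation itself. The sketch below shows one way a bot might disclose its data policy up front and ask permission before touching a connected app like Evernote; the policy text, function names, and prompt-based consent flow are hypothetical placeholders, not any particular framework’s API.

```python
# A minimal sketch of a consent-first disclosure flow. The policy text,
# function names, and the Evernote example are hypothetical placeholders;
# adapt them to whatever bot framework you actually use.

DATA_POLICY = (
    "This assistant stores chat transcripts for 30 days and reads data "
    "from connected apps only after you approve each one."
)

def request_app_access(app_name: str, ask_user) -> bool:
    """Ask for explicit permission before touching an external app."""
    answer = ask_user(
        f"May I read your {app_name} data to personalize my answers? (yes/no) "
    )
    return answer.strip().lower() in {"yes", "y"}

def start_conversation(ask_user=input):
    # Disclose the policy up front, before any data is collected.
    print(DATA_POLICY)
    if request_app_access("Evernote", ask_user):
        print("Thanks! I'll only read notes relevant to your questions.")
    else:
        print("No problem. I won't access Evernote.")

if __name__ == "__main__":
    start_conversation()
```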

Certain boundaries can be set by conversation bot creators. For instance, Justin Lee at GrowthBot notes that an AI-enabled bot can be limited to learning only the data relevant to its task. Many smart home devices, like robotic vacuum cleaners, already have such limits: the vacuum’s learning is typically focused on tasks like mapping where the furniture is.
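
A simple way to enforce that kind of limit is an allowlist of task-relevant fields, so off-task details never enter the bot’s training data. The snippet below is an illustrative sketch; the slot names and the filtering step are assumptions for the example, not any vendor’s implementation.

```python
# An illustrative sketch of task-scoped retention: only slots on an
# allowlist tied to the bot's task survive into its training data.
# The slot names below are invented for the example.

TASK_RELEVANT_SLOTS = {"order_id", "delivery_date", "product_name"}

def filter_for_learning(extracted_slots: dict) -> dict:
    """Drop anything off-task before it enters the learning pipeline."""
    return {k: v for k, v in extracted_slots.items() if k in TASK_RELEVANT_SLOTS}

slots = {
    "order_id": "A-1042",
    "delivery_date": "2024-06-01",
    "health_status": "recovering from surgery",  # sensitive and off-task
}
print(filter_for_learning(slots))
# -> {'order_id': 'A-1042', 'delivery_date': '2024-06-01'}
```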

Privacy is another area of concern for people who interact with conversation bots, particularly when the bots’ access to user data is far-reaching, says Joe Amditis, associate director of the Center for Cooperative Media at Montclair State University. “Users need to know that the questions they ask and the interactions they have with your bots will remain private and secure.”
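
Backing up that promise usually means, among other measures, encrypting conversation logs at rest. Below is a minimal sketch using the third-party cryptography package; key handling is deliberately simplified (a real deployment would load the key from a secrets manager, never store it alongside the data).

```python
# Minimal sketch: encrypt a chat transcript at rest with Fernet
# (symmetric, authenticated encryption) from the `cryptography` package.
# Key handling is simplified for illustration only.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, load from a secrets manager
cipher = Fernet(key)

transcript = "user: where is my order?\nbot: it ships tomorrow."
token = cipher.encrypt(transcript.encode("utf-8"))  # store this, not the text

# Later, an authorized process can recover the transcript:
restored = cipher.decrypt(token).decode("utf-8")
assert restored == transcript
```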

These privacy concerns take on an additional dimension when conversation bots can have more intuitive, adaptive conversations — particularly involving a user’s emotional state.

“Emotional engagement and the level of empathy provided by the AI are the factors that may encourage one to reveal bigger volumes of private information, including health information, sexual orientation, and habitual circumstances,” write Daniel Dimov and Rasa Juzenaite of Dimov Internet Law Consulting.

Because these categories of information aren’t on the list of data users are primed to protect, the way Social Security numbers and bank information are, they may be even easier for an emotionally sensitive conversation bot to extract from its human conversation partners.

Conversation Bots and Big Data: Opportunities and Challenges

Because they act autonomously, conversation bots can behave in ways that are neither foreseen nor condoned by their creators, Maya Medeiros at Norton Rose Fulbright writes. To curtail the risk of a conversation bot making a serious blunder, creators must consider the ethical boundaries of the bot’s behavior and how to maintain those boundaries without compromising its work.
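
One common boundary-keeping pattern is to screen everything the bot is about to say, and everything it is about to learn, before it ships. The keyword filter below is a deliberately crude sketch; production systems generally rely on trained moderation models rather than blocklists, but the placement of the check is the point.

```python
# A deliberately crude guardrail sketch: screen candidate replies before
# they reach users, and never feed rejected text back into the bot's
# learned vocabulary. The blocklist entries and fallback message are
# placeholders; real systems typically use a trained moderation classifier.

BLOCKED_TERMS = {"offensive_term_1", "offensive_term_2"}  # placeholders
FALLBACK = "I'm not able to respond to that."

def is_acceptable(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def respond(candidate_reply: str) -> str:
    # The check sits between generation and delivery, so a blunder is
    # caught before anyone sees it.
    return candidate_reply if is_acceptable(candidate_reply) else FALLBACK
```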

Artificial intelligence and machine learning have made it simpler than ever to analyze and leverage collected data, while advances in computer technology have made it easier to collect ever-larger sets of data to analyze. Conversation bots are poised to play a key role in the process.

“Chatbots are a perfect collector of big data,” Alec Sears writes at DataFloq. “They can handle large numbers of inquiries at once (volume), manage those inquiries, categorize the information at a much greater speed than humans (velocity), and gather a tremendous range of information (variety).”
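
A toy example makes those three Vs concrete: many inquiries handled concurrently (volume and velocity), each tagged with a category as it arrives (variety). The categories and keywords below are invented for illustration.

```python
# Toy illustration of the "three Vs" described above: concurrent handling
# of inquiries, each tagged with a category. Categories are invented here.

import asyncio

CATEGORIES = {
    "billing": {"invoice", "charged", "refund"},
    "support": {"error", "crashed", "broken"},
}

def categorize(inquiry: str) -> str:
    words = set(inquiry.lower().split())
    for category, keywords in CATEGORIES.items():
        if words & keywords:
            return category
    return "general"

async def handle(inquiry: str) -> tuple:
    await asyncio.sleep(0)  # stand-in for real work (lookups, model calls)
    return inquiry, categorize(inquiry)

async def main():
    inquiries = ["Why was I charged twice?", "The app crashed on login"]
    for inquiry, category in await asyncio.gather(*(handle(q) for q in inquiries)):
        print(f"{category}: {inquiry}")

asyncio.run(main())
```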

Conversation bots’ ability to analyze vast amounts of data quickly offers a number of benefits for organizations seeking to improve their internal processes, Nadiia Sheveleva argues at Big Data Made Simple. When the same tools are used to communicate with an organization’s clients or customers, however, larger concerns arise.

As companies partner with conversation bots to conduct business, help visitors, and gather information, humans must assume responsibility for overseeing the bots’ ethical codes. Conversation bots are getting better at learning, which means they can learn to be good citizens, provided they have the information they need to do so.