5 ways to keep virtual assistants from sharing your company’s secrets

Feature
Apr 19, 2017 | 6 mins
Careers, Security

Consumers love talking to Alexa, Siri, Cortana and Google Now. But what should CIOs be doing to prepare for the growing use of virtual assistants among their employees?


Virtual assistants like Apple’s Siri, Microsoft’s Cortana and Google Now have the potential to make enterprise workers more productive. But do “always listening” assistants pose a serious threat to security and privacy, too?

Nineteen percent of organizations are already using intelligent digital assistants, such as Siri and Cortana, for work-related tasks, according to Spiceworks’ October 2016 survey of 566 IT professionals in North America, Europe, the Middle East and Africa. The survey also found that 46 percent of organizations plan to adopt intelligent assistants within five years.

Currently, about 500 million people use voice-activated digital assistants, with 1.8 billion people expected to do so by 2021, according to a Bing/iProspect study. Because workplace technology tends to follow consumer tech trends, voice-enabled virtual assistants are expected to become more common at the office, a venture capitalist recently noted in The Wall Street Journal.

But when asked about their biggest concerns related to virtual assistants and AI, IT professionals most often point to security and privacy issues (48 percent), according to Spiceworks.

While “virtual assistants deliver tremendous convenience and value, these devices introduce new security and privacy challenges,” notes Merritt Maxim, senior analyst, security and risk, for research firm Forrester. “On one level, they could be compromised for other purposes, such as what happened with the Mirai botnet. Or the devices themselves could be compromised either for malicious purposes, such as collecting data, or merely to prove a vulnerability.” Recent news reports about how the CIA may be able to turn a Samsung Smart TV into a listening device haven’t exactly dispelled such concerns.

If you’re planning to integrate virtual assistants into your organization, here are five recommendations and best practices from security professionals to consider.

1. Focus on user privacy

Virtual assistant developers typically work at large tech companies with the ability to turn their attention to protecting users, rather than just “getting product out the door and selling it before someone else does,” says Will Ackerly, co-founder and CTO of Virtru, an email encryption and data security firm. He says that, over time, privacy “will become a premium feature and differentiator” for virtual assistant products.

In the meantime, manufacturers should take steps to safeguard their users by moving more intelligence to their devices and allowing users to maintain control over where their data goes and how it’s protected, Ackerly advises.

2. Develop a policy

Assume all devices with a microphone are always listening, says Bill Anderson, who worked on security for BlackBerry and Palm and is now CEO of mobile enterprise security firm OptioLabs. Even if a device has a button to turn off the microphone, it could still be recording audio as long as it has a power source, he warns.

Also, remember that virtual assistants may store voice searches and requests on the vendor’s servers. “What would it take to breach that data?” Anderson asks. He points to the recent Yahoo email hack as an example of how, in a worst-case scenario, an attacker could obtain a “head start on logging into my other accounts and getting access to this (voice recording) data.”

To bolster privacy and security, an enterprise should start with the big picture of where its potential vulnerabilities lie, Anderson advises. “Ask yourself, does your organization recognize that there are multiple microphones in every office already? Have you thought about the privacy and security of your in-office PBX phone system? How do you know the firmware in your office phones hasn’t been hacked? How do you know your laptops aren’t running third-party monitoring software? Are there other devices in the office (that record voice)? Do you understand how they got there and what their function is?”  

Once you have a full picture, develop a policy that reflects “how much trouble you want to go through to ensure regular conversations aren’t being leaked unintentionally,” says Anderson. “And if you care about regular conversations on existing equipment (like PBX phone systems), then you’re ready to think about a policy for virtual assistants.”

3. Treat virtual assistant devices like any IoT device

IT “should treat virtual assistant devices just like any other IoT device that records sensitive information and sends it to a third party,” says Marc Laliberte, information security threat analyst for security firm WatchGuard Technologies. “These devices should not be operational in locations where potentially sensitive information is verbally passed. Furthermore, IoT devices should be segmented from the rest of the corporate network to provide additional protections if they become compromised.”

One way to segment IoT devices from the corporate network is to connect them to a guest Wi-Fi network, which doesn’t provide access to internal network resources, notes Matias Woloski, CTO and co-founder of Auth0, a universal identity platform.
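As a rough illustration of that advice, the sketch below audits whether known assistant and IoT devices actually sit on the guest segment rather than the corporate LAN. The subnet ranges, device names and addresses are hypothetical placeholders, not values tied to any vendor mentioned here; a real deployment would pull the inventory from DHCP leases or a network access control system.

```python
import ipaddress

# Hypothetical subnets -- adjust to your own addressing plan.
CORPORATE_NET = ipaddress.ip_network("10.0.0.0/16")
GUEST_NET = ipaddress.ip_network("192.168.100.0/24")

# Hypothetical inventory of voice-assistant / IoT devices and their current IPs.
IOT_DEVICES = {
    "lobby-echo": "192.168.100.21",
    "conference-room-home": "10.0.4.17",  # misplaced on the corporate LAN
}

def audit_segmentation(devices):
    """Flag any device that is not on the guest (segmented) network."""
    for name, addr in devices.items():
        ip = ipaddress.ip_address(addr)
        if ip in CORPORATE_NET:
            print(f"WARNING: {name} ({addr}) is on the corporate network")
        elif ip in GUEST_NET:
            print(f"OK: {name} ({addr}) is on the guest network")
        else:
            print(f"UNKNOWN: {name} ({addr}) is on an unrecognized segment")

if __name__ == "__main__":
    audit_segmentation(IOT_DEVICES)
```

Run periodically, a check like this turns the segmentation policy into something measurable rather than a one-time network-design decision.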

4. Decide on BYO or company-owned

The BYOD craze of the past few years will inevitably extend to voice-enabled virtual assistant devices such as Google Home and Amazon’s Echo, says David Fapohunda, director of PwC’s financial crimes unit, which deals with cybersecurity.

“We’ll see these devices become more commonplace in both personal and corporate applications over the next two years,” says Fapohunda. “Unlike today’s versions, they’ll become more personal, able to identify who the user is by voice recognition, a wearable or potentially something like Wi-Fi acoustics imaging. The next-generation versions will be tailored to each individual and will truly provide deeper contextual responses and automate many manual tasks. A digital assistant will read your emails, voicemails, and even communicate to other virtual assistants in your personal and business network. At this point, the assistants will become a critical business tool like email.”

But this scenario in the not-too-distant future introduces a challenge for the enterprise. Do you allow BYO virtual assistants? Or should the company own and manage them? The question will spark more discussion than it has thus far “as C-suite executives refine their next operational expense reduction or workforce optimization strategies,” says Fapohunda. Since personal virtual assistants “rely on the cloud to comprehend complex commands, fetch data or assign complex computing tasks to more resources,” their use in the enterprise raises issues about data ownership, data retention, data and IP theft, and data privacy enforcement that CISOs and CIOs will need to address, he adds.

5. Plan to protect

Devices such as Google Home and Amazon’s Echo are designed for homes, not workplaces, notes Will Burns, senior vice president of cybersecurity firm root9B. “While their presence makes certain activities convenient, they’re not geared towards enterprise actions or security and bring with them a number of unresolved security questions,” he adds. “If there is one deployed within the corporate environment, I wouldn’t tie it to a corporate account used for purchasing. I’d minimize its internet access to the specific time when it’s being used for corporate activities, and I’d monitor its network activity. Just like many corporations today now put a cover on their webcams when not in use, there should be similar protections on the microphone for your digital assistant.”
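To illustrate the kind of time-boxing and monitoring Burns describes, here is a minimal sketch of a scheduled job that decides whether the assistant’s outbound access should be allowed at the current moment and logs that decision. The business-hours window, the device’s address and the idea of pushing the result to a firewall are all assumptions made for illustration; the script itself only computes and logs the policy outcome.

```python
from datetime import datetime, time

# Hypothetical policy: the assistant gets outbound access only during business hours.
ALLOWED_START = time(8, 0)    # 8:00 a.m.
ALLOWED_END = time(18, 0)     # 6:00 p.m.
ASSISTANT_IP = "192.168.100.21"  # hypothetical guest-network address of the device

def access_allowed(now=None):
    """Return True if 'now' falls on a weekday inside the approved usage window."""
    now = now or datetime.now()
    return now.weekday() < 5 and ALLOWED_START <= now.time() <= ALLOWED_END

def apply_policy():
    """Compute the action a firewall-management job would apply for the device."""
    action = "allow" if access_allowed() else "block"
    # In practice the decision would be pushed to your firewall or access point;
    # here it is only logged so the sketch stays vendor-neutral.
    print(f"{datetime.now().isoformat()} policy={action} host={ASSISTANT_IP}")
    return action

if __name__ == "__main__":
    apply_policy()
```

The logged output also gives you a simple audit trail of when the device was permitted to reach the internet, which complements the network-activity monitoring Burns recommends.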