Gen AI chatbots have changed how we work. Today, we bring you the second installment of Tech News Briefing’s special series “Chatbot Confidential,” where WSJ personal tech columnist Nicole Nguyen details best practices for using the chatbots while protecting your privacy. In this episode, we dive into an area where the temptation to tap generative AI tools is very strong: work! We’ll tell you about the risks your new helper brings and how not to give away company secrets while using them.
Sign up for the WSJ's free Technology newsletter.
Learn more about your ad choices. Visit megaphone.fm/adchoices
[00:00:02] It's safe to say work will never be the same again now that generative artificial intelligence is in the picture. People have used AI chatbots for all sorts of tasks. I use it on a daily basis for engineering and for modeling and simulation, which is really my job. Sometimes for fun stuff, like, oh, tell my son a story while we're driving in the car to keep him busy.
[00:00:27] I'm pretty busy myself. The AI is my supporter. Because of the AI, I can now handle juggling more than eight organizations and three leadership roles, and now I'm building my own venture company. Those were Wall Street Journal readers Ashraf Zaid and Ian Yang. And I'm personal tech columnist Nicole Nguyen. Today we bring you the second installment of our special Tech News Briefing series, Chatbot Confidential,
[00:00:56] where we look at whether it's possible to protect your privacy and keep your personal data safe when using generative AI chatbots like ChatGPT and Claude. In this episode, we dive into an area where the temptation to tap gen AI tools is very strong: work. We'll give you the lowdown on the risks your new helper brings and how not to give away company secrets while using them.
[00:01:25] Before we dive in, real quick, a reminder that we want to hear from you. Do you have questions about using AI and privacy? Send us a voice memo at tnb@wsj.com or leave us a voicemail at 212-416-2236. One more time, that's 212-416-2236. I'll be back in a future episode to answer your questions. Alright, back to the show.
[00:01:57] After ChatGPT first came onto the scene, many companies were quick to ban the chatbot. Still, one in five U.S. workers said they used ChatGPT for work in 2024, according to the Pew Research Center. That's more than double compared to the year before. It's easy to understand why. Chatbots can take on some of your work, saving you time. The most common use cases? Research, writing first draft emails, and creating presentations.
[00:02:25] Before we get into it, News Corp, owner of The Wall Street Journal and Dow Jones Newswires, has a content licensing partnership with ChatGPT maker OpenAI. Another Pew survey found that about 16% of respondents say they do at least some of their work with AI. And a quarter say that, while they're not using it much now, at least some of their work could be done with AI. So with AI use growing in the workplace, what are some risks employees, and their employers,
[00:02:55] should keep in mind when it comes to these large language models, or LLMs? Steven Rosenbush is the chief of the Enterprise Technology Bureau at WSJ Pro. Companies are very familiar with a certain kind of LLM risk. They're familiar with this idea that the LLMs might make poor decisions in a very convincing way, that they might hallucinate, that they might be biased in some way.
[00:03:24] But they're not too focused on this idea that the LLM could present an actual cybersecurity threat. And security pros have two names for that threat. Outbound, as in a data leak. And inbound, as in generating compromised code or recommending malicious software. Steven explains. The outbound is somewhat more familiar. This is a cybersecurity threat in which there's a risk of data exposure, either intentionally or unintentionally.
[00:03:55] In March 2023, a bug in ChatGPT allowed some users to see what other people initially typed in their chats. OpenAI also said the users' first and last names, email addresses, and payment information were exposed. OpenAI said it is committed to user privacy and keeping users' data safe.
[00:04:15] But there's also an inbound threat, in which companies could be at risk of importing not just compromised data, but actual compromised software through an LLM. Steven says such threats are bound to multiply, especially as more gen AI tools flood the market. As the tech is still so new, and technology advances at a much faster clip than the government's ability to enact policy, most companies are on their own. At least for now.
[00:04:44] It reminds me of the early days of cloud computing, when many companies were moving to the cloud and they didn't really fully appreciate the risks that were sort of hidden in the system. There was so much technical work to be done. They didn't have real visibility, and they didn't really understand what the cloud providers were responsible for, what they were responsible for, or how to make sure that everyone was living up to that bargain.
[00:05:11] So I think that over time we'll see a similar shared responsibility model take shape when it comes to LLMs. Right now, let's say that the dial, that the share that falls on the company itself is pretty close to 100%. And amplifying the risk? Companies are made up of hundreds, sometimes thousands of people. And with that, points of potential failure abound.
[00:05:38] So who's responsible for making sure a company isn't at risk when employees engage with new online tools? When we come back, we'll hear from a chief information officer on how she's handling the use of gen AI in her workplace. That's after the break.
[00:05:54] Since the advent of ChatGPT and other gen AI tools, security chiefs at companies have had to figure out how to mitigate risks. And it's not just cyber breaches they have to worry about. Generative AI brings with it a unique challenge.
[00:06:22] It's easy for employees to inadvertently spill company secrets, like confidential or proprietary information. And this has happened already. According to Bloomberg, Samsung banned the use of ChatGPT and other AI-powered chatbots after sensitive internal source code was accidentally leaked to ChatGPT by an engineer. And the Wall Street Journal reported that Apple has restricted external AI tools for some employees as it develops its own similar technology.
[00:06:52] Documents viewed by the Journal showed that the iPhone maker is concerned workers could release confidential data. So how are company leaders addressing this? Kathy Kay is the CIO of the global financial company Principal Financial Group. We actually have locked down any of the public chatbots. If somebody wants to use them, we have a whole workflow that will say, what's your business rationale? And then there's an approval.
[00:07:21] They have to take a quick training. Their leader has to take a training. Kay says they've also signed agreements for enterprise technology to use at the company, like their own chatbot that people can use. That provides a lot of protections around making sure that they're the only ones who are leveraging the data that they have access to, things like that.
[00:07:44] For those that do go outside, we do track the interactions they're having with the external bots. So some bosses can see everything you've typed in on a company device. But do they look? That's a discussion for another time. Uploading a client contract, composing an internal email, generating a chart with undisclosed financial data.
[00:08:07] Getting an unauthorized bot to do any of that could land you and your company in hot water if that data is leaked, or absorbed as part of the model's training dataset. Kay says if company secrets do get out, there's a system in place to deal with the fallout. We have a whole playbook of who do we immediately include, how do we assess the impact of that, were customers impacted.
[00:08:33] But the best failsafe for companies, she says, is to work with the new tech, train up their staff, and trust employees. With any new technology, you have to find ways for safely allowing employees to try these things, right?
[00:08:52] Because if not, if you make it so hard for them to try these things, they're going to make mistakes going around all the blockage, right? And so my philosophy is, how do I make a safe environment for employees to try these things, such that they're learning and we're coming up with new ways of using it?
[00:09:16] As Kay suggests, people will keep coming up with new ways to use these tools, like getting medical advice. Next week, we'll tell you about using chatbots in your personal life, specifically health, and how to do it without compromising your privacy. And that's it for this episode of Tech News Briefing's special series, Chatbot Confidential. Today's show was produced by Julie Chang. I'm your host, Nicole Nguyen.
[00:09:46] We had additional support from Wilson Rothman and Catherine Millsop. Shannon Mahoney mixed this episode. Our development producer is Aisha Al-Muslim. Scott Salloway and Chris Zinsley are the deputy editors. And Felana Patterson is the Wall Street Journal's head of news audio. Thanks for listening.

