Techspace - Alphabet Inc. is cautioning employees about how they use chatbots, including its own Bard, even as it markets the program around the world, four people familiar with the matter said.
The Google parent company has instructed its employees not to enter confidential material into AI chatbots, a policy the people described and the company confirmed.
The policy is intended to protect sensitive information, in line with the company's long-standing commitment to safeguarding data.
Chatbots such as Bard and ChatGPT use generative artificial intelligence to hold human-like conversations with users and respond to a wide range of prompts. However, because user inputs can be absorbed during training, these systems may later reproduce data they have ingested, creating a potential data leak risk.
Alphabet has also advised its engineers to avoid directly using computer code that chatbots can generate, some of the people familiar with the matter said.
The company acknowledges that Bard can make undesired code suggestions but maintains that it still helps programmers, and says it aims to be transparent about the limitations of its technology.
These concerns show how Google is working to avoid harming its business with software it launched to compete with ChatGPT. At stake in Google's race against ChatGPT's backers, OpenAI and Microsoft, are billions of dollars of investment and still-untapped advertising and cloud revenue from new AI programs.
Google's caution also reflects what is becoming a security standard for corporations: warning employees against using publicly available chat programs.
A growing number of businesses around the world, including Samsung, Amazon, and Deutsche Bank, have put guardrails on AI chatbots.
About 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including from top US-based companies, conducted by the networking site Fishbowl.
By February, Google had told staff testing Bard ahead of its launch not to give the chatbot internal information. Now, Google is rolling Bard out to more than 180 countries and in 40 languages, positioning it as a springboard for creativity, and its warnings extend to the code suggestions Bard provides.
Google said it has had detailed discussions with Ireland's Data Protection Commission and is addressing regulators' questions. This follows a Politico report that Google was postponing Bard's launch in the European Union this week, pending further information on the chatbot's impact on privacy.
Concerns around sensitive information
Such technology can draft emails, documents, and even software itself, promising to speed up tasks significantly. However, this content can include misinformation, sensitive data, or even copyrighted passages from a "Harry Potter" novel.
On June 1, Google updated its privacy notice to include the statement: "Do not include confidential or sensitive information in your Bard conversations."
Several companies have developed software to address this problem. For example, Cloudflare (NET.N), which defends websites against cyberattacks and offers other cloud services, markets a capability for businesses to tag and restrict some data from flowing externally.
Google and Microsoft are also offering conversational tools to business customers that carry a higher price tag but refrain from absorbing data into public AI models. By default, Bard and ChatGPT save users' conversation history, which users can choose to delete.
Yusuf Mehdi, Microsoft's consumer chief marketing officer, said it makes sense for companies to prevent their staff from using public chatbots for work purposes. Companies are "taking a suitably conservative viewpoint," he said, noting that Microsoft's policies for its enterprise software are much stricter than those for its free Bing chatbot.
Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a different executive there told Reuters that he personally restricted his use of them.
Matthew Prince, CEO
of Cloudflare, said that typing confidential matters into chatbots was like
"turning a bunch of PhD students loose in all of your private
records."