Detecting Data Leaks from
Employee Use of ChatGPT

Use of AI as a service (AIaaS), including generative AI tools such as OpenAI's ChatGPT, is skyrocketing as individuals and organizations recognize the enormous productivity benefits these services offer. Even though the technology is not yet well understood, many large enterprises are committing significant resources to adopting it.

“Gartner®’s 2023 CIO and Technology Executive Survey asked respondents which technologies they considered most likely to be implemented by 2025. The highest ranked response among respondents in the U.S. was AI, selected by 92%. AI ranked similarly high in many other geographic markets.”[1]

The Downside

Beyond the ethical concerns associated with AIaaS, organizations using these tools have learned some hard lessons about data leaks and intellectual property (IP) risk. Until now, they haven’t had an easy way to audit employee use and potential misuse of these tools.

The ExtraHop Reveal(x) network detection and response (NDR) platform gives customers visibility into employee use of ChatGPT, helping them better understand their potential risk exposure. A simplified illustration of this kind of network-level signal appears below.
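For illustration only, the sketch below shows one way network telemetry could surface potential ChatGPT-related data movement: matching observed connections against well-known ChatGPT/OpenAI endpoints and flagging unusually large outbound transfers. The record schema, domain list, and byte threshold are assumptions made for this example; they do not represent ExtraHop's detection logic or the Reveal(x) API.

```python
# Illustrative sketch only: flag outbound connections whose TLS SNI or DNS
# name matches well-known ChatGPT/OpenAI endpoints and whose outbound byte
# count is large, a rough proxy for pasting internal data into the tool.
# All names and thresholds here are assumptions for demonstration.

from dataclasses import dataclass
from typing import Iterable, List

# Domains commonly associated with ChatGPT / the OpenAI API (assumed list).
CHATGPT_DOMAINS = ("chat.openai.com", "chatgpt.com", "api.openai.com")


@dataclass
class ConnectionRecord:
    """One observed outbound connection (hypothetical, simplified schema)."""
    client_ip: str
    server_name: str   # TLS SNI or resolved DNS name
    bytes_out: int     # payload bytes sent by the client


def flag_chatgpt_traffic(records: Iterable[ConnectionRecord],
                         upload_threshold: int = 50_000) -> List[ConnectionRecord]:
    """Return connections to ChatGPT endpoints whose outbound volume exceeds
    the threshold, so an analyst can review which clients may be uploading data."""
    return [
        rec for rec in records
        if rec.server_name.endswith(CHATGPT_DOMAINS) and rec.bytes_out > upload_threshold
    ]


if __name__ == "__main__":
    sample = [
        ConnectionRecord("10.0.4.17", "chat.openai.com", 120_000),
        ConnectionRecord("10.0.4.22", "example.com", 900_000),
    ]
    for rec in flag_chatgpt_traffic(sample):
        print(f"Possible ChatGPT upload: {rec.client_ip} -> {rec.server_name} ({rec.bytes_out} bytes)")
```

In practice, an NDR platform would correlate many more signals than a single byte count, but the example conveys the basic idea of auditing employee use of ChatGPT from observed network traffic.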


[1] Gartner, Applying AI - Key Trends and Futures, Bern Elliott, Jim Hare, Frances Karamouzis, 25 April 2023.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.