When ChatGPT debuted in November 2022, it changed how people think and feel about AI adoption. Workers in nearly every industry began reimagining how they accomplish their daily tasks, and the cybersecurity industry was no exception.
Like shadow IT, rogue AI tools (AI tools that employees adopt without their organization's knowledge or approval) can pose security risks to your organization. If you haven't already, now is the time for CISOs to think critically about AI adoption. A new survey conducted by Devo and Wakefield Research found that 80% of IT security professionals use unauthorized AI tools despite suspecting their company would disapprove.
So, what should CISOs consider? And how can they enable their SOC teams to embrace AI without putting the organization’s security posture at risk? Let’s dig into it.
What’s the Risk with Rogue AI?
We've seen many cybersecurity use cases for AI tools such as ChatGPT cropping up online, and for red teamers and blue teamers alike, the possibilities are appealing. The infosec community has started using ChatGPT for malware analysis, setting up alerts within SIEMs, refactoring code, and much more. The Devo and Wakefield Research survey also found that the number of IT security pros who use AI for incident response rose to 37% in 2023, up 9 points from 28% in 2022.
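To make that kind of ad hoc usage concrete, here's a minimal sketch of an analyst sending a suspicious script to a hosted LLM for a first-pass triage summary. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name, prompt, and sample payload are all illustrative, not a recommended workflow.

```python
# A minimal sketch of ad hoc LLM-assisted malware triage.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# The model name and the sample script below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical artifact pulled from an alert (truncated for brevity).
suspicious_script = r"""
powershell -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoA...
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a malware analysis assistant."},
        {"role": "user", "content": "Explain what this script does and list any "
                                    "indicators of compromise:\n" + suspicious_script},
    ],
)

print(response.choices[0].message.content)
```

Note what happens in that sketch: the full contents of suspicious_script, which could just as easily be proprietary source code or customer data, leave the corporate boundary and land with a third party. That is exactly the exposure the questions below are meant to surface.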
While these use cases can boost productivity, they should set off alarm bells for CISOs. What potentially sensitive information are employees feeding into these tools? How is that information stored and accessed? Is it shared with other third parties? These are important questions to ask to understand whether your organization is taking on unnecessary risk.
It’s Time to Create Your AI Action Plan
If you're just starting to have conversations about how to handle the adoption of these AI tools, you can begin putting together your AI action plan by:
- Updating your acceptable use policies to account for AI tools such as ChatGPT.
- Beginning to understand why your security teams are using these tools so you can make corporate-approved investments that meet their needs. This way, your teams get the capabilities they want, and you can conduct the proper due diligence.
- Updating your data loss prevention (DLP) processes and policies to capture instances where corporate data may be uploaded to an AI tool or site (see the sketch after this list for one way such uploads might be flagged).
- Ensuring your customer-facing privacy statements account for if and when AI is used, the purpose of that processing, and the use cases it serves. You should also allow customers to opt out of this type of processing where possible.
- Reviewing your company's risk appetite. Is it acceptable for employees to use this technology? Would doing so breach any existing contracts, security certifications, audits, terms and conditions, or licenses?
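For the DLP item above, here's a minimal sketch of an egress check that flags requests sending sensitive-looking content to known AI endpoints. The domain list and detection patterns are illustrative assumptions; a real deployment would run behind your proxy or CASB and use your organization's actual classifiers.

```python
# Minimal sketch of a DLP-style egress check for AI tool endpoints.
# The domains and patterns below are illustrative assumptions, not a
# vetted blocklist or a production-ready classifier.
import re

# Hypothetical list of generative-AI endpoints to watch.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

# Hypothetical patterns for data that should never leave the boundary.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like strings
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # embedded credentials
]

def flag_ai_upload(dest_host: str, payload: str) -> bool:
    """Return True if the request targets a watched AI endpoint and its
    payload matches a sensitive-data pattern."""
    if dest_host not in AI_DOMAINS:
        return False
    return any(p.search(payload) for p in SENSITIVE_PATTERNS)

# Example: this request would be flagged for review.
print(flag_ai_upload("api.openai.com", "api_key = sk-live-12345"))
```

Even a simple check like this gives you visibility into how often corporate data is flowing to AI tools, which is the evidence you need to tune policy rather than guess.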
By taking these steps, you can empower your SOC teams to take advantage of AI while mitigating risk and keeping data secure.
You can read more findings from this survey in this blog post, Why Security Leaders are Betting on Automation.