Ever since the inception of ChatGPT, practitioners have been trying to incorporate machine learning and generative AI into their cybersecurity processes. But how can AI both help and hurt cybersecurity?
How AI helps us:
Stop breaches faster and cheaper:
According to a report released by IBM, organizations that use AI and automation in their cyber defense tools experience significantly reduced effects from a data breach. Companies that don't deploy these technologies face a data breach lifecycle that is, on average, roughly 100 days longer than companies that do (322 vs. 214 days). Additionally, organizations that use automation save about $1.8 million more during information breaches.
How Fordham Uses Machine Learning for Cybersecurity:
Increasingly, antivirus software is leveraging AI to detect suspicious activity earlier. CrowdStrike Falcon, the antivirus software used on all university devices, utilizes machine learning trained on 2 trillion instances of malicious activity to detect when malware is being run on a Fordham-owned device. CrowdStrike has also recently implemented AI-generated indicators of attack (IOAs), which work with pre-existing sensors to analyze events in real time based on past observed malicious behaviors.
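To give a rough sense of how machine-learning-based detection works, here is a minimal, purely illustrative sketch, not CrowdStrike's actual system: a classifier is trained on labeled examples of process behavior and then scores new events in real time. The feature names, training data, and alert threshold below are all invented for the example.

```python
# Illustrative only: a toy classifier that flags suspicious process events.
# Real EDR products train on trillions of events and far richer features;
# the feature set and data below are invented for demonstration.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-event features:
# [processes spawned/min, files modified/min, outbound connections/min, ran from temp dir (0/1)]
X_train = [
    [2,  5,   1, 0],   # normal office application
    [1,  3,   0, 0],   # normal office application
    [40, 300, 2, 1],   # ransomware-like mass file modification
    [12, 4,  90, 1],   # beaconing / exfiltration-like behavior
]
y_train = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new event as it arrives: higher probability means more suspicious.
new_event = [[35, 250, 5, 1]]
risk = model.predict_proba(new_event)[0][1]
if risk > 0.8:
    print(f"ALERT: event flagged as likely malicious (score={risk:.2f})")
```

The point of the sketch is simply that detection becomes a statistical judgment learned from past malicious behavior, rather than a fixed list of known malware signatures.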
How AI heightens cybersecurity threats:
From Novice to Serious Threat:
Attackers who would once have been novice cybercriminals are now using AI to advance their hacking capabilities. WormGPT and FraudGPT are two malicious alternatives that bad actors are using to write more convincing phishing emails and more sophisticated malware scripts. Both applications are sold on the dark web as jailbroken versions of ChatGPT that answer prompts without restrictions. However, even without the dark web, it is fairly easy to bypass ChatGPT's content restrictions and engineer its answers for malicious purposes.
Potential Data Leaks:
According to an article by Forbes, AI, and large language models in particular, poses a threat to the security of an organization's data if its use isn't regulated by administration. While AI can become a vital assistant to employees, if sensitive information isn't properly secured, it can lead to self-inflicted data leaks. For ChatGPT, information from conversations with the model is stored and used for future training unless the user turns that feature off. Without standards for how employees can use AI, they can expose organizational data to more possible breaches.
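One simple form such a standard can take is scrubbing obviously sensitive values before a prompt ever leaves the organization. The sketch below is a minimal, hypothetical illustration of that idea using regular expressions; a real deployment would rely on proper data-loss-prevention tooling and a vetted pattern set rather than this handful of example rules.

```python
# Illustrative only: strip a few obviously sensitive patterns from text
# before it is sent to an external LLM. Real data-loss-prevention tools
# cover far more cases; these patterns are just examples.
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),        # credit-card-like numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),  # email addresses
]

def redact(prompt: str) -> str:
    """Return the prompt with sensitive-looking substrings replaced."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Student jane.doe@fordham.edu, SSN 123-45-6789, asked about billing."))
# -> "Student [REDACTED-EMAIL], SSN [REDACTED-SSN], asked about billing."
```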
It’s a delicate balance, but organizations must embrace new technologies while enforcing risk management to allow for innovation and security in the workplace.