
New report analyzes artificial intelligence market


Security teams are relying ever more heavily on artificial intelligence and automation to manage a complex cybersecurity landscape, an approach that carries drawbacks if not properly managed. Daniel dos Santos, Senior Director of Security Research at Forescout’s Vedere Labs, explained that generative AI helps make sense of large amounts of data in more natural ways than was previously possible. AI and machine learning models are now routinely used to help security tools categorize malware variants and detect anomalies, according to ESET CTO Juraj Malcho.
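
To make that concrete, a minimal sketch of the kind of anomaly detection Malcho describes might use an off-the-shelf model such as scikit-learn's IsolationForest over simple network-traffic features; the feature set, values, and thresholds below are illustrative assumptions, not either vendor's implementation.

```python
# Minimal sketch of ML-based anomaly detection on network telemetry.
# Feature choices and values are illustrative assumptions, not a vendor pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, connection_count, distinct_ports]
rng = np.random.default_rng(0)
baseline_traffic = rng.normal(
    loc=[5_000, 20_000, 40, 6], scale=[500, 2_000, 5, 1], size=(1_000, 4)
)

# Fit on "normal" traffic so later outliers stand out.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline_traffic)

# A burst of outbound data to many distinct ports looks nothing like the baseline.
suspicious_flow = np.array([[250_000, 1_000, 300, 90]])
print(model.predict(suspicious_flow))  # -1 means the flow is flagged as anomalous
```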

Malcho emphasized the need for manual moderation to reduce threats: purging problematic data and feeding cleaner datasets back in so the AI models are continuously retrained. He noted that AI helps security teams manage the onslaught of data generated by firewalls, network monitoring equipment, identity management systems, and other sources. These systems raise alerts and collect data from devices and networks, all of which becomes easier to understand with AI.
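
Neither company details its training pipeline, but the "cleaner datasets" step Malcho points to can be sketched roughly as follows; the sample fields, trusted sources, and filtering rules are assumptions standing in for a human moderation process.

```python
# Illustrative sketch of curating security telemetry before retraining a model.
# The fields and filtering rules are assumptions, not a real product's checks.
from dataclasses import dataclass

@dataclass
class Sample:
    features: list[float]
    label: str        # e.g. "benign" or "malicious"
    source: str       # which sensor or feed produced the sample
    reviewed: bool    # has a human analyst confirmed the label?

TRUSTED_SOURCES = {"edr_agent", "firewall", "ids"}

def curate(samples: list[Sample]) -> list[Sample]:
    """Keep only reviewed samples from trusted feeds with valid feature values."""
    return [
        s for s in samples
        if s.reviewed
        and s.source in TRUSTED_SOURCES
        and all(x == x for x in s.features)  # drops NaN values
    ]

# The cleaned set would then feed the next training round, e.g.:
# curated = curate(raw_samples)
# model.fit([s.features for s in curated], [s.label for s in curated])
```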

Security tools can now not only raise an alert for a potential malicious attack but also use natural language processing to explain where a similar pattern may have been identified in previous attacks and what it means when detected on a network. “It’s easier for humans to interact with that type of narration than before, where it mainly comprises structured data in large volumes,” dos Santos said. Malcho stressed the importance of SOC engineers prioritizing and focusing on the most critical issues.
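
Neither dos Santos nor Malcho describes a specific implementation, but a generic version of that alert "narration" can be sketched with any large language model API; the alert fields, prompt wording, and model name below are assumptions made for illustration.

```python
# Sketch: turn a structured alert into a natural-language explanation with an LLM.
# Alert fields, prompt wording, and model choice are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = {
    "rule": "Possible C2 beaconing",
    "src_ip": "10.2.3.44",
    "dst_ip": "203.0.113.7",
    "interval_seconds": 60,
    "bytes_per_request": 512,
}

prompt = (
    "Explain this security alert for a SOC analyst, describe what similar "
    "patterns have meant in past attacks, and suggest a next step:\n"
    + json.dumps(alert, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```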

Analyzing automation in cybersecurity tasks

However, a growing dependence on automation could erode humans’ ability to recognize anomalies on their own. Dos Santos acknowledged this concern but pointed to the continuous growth in the volume of attacks, data, and devices that need protection.


“We’re going to need some kind of automation to manage this, and the industry is already moving toward that,” he stated. He further explained that while automation is necessary, humans will always need to be involved in making decisions, especially in determining whether an alert warrants a response. “There’s a limit to how organizations staff their SOCs, so there’s a need to turn to AI and generative AI tools for help,” he said, adding that human instinct and skilled security professionals are essential to ensure the tools function correctly.
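
As a rough illustration of that division of labor, an automated pipeline might score and route alerts while leaving the response decision to an analyst; the scoring formula and thresholds below are stand-ins, not any product's actual logic.

```python
# Sketch of automated alert triage with a human decision gate.
# Scoring thresholds and alert fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    rule: str
    asset_criticality: int   # 1 (low value) to 5 (crown jewel)
    confidence: float        # model confidence that the activity is malicious

def triage(alert: Alert) -> str:
    """Automation ranks the alert; a human still approves any actual response."""
    score = alert.confidence * alert.asset_criticality
    if score >= 4.0:
        return "escalate_to_analyst"   # an analyst decides whether to respond
    if score >= 1.5:
        return "queue_for_review"
    return "log_only"

print(triage(Alert(rule="Possible C2 beaconing", asset_criticality=5, confidence=0.9)))
# -> escalate_to_analyst: the tool surfaces the alert, a person makes the call
```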

As data volumes keep growing, there is still ample room for human professionals to expand their knowledge and better manage the threat landscape. Malcho concurred, noting that people must continue to add value and make informed decisions based on AI-generated signals. “SOC engineers still have to look at a combination of different signals to connect the dots and see the whole picture,” he stated.

However, increased automation poses the risk of misconfigured code or security patches being deployed and potentially bringing down critical systems. This underscores the ongoing necessity of human oversight and intervention in cybersecurity operations.
