Darktrace Report Reveals Significant Impact of AI-Driven Cybersecurity Threats on Australian Organisations
In a significant shift towards a more collaborative approach, Australian organisations are increasingly favouring human-AI collaboration over full automation in the realm of cybersecurity. This trend is highlighted in the latest "State of AI Cybersecurity Report" released by the cybersecurity company Darktrace.
The report reveals that a staggering 78.5% of Australian organisations are currently experiencing significant impacts from AI-powered threats, underscoring the urgent need for effective cybersecurity measures. Interestingly, only 30.8% of these organisations claim to know exactly which AI types they are using, suggesting a gap in understanding and control.
Despite the growing reliance on AI, there is a noticeable lack of confidence in AI's ability to automatically stop threats, with only 26.5% of organisations expressing such faith. This apprehension has led Australian organisations to adopt a more cautious stance, pushing back against the 'automation at all costs' narrative.
In an effort to address these challenges, industry bodies such as the Australian Computer Society, the Business Council of Australia, the Australian Chamber of Commerce and Industry, the Ai Group (Australian Industry Group), the Council of Small Business Organisations Australia, and the Tech Council of Australia advocate a human-centred approach to AI strategy, including its use in cybersecurity. The report does not, however, specify how many of these bodies have explicitly endorsed AI-based security solutions.
The report finds that 91.6% of Australian organisations prefer AI security solutions that do not share data externally, demonstrating a concern for data privacy. This preference aligns with the general trend of Australian organisations being more cautious about full automation compared to their global counterparts.
On a positive note, 92.5% of Australian organisations agree that AI-powered security solutions significantly improve their ability to prevent, detect, respond to, and recover from cyber threats. Furthermore, 46.7% of organisations have already implemented formal AI safety policies, with a further 48.6% in policy discussions regarding AI safety.
Concerns remain, however: 43% of Australian organisations lack confidence in their teams' ability to defend against AI threats, and 58.9% lack confidence that traditional cybersecurity solutions can counter them.
The survey behind the report covered more than 1,500 cybersecurity professionals, including 107 in Australia. The full report, which delves deeper into these findings, can be found at the provided link.
In conclusion, Australian organisations are taking a proactive stance on AI security governance, embracing a human-centric approach to AI security in their quest to navigate the complex and evolving landscape of AI-powered cybersecurity threats.