A review of five AI security hot spots in 2024

#News · 2025-01-02

Two years after ChatGPT's launch, generative AI has become a significant force in cybersecurity. Its impact in 2024 was everywhere: deepfake fraud, the risks of "shadow AI," the emergence of AI security regulation, and the potential of AI-driven vulnerability research. Below, we take stock of five hot topics at the intersection of AI and cybersecurity over the past year.

1. Deepfake and AI phishing attacks surge

At present, the core AI security threat is not AI-generated malware or rogue AI, but fraud carried out through AI-generated phishing lures and deepfake techniques. The real-world damage from these attacks is enormous. In one case, a finance employee in Hong Kong was deceived by a faked video conference and transferred as much as $25.6 million. Similar cases show that deepfake technology, through realistic face and voice imitation, now threatens the financial security of both individuals and businesses.

According to security firm iProov, "face swap" attacks against biometric authentication surged 704% in 2023. Deepfake videos not only deceive users but can also bypass biometric verification systems such as facial recognition. AI-driven business email compromise (BEC) is also on the rise: VIPRE Security estimates that AI-generated phishing emails account for 40% of commercial fraud lures.

2. Data leakage risks from enterprise "shadow AI"

The use of unauthorized or unmonitored AI tools inside enterprises, known as "shadow AI," is growing and creating data-breach risks. Cyberhaven reported a 485% increase in sensitive data uploaded by employees to generative AI tools in 2024, including customer support information, source code, and R&D data.

While AI tools have boosted productivity, security training and policy development have lagged behind. For example, in March 2024 the U.S. House of Representatives banned staff from using Microsoft's Copilot out of concern that it could leak sensitive data.

3. LLM jailbreaks and abuse by APT groups

In 2024, attacks targeting large language models (LLMs) evolved further. For example, Palo Alto Networks disclosed a jailbreak technique called "Deceptive Delight" that bypasses safety restrictions in as few as three interactions. In addition, threat actors from well-known APT groups linked to Russia, North Korea, and China were found using ChatGPT for script generation, vulnerability research, and target reconnaissance.

Microsoft and OpenAI quickly shut down the associated accounts after disclosing these activities, and Microsoft's proposal to incorporate LLM attack techniques into the MITRE ATT&CK framework reflects the far-reaching impact of LLM threats on the cybersecurity ecosystem.

4. Global AI security legislation accelerates

In 2024, global AI regulation entered a new stage. The European Union took the lead by passing the AI Act, which classifies AI systems by risk level, prohibits certain high-risk applications, and imposes corresponding regulatory requirements. The United States has yet to introduce comparable national regulation, although it has issued AI security guidelines for critical infrastructure. Meanwhile, states such as California have attempted to introduce AI legislation, drawing criticism that it would stifle innovation.

5. Opportunities for AI to strengthen cyber defense

AI technology is not only a threat; it also opens entirely new possibilities for cyber defense. For example, Google's AI-driven vulnerability research tool Big Sleep found a vulnerability in the SQLite database, while its improved OSS-Fuzz tool identified 26 new vulnerabilities in open source projects.

The RSAC 2024 conference also highlighted the value of AI for critical infrastructure and national security defense: from pattern recognition to automated analysis, AI is helping cybersecurity teams dramatically increase efficiency.

The rise of generative AI has made the offensive-defensive contest in cyberspace more complicated. On one hand, AI amplifies attackers' destructive power; on the other, it gives security practitioners unprecedentedly efficient tools. This contest will only intensify in the coming years, and regulators and industry around the world will need to keep pace.

