
The Growing Role of AI in Identity-Based Attacks in 2024

By Jasson Casey, CEO of Beyond Identity

In 2024, we can expect hackers to continue using AI to breach enterprises’ defenses, including by bypassing weak authentication solutions. Not only will the large players across CISA’s 16 critical infrastructure sectors be targeted, but small and mid-sized organizations across all verticals will also be vulnerable in the upcoming year.

Emerging AI capabilities are democratizing hacking by lowering the barrier to entry for cybercriminals to carry out successful attacks. Methods of attack that were previously considered sophisticated are now being executed with ease thanks to technologies like generative AI and machine learning. For example, AI is being leveraged to support hackers’ preferred attack strategies of recent years: stealing credentials, phishing, and exploiting vulnerabilities. To make matters worse, the vast majority of today’s enterprises still rely on weak authentication solutions like passwords, SMS codes, and magic links, which are highly susceptible to these prevailing cyber threats.


In the coming year, AI will continue to accelerate social engineering attacks, in which targets are tricked into divulging sensitive information, like login credentials, to hackers. AI-powered large language models (LLMs) can create much more credible narratives, dialogues, and impersonations than were possible in the past, especially when AI is deployed to research individuals or organizations in depth beforehand. This highly tailored approach previously required a lot of time and effort, but can now be done quickly through automation, allowing hackers to craft scams that are much more compelling and effective. Without telltale spelling, grammar, or formatting errors, social engineering scams will also be much harder for individuals or security systems to detect.

Similarly, continued advances in deepfake technology will make the adage that “seeing is believing” less true over time. Deepfakes, a concept popularized less than ten years ago, will pose a greater threat as fake videos become hyper-realistic, especially in real-time contexts. Through the lens of authentication, this will make verification processes conducted over video less reliable.

Alongside individual tools like ChatGPT, hackers are also taking advantage of the evolution of hacking software itself. For instance, Ransomware-as-a-Service (RaaS) offerings provide them with the technology to launch ransomware campaigns with very little know-how or technical skill.

The urgency of these AI-powered attacks is elevated by the current flexible work landscape, since remote workers may have weaker overall security defenses or limited access to immediate IT support. And the rising volume of threats due to AI creates excess “noisy signals” that can quickly overwhelm the systems businesses currently use to identify, classify, and respond to threats.



So how should companies combat AI-powered threats in 2024, especially as they pertain to authentication and identity-based attacks?   

Phishing-resistant multi-factor authentication (MFA) is a good place to start, especially in industries that handle particularly sensitive data. Phishing-resistant MFA, sometimes referred to as “strong MFA,” eliminates vulnerable factors like passwords and one-time codes that can be intercepted, for instance through an adversary-in-the-middle attack. In 2023, a growing number of companies already started or completed the shift away from weak authentication solutions to more secure options, like passkeys.
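For teams evaluating that shift, the sketch below shows, in TypeScript, roughly what browser-side passkey registration looks like using the standard WebAuthn API (navigator.credentials.create). The relying-party ID, user details, and how the challenge and resulting credential are exchanged with the server are illustrative assumptions, not a prescribed implementation.

// Minimal sketch of registering a passkey in the browser via WebAuthn.
// The server issues the challenge beforehand and verifies the returned credential.
async function registerPasskey(serverChallenge: ArrayBuffer): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: serverChallenge,                          // random value issued and later verified by the server
      rp: { id: "example.com", name: "Example Corp" },     // relying party, bound to your domain (illustrative)
      user: {
        id: new TextEncoder().encode("user-1234"),         // stable, opaque user handle (illustrative)
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        userVerification: "required",                      // biometric or device PIN instead of a password
        residentKey: "required",                           // discoverable credential, i.e. a passkey
      },
    },
  });
}

Because the private key never leaves the user’s device and each signature is bound to the registering domain, credentials created this way cannot be replayed against a look-alike site, which is what makes this class of MFA phishing-resistant.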

Improving authentication infrastructure goes a long way toward mitigating new and existing cyber threats while freeing up security teams to focus on preventative rather than reactive security. Companies that do so will also be better prepared to comply with future frameworks or regulations related to authentication and can cut down on cyber insurance premiums.

While there’s no doubt that the capabilities of AI will continue to evolve in 2024, understanding its associated risks will enable security teams to engage with threats proactively and effectively secure their organizations and data.


