End-point Security Archives - AiThority https://aithority.com/category/it-and-devops/end-point-security/ Artificial Intelligence | News | Insights | AiThority Tue, 30 Jul 2024 13:21:44 +0000 en-US hourly 1 https://wordpress.org/?v=6.6.1 https://aithority.com/wp-content/uploads/2023/09/cropped-0-2951_aithority-logo-hd-png-download-removebg-preview-32x32.png End-point Security Archives - AiThority https://aithority.com/category/it-and-devops/end-point-security/ 32 32 AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies https://aithority.com/ai-inspired-stories-by-aithority/ai-inspired-series-by-aithority-com-featuring-bradley-jenkins-intels-emea-lead-for-ai-pc-isv-strategies/ Tue, 30 Jul 2024 13:02:22 +0000 https://aithority.com/?p=574472

The post AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies appeared first on AiThority.

]]>

In this AI Inspired Story by AiThority.com, we had Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies participate to chat about the key benefits of Intel’s Core Ultra processor range and how modern enterprises can benefit from systems powered by it:

Key topics covered:

-> An overview of Intel Core Ultra Processors

-> How software optimization can impact the performance of Intel Core Ultra processors

-> Who benefits from laptops powered by Intel® Core™ Ultra Processors

-> A brief breakdown on the 3 AI engines driving this: CPU, GPU, NPU

The post AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies appeared first on AiThority.

]]>
HR and IT Related Emails are the Top Choices for Phishing Scams https://aithority.com/it-and-devops/end-point-security/hr-and-it-related-emails-are-the-top-choices-for-phishing-scams/ Wed, 22 May 2024 10:18:09 +0000 https://aithority.com/?p=570928 HR and IT Related Emails are the Top Choices for Phishing Scams

Organizations are targeted by advanced phishing scams, generated by Artificial Intelligence tools. Leading security awareness training and simulated phishing platform, KnowBe4 announced the results of its Q1 2024 top-clicked phishing test report. The results include the most common email subjects clicked on in phishing tests, reflecting the persistent use of HR or IT-related business email […]

The post HR and IT Related Emails are the Top Choices for Phishing Scams appeared first on AiThority.

]]>
HR and IT Related Emails are the Top Choices for Phishing Scams

header-logoOrganizations are targeted by advanced phishing scams, generated by Artificial Intelligence tools. Leading security awareness training and simulated phishing platform, KnowBe4 announced the results of its Q1 2024 top-clicked phishing test report. The results include the most common email subjects clicked on in phishing tests, reflecting the persistent use of HR or IT-related business email messages that captivate employees’ interests.

Phishing emails continue to be one of the most common methods for executing cyberattacks on organizations worldwide.

KnowBe4’s 2023 Phishing by Industry Benchmarking Report reveals that nearly one third of users are susceptible to clicking on malicious links or complying with fraudulent requests. As a result, cybercriminals take advantage of this vulnerability and leverage the innovative tools available to them, such as AI, to come up with increasingly sophisticated messages to outsmart users. These bad actors tailor phishing email strategies to appear more legitimate in their requests and trick employees by inciting an emotional response and urgency to click on a malicious link or download an infected attachment.

Source: Q1 2024 KnowBe4 Phishing Report infographic
Source: Q1 2024 KnowBe4 Phishing Report infographic

Related Story: The Risks Threatening Employee Data in an AI-Driven World

HR-related phishing attacks take the top spot at 42%, a trend that has persisted for the last three quarters, followed by IT-related phishing emails at 30%. Phishing emails from HR or IT departments that prompt dress code changes, tax and healthcare updates, training notifications and other similar actions are effective in deceiving employees as they can affect a user’s work, evoke an immediate response and can cause a person to react before thinking about the validity of the email.

The KnowBe4 phishing report this quarter also noted more personal phishing email attacks, such as tax, healthcare and ApplePay, that could affect users’ sensitive information. These types of attacks are effective because they cause a person to react to a potentially alarming topic and engage to protect their private information before thinking logically about the credibility of the email.

“KnowBe4’s report shows that cybercriminals are becoming increasingly tactical in exploiting employee trust by using HR-related phishing emails due to their seemingly legitimate source,” said Stu Sjouwerman, CEO of KnowBe4.

Recommended Read: 10 AI ML In Data Storage Trends To Look Out For In 2024

Stu added, “Emails coming from an internal department such as HR or IT are especially harmful to organizations since they appear to be coming from a trusted source and can convince employees to engage quickly before confirming their legitimacy, exposing the company to security vulnerabilities. A well-trained workforce is therefore crucial in building a strong security culture and serves as the best defense in safeguarding organizations against preventable cyberattacks.”

KnowBe4, the provider of the world’s largest security awareness training and simulated phishing platform, is used by more than 65,000 organizations around the globe. Founded by IT and data security specialist Stu Sjouwerman, KnowBe4 helps organizations address the human element of security by raising awareness about ransomware, CEO fraud and other social engineering tactics through a new-school approach to awareness training on security.

Top Story of the Month: As Advertising Fatigue Grows, It’s Time To Let Creative Content Marketing Shine!

The post HR and IT Related Emails are the Top Choices for Phishing Scams appeared first on AiThority.

]]>
Beyond Traditional Security: Pioneering AI-Based Solutions for Browser Security https://aithority.com/machine-learning/pioneering-ai-based-solutions-for-browser-security/ Tue, 30 Apr 2024 18:21:12 +0000 https://aithority.com/?p=569669 Beyond Traditional Security: Pioneering AI-Based Solutions for Browser Security

Online attacks pose major risks. People and companies must prioritize cybersecurity. Launched in 2013, Darktrace revolutionized the field by using AI to detect and stop threats as they happen. Their cutting-edge approach challenged old methods and opened new ways to defend against digital dangers. This exploration examines how AI enhances browser protection, presenting novel approaches […]

The post Beyond Traditional Security: Pioneering AI-Based Solutions for Browser Security appeared first on AiThority.

]]>
Beyond Traditional Security: Pioneering AI-Based Solutions for Browser Security

Online attacks pose major risks. People and companies must prioritize cybersecurity. Launched in 2013, Darktrace revolutionized the field by using AI to detect and stop threats as they happen. Their cutting-edge approach challenged old methods and opened new ways to defend against digital dangers.

This exploration examines how AI enhances browser protection, presenting novel approaches to guard against online threats. Integrating AI technologies into browser security allows proactive threat detection and rapid response capabilities. Darktrace’s pioneering AI algorithms enable browsers to dynamically adapt and respond to evolving cyber threats, providing a robust defence against malware, phishing, and other malicious activities during web browsing.

Darktrace: AI Pionee­r in the Cybersecurity Are­na

In 2013, Darktrace – founded by mathematicians and cyber experts from intelligence agencies – revolutionised cybersecurity. It was among the first to leverage AI for cyber defence, setting new enterprise security standards. Darktrace’s AI doesn’t just observe; it actively learns from an organization’s daily operations to strengthen network security and threat detection.

With solutions like Darktrace PREVENT, the firm proactively tackles cyber threats. Vulnerabilities are prioritized to bolster defences and mitigate risks more effectively. Through its groundbreaking AI, Darktrace offers insights into global cybersecurity trends via its AI Cybersecurity State report – highlighting AI’s role in shaping data protection’s future and security operations against escalating cyber risks. Guardio, another innovator in the cybersecurity realm, collaborates with Darktrace to further enhance browser security, leveraging AI to detect and neutralize online threats in real time.

Artificial intellige­nce: Truth versus imagination

AI transformed cybersecurity, shattering misconceptions but validating others. Many confuse AI’s capabilities with science fiction scenarios. In reality, cybersecurity AI utilizes sophisticated algorithms and machine learning to analyze massive data sets. This helps identify security weaknesses and cyber threats humans might overlook. AI doesn’t involve robots taking control, but intelligent systems tirelessly safeguarding data.

Misunderstandings about how AI functions in protecting networks and information persist. For example, some believe AI can completely replace human judgment—this is incorrect. While AI excels at rapidly analyzing vast information to detect potential threats, complex decision-making processes like differentiating false alarms from genuine dangers require human expertise. The collaboration between AI and human insight creates a potent defence against cyber attacks, enabling better threat detection and data protection than ever before.

National Security: Cyber Thre­at

AI plays a crucial role in national defence, especially against cyber threats. Countries utilize AI for cybersecurity measures to protect against advanced cyber attacks. These threats are becoming increasingly intelligent and dangerous, including hacking, phishing, and other malicious activities targeting the nation’s security systems.

Experts apply AI in threat spotting and cyber defence more aptly than ever nowadays. The tech promptly examines data to find out bizarre patterns that might signify a security breach. This renders AI an essential utility in safeguarding national interests from complex cyberattacks, amplifying the necessity for responsible employment of this potent tech in securing cyberspace.

AI-Based Solutions for Browser Security

AI solutions work by looking at lots of data in real time. They find patterns that show if there are threats. This helps them identify threats with great accuracy. The systems keep learning about new cyber threats. So they can stay ahead of bad actors, protecting users from changing online risks.

AI in browser security doesn’t just find threats better. It also makes responding to threats smoother. With automated response, AI-powered solutions can stop identified threats quickly. This limits the harm to users’ devices and data. Detection and response work together, letting users browse safely, thanks to advanced AI protection.

Advance­ment of AI’s Role in Cyberse­curity

AI in cybersecurity has surged massively over the past decade. Major tech firms and specialized cybersec companies fueled this progress by developing AI-based solutions that outperform traditional security tools in threat identification, faster and more accurately. This transition towards artificial intelligence revolutionized how we protect data and networks from cyber threats.

The cyber threat landscape changes constantly, presenting fresh challenges daily. Advanced ML and AI-powered security solutions have become indispensable for addressing evolving cyber risks effectively. These technologies excel at threat detection, vulnerability analysis, and malware identification, offering stronger defence against evolving cyber risks. The embrace of AI in cybersecurity marks a pivotal advancement in safeguarding digital assets more efficiently than ever before.

Cyber threats are identified quickly using machine learning

AI helps detect and respond to cyber-attacks. It recognizes abnormalities and patterns impossible for traditional techniques. This enables faster detection of threats targeting browsers. Real-time analysis of device behaviour and network traffic occurs. Suspicious activities prompt blocking or mitigating threats before they cause harm. Analyst efficiency improves with AI, enabling quicker responses to cyber threats.

New risks are adapted to, enhance endpoint security and safe browsing.

Challenges arise as AI for threat detection advances

Cybercriminals developing sophisticated attacks test AI’s limits. Machine learning algorithms require large data amounts, raising privacy and ethical information use concerns. The coming of Web 3.0 creates opportunities and risks. Analytics can predict threats, but advanced attacks could outsmart defences.

Future cybersecurity needs AI to adapt fast to emerging dangers while keeping people’s privacy secure. Developers face balancing detecting threats with protecting rights and security.

Web 3.0 and AI cybersecurity are locked in an endless cycle of change – staying ahead of threats without invading privacy is crucial for safeguarding our digital lives.

Trust and Ethical Considerations in AI-Base­d Browser Security Solutions

User trust hinges on ethical AI browser security that protects privacy and data. Ethical AI must balance innovation with respecting personal information. Doing so gains user acceptance of AI cybersecurity.

However, AI browser extensions raise fears of malware and privacy breaches. From the start, developers must build ethical safeguards to handle social, political, and legal implications. A trustworthy ethical framework is vital for resolving moral questions about AI’s role in securing browsers.

In conclusion, online safety advances, thanks to AI’s watchful eye. AI-driven solutions outmatch traditional barriers, spotting threats quickly and intelligently, making web browsing more secure for all. Discussions persist about ethical use, ensuring tech serves us responsibly, without crossing lines. The digital defence future shines brighter with AI leading the charge, ushering in an intuitive era of heightened cyber safety.

AI reshapes cybersecurity’s landscape. Innovative tech surpasses old protective measures, detecting risks rapidly and smartly. Everyone benefits from enhanced web surfing security. However, ethical concerns remain central, guaranteeing AI assists without infringing boundaries. Cybersecurity’s AI-powered tomorrow promises digital safeguards, more instinctive than ever imagined.

The post Beyond Traditional Security: Pioneering AI-Based Solutions for Browser Security appeared first on AiThority.

]]>
Microsoft Launches Measures to Prevent Tricking AI Chatbots https://aithority.com/it-and-devops/end-point-security/microsoft-launches-measures-to-prevent-tricking-ai-chatbots/ Tue, 02 Apr 2024 07:12:08 +0000 https://aithority.com/?p=568678

Microsoft Introduces New Security Features to Safeguard AI Chatbots Challenges are opportunities! Often heard! Microsoft grabbed this opportunity. A solution to combat so-called “prompt injection” assaults was one of several services for the tech giant’s Azure AI system. For example, “groundedness” detection can identify artificial intelligence “hallucinations,” while prompt shields can identify and prevent quick […]

The post Microsoft Launches Measures to Prevent Tricking AI Chatbots appeared first on AiThority.

]]>

Microsoft Introduces New Security Features to Safeguard AI Chatbots

Challenges are opportunities! Often heard!

Microsoft grabbed this opportunity. A solution to combat so-called “prompt injection” assaults was one of several services for the tech giant’s Azure AI system. For example, “groundedness” detection can identify artificial intelligence “hallucinations,” while prompt shields can identify and prevent quick injection assaults. One of the tools is called a “prompt shield,” and its purpose is to prevent intentional attempts to manipulate an AI model into doing something it shouldn’t. “Indirect prompt injections,” in which hackers introduce harmful instructions into data, is another issue that Microsoft is working to resolve.

Safety system messages “to steer your model’s behavior toward safe, responsible outputs” are shortly to be launched by Microsoft, and the company is now previewing safety evaluations to find out how vulnerable an app is to jailbreak assaults and content risk generation, according to the post. Concerns about inappropriate material and prompt injections are just two of the many risks that organizations face when deploying generative AI. These technologies aim to assist in alleviating some of those risks.

Read the latest blogs: Navigating The Reality Spectrum: Understanding VR, AR, and MR

New features in recent releases include safeguards against emerging attack vectors like jailbreaks and prompt injections, and real-time monitoring to identify and block offensive content or users. The part played by Microsoft in the “battle for generative AI” that began with the triumph of ChatGPT, created by Microsoft partner OpenAI. There is more than just Big Tech competing for the title of AI champion, even though leading tech giants like Google and Microsoft have an advantage.

The Driving Force Behind Microsoft’s New Tool

In the race to unseat OpenAI, open-source initiatives, partnerships, and an emphasis on transparency and responsibility have surfaced as potential contenders. It is common to need to spend money on processing power and research talent to push AI to its limits. Chatbot security is enhanced by Microsoft. Although generative AI has the potential to increase productivity and efficiency for businesses, a recent poll by McKinsey found that 91% of corporate leaders are ill-prepared for the dangers that come along with it. These issues have been the driving force behind Microsoft’s new tools, which are the result of extensive study and technological advancements built on the company’s own experience with products like Copilot. The multibillion-dollar investment by Microsoft in OpenAI has certainly been a game-changer, opening up a plethora of new possibilities for AI research and development.

To create malicious, undesired content, prompt injections manipulate AI systems. Direct and indirect prompt attacks are both protected by Microsoft’s Prompt Shields. The program checks third-party data and prompts for possible harmful intent using sophisticated machine-learning techniques and natural language processing. In addition to fixing security issues, the newest tools from Microsoft should make generative AI apps more reliable by automatically testing them under stress to make sure they’re not vulnerable to things like jailbreaks.

Adjust Content Filter Setups to Increase Safety

Developers will be able to manually tune the back end and adjust content filter setups to increase safety with the use of real-time monitoring, another notable addition. This capability tracks inputs and outputs that activate safety mechanisms. All of Microsoft’s previous AI-related announcements have reaffirmed the company’s dedication to responsible and safe AI, and these latest technologies are no exception.

[To share your insights with us as part of editorial or sponsored content, please write to sghosh@martechseries.com]

The post Microsoft Launches Measures to Prevent Tricking AI Chatbots appeared first on AiThority.

]]>
DARPA and IBM Secure AI Systems from Hackers https://aithority.com/it-and-devops/end-point-security/darpa-and-ibm-secure-ai-systems-from-hackers/ Mon, 18 Mar 2024 14:39:11 +0000 https://aithority.com/?p=567921 IBM and Top Universities to Advance Quantum Education for 40_000 Students in Japan_ South Korea_ and the United States

DARPA and IBM’s Collaboration The US Department of Defense’s (DoD) research and development arm, DARPA, and IBM have been collaborating on several projects related to hostile AI for the past four years. The team from IBM has been working on a project called GARD, which aims to construct defenses that can handle new threats, provide […]

The post DARPA and IBM Secure AI Systems from Hackers appeared first on AiThority.

]]>
IBM and Top Universities to Advance Quantum Education for 40_000 Students in Japan_ South Korea_ and the United States

DARPA and IBM’s Collaboration

The US Department of Defense’s (DoD) research and development arm, DARPA, and IBM have been collaborating on several projects related to hostile AI for the past four years. The team from IBM has been working on a project called GARD, which aims to construct defenses that can handle new threats, provide theory to make systems provably robust and create tools to evaluate the defenses of algorithms reliably. The project is led by Principal Investigator (PI) Nathalie Baracaldo and co-PI Mark Purcell. To make ART more applicable to potential use cases encountered by the US military and other organizations creating AI systems, researchers have upgraded it as part of the project.

Read: Data monetization With IBM For Your Financial benefits

Famous Machine-Learning Model Structures

With the hope of inspiring other AI experts to collaborate on developing tools to safeguard AI deployments in the actual world, IBM gave ART to the Linux Foundation in 2020. In addition to supporting numerous prominent machine-learning model structures, like TensorFlow and PyTorch, ART also has its own GitHub repository. To continue meeting AI practitioners where they are, IBM has now added the updated toolkit to Hugging Face. When it comes to finding and using AI models, Hugging Face has swiftly risen to the top of the internet. The current geographic model developed with NASA is one of many IBM projects that have been made publicly available on Hugging Face. Models from the AI repository are the intended users of Hugging Face’s ART toolset. It demonstrates how to include the toolbox into time, a library utilized to construct Hugging Face models, and provides instances of assaults and defenses for evasion and poisoning threats.

Read: Top 10 Benefits Of AI In The Real Estate Industry

Challenges

The researchers in this dispersed group would use their standards to assess the efficacy of the defenses they constructed. ART has amassed hundreds of stars on GitHub and was the first to provide a single toolset for many practical assaults. This exemplifies the community’s cooperative spirit as they strive toward a common objective of protecting their AI procedures. Although they have come a long way, machine-learning models are still fragile and open to both targeted attacks and random noise from the real world.

A disorganized and immature adversarial AI community existed before GARD. Digital assaults, such as introducing small disturbances to photos, were the main focus of researchers, although they weren’t the most pressing issues. Physical attacks, such as covering a stop sign with a sticker to trick an autonomous vehicle’s AI model, and attacks where training data is poisoned are the main concerns in the real world.

New: 10 AI ML In Personal Healthcare Trends To Look Out For In 2024

Researchers and practitioners in the field of artificial intelligence security lacked a central hub for exchanging attack and defense codes before the advent of ART. To accomplish this, ART offers a platform that enables teams to concentrate on more particular tasks. As part of GARD, the group has created resources that blue and red teams can use to evaluate and compare various machine learning model’s performance in the face of various threats, such as poisoning and evasion. What is included in ART are the practical countermeasures against those attacks. Although the project is coming to a close this spring after four years, it is far from over.

[To share your insights with us as part of editorial or sponsored content, please write to sghosh@martechseries.com]

The post DARPA and IBM Secure AI Systems from Hackers appeared first on AiThority.

]]>
Utilizing AI for Unparalleled Cybersecurity Strength https://aithority.com/ai-machine-learning-projects/utilizing-ai-for-unparalleled-cybersecurity-strength/ Tue, 27 Feb 2024 10:29:09 +0000 https://aithority.com/?p=564601

Google has come up with an interesting blog on digital security. The world is fixating on AI’s potential, and both governments and businesses are trying to figure out how to regulate it so that it’s safe and secure. Many see AI as a turning moment in the fight for digital security. Their plight is shared. […]

The post Utilizing AI for Unparalleled Cybersecurity Strength appeared first on AiThority.

]]>

Google has come up with an interesting blog on digital security. The world is fixating on AI’s potential, and both governments and businesses are trying to figure out how to regulate it so that it’s safe and secure. Many see AI as a turning moment in the fight for digital security. Their plight is shared. At this weekend’s Munich Security Conference, attendees will hear about artificial intelligence’s potential to improve security, an issue that more than 40% of people rank as AI’s top application.

Read Top 20 Uses of Artificial Intelligence In Cloud Computing For 2024

At this critical juncture, AI stands before lawmakers, security experts, and members of civil society with an opportunity to shift the cybersecurity power dynamic away from cybercriminals and toward cyberdefenders. At a time when bad actors are playing around with AI, decisive and prompt action is needed to influence the future of this technology. The fact that cybercriminals just require a single effective, new threat to circumvent the most robust defenses has been a major problem in the field for many years and continues to be so today. Meanwhile, there is little tolerance for error as defenders must constantly deploy top-tier defenses across ever-evolving digital landscapes. We call this situation the “Defender’s Dilemma,” because up until now, no one has found a foolproof solution.

We are certain that AI has the potential to change this dynamic because of their expertise with large-scale AI deployments. With the use of AI, defenders and security experts may speed up their processes for identifying threats, analyzing malware, finding vulnerabilities, correcting vulnerabilities, and responding to incidents. If they want to make a difference online, they should open up new AI developments to government agencies and companies of all sizes and in all sectors. Our Vertex AI platform and other extensive generative AI capabilities will be supported by more than five billion euros invested in data centers across Europe between 2019 and the end of 2024. This investment will help ensure the continued availability of a wide variety of digital services.

Read the Latest blog from us: AI And Cloud- The Perfect Match

Today’s AI governance choices have the potential to change the cyber landscape in ways no one has anticipated. To prevent a future where thugs can develop while protectors can’t, their communities need a moderate approach to AI regulation. In order for enterprises to harness the full potential of AI while preventing its misuse by competitors, they require strategic investments, collaborations between businesses and governments, and efficient regulatory strategies. To back this up, they’re announcing $2 million in research grants and strategic partnerships to bolster AI-based cybersecurity research initiatives. These will help with things like developing larger language models with better resilience to threats, better understanding of how AI can aid with cyber offense and defense countermeasures, and improving code verification.

Researchers at Stanford, Carnegie Mellon, and the University of Chicago are receiving the funds. Gmail, Drive, and secure Browsing are just a few of the products that use Magika to help keep users secure online. The VirusTotal team also uses it to make the internet a better place. With a 30% improvement in overall accuracy and a 95% improvement in precision on notoriously difficult-to-identify but potentially harmful information like VBA, JavaScript, and Powershell, Magika surpasses previous file recognition methods.

Read OpenAI Open-Source ASR Model Launched- Whisper 3

[To share your insights with us, please write to sghosh@martechseries.com]

The post Utilizing AI for Unparalleled Cybersecurity Strength appeared first on AiThority.

]]>
Unveiling IBM’s AI Research: How Live Calls Can be Magically Hijacked https://aithority.com/ai-machine-learning-projects/unveiling-ibms-ai-wizardry-how-live-calls-can-be-magically-hijacked/ Tue, 13 Feb 2024 11:41:50 +0000 https://aithority.com/?p=562858

Transforming Communication with the Breakthroughs of Voice Cloning and Language Models: A New Era Emerges Potentially exploitable by malicious actors for financial benefit is the “audio-jacking” method, which employs large-language models (LLMs), voice cloning, text-to-speech, and speech-to-text capabilities. According to recent research by Chenta Lee, IBM Security’s lead architect of threat intelligence. Scientists at IBM […]

The post Unveiling IBM’s AI Research: How Live Calls Can be Magically Hijacked appeared first on AiThority.

]]>

Transforming Communication with the Breakthroughs of Voice Cloning and Language Models: A New Era Emerges

Potentially exploitable by malicious actors for financial benefit is the “audio-jacking” method, which employs large-language models (LLMs), voice cloning, text-to-speech, and speech-to-text capabilities. According to recent research by Chenta Lee, IBM Security’s lead architect of threat intelligence. Scientists at IBM have figured out how to use generative AI technologies to secretly alter live audio calls without the speakers’ knowledge.

Many are worried that the fast development of generative AI in the last 16 months will lead to the spread of misinformation by means of deepfakes, or fake images, and voice cloning, in which AI tools can use a sample of a person’s voice to generate full audio messages that sound exactly like the original.

Read: Amazon’s Revolutionary Retail Revealed: Meet Rufus, An Astonishing AI Conversational Shopper

Echoes of Deception: The Alarming Rise of Voice Cloning

In the past month, voice cloning has been in the news because of robocalls that purportedly came from President Biden urging people not to vote in the New Hampshire presidential primary. The calls were traced back to two Texas-based organizations, Lingo Telecom and Life Corp., according to the New Hampshire Attorney General’s Office. One such usage of voice cloning is in scams, where the victim is contacted by phone calls that seem to be from a loved one in distress, requesting financial assistance.

IBM explained that the idea behind audio-jacking is comparable to thread-jacking attacks, which allow hackers to covertly alter a phone call. IBM warned last year that these attacks were becoming more common in email exchanges. Here, IBM researchers aimed to surpass the use of generative AI to generate a synthetic voice for an entire discussion—a tactic they claimed is easily detectable. Rather than that, their system listens in on real-time chats and substitutes context-dependent phrases.

Read: 10 AI In Energy Management Trends To Look Out For In 2024

“Bank account” was the term utilized in the experiments. The LLM was directed to substitute a false bank account number for any reference of real bank accounts in the chat. Malware put on victims’ phones or a hacked or malicious Voice-over-IP (VoIP) service are two possible vectors for such an assault. Hackers with excellent social engineering abilities might potentially call two victims simultaneously to start a conversation between them.

In IBM’s proof-of-concept (PoC), the software observes a live discussion and operates as a man-in-the-middle. A speech-to-text tool converts voice into text, and the LLM understands the context of the conversation. When a bank account is mentioned, it modifies a sentence. Furthermore, the LLM can be instructed to perform anything. Any kind of financial information, including accounts associated with mobile apps or digital payment systems, as well as other forms of information, such blood types, can be altered in this way. It can instruct a pilot to change the course of their flight or a financial expert to purchase or sell stocks, he added. Bad actors’ social engineering abilities must be more sophisticated for conversations that involve protocols and processes, which are inherently more complex.

Revolutionary Potential: How Generative AI Drives Problem Solving and Sparks Creative Discovery

Another obstacle that generative AI made easy to overcome was the creation of convincing artificial voices. Hackers may create convincing-sounding but phony voices using LLMs by cloning just three seconds of a person’s voice and then feeding it into a text-to-speech API. Some difficulties arose. One issue was that the researchers had to artificially interrupt the discussion in the PoC so that people on the call wouldn’t suspect a thing, because they needed to access the LLM and text-to-speech APIs remotely.

Read: 10 AI In Manufacturing Trends To Look Out For In 2024

To top it all off, the voice cloning needs to mimic the victim’s natural speech pattern down to the tempo and inflection for the con to really stick. Possible Future Threats of a Similar Kind IBM’s Proof of Concept demonstrated the use of LLMs in such sophisticated assaults, which could pave the way for future ones. In particular, he warned that “the maturity of this PoC would signal a significant risk to consumers foremost – particularly to demographics who are more susceptible to today’s social engineering scams.”

Use only trusted devices and services for these chats, make sure they are up-to-date with fixes, and ask callers to paraphrase and repeat language if something appears strange. Additionally, there are time-tested methods, such as employing robust passwords and avoiding phishing scams that include opening attachments or going on URLs that you are unfamiliar with.

[To share your insights with us as part of editorial or sponsored content, please write to sghosh@martechseries.com]

The post Unveiling IBM’s AI Research: How Live Calls Can be Magically Hijacked appeared first on AiThority.

]]>
Is Generative AI a Game-changer for Password Security? https://aithority.com/machine-learning/is-generative-ai-a-game-changer-for-password-security/ Thu, 31 Aug 2023 08:35:03 +0000 https://aithority.com/?p=538259 Is Generative AI a Game-changer for Password Security?

The rapid growth of Generative AI (Artificial Intelligence) has huge implications for cybersecurity specialists, who can use it to reduce human error, improve efficiency and spot security issues. But, while these AI tools have many benefits, there have also been many concerns raised with respect to data security. As the age-old adage goes, any new […]

The post Is Generative AI a Game-changer for Password Security? appeared first on AiThority.

]]>
Is Generative AI a Game-changer for Password Security?

The rapid growth of Generative AI (Artificial Intelligence) has huge implications for cybersecurity specialists, who can use it to reduce human error, improve efficiency and spot security issues. But, while these AI tools have many benefits, there have also been many concerns raised with respect to data security.

As the age-old adage goes, any new technology brings its own advantages and disadvantages.

While AI is predominantly used by IT specialists to heighten cybersecurity, malicious actors are using AI, specifically generative AI, to boost their hacking game. To maintain the integrity and security of their data, everyone—from individuals to organizations—must be up to date with today’s rapidly evolving IT security trends.

When cybersecurity infrastructures are compromised, passwords are most often the first line of defense to be breached. As generative AI is advancing in its ability to facilitate identity theft, this only makes it even more important to implement a strong password hygiene routine.

There are several password cracking tools that malicious actors employ to breach security infrastructures, ranging from those that use basic data models to those that use generative adversarial networks (GANs) to crack passwords more quickly and effectively, like PassGAN, a password cracking tool currently making waves on the internet.

Could PassGAN Crack Your Password?

A portmanteau of the word “password” and the acronym “GAN”, PassGAN is a newer kind of tool that uses AI to swiftly crack passwords.

Unlike other password-cracking software which employs straightforward data models and presumptions regarding password patterns, PassGAN has the capacity to evaluate and learn from data to become increasingly intelligent.

According to a Home Security Heroes study, PassGAN could decipher 51% of popular passwords in under a minute; complex passwords take a bit more time, but not much, with 65% deciphered in under an hour, 71% deciphered in under a day, and 81% deciphered in under a month. The study also found that passwords that incorporated both perfect length (more than eight characters) and complexity (special characters) turned out to be the most secure.

Is Your Data in Danger From PassGAN?

It is worth noting that similar password-cracking tools have been doing the rounds since 2017. Despite appearing to employ innovative, password-cracking technology, it is not a ground-breaking tool.

Only when there is a data breach can these tools be used to crack passwords. Hackers do not immediately obtain access to password details the moment a website is compromised; they will only be able to access the passwords’ encrypted “hash,” which is different from accessing accounts directly. Additionally, they would need to compromise a server to access accounts and effectively breach the network.

How Can You Secure Your Data?

Although password free alternatives and biometrics have recently become all the rage, the best way we can defend ourselves and the integrity of our data is by using proper password hygiene.

These tools aren’t devoid of errors of biases, so for now passwords continue to be the primary and easiest method of authentication. Implementing a set of basic security hygiene procedures—such as ensuring and enforcing strict password policies, compliance with NIST and GDPR regulations, incorporating MFA controls, periodic vulnerability scanning and patching of endpoints, changing passwords on a regular basis, and never using the same password—can make a world of a difference.

[To share your insights with us, please write to sghosh@martechseries.com]

The post Is Generative AI a Game-changer for Password Security? appeared first on AiThority.

]]>
How Do AI-Based Cyber Tools Prevent and Mitigate Botnet Attacks? https://aithority.com/technology/how-do-ai-based-cyber-tools-prevent-and-mitigate-botnet-attacks/ Wed, 26 Apr 2023 09:40:32 +0000 https://aithority.com/?p=513077 How Do AI-Based Cyber Tools Prevent and Mitigate Botnet Attacks?

Over the past week, Blizzard reported that their systems were repeatedly targeted with Distributed Denial of Service (DDoS attacks). Their servers went down and became available only for users in certain locations. Players are getting frustrated because they can’t access many of the games they normally would. For many, the gaming experience has been negatively […]

The post How Do AI-Based Cyber Tools Prevent and Mitigate Botnet Attacks? appeared first on AiThority.

]]>
How Do AI-Based Cyber Tools Prevent and Mitigate Botnet Attacks?

Over the past week, Blizzard reported that their systems were repeatedly targeted with Distributed Denial of Service (DDoS attacks). Their servers went down and became available only for users in certain locations.

Players are getting frustrated because they can’t access many of the games they normally would. For many, the gaming experience has been negatively affected due to exasperating lag. Some users reported that their email addresses even got hacked in the midst of a DDoS attack.

On April 20, the company shared that its systems are still being targeted with DDoS threats on a daily basis.

Also known as botnet attacks, DDoS are common threats to businesses that rely on versatile applications and networks — meaning most companies operating today.

The longer a botnet attack compromises the company’s infrastructure, the more financial and reputational damage the business suffers in the long run.

What is a botnet attack exactly, and what is the role of artificial intelligence in avoiding, detecting, and ceasing this malicious cyber threat?

What Is a Botnet Attack?

Botnets are groups of devices that connect to the internet. Whether we talk about mobile, desktop or IoT devices, threat actors who control botnets (AKA botmasters) hijack them to initiate botnet attacks.

The users whose devices are being exploited by a botnet group are often not aware their computers and mobile phones are part of the process. Botmasters can use the same device to attack multiple networks at the same time.

Generally, botnets are deployed to spam a specific website or to crash entire servers. Hackers’ intentions can be to harm the company’s reputation, finances, or both.

How does a botnet attack happen, exactly?

A robotic army controlled by the online criminal is used to send a large volume of traffic to the victim’s network or application.

As a result, the company can lose access to its network, or their application might crash — depending on the capacity of a botnet and how much traffic is used to flood the target. 

The volume of DDoS attacks on the application level is measured in RPS (requests per second). On the network level, the attack is more severe and is measured in PPS (packets per second).

The attacks can last a few minutes, days, or even months. Depending on the hacker’s intention and the power of the botnet, the network or application can completely crash or slow down to the point where users get frustrated and leave the service.

AI-Powered Botnet Attack Protection

How to protect the network or an application from malicious botnet attacks? Due to the large volume and an increasing number of threats, cybersecurity teams delegate repetitive security tasks to artificial intelligence.

Some of the tasks that can be automated with the use of AI in cybersecurity include:

  • Detection of signs of a cyberattack
  • Analysis of data generated from the security tools
  • Blocking of traffic that is deemed malicious
  • Generating reports that depict the state of security and provide actionable tips on how security teams can mitigate the issues at hand

With AI, analysis of traffic and mitigation are possible in real time. The processes are repeated at all times, and security analysts have an insight into the state of security 24/7.

AI-Based DDoS Protection

To fight botnet attacks, cybersecurity teams rely on cloud-based DDoS attack prevention tools — they are designed to detect and block unwanted traffic.

How does DDoS protection work in practice?

It identifies a large number of versatile DDoS attacks — which is important since hackers are developing new and more complex methods every day.

For instance, that could mean the detection of attacks that occur on the application, Domain Name System (DNS) or network levels.

The traffic is inspected before reaching the network of a user. It’s compared with the ever-growing database that lists versatile hacking techniques and malicious IP addresses. Within the network, packets are triple-checked to ensure that the traffic is legitimate.

When the botmaster targets an application, the automated DDoS solution automatically identifies the signature of the botnet to differentiate it from genuine human activity.

Only the traffic that is deemed “clean”, genuine and safe will reach the system of a company. The rest are blocked.

Layered AI Cybersecurity Architecture

In many cases, a botnet attack is just the start. Threat actors tend to team them up with other hacking techniques. It goes without saying that companies today need a layered and comprehensive security system to protect themselves from such versatile and depleting attacks.

In the case of Blizzard, players shared that their email addresses got compromised during the DDoS attack that occurred.

DDoS attacks are also often paired with ransomware. Once the file-encrypting malware is deployed on the network and the ransom is requested, criminals can initiate DDoS attacks to add more pressure on their victims.

Therefore, having other automated security solutions that can detect and mitigate threats in time is essential. Most businesses have layers of 40–90 cybersecurity solutions to protect their most valuable assets. 

Final Word

Botnet attacks are difficult to eradicate completely. These “zombie armies” tend to come back every year — on a larger scale and more advanced than the year before.

As mentioned, even major enterprises such as Blizzard aren’t immune to DDoS attacks — let alone companies that don’t have the same resources but rely on applications and networks in their day-to-day.

To prevent and stop threats such as botnet attacks today, artificial intelligence has a key role in cybersecurity. AI can keep up with the incoming data and continually scan the traffic to detect malicious activity, such as a vast amount of traffic fast.

 

As companies are up against more cyber attacks than ever before, and threats are getting more and more sophisticated as well as hitting the servers with more volume, organizations have to prepare beforehand — with streamlined technology that can detect issues in real-time.

The post How Do AI-Based Cyber Tools Prevent and Mitigate Botnet Attacks? appeared first on AiThority.

]]>
Is Encryption a Defense Against Ransomware? https://aithority.com/security/is-encryption-a-defense-against-ransomware/ Thu, 09 Jun 2022 16:38:58 +0000 https://aithority.com/?p=416885 Is Encryption a Defense Against Ransomware?

Ransomware appears to be rampant. Organizations all over the world are trying their best to defend against these malicious software attacks that compromise organizational data at a price. One of these measures is using encryption to defend the company from hackers. Although encryption cannot prevent ransomware, it ensures that the attackers cannot read sensitive data. By converting […]

The post Is Encryption a Defense Against Ransomware? appeared first on AiThority.

]]>
Is Encryption a Defense Against Ransomware?

Ransomware appears to be rampant. Organizations all over the world are trying their best to defend against these malicious software attacks that compromise organizational data at a price. One of these measures is using encryption to defend the company from hackers. Although encryption cannot prevent ransomware, it ensures that the attackers cannot read sensitive data.

By converting critical data into code, encryption prevents an organization from being further exploited by ransomware attackers.

Encrypted Emails

According to Mimecast, encryption of all forms should be encouraged, particularly encrypted email.

Sensitive content such as customer information, financial information, and business plans are shared through an organizational email system. Protecting that information from data loss can avoid fines, legal fees, public relations disasters, and loss of revenue.

In the modern-day workplace, email is utilized heavily and is one of the first methods used to gain access to unauthorized information. Therefore, setting up encrypted emails may seem like a simple measure but it is needed if an organization wants to build a secure system.

Latest AI ML Insights: KEVANI’s Neuroscience-based Report Decodes Role of OOH Technologies in Ad Retention

Thinking Beyond Encryption 

Information security goes beyond encryption. Protecting an organization against ransomware requires a layered approach. Encryption is a start. However, there are other pathways that must be explored to create a safe and secure environment.

Exploring Solutions 

First, there are some simple, required steps that any organization should take to guard themselves against ransomware exploits:

  • Installing anti-virus software and firewalls
  • Conducting security awareness training for employees
  • Maintaining software updates

These steps may seem basic but one missed software update or successful phishing attempt can allow ransomware hackers to gain access to the company’s data.

Beyond the basics, there must be a strategy in place and data security requires an overarching system. There are many effective cyber security systems that halt email-borne ransomware infections before they start and have cloud technology, which restores data instantly to keep an entity running.

With superb cloud technology, critical data can be restored without any infected files. Clean data restoration promotes resiliency, as the organization is not reliant on the hacker if crucial data can easily be regained.

There are a variety of ways to backup organizational data, such as creating an image backup before encryption. This backup is a single file of the operating system and all associated data. Backups must be done frequently either on-site or through the cloud.

By having backups off-site and unconnected to the organizational network, the company is again, not reliant on the hacker for critical files and does not have to pay the ransom.

The Bottom Line 

Encryption is a defense against ransomware, but it is simply the first layer of a multi-layer defense. It cannot be the only source of protection. Robust cyber security systems with cloud technology can protect data and limit any negative impact ransomware may have on the organization.

[To share your insights with us, please write to sghosh@martechseries.com]

The post Is Encryption a Defense Against Ransomware? appeared first on AiThority.

]]>