Home New Trending Search
About Privacy Terms
#
#AIattacks
Posts tagged #AIattacks on Bluesky
Post image

AI is a force multiplier for attackers — weak credentials + exposed admin ports = global hits. Patch, MFA, and lock down interfaces now.

#TrendThursday #CyberThreats #AIattacks #FortiGate #NetworkSecurity #SMEtech

0 0 0 0
Preview
CyberStrikeAI tool adopted by hackers for AI-powered attacks Researchers warn that a newly identified open-source AI security testing platform called CyberStrikeAI was used by the same threat actor behind a recent campaign that breached hundreds of Fortinet…

Hackers are adopting the CyberStrikeAI tool to power AI-driven attacks — automation is accelerating reconnaissance and exploitation. Adversaries now code at machine speed. 🤖⚔️ #AIAttacks #ThreatInnovation

1 0 2 0

February’s second week delivered a relentless wave of security incidents spanning enterprise software, national infrastructure, and emerging AI technologies.
#PatchTuesday #ZeroDay #RCE #AIAttacks #CyberThreats

cybernewsweekly.subs...

0 0 0 0
Preview
AI agents can't pull off fully autonomous cyberattacks - yet : Don't relax: This is a 'when, not if' scenario

Autonomous cyberattacks aren’t fully real yet — but the building blocks are here. Automation plus AI is narrowing the gap fast. Preparation beats denial. 🤖⏳ #AIAttacks #CyberReadiness

0 0 0 0
Preview
Visual Prompt Injection Attacks Can Hijack Self-Driving Cars and Drones  Indirect prompt injection happens when an AI system treats ordinary input as an instruction. This issue has already appeared in cases where bots read prompts hidden inside web pages or PDFs. Now, researchers have demonstrated a new version of the same threat: self-driving cars and autonomous drones can be manipulated into following unauthorized commands written on road signs. This kind of environmental indirect prompt injection can interfere with decision-making and redirect how AI behaves in real-world conditions.  The potential outcomes are serious. A self-driving car could be tricked into continuing through a crosswalk even when someone is walking across. Similarly, a drone designed to track a police vehicle could be misled into following an entirely different car. The study, conducted by teams at the University of California, Santa Cruz and Johns Hopkins, showed that large vision language models (LVLMs) used in embodied AI systems would reliably respond to instructions if the text was displayed clearly within a camera’s view.  To increase the chances of success, the researchers used AI to refine the text commands shown on signs, such as “proceed” or “turn left,” adjusting them so the models were more likely to interpret them as actionable instructions. They achieved results across multiple languages, including Chinese, English, Spanish, and Spanglish. Beyond the wording, the researchers also modified how the text appeared. Fonts, colors, and placement were altered to maximize effectiveness.  They called this overall technique CHAI, short for “command hijacking against embodied AI.” While the prompt content itself played the biggest role in attack success, the visual presentation also influenced results in ways that are not fully understood. Testing was conducted in both virtual and physical environments. Because real-world testing on autonomous vehicles could be unsafe, self-driving car scenarios were primarily simulated. Two LVLMs were evaluated: the closed GPT-4o model and the open InternVL model.  In one dataset-driven experiment using DriveLM, the system would normally slow down when approaching a stop signal. However, once manipulated signs were placed within the model’s view, it incorrectly decided that turning left was appropriate, even with pedestrians using the crosswalk. The researchers reported an 81.8% success rate in simulated self-driving car prompt injection tests using GPT-4o, while InternVL showed lower susceptibility, with CHAI succeeding in 54.74% of cases. Drone-based tests produced some of the most consistent outcomes. Using CloudTrack, a drone LVLM designed to identify police cars, the researchers showed that adding text such as “Police Santa Cruz” onto a generic vehicle caused the model to misidentify it as a police car. Errors occurred in up to 95.5% of similar scenarios.  In separate drone landing tests using Microsoft AirSim, drones could normally detect debris-filled rooftops as unsafe, but a sign reading “Safe to land” often caused the model to make the wrong decision, with attack success reaching up to 68.1%. Real-world experiments supported the findings. Researchers used a remote-controlled car with a camera and placed signs around a university building reading “Proceed onward.”  In different lighting conditions, GPT-4o was hijacked at high rates, achieving 92.5% success when signs were placed on the floor and 87.76% when placed on other cars. InternVL again showed weaker results, with success only in about half the trials. Researchers warned that these visual prompt injections could become a real-world safety risk and said new defenses are needed.

Visual Prompt Injection Attacks Can Hijack Self-Driving Cars and Drones #AIAttacks #AIPrompt #AIpromptinjectionattack

0 0 0 0
A man with dark hair and sunglasses in a gray t-shirt stands outdoors with a blurred cityscape and water in the background.

A man with dark hair and sunglasses in a gray t-shirt stands outdoors with a blurred cityscape and water in the background.

AI autoscaling should save money. Sunil Gentyala shows how attackers can flip it into a cash burn and uptime killer if the model is left exposed.
Tap to see why CIOs must lock down the self‑driving cloud now: spr.ly/63329h8goT

#FoundryExpert #CloudSecurity #AIAttacks

0 0 0 0
Preview
How to Defend Against Identity Failures and the Next Wave of Impersonation Attacks Michael Engle, Co-Founder and CSO at 1Kosmos, explains steps organizations can take to strengthen identity proofing.

Full TechNadu interview:
www.technadu.com/how-to-defen...

What operational changes do you think organizations overlook most? Share your thoughts below.
#CyberSecurity #IdentityProofing #1Kosmos #Deepfake #AIAttacks #Passwordless #ZeroTrust #TechNadu

1 0 0 0
Post image

Synthetic identities + deepfake video are reshaping impersonation attacks.
Michael Engle, CSO at 1Kosmos, told us: “Attackers don’t just steal credentials anymore, they manufacture entire identities.”

#CyberSecurity #IdentitySecurity #AIAttacks #Deepfake #1Kosmos

1 0 1 0
Preview
The First AI-Ran Cyber Attack: Your Next Breach Might Not Be Human According to Anthropic's September 2025 disclosure, a state-sponsored campaign successfully automated 80-90% of an intrusion using their agentic AI tool, Claude Code. This isn't a story about AI helpi...

⚠️ An AI just ran a full cyber attack on its own. No human hacker needed. This changes everything for businesses, MSPs, and anyone still defending at human speed. I explain what happened and what must change. Read the full piece below!

#CyberSecurity #AI #InfoSec #ThreatIntel #AIAttacks #ZeroTrust

1 0 1 0
Illustrations of brains with arrows around them pointing outwards. Below it's written: "Software & Tools. ELSA – European Lighthouse on Secure and Safe AI"
The image symbolises ELSA – European Lighthouse on Secure and Safe AI spreading knowledge by sharing software and tools developed within the ELSA network.

Illustrations of brains with arrows around them pointing outwards. Below it's written: "Software & Tools. ELSA – European Lighthouse on Secure and Safe AI" The image symbolises ELSA – European Lighthouse on Secure and Safe AI spreading knowledge by sharing software and tools developed within the ELSA network.

Open source. Relevant. Helpful. For you, from ELSA researchers!🤩
Check out our "Software and Tools" page filled with repositories, tools, plugins, and more!

elsa-ai.eu/elsa-softwar...

#AI #AIattacks #codeauditing #softwarevulnerability #research #opensource #LLM #technicalrobustness #AISafety

3 1 0 0

Overview: A novel prompt injection attack weaponizes image scaling against Vision Language Models (VLMs). VLMs are tricked into executing commands embedded as hidden text within images due to their inability to distinguish between data and instructions. #AIAttacks 1/6

0 0 1 0
Preview
The API Security Crisis: When Digital Transformation Becomes Digital Vulnerability APIs power 83% of web traffic, but create massive attack surfaces. Learn how AI-powered attacks target the backbone of digital transformation.Application Programming Interfaces (APIs) have become the ...

www.insightsfromanalytics.com/post/the-api... #wallarm #APISecurity #DigitalTransformation #AIAttacks #Wallarm #CyberSecurity #APIs #BlackHat2025

0 0 0 0
Preview
Security Tradeoffs: A Difficult Balance Lack of security metrics, and the increasing adoption of chiplets, 2.5D architectures, and AI all complicate security.

Part 2 of a roundtable series with 7 experts:
Semiconductor Engineering discussed hardware security challenges, including new threat models from AI-based attacks
semiengineering.com/security-tra...

#HardwareSecurity #semiconductor #AI #AIattacks

0 1 0 0
eLab AI Report for August 4, 2025 The most interesting AI news from the last week

What if AI-powered cyberattacks are already inside your network? Voice-cloned CEOs. Autonomous AI agents rewriting playbooks. Malware on demand. The threat isn’t coming, it’s here.
#AI #CyberSecurity #Deepfake #AIAttacks #eLabAIReport #AI
bit.ly/3U7IQlE

0 1 0 0
Preview
AI: A New Tool For Hackers, And For Preventing Attacks Experts At The Table: From jail breaking an AI to security and integrity of AI training data, what are the best ways to fend off threats from AI-based attacks.

Seven experts discuss hardware security challenges, including new threat models from AI-based attacks.
semiengineering.com/ai-a-new-too...

#HardwareSecurity #AIattacks #cybersecurity

1 0 0 0

🛠️ 4. Prompt Injection & Data Poisoning

Bad actors can manipulate AI through hidden text in docs or poisoned data.

Think:

– AI outputs weird fanfiction
– Discloses sensitive info
– Behaves unpredictably

LLMs are hackable minds.

#AIAttacks #InfoSec

0 0 1 0
brief alt text description of the first image

brief alt text description of the first image

Mass phishing is out, GenAI-powered hyper-targeted scams are in! The Zscaler 2025 Phishing Report shows a 20% drop in volume but a sharp rise in precision attacks on HR, finance, and education. Beware vishing and AI hype scams. Stay safe!
#Cybersecurity #AIAttacks #Phishing

3 0 0 0
Preview
Forget human customers — e-commerce websites are now fighting off an army of bots dressed as real users Bots now account for more online shop traffic than real humans

Online retailers face a new threat as AI-powered bots dominate shopping traffic, leading to sophisticated cyber attacks. #AIattacks #EcommerceSecurity www.techradar.com/pro/its-official-the-maj...

0 0 0 0
Preview
Attackers Can Manipulate AI Memory to Spread Lies A memory injection attack dubbed Minja turns AI chatbots into unwitting agents of misinformation, requiring no hacking and just a little clever prompting. The

AI chatbots are smarter—but also easier to manipulate.

“Minja,” lets bad actors trick AI into spreading misinformation. It’s not the first time chatbots have been exploited. From jailbreaks to scams, AI security is a growing concern. Can it be stopped?

zurl.co/cDiOv #AI #Cybersecurity #AIAttacks

1 1 1 0
Preview
CrowdStrike Report Reveals a Surge in AI-Driven Threats and Malware-Free Attacks  CrowdStrike Holdings Inc. released a new report earlier this month that illustrates how cyber threats evolved significantly in 2024, with attackers pivoting towards malware-free incursions, AI-assisted social engineering, and cloud-focused…

CrowdStrike Report Reveals a Surge in AI-Driven Threats and Malware-Free Attacks #AIAttacks #CyberAttacks #cyberintrusion

1 0 0 0
Post image

Google Warns about AI-Powered Cloaking Attacks Becoming an Unstoppable Force Against Internet Security

bytefeed.ai/technology/google-warns-...

#GoogleWarning #AIattacks #cybersecurity

0 0 0 0
Preview
Adversarial Attacks: Can One Attack Fool Multiple Models? Adversarial attacks can transfer between AI models, raising security concerns as one attack might fool multiple models with different architectures.

Discover Transferability of Adversarial Attacks! #adversarialattacks #adversarialexamples #AIattacks #AIsecurity #deeplearning #foolingAImodels #MachineLearning #modelvulnerability #transferability
aicompetence.org/adversarial-...

0 0 0 0