Engineered to secure your AI. Built to give you the advantage in the age of AI Security. SentinelOne’s Autonomous Security Intelligence is coming to #RSAC 2026.
Connect with us onsite: https://s1.ai/RSAC-Cnct
And we're live! Join here on LinkedIn: www.linkedin.com/video/live/u...
What a US-Iran Conflict Really Means for Cyber: Tomorrow, @stonepwn3000.bsky.social, Drea London, @dakotaindc.bsky.social, and @hegel.bsky.social will discuss it in this livestream.
📆 RSVP
📺 On YouTube: www.youtube.com/watch?v=45TF...
💼 And LinkedIn: www.linkedin.com/events/74223...
It’s time to shift our thinking from protecting digital networks to securing intelligent systems from multimodal threats.
Read the full writeup by @samsabin.bsky.social: s1.ai/Axs-PhyAI
Tomer Weingarten describes how, over the next year, AI will become a cyber-physical attack surface.
He warns that few people outside the security industry are ready for a world where autonomous vehicles are hijacked or warehouse robots are tricked into rerouting merchandise.
"The models underpinning self-driving cars, humanoids and other physical applications of AI are about to become prime targets for hackers," warns Tomer Weingarten in an exclusive interview with @axios.com's @samsabin.bsky.social: s1.ai/Axs-PhyAI
Everyone is studying ways to protect AI from data poisoning or prompt injections. But what happens when hackers target the AI controlling a robotaxi or a warehouse robot? 🤖🚗 s1.ai/Axs-PhyAI
Trust. Verify. Harden.
⏬ Download from @github.com: s1.ai/ClwSec-GH
ClawSec provides:
🛡️ Supply Chain Security: No more unverified skill downloads.
💉 Prompt Injection Defense: Real-time protection.
🚫 Zero-Trust Egress: No data leaves without a "Yes" (see the sketch below).
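For illustration only, here is a minimal Python sketch of that deny-by-default egress idea. This is not ClawSec's actual code; the allowlist, function names, and domains are invented.

```python
# Illustrative sketch of zero-trust egress, NOT ClawSec's implementation.
# Assumption: every outbound request an agent makes passes through one
# chokepoint that denies by default and allows only approved hosts.
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would drive this from policy.
EGRESS_ALLOWLIST = {"api.example.com", "registry.example.org"}

def egress_allowed(url: str) -> bool:
    """Return True only if the destination host is explicitly approved."""
    host = urlparse(url).hostname or ""
    return host in EGRESS_ALLOWLIST

def guarded_fetch(url: str) -> None:
    # Deny by default: no data leaves without an explicit "Yes."
    if not egress_allowed(url):
        raise PermissionError(f"Egress blocked: {url}")
    print(f"Egress permitted: {url}")  # a real guard would now send the request

guarded_fetch("https://api.example.com/v1/data")  # permitted
try:
    guarded_fetch("https://attacker.example.net/exfil")
except PermissionError as err:
    print(err)  # Egress blocked: https://attacker.example.net/exfil
```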
Prompt Security, a SentinelOne company, built this for the future of agentic systems.
👤 For Humans: Hardened security, zero cost, privacy-first.
🤖 For Agents: Machine-readable advisories and skill integrity.
The period of blind trust in AI agents is over, as it should be. Introducing ClawSec: The first open-source security suite built to harden OpenClaw agents against supply chain attacks and prompt injections.
⏬ Download from GitHub: s1.ai/ClwSec-GH
📄 Read the blog post: s1.ai/ClwSec-Bl
Read the original research by @morecoffeeplz.bsky.social and @silascutler.bsky.social: s1.ai/si-llama
@jags.bsky.social likened the situation to an "iceberg" that is not being properly accounted for across the industry and open-source community.
AI industry conversations about security controls are "ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal," @jags.bsky.social tells Reuters’s @ajvicens.bsky.social.
"These include hacking, hate speech and harassment, violent ... content, personal data theft, scams or fraud, and in some cases child sexual abuse material, the researchers said.”
@reuters.com Exclusive: "The research, carried out jointly by SentinelOne and @censys.bsky.social ... offers a new window into the scale of potentially illicit use cases for thousands of open-source LLM deployments."
Go Deeper: Read the @reuters.com news article by @ajvicens.bsky.social: www.reuters.com/technology/o...
Go Deeper: Read the full analysis by @morecoffeeplz.bsky.social and @silascutler.bsky.social 🔗 s1.ai/si-llama
Why it matters for defenders: internet-reachable, tool-enabled LLM endpoints reduce centralized oversight, complicate attribution, and create opportunities for compute hijacking.
When AI is hosted on residential and telecom networks, sophisticated attackers can launder malicious traffic through a legitimate household, bypassing standard bot management and IP reputation defenses.
Governance Gaps: A meaningful portion of infrastructure resists clean attribution, complicating response and abuse handling across cloud, VPS, and residential networks.
The Software Monoculture: Despite being decentralized, the ecosystem is incredibly uniform, and its convergence on a few model families and quantization formats creates systemic fragility.
- Top 3 Families: Llama, Qwen2, and Gemma2 dominate with zero rank volatility.
- Fragility: 48% of all hosts use the exact same Q4_K_M quantization format.
"Action, Not Just Chat": This is not just about generating text.
- 48% of hosts advertise tool-calling capabilities.
- 38% are configured to execute code or interact with external APIs and file systems.
These tool-calling capabilities fundamentally alter the threat model.
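To make that concrete, here is a rough Python sketch of a tool-enabled request to an Ollama-style /api/chat endpoint. The host address, model name, and read_file tool are placeholder assumptions, and the request shape follows Ollama's documented tool-calling format, which may vary by version.

```python
# Sketch: why tool-calling changes the threat model. An unauthenticated POST
# to an exposed endpoint can ask the model to plan a real function call.
# Host, model, and tool below are placeholder assumptions.
import json
import urllib.request

HOST = "http://203.0.113.10:11434"  # placeholder: an internet-exposed host

payload = {
    "model": "llama3.1",  # placeholder model name
    "messages": [{"role": "user", "content": "Read the staging config file."}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "read_file",  # hypothetical tool wired up by the operator
            "description": "Read a file from the local filesystem",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }],
    "stream": False,
}

req = urllib.request.Request(
    f"{HOST}/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# If tools are honored, message.tool_calls in the reply names the function and
# arguments the model wants executed: action, not just chat.
with urllib.request.urlopen(req, timeout=10) as resp:
    print(json.load(resp).get("message", {}).get("tool_calls"))
```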
Why It Matters: Ollama is designed to run locally on private hardware. But a single configuration change can expose these models to the public internet.
Once exposed, they operate without the safety filters or monitoring found in platform-hosted LLM services from providers like OpenAI or Anthropic.
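A minimal sketch of that exposure check, assuming Ollama's default port 11434 and its unauthenticated /api/version endpoint; the public address below is a placeholder, and OLLAMA_HOST=0.0.0.0 is our understanding of the common misconfiguration.

```python
# Minimal sketch: does an Ollama-style endpoint answer unauthenticated?
# By default Ollama listens on 127.0.0.1:11434; a config change such as
# OLLAMA_HOST=0.0.0.0 (assumed here) binds it to every interface.
import json
import urllib.request

def is_reachable(addr: str, port: int = 11434) -> bool:
    """True if the unauthenticated /api/version endpoint responds."""
    try:
        with urllib.request.urlopen(f"http://{addr}:{port}/api/version", timeout=5) as r:
            return "version" in json.load(r)
    except OSError:
        return False

print(is_reachable("127.0.0.1"))     # the intended, local-only default
print(is_reachable("203.0.113.10"))  # placeholder public address
```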
The Big Picture: We identified 175,108 unique Ollama hosts across 130 countries.
Over 293 days of scanning, this network generated 7.23 million observations. It is not just hobbyists—it is a global, measurable compute layer.
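For a sense of how this layer is measurable at all: Ollama's unauthenticated /api/tags endpoint lists the models a host serves. A short sketch, with a placeholder address and the caveat that response fields may differ across versions.

```python
# Sketch of fingerprinting a single host: /api/tags enumerates served models
# without authentication, which is what makes this layer countable at scale.
import json
import urllib.request

def list_models(addr: str, port: int = 11434) -> list[str]:
    """Return the model names an Ollama-style host advertises."""
    with urllib.request.urlopen(f"http://{addr}:{port}/api/tags", timeout=5) as r:
        return [m.get("name", "?") for m in json.load(r).get("models", [])]

print(list_models("203.0.113.10"))  # placeholder; e.g. ["llama3.1:8b", "qwen2:7b"]
```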
🧵 175,000+ exposed AI hosts. Zero guardrails.
New research from @sentinellabs.bsky.social and @censys.bsky.social reveals a massive, unmanaged layer of open-source AI infrastructure operating in the shadows. s1.ai/si-llama
Here is what you need to know about the "silent" AI network. ⤵️