SentinelOne

@sentinelone.com

The world’s most advanced, autonomous AI-powered cybersecurity platform. We empower the world to run securely, with leading organizations trusting us to Secure Tomorrow™. Secure your enterprise: http://sentinelone.com/request-demo/

921 Followers · 12 Following · 421 Posts · Joined 14.11.2024

Latest posts by SentinelOne @sentinelone.com

Engineered to secure your AI. Built to give you the advantage in the age of AI Security. SentinelOne’s Autonomous Security Intelligence is coming to #RSAC 2026.

Connect with us onsite: https://s1.ai/RSAC-Cnct

04.03.2026 18:03 👍 0 🔁 0 💬 0 📌 0
Just a Sec — From the Front Lines: March Live Cybersecurity Briefing

Or here on YouTube: www.youtube.com/watch?v=45TF...

03.03.2026 19:05 👍 0 🔁 0 💬 0 📌 0

And we're live! Join here on LinkedIn: www.linkedin.com/video/live/u...

03.03.2026 19:05 👍 0 🔁 0 💬 1 📌 0

What a US-Iran Conflict Really Means for Cyber: Tomorrow, @stonepwn3000.bsky.social, Drea London, @dakotaindc.bsky.social, and @hegel.bsky.social will discuss in this livestream.

📆 RSVP
📺 On YouTube: www.youtube.com/watch?v=45TF...
💼 And LinkedIn: www.linkedin.com/events/74223...

02.03.2026 23:30 👍 0 🔁 0 💬 0 📌 1
A cybersecurity CEO's next fear: Hacked robots and hijacked cars
Few people outside the depths of the security industry are ready for a world where Waymos are hijacked or warehouse robots are tricked.

It’s time to shift our thinking from protecting digital networks to securing intelligent systems from multimodal threats.

Read the full writeup by @samsabin.bsky.social: s1.ai/Axs-PhyAI

13.02.2026 22:25 👍 0 🔁 0 💬 0 📌 0

Tomer describes how, over the next year, AI will become a cyber-physical attack surface.

He warns that few people outside the security industry are ready for a world where autonomous vehicles are hijacked or warehouse robots are tricked into rerouting merchandise.

13.02.2026 22:25 👍 0 🔁 0 💬 1 📌 0

"The models underpinning self-driving cars, humanoids and other physical applications of AI are about to become prime targets for hackers," warns Tomer Weingarten in an exclusive interview with @axios.com's @samsabin.bsky.social: s1.ai/Axs-PhyAI

13.02.2026 22:25 👍 0 🔁 0 💬 1 📌 0

Everyone is studying ways to protect AI from data poisoning or prompt injections. But what happens when hackers target the AI controlling a robotaxi or a warehouse robot? 🤖🚗 s1.ai/Axs-PhyAI

13.02.2026 22:25 👍 0 🔁 0 💬 1 📌 0
ClawSec: Hardening OpenClaw Agents from the Inside Out
Learn how ClawSec, by Prompt Security, secures OpenClaw agents, stopping malicious skills with zero-trust defenses.

📄 Read the blog post: s1.ai/ClwSec-Bl

09.02.2026 16:01 👍 0 🔁 0 💬 0 📌 0
GitHub - prompt-security/clawsec: A complete security skill suite for OpenClaw's family of agents.
Protect your SOUL.md (etc') with drift detection, live security recommendations, automated audits, and skill integrity verification.

Trust. Verify. Harden.

⏬ Download from @github.com: s1.ai/ClwSec-GH

09.02.2026 16:01 👍 0 🔁 0 💬 1 📌 0

ClawSec provides:
🛡️ Supply Chain Security: No more unverified skill downloads.
💉 Prompt Injection Defense: Real-time protection.
🚫 Zero-Trust Egress: No data leaves without a "Yes."

09.02.2026 16:01 👍 0 🔁 0 💬 1 📌 0

Prompt Security, a SentinelOne company, built this for the future of agentic systems.

👤 For Humans: Hardened security, zero cost, privacy-first.
🤖 For Agents: Machine-readable advisories and skill integrity.

09.02.2026 16:01 👍 0 🔁 0 💬 1 📌 0

The period of blind trust in AI agents is over, as it should be. Introducing ClawSec: The first open-source security suite built to harden OpenClaw agents against supply chain attacks and prompt injections.

⏬ Download from GitHub: s1.ai/ClwSec-GH
📄 Read the blog post: s1.ai/ClwSec-Bl

09.02.2026 16:01 👍 0 🔁 0 💬 1 📌 0
Silent Brothers | Ollama Hosts Form Anonymous AI Network Beyond Platform Guardrails
Analysis of 175,000 open-source AI hosts across 130 countries reveals a vast compute layer susceptible to resource hijacking and code execution attacks.

Read the original research by @morecoffeeplz.bsky.social and @silascutler.bsky.social: s1.ai/si-llama

30.01.2026 17:32 👍 1 🔁 0 💬 0 📌 0
Open-source AI models vulnerable to criminal misuse, researchers warn
Hackers and other criminals can easily commandeer computers operating open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating ...

Read the full article by @ajvicens.bsky.social: s1.ai/LlamaReut

30.01.2026 17:32 👍 1 🔁 0 💬 1 📌 0

@jags.bsky.social likened the situation to an "iceberg" that is not being properly accounted for across the industry and open-source community.

30.01.2026 17:32 👍 0 🔁 0 💬 1 📌 0

AI industry conversations about security controls are "ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal," @jags.bsky.social tells Reuters's @ajvicens.bsky.social.

30.01.2026 17:32 👍 0 🔁 0 💬 1 📌 0

"These include hacking, hate speech and harassment, violent ... content, personal data theft, scams or fraud, and in some cases child sexual abuse material," the researchers said.

30.01.2026 17:32 👍 0 🔁 0 💬 1 📌 0

@reuters.com Exclusive: "The research, carried out jointly by SentinelOne and @censys.bsky.social ... offers a new window into the scale of potentially illicit use cases for thousands of open-source LLM deployments."

30.01.2026 17:32 👍 1 🔁 1 💬 2 📌 0

Go Deeper: Read the @reuters.com news article by @ajvicens.bsky.social: www.reuters.com/technology/o...

29.01.2026 16:39 👍 1 🔁 0 💬 0 📌 0

Go Deeper: Read the full analysis by @morecoffeeplz.bsky.social and @silascutler.bsky.social 🔗 s1.ai/si-llama

29.01.2026 16:39 👍 1 🔁 0 💬 1 📌 1

Why it matters for defenders: internet-reachable, tool-enabled LLM endpoints reduce centralized oversight, complicate attribution, and create opportunities for compute hijacking.

29.01.2026 16:39 👍 1 🔁 0 💬 1 📌 0

When AI is hosted on residential and telecom networks, it creates an opportunity for sophisticated attackers to launder malicious traffic through a legitimate household, bypassing standard bot management and IP reputation defenses.

29.01.2026 16:39 👍 1 🔁 0 💬 1 📌 0

Governance Gaps: A meaningful portion of infrastructure resists clean attribution, complicating response and abuse handling across cloud, VPS, and residential networks.

29.01.2026 16:39 👍 1 🔁 0 💬 1 📌 0

The Software Monoculture: The ecosystem's convergence on specific model families and quantization formats creates systemic fragility.

29.01.2026 16:39 👍 2 🔁 0 💬 1 📌 0

The Software Monoculture: Despite being decentralized, the ecosystem is incredibly uniform.

- Top 3 Families: Llama, Qwen2, and Gemma2 dominate with zero rank volatility.
- Fragility: 48% of all hosts use the exact same Q4_K_M quantization format.

29.01.2026 16:39 👍 1 🔁 0 💬 1 📌 0

"Action, Not Just Chat": This is not just about generating text.

- 48% of hosts advertise tool-calling capabilities.
- 38% are configured to execute code or interact with external APIs and file systems.

These tool-calling capabilities fundamentally alter the threat model.

29.01.2026 16:39 👍 1 🔁 0 💬 1 📌 0

Why It Matters: Ollama is designed to run locally on private hardware. But a single configuration change can expose these models to the public internet.

Once exposed, they operate without the safety filters or monitoring found in platform-hosted LLM services like OpenAI or Anthropic.

29.01.2026 16:39 👍 1 🔁 0 💬 1 📌 0
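The exposure described above is externally observable: Ollama's documented API serves an unauthenticated model list at `GET /api/tags` on its default port, 11434. Below is a minimal Python sketch of such a check; the `probe_ollama` helper name and the documentation-reserved address 203.0.113.7 are illustrative, and this is a self-audit sketch, not a scanning tool.

```python
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default listen port

def parse_models(tags_json: str) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]

def probe_ollama(host: str, timeout: float = 3.0):
    """Return the model list if an unauthenticated Ollama API answers at
    `host`, or None if nothing reachable responds like Ollama."""
    url = f"http://{host}:{OLLAMA_PORT}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return parse_models(resp.read().decode())
    except (OSError, ValueError):  # connection failures, bad JSON
        return None

if __name__ == "__main__":
    # Check whether your own instance answers from outside the host;
    # 203.0.113.7 is a hypothetical, documentation-reserved address.
    models = probe_ollama("203.0.113.7")
    print("not reachable" if models is None else f"exposed: {models}")
```

A host that answers here with a model list is serving its models to anyone on the network path, without the safety filters or monitoring of platform-hosted services.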

The Big Picture: We identified 175,108 unique Ollama hosts across 130 countries.

Over 293 days of scanning, this network generated 7.23 million observations. It is not just hobbyists—it is a global, measurable compute layer.

29.01.2026 16:39 👍 1 🔁 0 💬 1 📌 0

🧵 175,000+ exposed AI hosts. Zero guardrails.

New research from @sentinellabs.bsky.social and @censys.bsky.social reveals a massive, unmanaged layer of open-source AI infrastructure operating in the shadows. s1.ai/si-llama

Here is what you need to know about the "silent" AI network. ⤵️

29.01.2026 16:39 👍 3 🔁 2 💬 1 📌 2