Does someone need to take Zuck's phone away from him? No, Nanna, that's not a real Nigerian prince!
gizmodo.com/mark-zuckerb...
@nhamiel
Senior Director of Research. Black Hat Review Board Member (AI, ML, and DS track lead) and International public speaker. I focus on emerging technologies and risks at the intersection of humanity and tech. Hype Critic. My writing: https://perilous.tech
What we get is manipulation and unintended consequences. Our modern environment is stripping the very defenses we need to stay robust.
Seneca said that the excellence of mind cannot be borrowed or bought. However, that's exactly what's being pitched with generative AI. In the end, we don't get wisdom or AGI from turning books into statistics. perilous.tech/transforming...
The next few years will require vigilance and the ability to envision trade-offs even when no evidence of trade-offs is apparent. These are essential skills in a world that prioritizes dehumanization. This starts with not confusing innovation with progress. perilous.tech/confusing-in...
Absolutely.
Kurzweil lays out this exact same setup in The Singularity Is Nearer. He talks about how having external cloud storage increases memory capacity, and there'll be no difference between brain and cloud processing.
I think the problem is that this is fairly unrealistic in practice. The hacking of your own brain chip setup (assuming it's possible) is something 99% of people on the planet wouldn't do. They'll take the fully working solution with the built-in cloud storage, so they can use the system anywhere.
We need to get much better at envisioning tradeoffs. A symbiosis with AI would mean that we would never know if a thought or memory we have is truly our own. It's the end of private thoughts and the beginning of a whole new world of manipulation and unintended consequences.
Generative AI is one of the most manipulable technologies ever invented, and shoving it into systems creates an increased attack surface and unintended consequences. The future of warfare is gonna be lit, in some cases literally.
Pretending we've achieved AGI and ignoring all of the issues is not an effective control when slapping generative AI into high-risk, safety-critical use cases. While many point to reliability and human responsibility in military use, many aren't addressing the security aspects.
I first met FX in the early 2000s. We had so many laughs, so many memories. Hell, during just one notorious hacker trip in 2009, there were enough memories to last a lifetime. He will be missed.
Also, I've previously written up some observations and guidance to think about when submitting to the AI track at Black Hat. perilous.tech/black-hat-ai...
The Black Hat USA call for papers is open. This will be our 6th year of having a dedicated AI track. If you have some interesting AI research, be it attacking, defending, or applying AI, we'd love to see it. Please let me know if you have any questions. blackhat.com/call-for-pap...
The biggest hot take of the past few weeks is that software is dead. But is it really? Seems there are some fundamental realities not being considered. Regardless of success, software vulnerabilities will be absolutely everywhere. Welcome to the new reality. perilous.tech/the-death-of...
This Clinejection write-up is great, and I learned some things about GitHub actions caching, too. We experienced the same during our research for our Black Hat USA 2025 talk on attacking AI-powered developer productivity tools. adnanthekhan.com/posts/clinej...
If there was a killer use case for this "powerful agentic experience," surely they'd be touting it. But instead we are sold the ability to do things we can already do, just with less security and privacy.
I'll be speaking at Applied Machine Learning Days in Switzerland next week on the topic of AI Secure By Design. I discuss our AI Actor-based threat analysis method to simplify threat identification and get to value quickly.
MoltMatch screenshot
Proof that dudes will engineer systems burning hotter than the sun to avoid actually talking to women. Women, who I imagine are flocking in droves to this site. This is going great! The crypto aspect is the icing on the cake. The trajectory is clear.
Here we continue our technical write-ups of the exploitation of AI-powered developer productivity tools from Black Hat USA with Qodo. The takeaway here is that knowing prompt injection isn't enough.
kudelskisecurity.com/research/qod...
Neil Postman quote
Literacy is our greatest weapon to remain robust and defend our humanity in this invasive, modern environment. Here, I recommend 7 books to create more robust humans. And yes, Huxley was right.
perilous.tech/7-books-for-...
Hmm... The previous term was terrifying. Where could we look to find something more palatable? I know, dystopian science fiction!!!
The lengths people will go to get themselves owned. This has been happening since 2023 with AutoGPT, only now with deeper access. This isn't rocket science: if you give something insecure complete and unfettered access to your system and sensitive data, you're going to get owned.
Wow, I said the exact same thing back in 2024 from the stage at AgileDevOps USA. It included the specific number of 14B in losses as well. I was explaining the possibility that OpenAI could go out of business in a few years.
Treating shopping as an optimization problem could have devastating economic effects. Removing the friction from the purchasing process (aka shopping) with AI agents could cause people to buy less, not more. Retailers may want to rethink their strategy. perilous.tech/agentic-shop...
Please don't listen to me or anyone else making AI predictions for 2026. With that said, here are my 6 AI predictions for 2026. perilous.tech/6-ai-predict...
Notebook with a pen
My favorite paper at the moment. If the notebook had numbered pages and a table of contents that would make it even better.
ChatGPT Health Launch
Nothing to worry about. It supports MFA and military-grade encryption. "The company analyzed deidentified ChatGPT conversations and found that more than 230 million people globally ask health- and wellness-related questions on ChatGPT every week."
Many inefficiencies in organizations can be addressed by making simple tweaks, organizational changes, and removing unnecessary steps without adding the complexity, overhead, or potential security issues of LLMs. An LLM may be a good fit, but that should be based on analysis and realities.
The misconception that LLMs should be the first port of call for any and all problems and efficiencies can only arise in an era of hype and a lack of work experience. Anyone who's had a job before has seen inefficiencies that could easily be addressed without advanced technology.
See you Saturday at #BSidesJax