This is a fun game!
* it has function coloring
* it doesn't have function coloring
"Humanity is beginning to coexist with a second apex species for the first time in 40,000 years" is off-the-charts delusion
What's worrying here is that the UK government seems absolutely determined to access user private data, no matter the bad press and the consequences. And they're now willing to do it overtly. Between this and recent moves against encryption in the EU, we're going to a bad place.
An illustration of me, and the headline: "AI agents are coming for your privacy, warns Meredith Whittaker. The Signal Foundation's president worries they will also blunt competition and undermine cyber-security"
To put it bluntly, the path currently being taken towards agentic AI leads to an elimination of privacy and security at the application layer. It will not be possible for apps like Signal, the messaging app whose foundation I run, to continue to provide strong privacy guarantees, built on robust and openly validated encryption, if device-makers and OS developers insist on puncturing the metaphoric blood-brain barrier between apps and the OS. Feeding your sensitive Signal messages into an undifferentiated data slurry connected to cloud servers in service of their AI-agent aspirations is a dangerous abdication of responsibility.
Happily, it's not too late. There is much that can still be done, particularly when it comes to protecting the sanctity of private data. What's needed is a fundamental shift in how we approach the development and deployment of AI agents. First, privacy must be the default, and control must remain in the hands of application developers exercising agency on behalf of their users. Developers need the ability to designate applications as "sensitive" and mark them as off-limits to agents, at the OS level and otherwise. This cannot be a convoluted workaround buried in settings; it must be a straightforward, well-documented mechanism (similar to Global Privacy Control) that blocks an agent from accessing our data or taking actions within an app. Second, radical transparency must be the norm. Vague assurances and marketing-speak are no longer acceptable. OS vendors have an obligation to be clear and precise about their architecture and what data their AI agents are accessing, how it is being used and the measures in place to protect it.
📣 NEW -- In The Economist, discussing the privacy perils of AI agents and what AI companies and operating systems need to do--NOW--to protect Signal and much else!
www.economist.com/by-invitatio...
crate names that go hard dot com
Daniel Hugenroth presenting Secure Messaging at RustConf
The coolest gang! Congrats on a great #rustconf talk :-) @lambda.bsky.social @itsibitzi.dev @zekehg.bsky.social
This is the perfect "product" for fragile, insecure men
In this era of enshittification, I still think Craigslist is a good example that you build for the long term by not chasing *short term* profits, which means squeezing as much value out of users as quickly as possible, and instead building long-term value that people like to use.
Incredibly proud to have been a part of the team that made this a reality. Congratulations on the launch!