in case you're curious about how angry Minnesota is about ICE, it was -20 today
This phenomenon deserves careful and empathetic study, not panic. It's a window into how humans adapt to relational tech - and how policy can respond with curiosity, care, restraint, and respect.
That’s why proposals like the GUARD Act miss the mark. Treating conversational AI as per se harmful — including for kids — flattens the nuance of how people actually use these tools and undervalues user autonomy in emotionally influential systems. cdt.org/insights/thr...
The fact that the creator of /r/MyBoyfriendisAI eventually disengaged matters. It undercuts simplistic addiction narratives and highlights user agency — exercised within product design choices.
The real question isn’t why people do this, but what responsibilities come with designing systems that are affirming, persistent, and can mimic emotional fluency.
Users may turn to AI companions for all sorts of reasons that make sense: loneliness, grief, safety, curiosity. Policy debates should start with empathy for those uses, not judgment.
Everything about this story — and the broader phenomenon — is fascinating. “Companion AI” isn’t just about fears of displacement or emotional dependence, but about how people choose to use new technology as an emotional tool, with awareness and choice.
Our recommendations aim to help governments, companies, and creators support a healthy, rights-respecting, and robust online speech environment. Hope you can check it out! cdt.org/insights/arc...
Political speech is indispensable and worth fighting for. At the same time, we all deserve an information ecosystem that’s transparent and empowers us to know who’s working (and paying) to win our votes and attention.
In "Architects of Online Influence," @isabelalinzer.bsky.social and I explore how political influencers shape online discourse — and how the financial and regulatory incentives surrounding them shape our democracy.
So excited to share @cdt.org's latest report, marrying my two great loves: free expression and campaign finance regulation! (Kidding - but only a little) cdt.org/insights/arc...
My name is Ozymandias, King of Kings. Look on my works and let me know if you have any questions! 🤗
Researchers won’t stand by as their work is cordoned off by inequitable publishing. Gatekeepers need to get on board or be left behind. www.eff.org/deeplinks/2...
🎭 TONIGHT: CDT’s free expression expert @beccabranum.bsky.social joins as a featured speaker at Theater Alliance’s Content with Your Content? — pairing the play Drink in Moderation with a conversation on how to make digital spaces safer.
📍 DC | 7:30 PM ET
🎟 Details: cdt.org/event/social...
I think so.
Adding a digital forgery crime that reintroduces a consent element creates far more ambiguity about what was intended.
All of which is to say the absence of a consent-related element paired with a consent "rule of construction" is a remnant of the 2023 SHIELD Act. On its own, that bill was understandable enough, I think.
TAKE IT DOWN maintains the latter construction, with a privacy, but not a consent, related element for real depictions. But then TAKE IT DOWN creates a crime for digital forgeries with elements that are the exact opposite: directly addressing consent, but not privacy.
That appears to have shifted in 2023, removing the element addressing consent while leaving the element on privacy expectations, paired with language similar to what's in TAKE IT DOWN specifying consent to creation =/= consent to distribution www.congress.gov/bill/118th-c...
TAKE IT DOWN is basically the SHIELD Act, expanded to cover AI depictions, plus the new notice and take-down system. SHIELD has been introduced a few times over the years. When introduced in 2019, it had both lack of consent + privacy-related elements www.congress.gov/bill/116th-c...
VICTORY! In a win for student expression on campus, the U.S. Fifth Circuit Court of Appeals today overruled a lower court's decision, sided with FIRE’s clients, and blocked West Texas A&M University’s unconstitutional ban on drag shows.
The NO FAKES Act is a veritable smorgasbord of platform governance and expression problems. Grateful to have smart thinkers like @daphnek.bsky.social lending their expertise to defend the rights of regular people who would be harmed by the bill. Congress can do better: cdt.org/insights/cdt...
The TAKE IT DOWN Act is law. Compliance is the floor: platforms must invest the time and effort necessary to protect users and meaningfully respond to image-based sexual abuse, says CDT's Becca Branum.
As the industry works to come into compliance with the TAKE IT DOWN Act, they should look to our recommendations to build notice-and-takedown systems that are easy-to-use, victim-centered, and effective. cdt.org/wp-content/u...
In our report, we examined reporting mechanisms across 8 diverse platforms and identified troubling shortcomings - including inconsistent policy language and structure, insufficient incorporation of AI-generated NDII, barriers to reporting, and limited transparency and support.
When victims encounter NDII, their first response is often to seek the removal of that content to prevent its spread. That requires easy-to-use, accessible, and effective notice-and-takedown mechanisms for people in acute distress.
The non-consensual distribution of intimate imagery (NDII) — both real and AI-generated content — is a growing & deeply harmful form of image-based sexual abuse. With the proliferation of generative AI, victims face escalating risks and endure potential exposure even after content is taken down.
I'm excited to share @cdt.org's newest report - Rapid Response: Building Victim-Centered Reporting Processes for Nonconsensual Intimate Imagery cdt.org/insights/rap...
Careful consideration of the First Amendment stakes will ensure that consumers are appropriately protected from harms without opening the door to content- or viewpoint-based restrictions on the content we read and the information we seek.
There are more questions than answers about how courts will apply First Amendment precedent to cases involving AI, but I'm glad @cdt.org and @eff.org are lending their expertise to highlight the stakes for users. cdt.org/insights/cdt...