googlers are the best part of google www.nytimes.com/2026/02/06/b...
in both pieces of news today (google shopping ads coming to search, and google enabling a competitor's search assistant (!!!!!)), we see that the idea that genAI will somehow unseat google's search dominance is proving completely false. no one can monetize their base (or strike a partnership) like google.
re: google shopping agents: given their extraordinary investment in AI, AI firms need to monetize their investment as quickly as possible. this means a tried & true playbook for big tech: 1) leverage existing customer base, 2) gain access to more of their data, 3) run ads!
re: apple deal: by falsely touting that genAI would usher in a new era of competition for google, us v. google failed to sanction much of the behavior that allowed google to amass so much power in the first place, ironically paving the way for this partnership that further concentrates google's power!
in the wake of us v. google, i wrote for @techpolicypress.bsky.social how generative AI is not going to magically unseat a dominant firm like google: it's going to enable it to become even more dominant. news in the past 24 hours shows exactly this (see thread!) www.techpolicy.press/decision-in-...
my latest for @techpolicypress.bsky.social on the US v. Google search antitrust case and just how badly the court missed the opportunity to contend with Google's power in the genAI market. genAI won't magically unseat Google's market dominance, as the court suggests; it will only deepen it
Lofty claims to "innovation" should not put people at risk, and AI firms should not be given a get-out-of-jail-free card. We wrote for @techpolicypress.bsky.social how weak regulation is just as bad as none at all, and today we can see the fruits of this develop: www.techpolicy.press/the-storm-cl...
5/ Shockingly, people can apply to the sandbox before they even have an incorporated business. This means that a firm with no clear understanding of its product's risks can effectively claim that the benefits of its hypothetical product outweigh the risks and receive immunity.
4/ Speaking of the risks, they are narrowly defined. Companies are not required to mention high-impact risks that many people face from the deployment of AI systems, including rising prices, declining wages, discrimination, or privacy violations.
3/ Companies must state in their applications that they are mitigating consumer risks, but there's no enforcement mechanism to ensure they actually follow through. This means that we will be exposed to risky AI products for up to 10 years with no legal recourse.
2/ Cruz will try to say that the sandbox is temporary. But AI companies can renew their participation for up to 8 additional years, preventing agencies from enforcing the law against them for 10 years. (Remember: the proposed moratorium was also ten years!)
1/ In the SANDBOX Act, Senator Cruz unveiled a federal sandbox program for AI companies. A federal sandbox exempts participating companies from the law for two years, in effect making it no different from a moratorium. shorturl.at/TSKn4
In today's Senate Commerce Hearing, the White House endorsed federal preemption of state AI laws. The fight against preemption did not disappear with the moratorium; in fact, Sen. Cruz introduced a bill today putting us directly on the path to preemption. A thread on its risks below: 🧵
congrats to the supreme court for ignoring decades of circuit precedent, twisting logic to avoid textbook definitions (despite being obsessed with, uh, textualism), and undermining the equal protection clause to ensure trans kids can't receive the medical care they deserve
the proposed moratorium on state AI laws is dangerous. a welcome chorus (with unlikely allies!) is rising against the ban. our latest in @techpolicypress.bsky.social argues we must use this momentum to demand more accountability from AI firms and protect against weak, industry co-opted regulation
this is excellent
"We're not interested in discussing whether or not an individual technology like ChatGPT is good. We're asking whether it's good for society that these companies have unaccountable power," says @kate-brennan.bsky.social in @wired.com
if you've been looking for an all-in-one resource to explain why it's troubling for tech companies to push AI into every corner of our social, political, and economic lives, you might love our latest report from @ainowinstitute.bsky.social called Artificial Power: ainowinstitute.org/publications...
A remedy proposal in one antitrust case may seem niche, but the deregulatory writing is on the wall. This is a moment when we need more, not less, scrutiny of how AI companies are shaping our economic, political, and cultural lives for our loss and their profit (2/2)
Bold enforcement remedies are crucial to meet this moment in AI shaped by Big Tech dominance. My latest for @techpolicypress.bsky.social argues that removing AI divestiture as a remedy in the Google search monopoly case fits the troubling anti-regulatory patterns taking shape around the world (1/2)
I spoke with @jasonplautz.bsky.social about how essential energy dominance is to this administration's policy of AI boosterism, and the harmful effects this is sure to have on climate and communities:
OpenAI furious DeepSeek might have stolen all the data OpenAI stole from us
www.404media.co/openai-furio...
In a new opinion piece for @nytimes.com, AI Now's Chief AI Scientist @heidykhlaaf.bsky.social and Co-Executive Director @smw.bsky.social warn that AI may threaten, rather than preserve, national security.
Read more: www.nytimes.com/2025/01/27/o...
As someone who has reported on AI for 7 years and covered China tech as well, I think the biggest lesson to be drawn from DeepSeek is the huge cracks it illustrates with the current dominant paradigm of AI development. A long thread. 1/
At a convening of worker advocates in California organized by @ucblaborcenter.bsky.social, @ambakak.bsky.social told @khari.bsky.social, "Labor has been at the forefront of rebalancing of power and asserting that the public has a say in determining how and under what conditions this tech is used."
If the California fight is any indication, however, even the lightest-touch regulation in the bill will face massive industry lobbying, a deeply troubling prospect. (3/3)
Watching LA burn and fire hydrants run dry knowing that large-scale AI systems consume millions of gallons of water is as urgent a "catastrophic risk" as those that may emerge from frontier models in the future (2/3)
I spoke with MIT Tech Review about reviving the failed California AI safety bill SB 1047 in New York and how the bill overlooks material harms AI is posing to people, workers, and the climate right now: (1/3) www.technologyreview.com/2025/01/09/1...
In the past *five days alone* weβve seen the NDAA authorize dozens of troubling AI provisions and heard the Biden Administration tease fast-tracking data center construction for AI. One report shouldn't shift our attention away from these.
My statement on the Bipartisan House Task Force Report on AI on @techpolicypress.bsky.social. In one breath the report cautions against material risks posed by large-scale AI, and in the other encourages widespread, uncritical adoption of AI across the economy. www.techpolicy.press/reactions-to...