Tickets are now on sale for this conference!
appliedml.us/2026/register/
Anthropic: "Statement from Dario Amodei on our discussions with the Department of War"
www.anthropic.com/news/stateme...
Last chance to submit a talk for the 2026 Applied Machine Learning Conference! appliedml.us/2026/cfp/ Deadline: Feb 22! ⏳ #rstats #pydata #databs
Thanks for sharing!
It's the last weekend to submit a talk to this Applied Machine Learning Conference I'm helping organize in Charlottesville in April!
If you want to give a talk or tutorial at this conference, proposals are due soon, on Feb 22. I would love to see you there!
Selected speakers get free tickets to the whole conference.
appliedml.us/2026/
Applied Machine Learning Conference Keynote Speaker: Vicki Boykis Founding ML Engineer, Malachyte Previously at Mozilla.ai / Duo / Automattic April 17-18, 2026 Charlottesville, VA https://appliedml.us
Excited to announce that the Applied ML Conference that I've been involved with in Charlottesville, VA is back again this year, April 17-18!
And we have exciting news that @vickiboykis.com will be our keynote speaker!
Learn more and submit a proposal to join her: appliedml.us/2026/
"Governments have a responsibility to regulate the use of AI in schools, making sure that the technology being used protects students' cognitive and emotional health, as well as their privacy."
(more recommendations at the end of the article)
www.npr.org/2026/01/14/n...
Agreed.
I left my Twitter account with 70k+ followers behind because of Musk, and every day, he shows why no one should be using X anymore.
And if we had any appropriate AI regulation, or anyone with power who cared, Grok would've been taken offline immediately.
LA Times: Elon Musk company bot apologizes
Newsweek: Elon Musk's Grok Apologizes
International Business Times: Grok Issues Apologies
The Hill: Musk's AI chatbot Grok apologizes
1. Headlines everywhere today read "Grok apologizes."
This is bullshit. A chatbot is not something that can apologize.
Pretending otherwise simply launders these companies' bullshit about what AI is, while diffusing blame away from the human beings who developed and released this system.
Glad to hear the book has already been helpful, then!
I hope you are enjoying my book
Oh hey, I'm in that one
I went through it last year to learn SQL!
Nice! Thanks for sharing it!
This book literally changed my life. Opened a new career field for me and got me started down a path that I truly love. Highly recommend!
Wow, I'm so glad to hear that! ❤️
Do you have a New Year's Resolution to learn SQL?
Check out my book!
sqlfordatascientists.com
Yes, that's who I was referring to in the second post when I mentioned sales.
Starting with definitions so everyone is on the same page is a good practice!
🧵
Now wondering what timekeeping source time machines would use, and whether a minuscule shift in timekeeping could have cataclysmic effects on time travelers or timelines!
The previous post is a thread, by the way
I hope (but don't expect) the growing pressures around fair content use and safety create a way to slow things down and develop the technology & test use-cases more responsibly.
At least as data scientists and developers in the industry, we can be clear about what we mean when having these conversations on here.
Technologies like ChatGPT are being pushed out into products at breakneck speed, with incredibly large investment behind them, and almost no rules.
(And yes, I think all of AI/ML needs to be applied more thoughtfully, and its use regulated, but the most urgent today are the massive and pervasive Generative AI systems)
But when I talk about it, I'm going to try to use the terms "LLM" or "Generative AI", because it feels near-impossible to have productive conversation about any of this when the term "AI" has become a catch-all to some, and something very specific to others.
I land more in the middle on LLMs: it's not the technology that's inherently bad. What's dangerous is the way content is being stolen for training, and the way barely tested tech is being put into every hand and every product when we know there are harms.
Uses range from good to very bad.