Into creative ML/AI, NLP, data science and digital humanities, narrative, infovis, games, sf & f. Consultant, DS in Residence at Google Arts & Culture. (Lyon, FR) Newsletter arnicas.substack.com.
Philosophy professor. My new book is THE SCORE, about true play, the limits of data, and why scoring systems can lead to beautiful games and soul-killing metrics.
Researcher at OpenAI and at the GenLaw Center.
I just want things to work (:
https://katelee168.github.io/
Artist, researcher, educator, instigator. Chaotic Good.
Papa of two girls; husband to Alisha Holland; Jew; Founder & Research Lead, Microsoft Research Plural Technology Collaboratory; Founder & Chair, Plurality Institute; Founder & Board Member, RadicalxChange Foundation; co-author, Radical Markets and Plurality
Astronomer, planet hunter, Pluto killer, bear whisperer, finger-wrapped dad. Dangerous secularist. Foot soldier in the War on Cars. He/him.
mikebrown.caltech.edu
I fall in love with a new #machinelearning topic every month 🙄
Assoc. Prof. Sapienza (Rome) | Author: Alice in a differentiable wonderland (https://www.sscardapane.it/alice-book/)
Google Chief Scientist, Gemini Lead. Opinions stated here are my own, not those of Google. Gemini, TensorFlow, MapReduce, Bigtable, Spanner, ML things, ...
We're a network of people working to enhance digital interaction with culture & heritage #NDFNZ. More at www.ndf.org.nz.
Follow #NDF25 for conference updates, or visit https://www.live.ndf.org.nz/
AI, philosophy, spirituality
Head of interpretability research at EleutherAI, but posts are my own views, not Eleuther’s.
https://unireps.org
Discover why, when and how distinct learning processes yield similar representations, and the degree to which these can be unified.
Understands comics… mostly… Now on sale: THE CARTOONISTS CLUB, co-created with the legendary Raina Telgemeier! https://bit.ly/CartoonistClub And I'm also working on a massive book about visual communication; learn more at scottmccloud.com.
PhD student at MIT.
Working on mechanistic interpretability and AI safety.
Postdoc @ISTAustria 🧑🏻‍💻 | Organizer of @unireps.bsky.social | Member @ellis.eu | Prev. PhD @SapienzaRoma @ELLISforEurope | @amazon AWS AI | @autodesk AI Lab | (he/him)
Master's student at ENS Paris-Saclay / aspiring AI safety researcher / improviser
Prev research intern @ EPFL w/ wendlerc.bsky.social and Robert West
MATS Winter 7.0 Scholar w/ neelnanda.bsky.social
https://butanium.github.io
Postdoc at Northeastern and incoming Asst. Prof. at Boston U. Working on NLP, interpretability, causality. Previously: JHU, Meta, AWS
Interpretable Deep Networks. http://baulab.info/ @davidbau
https://mega002.github.io
Gemini Post-Training ⚫️ Research Scientist at Google DeepMind ⚫️ PhD from ETH Zurich
AI Safety Research // Software Engineering
Postdoc @ Northeastern, @ndif-team.bsky.social w/ @davidbau.bsky.social. Interpretability ∩ HCI ∩ #NLProc. Built @inseq.org. Prev: PhD @gronlp.bsky.social, ML @awscloud.bsky.social
gsarti.com
Waiting on a robot body. All opinions are universal and held by both employers and family. ML/NLP professor.
nsaphra.net
Machine learning haruspex
NLP PhD student at Imperial College London and Apple AI/ML Scholar.
Machine learning PhD student @ Blei Lab in Columbia University
Working in mechanistic interpretability, nlp, causal inference, and probabilistic modeling!
Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams.
🔗 www.sweta.dev
Machine Learning PhD Student
@ Blei Lab & Columbia University.
Working on probabilistic ML | uncertainty quantification | LLM interpretability.
Excited about everything ML, AI and engineering!
PhD student at Vector Institute / University of Toronto. Building tools to study neural nets and find out what they know. He/him.
www.danieldjohnson.com
Mechanistic interpretability
Creator of https://github.com/amakelov/mandala
prev. Harvard/MIT
machine learning, theoretical computer science, competition math.
Post-doc @ Harvard. PhD UMich. Spent time at FAIR and MSR. ML/NLP/Interpretability
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. Prev. intern @MicrosoftResearch.
https://martinagvilas.github.io/
ml/nlp phding @ usc, currently visiting harvard, scientisting @ startup;
interpretability & training & reasoning
iglee.me
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
Computation & Complexity | AI Interpretability | Meta-theory | Computational Cognitive Science
https://fedeadolfi.github.io
On the job market!
Scruting matrices @ Apollo Research
PhD student at UC Berkeley. NLP for signed languages and LLM interpretability. kayoyin.github.io
🏂🎹🚵‍♀️🥋
Aspiring 10x reverse engineer at Google DeepMind
PhD at EPFL with Robert West, Master's at ETHZ
Mainly interested in Language Model Interpretability and Model Diffing.
MATS 7.0 Winter 2025 Scholar w/ Neel Nanda
jkminder.ch
PhD student @CMU LTI - working on model #interpretability, student researcher @google; prev predoc @ai2; intern @MSFT
nishantsubramani.github.io
CS PhD Student, Northeastern University - Machine Learning, Interpretability https://ericwtodd.github.io
member of technical staff @stanfordnlp.bsky.social
Postdoc at the interpretable deep learning lab at Northeastern University, deep learning, LLMs, mechanistic interpretability
ai interpretability research and running • thinking about how models think • prev @MIT cs + physics
Assistant Professor @HopkinsMedicine @JHUPath
https://scholar.google.com/citations?user=dGBD72YAAAAJ
ML/AI researcher @JohnsHopkins
PhDing @AIM_Harvard @MassGenBrigham|PhD Fellow @Google | Previously @Bos_CHIP @BrandeisU
More robustness and explainability 🧐 for Health AI.
shanchen.dev
Associate Professor @UAntwerp, sqIRL/IDLab, imec.
#RepresentationLearning, #Model #Interpretability & #Explainability
A guy who plays with toy bricks, enjoys research and gaming.
Opinions are my own
idlab.uantwerpen.be/~joramasmogrovejo
NLP & Interpretability | PhD Student @ University of Trieste & Laboratory of Data Engineering of Area Science Park | Prev MPI-IS
Ph.D. student at @jhuclsp, human LM that hallucinates. Formerly @MetaAI, @uwnlp, and @AWS they/them 🏳️‍🌈 #NLProc #NLP Crossposting on X.
Tell me about challenges, the unbelievable, the human mind and artificial intelligence, thoughts, social life, family life, science and philosophy.
Laplace Junior Chair, Machine Learning
ENS Paris. (prev ETH Zurich, Edinburgh, Oxford..)
Working on mathematical foundations/probabilistic interpretability of ML (what NNs learn🤷♂️, disentanglement🤔, king-man+woman=queen?👌…)
Postdoc at ETH. Formerly, PhD student at the University of Cambridge :)
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany
dilyabareeva.github.io
PhD @ ETHZ - LLM Interpretability
alestolfo.github.io
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard.
Prev. PhD @Brown, @Google, @GoPro. Crêpe lover.
📍 Boston | 🔗 thomasfel.me
PhD Student at the ILLC / UvA doing work at the intersection of (mechanistic) interpretability and cognitive science. Current Anthropic Fellow.
hannamw.github.io
PhD student at Northeastern, previously at EpochAI. Doing AI interpretability.
diatkinson.github.io
Ph.D. Student at UNC NLP | Prev: Apple, Amazon, Adobe (Intern) vaidehi99.github.io | Undergrad @IITBombay
PhD student in Responsible NLP at the University of Edinburgh, curious about interpretability and alignment
CS Ph.D. Candidate @ Northeastern | Interpretability + Data Science | BS/MS @ Brown
koyenapal.github.io
Senior Research Scientist at Google DeepMind.
🌐 jasmijn.bastings.me
Research Engineer @ FAR.AI
taufeeque9.github.io
Physics, Visualization and AI PhD @ Harvard | Embedding visualization and LLM interpretability | Love pretty visuals, math, physics and pets | Currently into manifolds
Wanna meet and chat? Book a meeting here: https://zcal.co/shivam-raval
Interpretability researcher at @eleutherai.bsky.social
Computer vision, generative models, and a bit of DJ’ing. PhD @ Mila & McGill. Co-managing n10.as. Prev: Meta, Element AI.
🔗 https://mtesfaldet.net
Spills ink, rolls dice.
hicksvillecomics.com
NZ. Creative. Joycraft. Unista. Runny. Radio Suntan. 2Tracker. Red Peak.
Sr. ML Engineer | Keras 3 Collaborator | @GoogleDevExpert in Machine Learning | @TensorFlow addons maintainer | ML is all I do | Views are my own!
Working towards the safe development of AI for the benefit of all at Université de Montréal, LawZero and Mila.
A.M. Turing Award Recipient and most-cited AI researcher.
https://lawzero.org/en
https://yoshuabengio.org/profile/
Secular Bayesian.
Professor of Machine Learning at Cambridge Computer Lab
Talent aficionado at http://airetreat.org
Alum of Twitter, Magic Pony and Balderton Capital
Research Scientist at DeepMind. Opinions my own. Inventor of GANs. Lead author of http://www.deeplearningbook.org . Founding chairman of www.publichealthactionnetwork.org
San Diego Dec 2-7, 2025 and Mexico City Nov 30-Dec 5, 2025. Comments to this account are not monitored. Please send feedback to townhall@neurips.cc.
research scientist at google deepmind.
phd in neural nonsense from stanford.
poolio.github.io
Cofounded and lead PyTorch at Meta. Also dabble in robotics at NYU.
AI is delicious when it is accessible and open-source.
http://soumith.ch
Principal Researcher in BioML at Microsoft Research. He/him/他. 🇹🇼 yangkky.github.io
ML, Psychology, Art, Materials Informatics, Espresso, &c.
he/him
Artist, Neurographer, AI Prompteur, Purveyor of Systems, Data Dumpster Diver, Information Recycler
AI/generative artist. Writes her own code. Absolute power is a door into dreaming.
Associate Professor in EECS at MIT. Neural nets, generative models, representation learning, computer vision, robotics, cog sci, AI.
https://web.mit.edu/phillipi/
So far I have not found the science, but the numbers keep on circling me.
Views my own, unfortunately.
AI @ OpenAI, Tesla, Stanford
Professor, Programmer in NYC.
Cornell, Hugging Face 🤗
Research Scientist Meta/FAIR, Prof. University of Geneva, co-founder Neural Concept SA. I like reality.
https://fleuret.org
AI indie hacker. Prev: founded Clipdrop (YCW21, acq. Stability AI), resident at Google A&C Lab, Prof. & head of MID at ECAL
Assistant Professor of the Generative Intelligence Lab at Carnegie Mellon University. Understanding and creating pixels. All the code and models are available at http://github.com/junyanz.
🥇 LLMs together (co-created model merging, BabyLM, textArena.ai)
🥈 Spreading science over hype in #ML & #NLP
Proud shareLM💬 Donor
@IBMResearch & @MIT_CSAIL