Co-founder and CTO of Oxide Computer Company. According to Field of Schemes, "tech exec and Oakland A's fan" -- but more of an Oakland Ballers fan now.
Sr Software Engineering Manager (@Trustpilot) based in Edinburgh 🏴.
https://severin-bruhat.com/
https://sbruhat.gumroad.com/
Collective intelligence, AI, swarm robotics, and their applications. Research director at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council
Researcher at ICAR-CNR
I develop computational tools to identify threats to online users posed by malicious actors and algorithms that behave unpredictably.
Personal Website: https://mminici.github.io
pragmatic, progressive, multidisciplinary creative, data scientist & full stack dev. occasional avid beach goer. always learning.
🔗 https://marko.tech 🏠 https://startyparty.dev
30+ yrs writing code. Go expert now deep in AI-powered development. Founder GopherGuides - Co-author Go Fundamentals. 10K+ devs trained. Videos & blog
vibe coding is the way. I bootstrapped a remote company before it was cool. Founder @PSPDFKit (exit to Insight). 🏳️‍🌈
Primarily Robotics and AI. Distinguishing hype-notism from plausibility one press release at a time. rodneybrooks.com/blog people.csail.mit.edu/brooks
Dr of Zoology
Data Scientist
Science feed co-creator
NYT bestselling author of #DoesItFart🐢💨
Physics/thermodynamics dumb, Marine energy PhD, AI stuff, etc
Opinions/Grammars/Typos are MINE!
Git: https://github.com/jm-rivcam
Blog: https://magentandcyan.wordpress.com
Ocean-Atmosphere/Engineering/Quantum lists/feeds
Location: the lands in the west
Research fellow at @scuolanormale
Research Associate @IstiCnr @kdd_lab @IMTLucca
#HumanMobility | #Segregation | #Gentrification | #UrbanDynamics
Creator of Flask • earendil.com ♥︎ writing and giving talks • Excited about AI • Husband and father of three • Inhabits Vienna; Liberal Spirit • “more nuanced in person” • More AI content on https://x.com/mitsuhiko
More stuff: https://ronacher.eu/
Long career as a dilettante at Bell Labs Research and Google, mostly building weird stuff no one uses, but occasionally getting it right, such as with UTF-8 and Go.
Social networking technology created by Bluesky.
Developer-focused account. Follow @bsky.app for general announcements!
Bluesky API docs: docs.bsky.app
AT Protocol specs: atproto.com
A programming language empowering everyone to build reliable and efficient software.
Website: https://rust-lang.org/
Blog: https://blog.rust-lang.org/
Mastodon: https://social.rust-lang.org/@rust
🇮🇹 If you like strange or curious video games, or you grew up in front of a PC in the '90s, you're at home here
🇬🇧 I'm a Twitch streamer and retrogamer, especially into '90s PC gaming. In private life, dad and programmer.
Part of Gamerevs collective
Staff Product Engineer 🇮🇹🕹️🐶
Opinions are my own
Full Stack Dev | 🥑 Developer Advocate 🚀 | Public Speaker 🎤 | Talks about JavaScript, React, open source, and web dev
Groundbreaking foundational research in Big Data Management, Machine Learning, and their intersection. #AI #Research
www.bifold.berlin
📰News: www.bifold.berlin/news-events/news
🔑Data Privacy: www.bifold.berlin/data-privacy
Art is everywhere!
🖼️ DeepStyle: http://blackhalt.redbubble.com
Open indie game marketplace and DIY game jam host
Account support? Email support@itch.io
❗📢 You can use your itchio URL as your Bluesky username, check here: https://itch.io/user/settings/bluesky
YugabyteDB is the high-performance, AI-ready, PostgreSQL-compatible distributed database for building cloud-native apps at scale.
Find out more: https://www.yugabyte.com
AWS Serverless Hero & Principal Engineer @ PostNL
Waitress turned Congresswoman for the Bronx and Queens. Grassroots elected, small-dollar supported. A better world is possible.
ocasiocortez.com
Senior Software Engineer @TeamCymru
Let’s connect:
https://www.linkedin.com/in/michaelparkadze
Artificer of Code.
OpenSource, TC39 Signals, StarbeamJS & @emberjs.com enthusiast and advocate
Former @react.dev
Where I'm at
nullvoxpopuli.com/page/links
Projects
tutorial.glimdown.com
limber.glimdown.com
#SwarmLyfe
Queen of Blades, she/her, obv
Software Engineer, handstands, locking in and "staying upwind".
https://shagag.dev/
Front-end developer, driven by design.
Former lead front-end @ cher-ami.tv.
Musician, drummer.
Based in Lyon, France.
Freelancer
↳ https://willybrauner.com
↳ https://github.com/willybrauner
There are three kinds of lies: lies, damned lies, and statistics.
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
Assistant Prof at UCSD. I work on safety, interpretability, and fairness in machine learning. www.berkustun.com
PhD Student @ UC San Diego
Researching reliable, interpretable, and human-aligned ML/AI
Institute for Explainable Machine Learning at @www.helmholtz-munich.de; Interpretable and Reliable Machine Learning group at the Technical University of Munich; part of @munichcenterml.bsky.social
I work on explainable AI at a German research facility
Researcher Machine Learning & Data Mining, Prof. Computational Data Analytics @jkulinz.bsky.social, Austria.
ML researcher, building interpretable models at Guide Labs (guidelabs.bsky.social).
Assistant Professor @ Harvard SEAS specializing in human-computer and human-AI interaction. Also interested in visualization, digital humanities, urban design.
Machine Learning Researcher | PhD Candidate @ucsd_cse | @trustworthy_ml
chhaviyadav.org
Assistant Professor @RutgersCS • Previously @MSFTResearch, @dukecompsci, @PinterestEng, @samsungresearch • Trustworthy AI • Interpretable ML • https://lesiasemenova.github.io/
Author of Interpretable Machine Learning and other books
Newsletter: https://mindfulmodeler.substack.com/
Website: https://christophmolnar.com/
Seeking superhuman explanations.
Senior researcher at Microsoft Research, PhD from UC Berkeley, https://csinva.io/
Professor in Artificial Intelligence, The University of Queensland, Australia
Human-Centred AI, Decision support, Human-agent interaction, Explainable AI
https://uqtmiller.github.io
Assistant Professor @ University of Cambridge.
Responsible AI. Human-AI Collaboration. Interactive Evaluation.
umangsbhatt.github.io
Senior Researcher @arc-mpib.bsky.social MaxPlanck @mpib-berlin.bsky.social, group leader #BOOSTING decisions: cognitive science, AI/collective intelligence, behavioral public policy, comput. social science, misinfo; stefanherzog.org scienceofboosting.org
PhD Student @ LMU Munich
Munich Center for Machine Learning (MCML)
Research in Interpretable ML / Explainable AI
Machine Learning PhD at UPenn. Interested in the theory and practice of interpretable machine learning. ML Intern@Apple.
Data Scientist @ Mass General, Beth Israel, Broad | Clinical Research | Automated Interpretable Machine Learning, Evolutionary Algorithms | UPenn MSE Bioengineering, Oberlin BA Computer Science
CS Prof at the University of Oregon, studying adversarial machine learning, data poisoning, interpretable AI, probabilistic and relational models, and more. Avid unicyclist and occasional singer-songwriter. He/him
interpretable machine learning for atmospheric and astronomical data analysis, near-IR spectra, climate tech, stars & planets; bikes, Austin, diving off bridges into the ocean.
Assistant professor at University of Minnesota CS. Human-centered AI, interpretable ML, hybrid intelligence systems.
Researcher @Microsoft; PhD @Harvard; Incoming Assistant Professor @MIT (Fall 2026); Human-AI Interaction, Worker-Centric AI
zbucinca.github.io
Professor of computer science at Harvard. I focus on human-AI interaction, #HCI, and accessible computing.
Associate professor at the University of Chicago. Working on human-centered AI, NLP, CSS. https://chenhaot.com, https://substack.com/@cichicago
Sr. Principal Research Manager at Microsoft Research, NYC // Machine Learning, Responsible AI, Transparency, Intelligibility, Human-AI Interaction // WiML Co-founder // Former NeurIPS & current FAccT Program Co-chair // Brooklyn, NY // http://jennwv.com
🎯 Making AI less evil = human-centered + explainable + responsible AI
💼 Harvard Berkman Klein Fellow | CS Prof. @Northeastern | Data & Society
🏢 Prev: Georgia Tech, {Google, IBM, MSFT} Research
🔬 AI, HCI, Philosophy
☕ F1, memes
🌐 upolehsan.com
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
Ginni Rometty Prof @NorthwesternCS | Fellow @NU_IPR | AI, people, uncertainty, beliefs, decisions, metascience | Blog @statmodeling
Human-centered AI #HCAI, NLP & ML. Director TRAILS (Trustworthy AI in Law & Society) and AIM (AI Interdisciplinary Institute at Maryland). Formerly Microsoft Research NYC. Fun: 🧗🧑‍🍳🧘⛷️🏕️. he/him.
CS prof at Haverford, Chair @acm.org U.S. tech policy, @brookings.edu nonres Senior Fellow, former White House OSTP tech policy, co-author AI Bill of Rights, research on AI and society, @facct.bsky.social co-founder
formerly @kdphd 🐦
sorelle.friedler.net
PhD Student in Machine Learning at CMU.
🐦 twitter.com/steph_milani
🌐 stephmilani.github.io
PhD @UChicagoCS / BE in CS @Umich / ✨ AI/NLP transparency and interpretability / 📷🎨 photography, painting
IBM Distinguished RSM, working on AI transparency, governance, explainability, and fairness. Proud husband & dad, Soccer lover. Posts are my own.
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
Senior Research Scientist at IBM Research and Explainability lead at the MIT-IBM AI Lab in Cambridge, MA. Interested in all things (X)AI, NLP, Visualization. Hobbies: Social chair at #NeurIPS, MiniConf, Mementor-- http://hendrik.strobelt.com
Explainability, Computer Vision, Neuro-AI.🪴 Kempner Fellow @Harvard.
Prev. PhD @Brown, @Google, @GoPro. Crêpe lover.
📍 Boston | 🔗 thomasfel.me
PhD Researcher at #MPI_SP | MS and BS at KAIST | AI ethics, HCI, justice, accountability, fairness, explainability | he/him
http://thegcamilo.github.io/
PhD student at Columbia University working on human-AI collaboration, AI creativity and explainability. prev. intern @GoogleDeepMind, @AmazonScience
asaakyan.github.io
Assistant Professor at Eindhoven University of Technology | responsible AI, fairness, etc. | maintainer at fairlearn 👩‍💻
RE at Instadeep, PhD in computational neuroscience, MSc in CS, interested in ML for life sciences.
"Seung Hyun" | MS CS & BS Applied Math @UCSD 🌊 | LPCUWC '18 🇭🇰 | AI Evaluation, Safety, Alignment | 🇰🇷
harry.scheon.com
Human/AI interaction. ML interpretability. Visualization as design, science, art. Professor at Harvard, and part-time at Google DeepMind.
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
Computation & Complexity | AI Interpretability | Meta-theory | Computational Cognitive Science
https://fedeadolfi.github.io
On the job market!
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. Prev. intern @MicrosoftResearch.
https://martinagvilas.github.io/
PhD in ML/AI | Researching Efficient ML/AI (vision & language) 🍀 & Interpretability | @SapienzaRoma @EdinburghNLP | https://alessiodevoto.github.io/ | ex @NVIDIA
ai interpretability research and running • thinking about how models think • prev @MIT cs + physics
AI Evaluation and Interpretability @MicrosoftResearch, Prev PhD @CMU.
PhD student at Northeastern, previously at EpochAI. Doing AI interpretability.
diatkinson.github.io
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU
https://peyrardm.github.io
Senior Researcher Machine Learning at BIFOLD | TU Berlin 🇩🇪
Prev at IPAM | UCLA | BCCN
Interpretability | XAI | NLP & Humanities | ML for Science
Machine learning, interpretability, visualization, Language Models, People+AI research
PhD student at Utah NLP, Mechanistic Interpretability, Trustworthy AI, Human-centered AI
AI/ML, Responsible AI @Nvidia
Assistant Prof of AI & Decision-Making @MIT EECS
I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL.
I work on value (mis)alignment in AI systems.
https://people.csail.mit.edu/dhm/
Assoc. Professor at UC Berkeley
Artificial and biological intelligence and language
Linguistics Lead at Project CETI 🐳
PI Berkeley Biological and Artificial Language Lab 🗣️
College Principal of Bowles Hall 🏰
https://www.gasperbegus.com
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
Interpretable Deep Networks. http://baulab.info/ @davidbau
AI Professor and Founding Director @ https://cair.uia.no | Chair of Technical Steering Committee @ https://www.literal-labs.ai | Book: https://tsetlinmachine.org
Stanford MS&E Postdoc | Human-Centered AI & OR
Prev: @CornellORIE @MSFTResearch, @IBMResearch, @uoftmie 🌈
Assistant Professor at Imperial College London | EEE Department and I-X.
Neuro-symbolic AI, Safe AI, Generative Models
Previously: Post-doc at TU Wien, DPhil at the University of Oxford.
Senior applied scientist @Microsoft | PhD from @UChicagoCS | Building an LLM copilot for group communications.
CS researcher in uncertainty reasoning (wherever it appears: risk analysis, AI, philosophy, ...), mostly mixing sets and probabilities. Posts mostly on this topic (in French and English), and a bit about others. Personal account and opinions.
Explainability of deep neural nets and causality https://tfjgeorge.github.io/
Associate Professor @UAntwerp, sqIRL/IDLab, imec.
#RepresentationLearning, #Model #Interpretability & #Explainability
A guy who plays with toy bricks, enjoys research and gaming.
Opinions are my own
idlab.uantwerpen.be/~joramasmogrovejo