🚨 New FREE self-paced course!
We are excited to launch the #EpiTrainingKit #Africa: Introduction to Infectious Disease Modelling for Public Health.
🎯 Tailored for the African context
🌍 With a gender perspective
🆓 Open-access and online
#EpiTKit #Epiverse #PublicHealth
04.08.2025 18:30
👍 16
🔁 8
💬 1
📌 0
#EpiverseTRACE is now on Bluesky & LinkedIn! 🎉
We’re expanding to be more inclusive & diverse, reaching a wider audience in public health & data science.
Want to know more about what we do?⁉️🤔
🧵a thread!
11.03.2025 10:28
👍 13
🔁 12
💬 1
📌 0
Something I don't understand is: why can't LLMs write novel-length fiction yet?
They've got the context length for it. And new models seem capable of the multi-hop reasoning required for plot. So why hasn't anyone demoed a model that can write long interesting stories?
I do have a theory ... +
30.12.2024 00:17
👍 202
🔁 31
💬 48
📌 18
Finally, a Replacement for BERT: Introducing ModernBERT
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
Great blog post (by a 15-author team!) on their release of ModernBERT, the continuing relevance of encoder-only models, and how they relate to, say, GPT-4/Llama. Accessible enough that I might use this as an undergrad reading.
19.12.2024 19:11
👍 75
🔁 19
💬 1
📌 1
A photo of my open textbook, "Theory of Computing: An Open Introduction", on my bookshelf leaning up against some other classic theory texts.
With students writing my theory exam today, I figured it's a good time to share a link to my open textbook with all you current (and future!) theoreticians.
This term was the first time I used it in class, and students loved it. Big plans for future editions, so stay tuned!
taylorjsmith.xyz/tocopen/
16.12.2024 17:35
👍 14
🔁 1
💬 2
📌 0
Great tutorial on language models!
11.12.2024 08:04
👍 3
🔁 0
💬 0
📌 0
Check out this BEAUTIFUL interactive blog about cameras and lenses
ciechanow.ski/cameras-and-...
27.11.2024 02:54
👍 75
🔁 16
💬 6
📌 1
$100K or 100 Days: Trade-offs when Pre-Training with Academic Resources
Pre-training is notoriously compute-intensive and academic researchers are notoriously under-resourced. It is, therefore, commonly assumed that academics can't pre-train models. In this paper, we seek...
A timely paper exploring ways academics can pretrain larger models than they think, e.g. by trading time against GPU count.
Since the title is misleading, let me also say: US academics do not need $100k for this. They used 2,000 GPU hours in this paper; NSF will give you that. #MLSky
23.11.2024 13:50
👍 143
🔁 12
💬 10
📌 3
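The trade-off the post describes is simple arithmetic: a fixed compute budget in GPU-hours can be spent on many GPUs for a short run or few GPUs for a long one. A minimal sketch, using the 2,000 GPU-hour figure quoted in the post and assuming perfect scaling (a simplification; real throughput degrades with more GPUs):

```python
def days_needed(budget_gpu_hours: float, num_gpus: int) -> float:
    """Wall-clock days to spend a GPU-hour budget across num_gpus
    devices, assuming perfect linear scaling."""
    return budget_gpu_hours / num_gpus / 24

budget = 2_000  # GPU-hours, the figure quoted in the post
for gpus in (4, 8, 16):
    print(f"{gpus:>2} GPUs -> {days_needed(budget, gpus):.1f} days")
# prints:
#  4 GPUs -> 20.8 days
#  8 GPUs -> 10.4 days
# 16 GPUs ->  5.2 days
```

This is the "$100K or 100 Days" framing in miniature: the budget is fixed, and GPU count only moves where you sit on the time axis.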
A poem for my last day working at the writing center for the semester (by Joseph Fasano)
23.11.2024 02:09
👍 15540
🔁 3288
💬 146
📌 132