πŸ”₯ε›§Robert Osazuwa Nessε›§πŸ”₯

@osazuwa

Probabilistic machine learning, causal inference, language models. Teach at http://Altdeep.ai & @Northeastern, work at @MSFTResearch.

1,431
Followers
66
Following
10
Posts
10.10.2023
Joined

Latest posts by πŸ”₯ε›§Robert Osazuwa Nessε›§πŸ”₯ @osazuwa

My Book Is Out! Why I Wrote It and How You Can Help: Bridging the Gap Between Deep Learning and Causal Inference, A Code-First Approach

newsletter.altdeep.ai/p/my-book-is... The connection between genAI and causality is obvious, but I could never find any good learning material that made the connection.

So I wrote a book

24.02.2025 22:57 πŸ‘ 9 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Glad to hear this! Was hoping the 2nd chapter primer would hit but wasn't sure.

24.02.2025 22:13 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Would love to meet if you have the time.

30.11.2023 10:24 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

My team at
@MSFTResearch
is seeking an intern interested in task-specific distillation of #largelanguagemodels. Join us! Apply now: jobs.careers.microsoft.com/global/en/jo... #AIInternship

29.11.2023 22:39 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

My team at MSR is hiring an intern to explore the intersection of structured probabilistic reasoning and LLMs, and generative AI in general. Touches on causal reasoning, Bayesian modeling, and probabilistic ML. Join us! jobs.careers.microsoft.com/global/en/jo... #AIResearch #Internship

29.11.2023 22:14 πŸ‘ 6 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Modeling rapid language learning by distilling Bayesian priors... Humans can learn languages from remarkably little experience. Developing computational models that explain this ability has been a major challenge in cognitive science. Bayesian models that build...

I forget if I've already shared this but I'm so obsessed with this paper from the Toms (McCoy & Griffiths):

arxiv.org/abs/2305.14701

01.11.2023 20:59 πŸ‘ 11 πŸ” 6 πŸ’¬ 2 πŸ“Œ 0

nvm. I realized your original post was not about a Bayesian statistic.

01.11.2023 00:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Anyone know of any work that evaluates the relationship between LLM prompting strategies and generalizability? E.g., if you apply a bunch of prompting hacks to ramp up accuracy on a benchmark, might you be sacrificing that prompt's ability to generalize to new settings?

01.11.2023 00:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Ever heard of the Bayesian statistics LOO or WAIC?

27.10.2023 23:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
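(For readers unfamiliar with WAIC, the widely applicable information criterion mentioned above: it scores a Bayesian model's out-of-sample predictive fit from pointwise log-likelihoods over posterior draws. A rough stdlib-only sketch, assuming a hypothetical S x N matrix of log-likelihood values; in practice you'd use a library such as ArviZ rather than this.)

```python
import math
from statistics import variance

def waic(log_lik):
    """WAIC from an S x N list-of-lists of pointwise log-likelihoods
    (S posterior draws, N observations), on the deviance scale."""
    S = len(log_lik)
    N = len(log_lik[0])
    lppd = 0.0    # log pointwise predictive density
    p_waic = 0.0  # effective number of parameters
    for i in range(N):
        col = [log_lik[s][i] for s in range(S)]
        # log of the mean likelihood across draws (log-sum-exp for stability)
        m = max(col)
        lppd += m + math.log(sum(math.exp(c - m) for c in col) / S)
        # sample variance of the log-likelihood across draws
        p_waic += variance(col)
    return -2.0 * (lppd - p_waic)
```

With identical draws the penalty term is zero, so the result reduces to -2 times the total log-likelihood; lower WAIC indicates better estimated predictive performance.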

I got #COVID19. I have one toddler. We're a two-parent household with external help, so I can self-quarantine and just wait to stop feeling shitty while my wife does heavy lifting.

My heart weeps for parents who had to do this alone, especially during the pandemic.

17.10.2023 11:09 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

βœ‹

10.10.2023 17:31 πŸ‘ 5 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0