
Austin Tripp

@austinjtripp

(ML ∪ Bayesian optimization ∪ active learning) ∩ (drug discovery) Researcher @valenceai.bsky.social Details: austintripp.ca

1,408
Followers
234
Following
32
Posts
17.11.2024
Joined

Latest posts by Austin Tripp @austinjtripp

(I wrote this because this was something I inferred over time, and I think it's helpful to new reviewers to explain acceptance criteria more explicitly)

25.06.2025 08:14 👍 0 🔁 0 💬 0 📌 0

That is, a paper should provide either a unique result or a unique idea (ideally both), and on top of that should have no correctness issues.

Full post is here: www.austintripp.ca/blog/2025-06...

Happy to hear comments/feedback! I know my approach is just 1 of many!

25.06.2025 08:14 👍 0 🔁 0 💬 1 📌 0

NeurIPS reviews are due in 1 week 😱

If it's your first time reviewing (or if you don't feel totally confident about accept/reject), I recently wrote a blog post where I explain how *I* approach reviewing. Essentially:

accept = correct AND (result OR idea) [...]

25.06.2025 08:14 👍 3 🔁 0 💬 1 📌 0
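The rule in the post is simple enough to state directly as code. A minimal sketch (the function and argument names are my own, purely illustrative, not from any reviewing system):

```python
def accept(correct: bool, unique_result: bool, unique_idea: bool) -> bool:
    """Decision rule from the post: accept = correct AND (result OR idea)."""
    return correct and (unique_result or unique_idea)

# A correct paper with a unique result (but no new idea) clears the bar:
assert accept(correct=True, unique_result=True, unique_idea=False)
# Correctness issues are disqualifying regardless of novelty:
assert not accept(correct=False, unique_result=True, unique_idea=True)
```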

1/ At Valence Labs, @recursionpharma.bsky.social's AI research engine, we’re focused on advancing drug discovery outcomes through cutting-edge computational methods

Today, we're excited to share our vision for building virtual cells, guided by the predict-explain-discover framework 🧵

20.05.2025 15:53 👍 13 🔁 6 💬 2 📌 2
Chebyshev Scalarization Explained I've been reading about multi-objective optimization recently. Many papers state limitations of "linear scalarization" approaches, mainly that it might not be able to represent all Pareto-optimal sol

If none of this makes any sense to you but you think multi-objective optimization is relevant, check out my full post below (where I explain MOO in more detail too). Bonus: also has an interactive visualization (kudos to Claude 3.7)

www.austintripp.ca/blog/2025-05...

16.05.2025 11:07 👍 0 🔁 0 💬 0 📌 0

2. Unfortunately, maximizing the Chebyshev objective may produce points which are *not* Pareto optimal (so some filtering might be required)

...

16.05.2025 11:07 👍 0 🔁 0 💬 1 📌 0

For anybody working on multi-objective optimization: I recently did a deep-dive on Chebyshev scalarization and wrote a blog post. Key findings:

1. Unlike linear scalarization, varying the weights of Chebyshev scalarization will find *all* points on the Pareto front (not just the convex part)

...

16.05.2025 11:07 👍 1 🔁 0 💬 1 📌 0
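Both findings from the thread can be illustrated in a few lines of NumPy. This is a hedged sketch under a maximization convention with a reference point below all objective values; the function names and the toy front are mine, not from the blog post or any library:

```python
import numpy as np

def chebyshev_scalarize(F, weights, ref_point):
    """Chebyshev score per row: min_i w_i * (f_i(x) - z_i), maximization.
    Maximizing this over candidates can reach any Pareto-optimal point
    (including non-convex parts of the front), but the argmax may only
    be *weakly* Pareto optimal, hence the dominance filter below."""
    return np.min(weights * (F - ref_point), axis=1)

def pareto_mask(F):
    """Boolean mask of non-dominated rows of F (maximization)."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        others = np.delete(F, i, axis=0)
        # dominated if some other point is >= everywhere and > somewhere
        dominated = np.any(
            np.all(others >= F[i], axis=1) & np.any(others > F[i], axis=1)
        )
        keep[i] = not dominated
    return keep

# Toy front with a non-convex "dent": linear scalarization w·f never
# selects [0.4, 0.4] for any positive weights, but Chebyshev does.
F = np.array([[1.0, 0.0], [0.0, 1.0], [0.4, 0.4], [0.2, 0.2]])
scores = chebyshev_scalarize(F, weights=np.array([1.0, 1.0]),
                             ref_point=np.array([0.0, 0.0]))
best = F[np.argmax(scores)]  # -> [0.4, 0.4]
```

Running `pareto_mask(F)` afterwards flags `[0.2, 0.2]` as dominated, which is the kind of post-hoc filtering the thread mentions for non-Pareto-optimal maximizers.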

(1/3) The poster submission deadline for MoML 2025 has been extended to May 20th, 2025.

Don’t miss an opportunity to share your work at this year’s conference.

Submit here: portal.ml4dd.com/moml-2025-po...

13.05.2025 13:19 👍 1 🔁 1 💬 1 📌 1

Really interesting essay: disagreements about AI existential risk might *really* be disagreements about the dual-use nature of future technologies (since this is the vector through which people think AI could cause extinction).

24.04.2025 11:30 👍 6 🔁 1 💬 0 📌 0
Coding python packages with AI I tried using some new LLM tools to code 2 entire python packages (instead of editing a handful of lines at a time, which is what I did previously). It went well! These tools are not perfect, but they

Claude and Gemini do a pretty good job at coding some niche python packages from just a prompt. Some editing required, but if you haven't tried it yet then I highly recommend!

www.austintripp.ca/blog/2025-04...

14.04.2025 08:38 👍 2 🔁 0 💬 1 📌 0

(also, with ICML reviewing starting, this post will probably be the first in a series of posts about peer reviewing, stay tuned! 👀)

14.02.2025 10:08 👍 0 🔁 0 💬 0 📌 0
Is offline model-based optimization a realistic problem? (I'm not conv This is a "quickpost": a post which I have tried to write quickly, without very much editing/polishing. For more details on quickposts, see this blog post. Offline model-based optimization (OMBO in

I wrote a blog post explaining this in more detail: www.austintripp.ca/blog/2025-02...

If you think I'm wrong, I'd genuinely like to hear why. Please comment in 🧵

14.02.2025 10:08 👍 1 🔁 0 💬 1 📌 0

Can anybody explain to me why so many ML papers study "offline model-based optimization"? This is essentially "1-shot optimization".

My main concern is: are there 1-shot optimization problems in real life? Papers mention "drug discovery (DD)" as an example, but 1-shot DD never happens, no? 😂

14.02.2025 10:08 👍 1 🔁 0 💬 1 📌 0
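To make the "1-shot" distinction concrete, here is a toy contrast between the two settings. Everything in it (the nearest-neighbour surrogate, the 1D objective, all function names) is illustrative, not any paper's method:

```python
def train_surrogate(data):
    """Toy surrogate: nearest-neighbour lookup over observed (x, y) pairs."""
    def predict(x):
        return min(data, key=lambda xy: abs(xy[0] - x))[1]
    return predict

def propose_best(model, candidates):
    """Pick the candidate the surrogate scores highest."""
    return max(candidates, key=model)

def offline_mbo(data, candidates):
    """'1-shot' optimization: fit once on a fixed dataset and propose
    once, with no further measurements before the final evaluation."""
    return propose_best(train_surrogate(data), candidates)

def iterative_opt(data, candidates, evaluate, n_rounds):
    """Iterative loop (closer to how drug discovery actually runs):
    each round's measurement feeds back into the surrogate."""
    for _ in range(n_rounds):
        x = propose_best(train_surrogate(data), candidates)
        data = data + [(x, evaluate(x))]
    return max(data, key=lambda xy: xy[1])[0]
```

On a toy objective like `f(x) = -(x - 3)**2` with two initial observations, the iterative loop can correct a misleading surrogate because new measurements flow back in; the offline version gets exactly one guess, which is the setting the post questions.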

People who are masking are smart for two reasons:

1. They do not want to get brain damage
2. They are not getting brain damage

07.02.2025 07:38 👍 171 🔁 28 💬 3 📌 1

Second note: there are a lot of more standard topics too, eg AI for science stuff, I'm just not posting that here.

31.01.2025 09:37 👍 0 🔁 0 💬 0 📌 0

Also funny:

- Position: ML researchers should try to ensure their code is not a heaping pile of dogsh*t

- Position: ML researchers should learn basic math (I'm talking to you, people who don't add error bars to their plots!!)

- Position: focusing on meaningless benchmarks is stupid

31.01.2025 09:36 👍 1 🔁 0 💬 0 📌 0

Other abstracts:

- Position: what if we started holding ML papers to actual standards?

- Position: reviewers should actually read the papers they are reviewing

- Position: reviewers should *at least try* to judge whether a paper's claims are true before accepting them

31.01.2025 09:29 👍 0 🔁 0 💬 0 📌 0

(Note: titles are summarized/anonymized since I don't think I'm allowed to share)

31.01.2025 09:27 👍 0 🔁 0 💬 0 📌 0

My bid screen for ICML position papers is basically:

- "Position: ML conference peer review is sh*t"

- "Position: Let's abolish conference reviewing"

- "Position: C'mon ML reviewers, surely we can do better than *this*"

Am I in a "review hate" echo chamber or is everybody else seeing this too? 😶

31.01.2025 09:26 👍 3 🔁 0 💬 4 📌 0

Easy to get started on the antiviral challenge! I plan to submit some GP baselines from my PhD work (possibly with a collaborator).

26.01.2025 21:25 👍 5 🔁 1 💬 0 📌 0

🏁 The antiviral challenge is live! 🏁

Ready to test your skills on new data? Hosted in partnership with @asapdiscovery.bsky.social and @omsf.io, we've prepared detailed notebooks showcasing how to format your data and submit your solutions. 🧑‍💻

14.01.2025 14:31 👍 17 🔁 6 💬 1 📌 2
GitHub - AustinT/phd-thesis: LaTeX code for my PhD thesis (https://doi.org/10.17863/CAM.114023)

My PhD thesis is finally online. Thanks @cambridgemlg.bsky.social for a wonderful 4.5 years learning about probabilistic ML 😍

Code: github.com/austint/phd-...

Thesis DOI: doi.org/10.17863/CAM...

13.01.2025 08:43 👍 18 🔁 2 💬 0 📌 0
What ML researchers and users get wrong: optimistic assumptions ML is often done poorly, both by "ML experts" (by which I mean people who understand the algorithms but not the data) and "ML users" (by which I mean people who understand their data, but not the algo

Slightly longer version of this post is on my blog: www.austintripp.ca/blog/2025-01...

10.01.2025 09:40 👍 0 🔁 0 💬 0 📌 0

A common issue I see in ML, both from ML "experts" and "users", is overly optimistic assumptions.

"experts" (people designing algs) usually assume the data is very simple

"users" (people using algs) usually assume that algorithms are more robust than they really are

Conclusion: always be careful!

10.01.2025 09:40 👍 19 🔁 4 💬 1 📌 0

📊 Imagining the Future of ML Evaluation in Drug Discovery

Our recent paper discussed the limitations of static leaderboards—they never tell the full story. What if we had a better and easier way of evaluating methods?

A vision for the future, in the latest blog 🧵

polarishub.io/blog/imagini...

18.12.2024 17:19 👍 5 🔁 1 💬 1 📌 1

Valence is a great place to work; come find me at NeurIPS today if you want to learn more!

15.12.2024 17:53 👍 3 🔁 0 💬 0 📌 0

This looks like a really cool competition for small molecule property prediction in both 3D and 2D; a great opportunity to work with real data 🚀

06.12.2024 19:41 👍 5 🔁 0 💬 1 📌 0

Finally, I'll be at the Recursion/Valence/NVIDIA social on Thursday Dec 12. See you there!

03.12.2024 12:20 👍 0 🔁 0 💬 0 📌 0

**Early-career researchers**: with the obvious caveat that I am still relatively junior myself, I'm happy to talk about applications, research directions, industry vs academia. I'm especially interested in helping anyone from an underrepresented background.

03.12.2024 12:20 👍 0 🔁 0 💬 1 📌 0

**BO/AL**: part of what's exciting about Recursion/Valence is the possibility of deploying BO/AL techniques at scale for real-world drug discovery tasks. We are always happy to collaborate with academics. Please reach out if you have research which you believe could be relevant 😀

03.12.2024 12:20 👍 0 🔁 0 💬 1 📌 0