This year's ICML review process was… something. The acknowledgement button felt like a placebo, and reviewer engagement was basically a ghost town, a solid 0/10. Not exactly a goldmine of constructive feedback.
How do people write a long equation in a two-column LaTeX file, like the ICML format?
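Not from the thread, but a common answer, assuming amsmath is loaded alongside the ICML style file (the equations below are made-up examples): break the equation at binary operators so it fits the column width.

% Option 1: split inside equation, aligning manually at the operators.
\begin{equation}
\begin{split}
\mathcal{L}(\theta)
  &= \mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\ell(f_\theta(x), y)\bigr] \\
  &\quad + \lambda \lVert \theta \rVert_2^2
\end{split}
\end{equation}

% Option 2: multline sets the first line flush left and the last
% flush right, with no alignment points to manage.
\begin{multline}
a_1 + a_2 + a_3 + a_4 + a_5 \\
  + a_6 + a_7 + a_8 = b
\end{multline}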
I think droid is the closest
Need some help!
How can you zero-shot transfer predictions of long-term performance across reward functions *and* risk-sensitive utilities?
We can do this via Distributional Successor Features (DSFs). Our recent work introduces the first tractable and provably convergent algorithms for learning DSFs (sketch of the underlying identity below).
#NeurIPS2024 #6704
12 Dec, 11-2
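For context, and not from the paper itself: the classical, expectation-level successor-feature identity behind zero-shot reward transfer assumes rewards linear in features \phi,

\begin{align*}
r_w(s,a) &= \phi(s,a)^\top w, \\
\psi^\pi(s,a) &= \mathbb{E}_\pi\Bigl[\sum\nolimits_{t=0}^{\infty} \gamma^t \phi(s_t, a_t) \,\Big|\, s_0 = s,\ a_0 = a\Bigr], \\
Q_w^\pi(s,a) &= \psi^\pi(s,a)^\top w,
\end{align*}

so once \psi^\pi is learned, any new reward vector w' gives Q_{w'}^\pi = \psi^\pi(s,a)^\top w' with no further learning. The distributional version replaces this expectation over discounted features with the full distribution, which is what additionally supports risk-sensitive utilities.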
Incredible visualization, Harley! Can't stop watching it!
Just realized Ilya Sutskever has won the NeurIPS Test of Time Award three years in a row: 2022 for AlexNet, 2023 for Word2Vec, and 2024 for Seq2Seq 🤯
This is insane! Hats off to him!
Not sure how up to date it is, but it has options to see scores before/after the rebuttal.
from here: papercopilot.com/statistics/i...
6/ Research integrity demands better. The #ICLR community deserves a more rigorous and fair process. If we care about quality, we need to hold conferences accountable for their decisions.
5/ And let's be clear: it's not the big tech companies bearing the brunt of this broken system. It's PhD students and independent researchers who suffer most. Low-quality peer reviews impact their careers, publications, and opportunities, while corporations skate by.
4/ When conference acceptance becomes this arbitrary, it's not just about individual papers. We're undermining the entire scientific evaluation system, and that affects the integrity of AI research as a whole.
3/ There are a few key questions that come to mind:
1- Did removing ratings of 4 and 7 distort the review system?
2- Has forcing author reviews led to systematically low-quality evaluations?
3- Could the changes in the review process be impacting overall quality standards?
2/ Here's the issue: the top ~30% of papers have average ratings starting around 5.6. But since 6 is "borderline accept," this means papers averaging below the "borderline accept" threshold will get accepted; e.g., a paper rated 5/6/6 averages ~5.7 yet still lands in the top ~30%. How does this make sense?
1/ Just did a deep dive into #ICLR paper acceptance stats, and something doesn't quite add up. With the traditional acceptance rate around ~30%, the numbers are telling a strange story. 🧵 @iclr-conf.bsky.social
RLC will be held at the Univ. of Alberta, Edmonton, in 2025. I'm happy to say that the conference website is now live: rl-conference.cc/index.html
Looking forward to seeing you all there!
@rl-conference.bsky.social
#reinforcementlearning
If we'd avoided the 'learning' label, we'd still be hearing 'but can it think?' Now we just get 'is it conscious?' Guess we leveled up the existential questions!
But it was advertised by a DeepMinder, if I'm not mistaken.
Why I think peer reviewing in ML needs a major change of approach. Right now, it's less "advancing science" and more "petty revenge for that one bad review I got." At this rate, we'll all just submit papers directly into the void.
Just wish #ICLR reviewers were participating more actively in discussions