🧵10/10 Lastly, huge thanks to my co-advisors Niloy and Duygu!
For more details, check out our paper below:
Project Website: monetgpt.github.io
arXiv: arxiv.org/abs/2505.06176
🧵9/10 We quantitatively evaluate on the Adobe5k dataset and conduct user studies with both expert and novice users. Our evaluations show that MonetGPT outperforms open-source alternatives and performs comparably to Google Photos AutoEnhance (closed-source).
🧵8/10 Photo editing is subjective 🎨. Our framework adapts to user preferences, using guidance from natural-language tags like "vibrant" or "retro vibe" to produce personalized, stylistically distinct retouching plans from the same input image.
🧵7/10 Our puzzle-based training with a 'reasoning as a pathway' approach allows MonetGPT to generate detailed justifications for each edit, delivering truly explainable image retouching.
🧵6/10 🧩 Puzzle C builds planning capabilities. The model learns to generate a complete, multi-step retouching plan to enhance a photo, structuring its reasoning as a sequence of discrete issues and solutions for clarity and control.
🧵5/10 🧩 Puzzle B imparts aesthetic judgement. By ranking professionally edited photos against altered versions, the MLLM learns to recognize the visual characteristics of an optimally adjusted image for any given operation, building an internal aesthetic model.
🧵4/10 🧩 Puzzle A builds an understanding of individual operations. The MLLM learns to map visual changes in before/after images to a specific tool and its precise parameter value, effectively learning the semantics of our procedural library.
🧵3/10 Our key recipe: MLLMs struggle to predict edit values directly. We solve this by generating rich textual reasoning for each puzzle ✍️. We then fine-tune MonetGPT on this data, creating a 'reasoning pathway' that enables it to regress final adjustment values.
🧵2/10 MLLMs lack the visual understanding to plan edits. 🧠 So, we use expert photos as our ground truth and work backward, procedurally creating puzzles by assuming that any change to an expert edit makes it less optimal.
🧵1/10 Excited to share our #SIGGRAPH paper "MonetGPT: Solving Puzzles Enhances MLLMs' Image Retouching Skills"!
We explore how to make MLLMs operation-aware by solving visual puzzles, and propose a procedural framework for image retouching.
#MLLM
Hi Londoners!
Join us on April 15 for an evening on Gen AI for 3D at UCL! We have an amazing list of keynote speakers and lightning talks. Register at londongenai.github.io
Very excited to co-organize this with Michael and Preddy!
Amazon came pretty late to India, and we already had homegrown companies like Flipkart, which now competes with Amazon and is valued at $40B.
I think big tech's early access killed homegrown companies elsewhere. China and, to an extent, South Korea (Naver) have some great tech companies because of such barriers.
Who will tell the Silicon Valley tech bros that it wasn't them alone?
Haha exactly what I did today!
The image illustrates the evolution of cleaning tasks, balancing time (ATUS) and well-being (ATUS-WB). A central figure considers two options: Manual Labor (vacuum cleaner) and Automated Labor (robotic vacuum), connected by an orange arrow labeled B1-K.
🤔 What tasks do we want robots to handle? Are these preferences based on time saved or the feelings we associate with the tasks?
Introducing "Why Automate This?", a study exploring automation preferences across social groups, using feelings and time spent as key factors. (1/5)
Have a great time in Seattle!
All the papers I've reviewed still have some reviewers who haven't participated in the discussion/replied to the rebuttal at all. This is true even after the authors have nudged the reviewers a few times already :(
Added you!
Added you!
Added you!
π