
COSMO Lab

@labcosmo

Computational Science and Modelling of materials and molecules at the atomic-scale, with machine learning.

1,308 Followers
195 Following
172 Posts
Joined 25.10.2023

Latest posts by COSMO Lab @labcosmo

Mendeleev’s nano-clusters - The Atomistic Cookbook

If this sounds too good to be true, don't believe me, see for yourself: there is already a 🧑‍🍳 📖 recipe waiting for you to try at atomistic-cookbook.org/examples/men...

03.03.2026 11:45 👍 1 🔁 0 💬 0 📌 0

The simulation is stable for a combined 1.6 ns, and the final structures make a lot of physical sense: noble gases, and at higher temperatures more volatile stuff, leave the particle. Accuracy is quantitative: the MAE on forces is ~150 meV/Å - pretty exceptional for something so outlandish!

03.03.2026 11:45 👍 1 🔁 0 💬 1 📌 0
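For readers wondering what that number means: the MAE on forces is just the average of |F_pred - F_ref| over all atoms and Cartesian components. A minimal pure-Python sketch, with made-up toy numbers rather than the actual PET-MAD data:

```python
# Mean absolute error (MAE) on forces: the average of |F_pred - F_ref|
# over all atoms and Cartesian components. Units follow the inputs
# (eV/Å here, so 0.150 eV/Å corresponds to 150 meV/Å).
def force_mae(f_pred, f_ref):
    diffs = [
        abs(p - r)
        for fp, fr in zip(f_pred, f_ref)  # loop over atoms
        for p, r in zip(fp, fr)           # loop over x, y, z components
    ]
    return sum(diffs) / len(diffs)

# Toy example with two atoms (illustrative numbers only)
pred = [[0.10, -0.20, 0.05], [0.00, 0.15, -0.30]]
ref = [[0.12, -0.18, 0.00], [0.02, 0.10, -0.25]]
mae = force_mae(pred, ref)  # 0.035 eV/Å = 35 meV/Å
```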

So let us show you just how *universal* #PET-MAD-1.5 can be. This is a movie of a parallel tempering simulation, with replicas from 300K to 3000K, of what we call a "Mendeleev cluster" - one atom each of every element from 1 to 102.

03.03.2026 11:45 👍 10 🔁 5 💬 2 📌 1
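For context, parallel tempering runs replicas at a ladder of temperatures and periodically attempts to swap configurations between them; the textbook Metropolis criterion accepts a swap with probability min(1, exp[(β_i - β_j)(E_i - E_j)]). A minimal sketch of that rule (the generic formula, not the actual simulation code behind the movie):

```python
import math
import random

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def swap_probability(e_i, e_j, t_i, t_j):
    """Metropolis acceptance probability for swapping the configurations of
    two replicas with energies e_i, e_j (eV) at temperatures t_i, t_j (K)."""
    beta_i = 1.0 / (K_B * t_i)
    beta_j = 1.0 / (K_B * t_j)
    x = (beta_i - beta_j) * (e_i - e_j)
    # favourable swaps (x >= 0) are always accepted; this also avoids overflow
    return 1.0 if x >= 0.0 else math.exp(x)

def attempt_swap(e_i, e_j, t_i, t_j, rng=random):
    """Return True if the swap is accepted."""
    return rng.random() < swap_probability(e_i, e_j, t_i, t_j)

# Handing the lower-energy configuration to the colder replica is always accepted
p = swap_probability(-1.0, -2.0, 300.0, 3000.0)  # -> 1.0
```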
High-quality, high-information datasets for universal atomistic machine learning

Looking forward to seeing what you do with the dataset, MAD-1.5, and the universal potential, PET-MAD-1.5! 🔗: 📄 arxiv.org/html/2603.02...; 🚀 github.com/lab-cosmo/up...

03.03.2026 11:15 👍 2 🔁 0 💬 0 📌 0

This is the result of a massive effort by @cesaremalosso.bsky.social, @filippobigi.bsky.social, @ppegolo.bsky.social, @jwasci.bsky.social, @mahrossi.bsky.social and Arslan Mazitov, as well as all the 🧑‍🚀 working tirelessly on the conceptual and software infrastructure.

03.03.2026 11:15 👍 2 🔁 1 💬 1 📌 0
GitHub - lab-cosmo/upet: Universal interatomic potentials for advanced materials modeling

Now, this is just a milestone on that path, but it's already something worth sharing. Thanks to arXiv you can read about it at arxiv.org/html/2603.02..., and thanks to uPET (github.com/lab-cosmo/up...) and metatomic, you can already try a universal MLIP trained on it.

03.03.2026 11:15 👍 2 🔁 2 💬 1 📌 0

📒 We have been working on a new universal atomistic dataset that combines the principles of MAD with a meta-GGA level of theory, so we can all simulate water that does not freeze at 500K 🧊, and have all our bases covered, with reference data for every isotope with a half-life above 24 hours ☢️

03.03.2026 11:15 👍 11 🔁 3 💬 1 📌 0
Snapshot of a ring-polymer instanton calculation for the reaction of methane with a H radical

New recipe just landed on the atomistic-cookbook.org 🧑‍🍳📖. Thanks to @yairlitman.bsky.social for explaining how to use ipi-code.org to perform ring-polymer instanton calculations of reaction rates that include quantum tunneling effects ⚛️⚡. Check it out 👉 atomistic-cookbook.org/examples/rin...

23.02.2026 18:52 👍 6 🔁 1 💬 0 📌 0
How to Train a Shallow Ensemble

📒 New #preprint is out! Investigating the many flavors of last-layer #UQ, Moritz and 🧑‍🚀 Matthias propose a practitioners' guide on "how to train a shallow ensemble". TL;DR: for good calibration, use NLL, include forces, and optimize the backbone, fine-tuning for speed! 📃🔗➡️ arxiv.org/html/2602.15...

18.02.2026 16:04 👍 3 🔁 2 💬 0 📌 0
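For readers new to last-layer UQ: a shallow ensemble shares one backbone and keeps several output heads, so the spread of the heads gives a mean and variance, which the Gaussian negative log-likelihood then scores against the reference. A pure-Python sketch of that loss (the generic NLL formula; the names and numbers are illustrative, not taken from the preprint):

```python
import math

def ensemble_stats(members):
    """Mean and (population) variance over the predictions of the heads."""
    n = len(members)
    mu = sum(members) / n
    var = sum((m - mu) ** 2 for m in members) / n
    return mu, var

def gaussian_nll(y, mu, var):
    """Per-sample Gaussian negative log-likelihood of reference value y."""
    return 0.5 * (math.log(2.0 * math.pi * var) + (y - mu) ** 2 / var)

# Four last-layer heads predicting one energy (illustrative values)
mu, var = ensemble_stats([1.0, 1.2, 0.8, 1.0])  # mu = 1.0, var = 0.02
loss = gaussian_nll(1.1, mu, var)
```

Minimizing this loss rewards not just accurate means but well-calibrated variances, which is why the post singles it out for calibration.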

welcome!

17.02.2026 12:24 👍 1 🔁 0 💬 0 📌 0
Resolving the body-order paradox of machine learning interatomic potentials: In many cases, the predictions of machine learning interatomic potentials (MLIPs) can be interpreted as a sum of body-ordered contributions, which is explicit…

... but it is too inflexible to learn highly-correlated fragment energies, and does not extrapolate well to different densities. If you are curious to learn more, read the whole story here pubs.aip.org/aip/jcp/arti...

14.02.2026 21:35 👍 1 🔁 0 💬 1 📌 0

Our conclusion is that the body-ordered decomposition is not a very useful prior for conditioning MLIPs. An architecture such as MACE that forces a fast-converging effective decomposition learns well in the low-data regime ...

14.02.2026 21:35 👍 1 🔁 0 💬 1 📌 0
Interpolative and extrapolative performance of different architectures for intermediate H cluster densities, with and without fragments included into training.

Last but not least, of the architectures we tried, only PET 😸 could actually make good use of learning these fragments, while MACE sees its extrapolative power degrade significantly when forced to learn the slowly-converging vacuum cluster expansion.

14.02.2026 21:35 👍 0 🔁 0 💬 1 📌 0
Changes in the learned expansion as the fraction of fragments added to the training set is increased

Third, only architectures with sufficient expressive power, such as MACE or PET, can learn the true expansion when exposed to fragments in the training set.

14.02.2026 21:35 👍 0 🔁 0 💬 1 📌 0
Body-ordered energy decomposition for low and high-density hydrogen clusters

Body-ordered energy decomposition for different MLIP architectures. None spontaneously learns the "true" decomposition

This is a story with many layers. First, the quantum cluster expansion does not converge, mostly because the fragments have crazy geometries and highly-correlated electronic structure. Second, MLIPs exposed only to compact structures learn their own effective body-ordered decomposition.

14.02.2026 21:35 👍 0 🔁 0 💬 1 📌 0
Resolving the body-order paradox of machine learning interatomic potentials: In many cases, the predictions of machine learning interatomic potentials (MLIPs) can be interpreted as a sum of body-ordered contributions, which is explicit…

A 🧑‍🚀 dream team led by Sanggyu Chong, with some FCI help from Joonho Lee @harvard.edu, tries to answer this question, and to understand what it implies for the accuracy and transferability of MLIPs in this new 📄 fresh off the press at @aip.bsky.social #JCP.
pubs.aip.org/aip/jcp/arti...

14.02.2026 21:35 👍 0 🔁 0 💬 1 📌 0
The body-ordered expansion, equations

Many #machinelearning potentials are built (or understood) in terms of "atomic cluster expansions" that link directly to a body-ordered energy decomposition, which can be computed explicitly with a sequence of electronic structure calculations. But what kind of expansion do they learn in practice? A 🧵

14.02.2026 21:35 👍 7 🔁 2 💬 1 📌 0
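For readers outside the field, the body-ordered expansion in question splits the total energy into one-body, two-body, three-body, ... contributions; written here in a generic textbook form, not necessarily the paper's exact notation:

```latex
E(\{\mathbf{r}_i\}) =
    \sum_{i} \varepsilon^{(1)}_{i}
  + \sum_{i<j} \varepsilon^{(2)}(\mathbf{r}_i, \mathbf{r}_j)
  + \sum_{i<j<k} \varepsilon^{(3)}(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_k)
  + \dots
```

Computing each term explicitly means running electronic structure calculations on all monomers, dimers, trimers, and so on, which is exactly the sequence of fragment calculations the thread refers to.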
metatensor and metatomic: Foundational libraries for interoperable atomistic machine learning: Incorporation of machine learning (ML) techniques into atomic-scale modeling has proven to be an extremely effective strategy to improve the accuracy and reduce…

Hot off the press at @aip.bsky.social #JCP, an introduction to the metatensor ecosystem. High-quality 🧑‍🚀 tools for atomistic #machinelearning - read it at pubs.aip.org/aip/jcp/arti..., check it out at metatensor.org, and find 🧑‍🍳 📖 recipes at atomistic-cookbook.org/software/met...

13.02.2026 18:56 👍 4 🔁 1 💬 0 📌 0

It is a stormy day, but the COSMO retreat is going strong!

12.02.2026 11:16 👍 2 🔁 0 💬 0 📌 0
A New Reference Model for Machine-Learning–Driven Materials Discovery - EPFL AI Center: Researchers at EPFL’s Laboratory of Computational Science and Modeling (COSMO) have reached a significant milestone in materials science, taking the top position on Matbench Discovery, the leading be...

🧑‍🚀 Filippo, Arslan and Paolo doing some PET talk with our friends at the @epfl-ai-center.bsky.social ai.epfl.ch/a-new-refere...

09.02.2026 11:13 👍 5 🔁 1 💬 0 📌 0

If you want to learn about materials modeling, from DFT to MD, well marinated in a spicy ML sauce, don't miss the @ictp.bsky.social / @nccr-marvel.bsky.social college. Details and application instructions here: indico.ictp.it/event/11146/. See you in Miramare!

08.02.2026 07:46 👍 9 🔁 3 💬 0 📌 2
International AI competition aims to speed up the development of materials for the green transition: The Pioneer Center CAPeX at DTU has announced the winners of the first phase (Stage 1) of an international competition, in partnership with the Novo Nordisk Foundation and the Danish Centre for AI Innovation (DCAI), on using machine learning models to predict synthesis recipes for novel nanoparticles.

PET continues its victory lap of benchmarks and challenges 🥇🥉. And this one has a (slightly far-fetched) end goal that would also make it useful! Congrats to Filippo and Cesare (and to @marceldotsci.bsky.social, who got an honorable mention and will also take his LOREM model further) 🚀 dtu.dk/english/news...

06.02.2026 12:53 👍 3 🔁 0 💬 0 📌 0
Sergey collects the edmx doctoral distinction prize

Congratulations to 🧑‍🚀 Sergey Pozdnyakov, who very deservedly won the @materials-epfl.bsky.social doctoral distinction award. A good time to go check on his papers, if you haven't read them already!

03.02.2026 15:43 👍 8 🔁 1 💬 0 📌 0
Cluster highlights in chemiscope 1.0 RC3

Release candidate 3 of chemiscope 1.0 is out, with class- and range-based highlighting of points. Try it, break it, report it on github.com/lab-cosmo/ch...

01.02.2026 11:59 👍 1 🔁 1 💬 0 📌 0
A new national research programme recognizes EPFL's expertise: The Swiss Confederation launches six new National Centres of Competence in Research (NCCRs). The NCCR “Separations”, which aims to accelerate research in separation sciences - the quest for chemical a...

Fantastic news from the @snf-fns.ch, who despite the budget cuts managed to fund six new NCCRs. Looking forward to doing some cool simulations to advance separation science! actu.epfl.ch/news/a-new-n...

30.01.2026 09:59 👍 5 🔁 1 💬 0 📌 0
Pareto front for PET-OMAT models

If you're scared by the 700M parameters (you shouldn't be), there's a whole set of models from 🐁 to 🦣. You can find them all on github.com/lab-cosmo/upet!

23.01.2026 07:02 👍 0 🔁 0 💬 0 📌 0
Table showing results of a few representative universal models on the matbench leaderboard

If the PET-OAM results from a week ago got you curious, you can learn more by reading arxiv.org/abs/2601.16195, including some general considerations on how to safely train and use an unconstrained ML potential.

23.01.2026 07:02 👍 2 🔁 1 💬 1 📌 0
GitHub - lab-cosmo/upet: Universal interatomic potentials for advanced materials modeling

You can fetch the model from github.com/lab-cosmo/upet, as easy as `pip install upet`, and then, for the ASE interface, `from upet.calculator import UPETCalculator; calculator = UPETCalculator(model="pet-oam-xl", version="1.0.0", device="cuda")`. Have fun and go break it!

14.01.2026 06:36 👍 0 🔁 0 💬 0 📌 0
Screenshot of the matbench discovery leaderboard as of 14.01.2026, showing a PET-based model in the top position

Not going to make a big deal out of a benchmark table, but PET just got the top spot on matbench-discovery.materialsproject.org. And don't be fooled by the huge parameter count: it's faster and can handle larger structures than eSEN-30M 🚀. Kudos to 🧑‍🚀 Filippo, Arslan and Paolo!

14.01.2026 06:32 👍 8 🔁 2 💬 1 📌 0
Zooming in on a large-scale dataset to showcase the new adaptive resolution features in chemiscope

📒 chemiscope.org 1.0.0rc1 just dropped on PyPI! We are making (a few) breaking changes to the interfaces, fixing a ton of bugs, and introducing some exciting features (you can finally load datasets with > 100k points!). We'd be grateful if you test, break, and report 🐛 github.com/lab-cosmo/ch...

05.01.2026 14:42 👍 4 🔁 2 💬 0 📌 0