L O U D E R
L O U D E R
@ifbookspod.bsky.social will at some point
I dropped Grammarly a LONG time ago for LanguageTool, which is genuinely multilingual, far far less obnoxious about AI, and cheaper!
"Hell is other people" is actually about people who swim super slowly in the fast lane at the pool.
dan simmons, author of multiple, deeply contemptible works of racist fiction, has died. congratulations, and no, hyperion was not as good as the obituaries will say it was
Metacritic has a better stance on generative AI than virtually all of academia right now.
This is such a cool paper.
10% of programming is letting the intrusive thoughts win, and the remaining 90% is debugging
I think part of the issue is critical AI folks get dismissed at the door if we don't give any merit to potential benefits, while boosters don't have to do the same to skeptical ideas. It's a double standard.
ok but the experience of one of the most senior people in your field calling your research a trivial waste of time early in your career is worth it for character development, I guess?
Very nice write-up of Maria Isabel's research on biodiversity trends in Colombia's Tropical Dry Forest
nouvelles.umontreal.ca/en/article/2...
The very best one I ever had in Canada would have been a 6/10 in France. They don't really compare.
#rstats is a great language for two functions:
pyCall and JuliaCall
the rest of it is meh
The "please plant native plants or go to hell" one goes EXTREMELY hard
Expert engagement and evidence use in treaty negotiations
Anna Bezruki; Chloe Batie (Yale University); Colin Carlson (Yale University)
Date written: February 10, 2026

Abstract: Science- and evidence-based decision-making are core values in global health, but in multilateral processes they often take a backseat to politics. The Pandemic Agreement requires Parties to use "the best available science and evidence as the basis for public health decisions for pandemic prevention, preparedness and response," but is the treaty itself evidence-based? In this chapter, we trace how scientific and technical evidence were introduced into the Intergovernmental Negotiating Body. External experts were a key source of advice, especially on legal issues, but they were mostly excluded from the negotiations. Over time, Member States began to treat Relevant Stakeholders as a secondary source of technical expertise, introducing potential conflicts of interest into the process. In the end, scientists and stakeholders successfully leveraged scientific authority to facilitate the incorporation of a One Health approach to pandemic prevention (Articles 4 and 5), but otherwise the treaty was shaped more by politics and pragmatism than by science. Moving forward, the Conference of Parties will be an opportunity for Member States to establish formal channels for scientific evidence synthesis and engagement, or to preserve a status quo that falls substantially behind evidence-based global governance in other areas.

Keywords: citation networks, evidence-based policy, One Health, Pandemic Agreement, scientometrics, World Health Organization
Figure 1. Evidence use in the WHO Pandemic Agreement negotiations: (A) by user type and evidence type; and (B) by individual users and documents. Flows are proportional to the number of instances. In panel B, shaded flows represent self-citations by individual contributors (purple) or by organizations (red).
Figure 2. Evidence use networks connecting documents (red) and users (blue). Edge width and node size are proportional to the number of references made. For visual clarity, only the most influential nodes are labeled. Layout is based on network connections and readability, and does not necessarily reflect node properties (e.g., centrality).
NEW‼️ For the forthcoming Pandemic Agreement travaux préparatoires edited by @alexandraphelan.bsky.social, we did a deep dive on how experts and evidence shaped the negotiations. What we found is a microcosm of global health: limited time, high stakes, and familiar shady actors like Big Pharma. (1/3)
This morning I solved a little scripting problem in ~ 10 minutes.
ChatGPT failed to solve it after 40 minutes.
Claude failed to solve it in 20 minutes, then tried to convince me that I wanted to solve a different problem.
This is why we don't let LLMs near any @epic-biodiversity.org code.
"All these modellers do is put colours on a map of the world"
Would love to attain that level of confidence in my ability to evaluate other sub-fields in my discipline. No filter, no thoughts, just confident statements.
Must be so incredibly liberating.
1. The thing about science that these jokers don't understand is that science cannot be vibe-coded.
Whatever its flaws, the point with vibe coding is that you're trying to quickly make something that sorta works, where you can immediately sorta see if it sorta works and then sorta use it.
I see - but isn't that what the defense is for?
What about the papers?
The protagonist of Jusant looking over a vast expanse of sand and rocks
Jusant is a wonderful game. Impeccable aesthetic and vibes. Definitely comforting me into my "play more indie games" 2026 challenge.
I went into shock
@natrevbiodiv.nature.com if you think this would be a good review idea, let me know!
Some of this can be solved through more collaborations, no doubt. But this is also the time to iron out some assumptions, and to link the "how" of our work (the quant machinery of SDMs) to the "why" (data biases / biological realities). Once this is done, the cross-field roadmap will write itself.
The net result is that when other fields are likely to push the methodological state of the art forward by an OoM (this is ML!) or come up with the next series of very challenging applications (this is paleo!), we make it difficult for them. Not on purpose!
Crucially, these other fields are not to blame. The literature on SDMs in ecology is reliant upon academic folklore. There are things that are only written between the lines of a hundred disconnected papers, and some of these things are wrong anyways.
There is a lot of common ecological knowledge that is lost at the boundaries between fields, whether this is about data acquisition, biological assumptions, end-uses (ML in particular struggles with this), or the correct use of information and biological underpinnings (paleo suffers most from this).
This past year, I have been fascinated by the way fields that are not biodiversity sciences / biogeography use and think about species distribution models. Most of my attention was on ML and paleo/anthropo, which, I think, are interesting end points. I believe there are important lessons for us.
true
Publishing in society journals means nothing as soon as these journals are captured by Wiley et al. and turned into mandatory open access.
And societies are to blame for letting themselves be captured and turned into a money-making machine for for-profit publishers.