Naming things at the edge of scientific understanding is hard I guess! I have to relearn every few years that Vitamin K is not the same thing as potassium.
wait, so vitamins A, B, C, and D were just named in their order of discovery?
on the other hand, learning about new productivity hacks and tools is also, in my opinion, a cope to avoid doing actual work on things that feel hard and scary. signed, someone who was frequently targeted by pocket (RIP) recommendations on firefox new tabs
Yup, android! When I discovered this feature back on my Pixel 4a it became a requirement for me for any future phone
I actually prefer a phone - extra dim, night mode, amber text for bedtime reading is hard to beat. I would consider buying an e-reader if I thought it would compete on this.
almost a decade! how can that be true when I was almost certainly in high school or middle school a decade ago. (I will not be performing subtraction to verify)
for some reason, Stephen Wolfram's walking desk strategy has not caught on despite his amazing 2019 blog post writings.stephenwolfram.com/2019/02/seek...
the idea of "deli papers" being an issue is also new to me! The type of "thinly sliced contribution" that I expected to read about is the "minimum publishable unit" approach to scoping papers and projects. But that's distinct from submitting multiple "slices" at once. (Also probably harder to solve)
the slip from "this thing causes problems for some people" to "everyone is better off avoiding this" is so prevalent in baby safety recommendations. I'm not sure it's always wrong! But it's definitely not always right.
I'm curious why SVMs in particular as the place for introducing kernels to ML students - what about least squares/ridge regression?
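to make the point concrete: kernels aren't SVM-specific. here's a toy numpy sketch of kernel ridge regression (my own illustration, with made-up data and an RBF kernel — not anyone's course material):

```python
import numpy as np

# Kernel ridge regression: the kernel trick with no SVM machinery.
# Fit f(x) = sum_i alpha_i * k(x_i, x) with ridge penalty lam.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2
K = rbf(X, X)
# Closed-form dual solution: (K + lam*I) alpha = y
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.array([[0.5]])
pred = rbf(X_test, X) @ alpha  # should track sin(0.5)
```

same closed-form linear-algebra solve as ordinary ridge, just in the kernel's feature space.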
Fwiw, I don't think the article refutes a pro-Sacks view! I read it before seeing any discourse and didn't find it to be inflammatory or damning in any way. Just more insight into Oliver Sacks as a person.
If you work at the intersection of CS and economics (or think your work is of interest to those who do!) consider submitting to the ESIF Economics and AI+ML meeting this summer at Cornell: www.econometricsociety.org/regional-act...
this includes Markov chains, Kalman filtering, optimal linear-quadratic control... can't seem to get away from linear models and quadratic costs
having just finished the lecture portion of my PhD-level ML in Feedback Systems course, it turns out that everything I understand in ML/control is basically linear least squares
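a toy check of that claim (my own sketch, not course material): a Kalman filter tracking a constant scalar state is just recursive least squares, i.e. the running mean of the measurements.

```python
import numpy as np

# Kalman filter for a static state x_{t+1} = x_t, y_t = x_t + noise.
# With a diffuse prior, the filtered estimate matches the batch
# least-squares answer (the sample mean).
rng = np.random.default_rng(1)
x_true = 2.0
ys = x_true + 0.5 * rng.standard_normal(200)

xhat, P, r = 0.0, 1e6, 0.25  # estimate, its variance, measurement noise var
for y in ys:
    K = P / (P + r)          # Kalman gain (static state, H = 1)
    xhat = xhat + K * (y - xhat)
    P = (1 - K) * P

lstsq = ys.mean()            # batch least-squares fit of a constant
# xhat ≈ lstsq, up to the vanishing weight on the diffuse prior
```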
Given the NSF funding priorities, everything's got to be AI these days. That or quantum I guess.
I launched these balloons as part of an outreach program: bowers.cornell.edu/news-stories... it was a lot of fun! Turns out weather balloons are a great way to talk about both classic and modern "AI"
And in early August, balloon #2 ended its journey in the Black Sea 🫡
small conferences are lots of fun! consider joining us next June in LA, and share with folks who might be interested.
Actually, I launched two balloons. But the first one crashed into the ocean off the coast of Nova Scotia due to a ballast issue. RIP
A weather balloon's path from North America over the Atlantic to North Africa
two weeks ago, I launched a weather balloon. After a loop-de-loop over the middle of the Atlantic, it's currently making its way over North Africa. windbornesystems.com/balloon-soci...
E.g. at Netflix: "we initially did not observe significant improvements in performance over well-tuned non-deep-learning approaches. Only when we added numerous features of heterogeneous types to the input data, deep-learning models did start to shine in our setting" ojs.aaai.org/aimagazine/i...
But more generally, this seems like a repeated pattern for deep learning. One of the main advantages is how flexible the models are, and how many disparate types of inputs they can be trained to use. This is in contrast to physics based or simpler ML models.
however, I do think the deep models are promising, particularly because it's easy for non-experts to incorporate new data sources as inputs. Credit to my brother for this perspective! www.nytimes.com/2025/07/13/b...
they also rely on the initial conditions/"state estimates" provided by physics based models
my 1000-foot view of recent advances in deep weather models is that they basically get better accuracy metrics by doing a better job of being Bayes optimal. I.e., predicting the mean of future outcomes (vs. more realistic-looking high-resolution weather patterns).
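a tiny illustration of why "predict the mean" wins on accuracy metrics but looks unrealistic (my own toy numbers, nothing to do with any actual weather model):

```python
import numpy as np

# The constant prediction minimizing mean squared error over possible
# outcomes is their mean — even if no single outcome looks like it.
outcomes = np.array([0.0, 0.0, 10.0])  # say: no rain, no rain, downpour

def mse(pred):
    return ((outcomes - pred) ** 2).mean()

best = outcomes.mean()  # 10/3: a "drizzle" that never actually happens
```

the MSE-optimal forecast (10/3) beats both realistic-looking forecasts (0 or 10) on the metric, which is roughly why mean-predicting models produce blurry fields.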
basically, there are still wins from giving the models better initial conditions. (eventually that will saturate because of the chaotic dynamics/numerical precision/etc)
they suggest that there are still gains to be made by collecting/incorporating more real-time weather data: specifically, that we could get usable predictions several more days into the future than we can currently
I've been getting interested in weather prediction, where (depending on how much you believe physics models) there are some interesting things you can say: journals.ametsoc.org/view/journal...
totally agree, the market of baby products feels pretty scammy (not to mention all the targeted ads). But the same anxieties that all these products prey on seem to be reinforced by the safety culture...or at least this is the case on reddit forums full of anxious new parents.