> and those were objectively superior to horses in several ways
Do you think there is no angle along which ChatGPT is objectively better than the thing replacing it?
And to be clear: none of the arguments you're making are in the essay! The essay is making arguments which are far, far weaker, and mostly revolve around trivializing what AI is currently capable of.
Do you think the first cars were cheaper or more reliable than a horse? You can call me overly ambitious for making this connection, but you must also see the precedent.
I think it's reasonable to say "these essays aren't yet that good" (they're not!) and "this stuff costs a crazy amount of money upfront" (it does!), but that's not the argument an essay like this makes; the essay says "this is going nowhere", and that I find so hard to sympathize with.
But we're talking about capabilities, not profits. **We couldn't do this 5 years ago**, and now we can. It's crazy to call that fumes!!!!
I'm so confused by seeing technology that has progressed from garbled sentences to being able to write many college-level essays over a span of just 6 years… and concluding that the field is all fumes. Just mid. I'm incredulous.
Idk, I feel like our job needs to be partially educational, but I don't know how to reach folks who aren't interested in listening.
Posted on Bluesky because my Twitter will be an echo chamber of agreement.
It's hard for me to take the camp opposing AI seriously (those saying AI isn't very good, not the camp which says it's unjust) when their proponents' essays are so filled with rhetorical tricks (ending by aligning AI with DOGE?!?!?) and a lack of desire to seriously grapple with AI's value.
Today's NYT column could have been written in 1900 decrying the mid-ness of the horseless carriage. It says AI is a fizzling fad while mentioning "without any self-awareness" that AI can "predict my lecture […] anticipate essay prompts, research questions […] and then, finally, write a paper".
Someone has gotta start hyping MCBench on here. I'm lost.
Unfortunately, everyone serious (that is, actual scientists) refuses to engage on the topic in good faith. I've legitimately considered writing papers for the conferences where the anti-AI crowd congregates just to force them to the table…
The most impressive kind of paper is one whose authors convincingly show me that an empirical statement about deep neural networks holds true in a general way.
A few years ago the best work was focused, small-scale architecture/data innovation, but I feel increasingly unable to trust extrapolating innovation from small scale. That's why extrapolative-science-as-artifact is so valuable.
If you can't train big models, the best experience you can get is working on projects that develop and demonstrate a clear ability to do subtle empirical deep learning science. Hard to overstate how valuable that skill is.
It's been eye-opening to me how much AI people (a) enjoy citing LLM poetry as a capability example but (b) have a remarkably surface-level understanding of what makes for good poetry.
I realize for every 10 people voting Trump over Gaza, 9 of them were bots, but… man… I'd love to hear from that 10% right now.
this is also a comment on the problem with modern governments
It does seem emblematic of the problem with modern society that this company is lodging a court case instead of asking the government, "hey, can you adjust the law so that this requirement is generalized to the point where we can helpfully comply?"
Common sense?
I don't care so much about the charity, more about the blind trust.
I wish academic ML was a bit more skeptical of papers and less skeptical of industry. I get that it sucks to not have visibility on details, but that doesn't invalidate the results. On the flip side, there are too many papers whose messages are parroted despite sketchy experiments.
+1 this is pretty clear virtue signaling IMO
get real
you're telling me he _went up into a tree_?!?!?
Oh shit now here is a tactic I like
Ugh, it sucks that I have to decide between a feed full of people defending Elon's parenting and a feed full of people telling me I'm a moron for finding value in one of the most helpful tools of all time.
If you are someone who thinks LLMs are bad for the world, claiming they are useless will **actively hurt your cause**, because the people you need to convince are people who are using them and scratching their heads as to why you're claiming what you are.
Idk, I wanna demilitarize, not keep this stupid arms race going… I get why people are upset, especially artists and writers. I just hope they realize that most tech people are actually very sympathetic and are just being turned off by their insistence on holding a view of extreme denial.
By far the most common thing I see anti-LLM people say is "you're an idiot for using a tool which can lie to you"… which is a hilariously false equivalence.