Fair enough, we're probably further than the onset stage at this point.
@ilintar.org
Philosopher of language, Java software engineer, occasional rants about politics and religion. Currently residing in Warsaw, Poland. Moderately conservative institutionalist, slightly anti-clerical Catholic. No hate speech allowed.
Yeah, it's called "onset of dementia".
Yup. It's one of those cases where in 500 years (if humankind is still alive by then), historians will quarrel about this and claim "there must be critical pieces of information missing, it's not possible that it simply happened that way, people wouldn't be that stupid".
The national suicide being committed by the Republican-led United States has no historical precedent. Centuries from now (if humanity survives that long), historians will struggle to explain how such a thing was possible.
"Sorry, all our FBI agents have been busy covering up investigations or assisting Kash Patel on his sports trips."
Yeah, honestly I'm expecting him to either die or get 25th-Amendment'd this year. The degradation is quite rapid and striking; I don't see how they manage it for another year, let alone three.
I mean, it'd be hilarious if the DoJ got itself into contempt proceedings for telling its employees, who are probably drowning in cases resulting from its own lawlessness, to use AI tools. Just saying.
Next stop: she dumps him for using her as an in-article self-aggrandizing prop.
But he's just a poor, innocent little sweetheart, look at him.
Exactly this. He wants to TACO like with all previous cases, but Netanyahu designed this one so there *would be* no TACO option on the table.
I see my favorite market index seems to be making a comeback:
"Meanwhile, we do war crimes and masturbate to videos of SOME GOOD OLD ULTRAVIOLENCE"
No, "standing order" and "eternal order" are by no means the same. But I see you're not interested in the distinction, so I'll see myself out.
Now, there might be *branches* of conservatism that believe in *eternal* order, but those are very, very extreme ones. Most conservatives believe in some degree of institutional change - they just don't see change as something that is inherently beneficial.
That's not what conservatism is, at its core.
Conservatism is the belief that there is merit to the *standing* order because it was arrived at by trial and error: the longer institutions stand, the more likely they are to be actually worthwhile, and the riskier tinkering with them becomes.
Jesus Christ, we have a 14-year-old wanker literally running the biggest armed force in the world...
"You're absolutely right! Upon analyzing the image further, I realize that was a school, not a military base. Shall I prepare a coverup story for you?"
I don't think this guy has heard of a concept named "penance".
How about stepping down from your seat, Mr. Congressman? Nah, I forgot, penance for killing someone must be something completely inconsequential, like saying "Our Father" 10x. God forbid it's an actual act of responsibility.
The tool actually gave him a solution to a problem he couldn't solve. It didn't correctly generalize the solution, but it still *moved the problem forward*.
That's what we're trying to say here all the time: LLMs are a *useful tool*. They're not useless, they're also not AGI or standalone oracles.
Thanks :) running a little quantization lab right now to explore whether various low-bit quantization methods can be improved.
Strong "This is fine" vibes.
Generally you could tell the difference by asking people who are genuinely knowledgeable in the field. Most true crypto experts would tell you that its uses were very narrow, technical, and not easily extendable. Not so for LLMs: here the experts in the field agree that it's been a breakthrough.
The entire selling point of crypto was to make all explanations so convoluted, magical-sounding and full of technobabble that people would believe that "this thingamajig" would somehow solve problems with X (where X = any topic crypto was advertised for).
Crypto had virtually no legitimate uses and was 99% scam. LLM is not like that. The key difference: I can take a person from the street, give them an LLM to play with and they'll be able to do something with it and tell me about it. With crypto, it was impossible.
This guy sure knows how to lift a person's mood...
Yes, as well as "not in all domains are LLMs equally good". There are areas in which LLMs will oneshot complex problems, there are areas where LLMs will struggle to competently write 300 lines of code (try writing/optimizing nonstandard CUDA kernels with LLMs for example).
...something completely different from what I said it says. Those are not people who come here with good intentions. They come with a predetermined agenda, determined to stifle any voices of opposition.
Okay, how about this: the majority of discussions on AI on this site are overflowing with sea lions drowning out people who write *anything positive* about LLMs. In this discussion alone I've been targeted by at least 3. One of them tried to gaslight me about a text I linked, suggesting it says...
But that is a discussion to be had with people who recognize that LLMs *do* in fact produce good, manageable code and can be used as tools in building bigger projects even if they can't yet manage them on their own. Not with people who just copy-paste the same response over and over.
LLMs do have efficacy tradeoffs. I've written about the experience myself: an LLM can write 90% of the code, but if you don't recognize the moment where you have to take the wheel, trying to force it to correct the remaining 10% can completely obliterate any time gains from the first 90%.