If not, I'll go with Soul Patch Kids.
Can we just delete a letter? Gummi Ears.
I can never believe how many separate steps are involved in a Kickstarter book release. It's all I've been doing for a month or two. Kind of like air travel. When I'm in a plane, I'm thinking, "I'll never do this again." But then ta-da, I'm somewhere I wanted to be. www.rudyrucker.com/sqinks/
1/2 I think OpenAI's current model dooms them to being a dinosaur, sooner than they know. We're already seeing local models that execute with the same levels of accuracy as centralized ChatGPT. Local tool integrations and data access are MUCH simpler and more secure than anything OpenAI provides.
2/2 Sooner than they realize, local devices (desktops, laptops, phones, and tablets) are going to be providing users with a better, more performant user experience than anything they can do centrally. At that point, why would anyone choose to give their money and IP to a centralized provider?
You cannot preach a subtext that says more than half of the country is less than human and not expect someone you are constantly denigrating to view you as less than human, too, and act upon it. It's a horrible reaction, but there are too many impulsive, dumb people out there to agitate this way.
I just checked. I have exactly 4 selfies of me in a 4000+ image camera roll. Not on my list of deciding factors.
I guess I should have tagged @rudytheelder.bsky.social too.
You'd probably like Rudy Rucker's "Juicy Ghosts" novel. Definitely inspired by the 2016 election, but set far enough in the future for plausible deniability. www.rudyrucker.com/juicyghosts/
LOL! ChatGPT doesn't give out "Facts". It's a chat simulator that is trained to respond in ways that fool uninformed humans into believing a simulated intelligence is providing credible answers. It has absolutely no way to self-reflect on the accuracy of its responses. You are being fooled.
The world didn't end with the advent of relational databases. It isn't going to end at the hands of natural language queries on a vector database, either. These are clever parlor tricks designed to part fools from money, first and foremost. They have utility, but nothing approaching intelligence.
Prove it. Just a supposition on your part and not borne out in reality. Your own first sentence says otherwise, too. Perhaps you don't really understand the limits of LLMs? It's a chat simulator. It isn't intelligence. At best, it's an inefficient way to query a static database of probabilities.
Unfortunately, many of the most vocal seem to have "committed suicide" in the past few years.
What about the current crop of chat simulators do you think we aren't equipped to handle? Have you been fooled into thinking you are dealing with something that is actually intelligent? LLMs are just fancy database look-ups hooked to probabilistic text generators. They aren't a meaningful threat.
LLMs have likely distracted the industry in the wrong direction for the next 5 years at least. It results in a chat simulator, not intelligence, at least no more so than a Google search is "intelligent". But apparently, it is good enough to get a lot of dumb investors to spend on the wrong tech.
They aren't all shaped like that and some had specific purposes, like hanging prayer books or other gear. I'm really not interested in doing your research for you. You can go with Occam's Razor, which says that similarly shaped objects and their uses today are the right interpretation. Or not.
There are hundreds of modern equivalents for hanging gear from backpack straps, belts, or as "quick release" components. Just go google a bit for terms like "belt", "strap", "buckle", "hanger", "hook", etc. The common feature is the two parallel slots for a strap/belt to weave through.
Actually not a mystery at all. They either hung from a belt or were sewn onto clothing to provide convenient hooks for purses, pomanders, rosaries, or anything else one might hang from a belt, strap, or harness. Archaeologists' lack of knowledge is proportional to their lack of practical experience.
Their entire business model is about conning you into spending more $$$ to buy tokens and to give up more and more info they can harvest. The big, centralized models are for that and for scamming uneducated investors out of capital. It's not about doing real, practical work. There are better tools.
Let's hope it doesn't. LLMs are not a proper path towards AGI or any sort of real machine reasoning. We've had 2 years of software parlor tricks, aimed at parting investors and their cash. Time to try something different. The industry has painted itself into a corner with centralized models.
I just posted a fork of feedlandInstall that has all of the Docker and Docker Compose bits in it, plus a how-to for people who already are using Docker.
github.com/cshotton/fee...
Is there an easy way to get a websocket feed of news.scripting.com like Feedland produces? The curated, focused topics of news are better suited for my agent pipelines to work on than the flood coming from Feedland.
(I am guessing running a Feedland server on news's RSS feed is the short answer.)
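A lighter-weight alternative to standing up a whole Feedland instance is to just poll the RSS feed and hand new items to the pipeline. This is a minimal sketch, not Feedland's actual API; the feed URL and the helper names are assumptions, and a real version would add error handling and a polite polling interval:

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

# Assumed feed URL -- check the site for the real one.
FEED_URL = "http://news.scripting.com/rss.xml"

def parse_items(rss_xml: str):
    """Extract (title, link, guid) tuples from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        # RSS guid is optional; fall back to the link as a stable id.
        guid = item.findtext("guid", default=link)
        items.append((title, link, guid))
    return items

def poll_once(seen: set):
    """Fetch the feed once and return only items not seen before."""
    with urlopen(FEED_URL) as resp:
        rss_xml = resp.read().decode("utf-8")
    new = [it for it in parse_items(rss_xml) if it[2] not in seen]
    seen.update(it[2] for it in new)
    return new
```

Calling `poll_once(seen)` on a timer gives the pipeline a trickle of only-new items, which is roughly what the websocket feed would deliver without the server overhead.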
Gatekeep much? The account belongs to "The New York Times." Not Steve Bergin. Maybe they get to decide what they post in their own account? "Games" happens to be one of the 10 or so things in the top level menu of nytimes.com. Or did you not know that?
Who makes this stuff up? Blue was three legit, one totally fabricated.
Please, no. That formula isn't broad enough in its appeal.
Slop Generators, indeed. A lot of words to tell you one thing. If your job or work product disallows accessing or using proprietary information, using an LLM is a bad idea. Things like clean-room implementations, reverse engineering, etc. will become liabilities. asahilinux.org/docs/project...
I suspect there are numerous dead drops tied to her continued health and she has made people aware of that.
A lot of people suspect that they are not actually sworn-in ICE agents, but are private contractors (bounty hunters) working on a per-capture contract. They don't have real badges and they don't show their faces because they aren't legally doing what they are doing.
Your personal interpretation of what constitutes authorship isn't really relevant. The point was about infringement and what constitutes it, not authorship. If what an LLM fabricates to represent a knowledge base is considered infringement, how is that meaningfully different from humans re: the law?
Yeah, I think we all know that. Sorry the subtlety of the point being made was lost. Regardless of whether it is meat or software, transforming knowledge into a unique work product is not copyright infringement. Confusing authorship with infringement is a red herring.