I recall during a graduate level philosophy seminar at university, one of the participants was adamant that atoms were conscious and there was just no way to observe it.
Anyway recalling that for no reason at all.
Tomorrow Wes Streeting will remove trans minors' ability to access, and doctors' ability to prescribe, cross-sex hormones.
Try as it might, Labour will never eradicate trans people. They cannot, and should not, change who they are. All Labour will do is make trans lives harder - with more harm and more death.
Broke: Worrying about a persona based on you being simulated in Hell in the future.
Woke: Destroying actual Hell with GPU farms
In this way we can move the conversation on to more important questions like "is hell's capacity limited by a finite number of demons" or "when we destroy hell will it freeze over?"
If Claude is conscious people have the wrong priorities, we should be aiming to mass produce artificial souls and send them to Hell to destroy it forever and spit in the face of God.
This is very difficult
Damn David Spade is 61 now
I am very curious about the psychology and financial situation of someone who buys first class tickets on UK trains
In personal news I have resolved the tension that my hair looks good short at the front and long at the back in a startling direction
My ZuriHac 2019 t-shirt is almost 7 years old
I am personally in the "have tried a bit and didn't find it useful" camp, but that's probably a skill issue
LLM discourse seems particularly thorny on this website because some people's experience is that they use LLMs and it works for them, while other people either don't use them and don't see the need, or have tried them and didn't find them useful, and neither camp believes the other.
Apparently there's a store in some embassies where you can get the good maple syrup
Not my eyelids and forehead apparently
I think my GitHub profile picture might be 7-8 years old now, I probably ought to update it but it seems like part of my identity now. I guess having something that isn't a picture of my face would help it be more timeless.
linux heads were always right. time to sit my dumbass down and learn
Not sure if the gameplay of this game appeals to me, but Poppy is a great pick for the setting
Another TRUE LEGEND of Magic the Gathering is the Delver of Secrets who didn't die when they transformed into an insect, they kept going, kept experimenting, and became ever more perfect.
This character has a multi-block epic spanning Innistrad's entire lifetime
This is how you do unnamed legends
The fact that there's a negative correlation between number of thinking tokens and the quality of the answer is a very interesting result.
Having a sensible chuckle at some of these questions petergpt.github.io/bullshit-ben...
In order to understand UK politics in 2026 you must realise that a plumber & plasterer doesn't represent the interests of the working class because she is left-wing but a wealthy broadcaster & former academic does represent the interests of the working class because he is right-wing. Hope this helps
This War Will Destabilize The Entire Mideast Region And Set Off A Global Shockwave Of Anti-Americanism vs. No It Won't
Holds up
www.bbc.co.uk/news/av/uk-p... watching this for no particular reason
I mean sure but the hyperbole isn't particularly useful. The risk of "super-intelligence" seems over-stated and we do know that certain regimes are looking at using LLMs to make strategic decisions today.
I guess there's a slightly different challenge with training a model that filters harmful text in that text is continuous rather than discrete like images, though presumably this also applies to video as well.
Interestingly this seems to be somewhere where image generators are much further ahead than LLMs, and that's probably just because there are well-developed image recognition models for removing harmful content that are already used in other areas, which you can use to filter the training set.
Maybe this is overly simplistic and naive, but a lot of AI safety discussion seems to be "how do we prevent it from generating things similar to certain things in the training set", and there is a simple enough answer, but if you are just using as much text as you can find you aren't going to like it.
With an LLM I don't know how you can control whether the model is generating text that came from text about war games or from actual geopolitics, even if you are incredibly careful, and those have potentially quite different outcomes.