He was in STL for a couple of years and I carpooled multiple long drives to tournaments with him, so if that's not a seal of approval, what is?
Vintage cube fully deproxied.
I think Sol Ring and Mana Crypt are so comparable I'd have to go with Sol Ring, just to maintain the whole Alpha thing
Las Vegas is an exception to this because I have family out there
nah, I regret every time I fly to a tournament; if it's not within a 5-hour drive, I'm off it
hi, I have power that would love to see more play
I'm going to play with the cards in my cube and none other.
I joke, but it genuinely hurts to watch the game that in many ways saved my life a decade ago turn into a case study for everything that's bad about corporate art and intellectual property
top 6 non-eevees:
mudkip
mudkip
mudkip
mudkip
mudkip
mudkip
We are thrilled to announce that our NEW Large Language Model will be released on 11.18.25.
My whole group chat is mourning competitive Magic and trying to quit, but I'm so glad Magic is making more money than ever.
I also started during Lorwyn and I'm also planning on Lorwyn being my off-ramp! I've kinda already off-ramped, but I'll stop doing cube updates after Lorwyn too
Card Titan I take back everything bad I've ever said about you
Charlie Kirk was a man of violence. He cheered for violence, he helped create violence. He used violent rhetoric to inspire others to violent actions. He spent his time on this earth making it a more hateful, violent place.
I do not consider it especially shocking that he met a violent end.
I need Democratic online types to understand that because of this, Newsom is dead to me. Dead. I'd sooner vote for an actual corpse. There is no hope, none, that he'll win me over, *because he has established he is a bad person*. There's no coming back from this. Take my advice and move the fuck on.
Oh, we are nowhere close to AGI, and it doesn't have to do with costs; it has to do with the models themselves. The models don't think. They don't 'learn' in the sense that we do. They *can't* grow. They're just the world's most sophisticated matrix multiplication systems.
Right, and I'm saying I think you're wrong, along with the smart money. The problem is, whether or not it can be done, I think the odds of anyone actually trying are close to 0%. There's simply no money in it. Magic doesn't have the allure of League or the history of chess.
yeah, basically. The question isn't whether it can be done. It can. The question is whether you can get the right people on the problem and secure the funding to do it.
granted, the problem we're trying to solve isn't optimal play, it's 'better than a human'.
I don't think Magic is precisely quantifiable, actually: arxiv.org/abs/1904.09828
Actually, I think this would be the most difficult part of building an AI model for Magic: you'd need to build a simulation engine to perform the reinforcement learning, and uhhhh. It can be done, I guess, but man, I wouldn't wish to be the one in charge of that
my point was that it's explicitly not solvable, and even though there is mathematical structure to language, no one actually uses any of it when building these models.
I would agree that LLMs don't beat humans at language in general, but when it's broken into smaller tasks, models often outperform humans now (e.g. a model can consistently outperform humans on translation, or summarization). I just brought up language because it's my area of expertise.
The outcome is often just a new position that is favorable by some reward metric. It might be computing 5-10 positions deep, but that's an infinitesimally small slice of all the possible future positions.
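To make that concrete, here's a toy sketch of depth-limited search. None of this is a real game engine: the "game" is just an integer, and the move generator and evaluation function are made up. The point is only that you score positions a few plies ahead with a heuristic reward instead of enumerating the whole tree:

```python
# Toy depth-limited search: rather than exhausting the full game tree
# (not computable for a real game), look a fixed number of plies ahead
# and score the horizon positions with a heuristic reward.
# Position = int, and evaluate/legal_moves are purely illustrative.

def evaluate(position):
    """Heuristic reward for a position; stands in for a learned value function."""
    return position  # toy: the position IS its score

def legal_moves(position):
    """Toy move generator: each move nudges the integer 'position'."""
    return [position + 1, position - 1, position + 2]

def negamax(position, depth):
    """Score `position` for the player to move, searching `depth` plies."""
    if depth == 0:
        return evaluate(position)
    # Best achievable score, assuming the opponent plays the same way.
    return max(-negamax(move, depth - 1) for move in legal_moves(position))

def best_move(position, depth=5):
    """Pick the move whose resulting position scores best at the horizon."""
    return max(legal_moves(position), key=lambda m: -negamax(m, depth - 1))
```

The search only ever sees positions within `depth` moves of the current one; everything beyond that horizon is collapsed into whatever `evaluate` says, which is exactly where the "favorable by some reward metric" part comes in.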
I think there's a misunderstanding about what the outcomes here are. The machine isn't taking some input position and computing every possible future position from that and determining which wins and loses. Because that's not computable.
Like, one layer activates mostly on numbers, or quotes. Or another mostly on prepositional phrases. But that's an oddity. In general, none of it is human-readable in any sense, and whatever the model 'learned' in its weights is a complete black box
There's no module for syntax, or word order, or structure of any kind. We just build absolutely enormous models and then throw all the data we can find at it and let it learn or discover any structures on its own. You'll often find layers that seem to do something specific to a human
but like, for example: we build neural networks for language. They translate, summarize, etc. But as someone who builds language models for a living, I can tell you the current state of the art doesn't actually utilize anything about the structure of language
I would need to know how you understand neural networks to work, I guess. But for the most part, neural networks aren't solving problems; they're approximating them. That's kind of the whole shtick with optimization algorithms (and neural networks are optimization algorithms).
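A tiny illustration of that point, using a made-up one-parameter "network" fit to the rule y = 2x + 1. Gradient descent never solves for the rule symbolically; it just nudges two numbers until the squared error is small:

```python
# Approximation-by-optimization in miniature: fit y = 2x + 1 by gradient
# descent on mean squared error. The model never 'knows' the rule; it
# only adjusts its parameters to shrink the error on the training data.

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # samples of the target function
w, b = 0.0, 0.0   # model parameters, starting from scratch
lr = 0.01         # learning rate

for _ in range(2000):
    # Accumulate gradients of mean squared error over the dataset.
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x
        gb += 2 * err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# w and b end up close to 2 and 1, but only approximately -- that's the point.
```

Scale this idea up to billions of parameters and noisier data and you have the modern picture: an optimizer grinding toward a good-enough approximation, not a solver producing an exact answer.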