Are they in the Epstein files also or is it some other kind of leverage? Let's get digging.
By all means yes let's pull out, but it might be too late to avoid the expensive part.
The petrodollar is based on promises about the completeness of US protection for the gulf states. I'm not sure what could be done to give Iran an incentive to stop eroding faith in that promise.
You lose your shot at jury nullification if you miss and hit somebody's car. Also you gotta have a hell of an arm.
I would suggest a disguise.
They exist: en.wikipedia.org/wiki/Circumh...
But I think this photo was edited.
In real ones the color is distributed radially from a point source, not all swirly like this, because the refractive index of water is constant (unlike, say, the thickness of an oil slick).
I think his handlers would love us to believe that democracy is a threat. A nuke would be an effective cherry on top for them.
Might be he wanted us to be low on ammo at a certain time.
If we make it about which sorts of people are worth the oxygen they consume, we become value-addled just like the people we oppose.
They're worth protecting from measles because they're people, not because they're the right kind of people.
We would be so much better off if the conversation could move on to why this vaccine is safer than that one, or why this pathogen is more dangerous than that one. They're incredibly diverse.
Like, do the anti-vaxxers even know that measles occasionally causes brain damage?
Jesus said give to Caesar what is Caesar's, and all of the money was Caesar's. If you wanted to feed the hungry without magic, Barabbas was your guy--which is why the richest men on earth had him killed.
I thought about making some kind of paint thrower, but I'm a lousy shot, and I figure I'd spoil whatever audience goodwill I've mustered if I end up painting somebody's car by mistake.
Gotta stay within the bounds of what's likely to be jury-nullified.
3D printed drones powered by compressed air that only need to stay aloft for 10 seconds to do their job... that sort of thing. It could be a lot of fun.
If it caught on, people might get competitive with how quickly/creatively they could pull it off. Like, I imagine you could make a sort of kite that might land on/ensnare a set of cameras, that sort of thing.
I didn't see that apparatus but it sounds right. I'm imagining a rig where I can hoist a piece of fabric up and over and then pull. Drawstring cinches so the whole top of the pole is in a bag.
Best done near a traffic jam so it's kinda like performance art.
Current idea involves fabric string and poles, but it's a work in progress.
If the government uses OpenAI to power mass surveillance, you could blind enough cameras that OpenAI doesn't have enough data to do a good job. That would help anybody who competes with OpenAI.
Electric motors towing a generator: that's how diesel-powered trains do it too.
The existing stuff validates the new stuff, which can then be used as existing stuff to validate even more new stuff. And as long as you don't get greedy, you can get pretty good results doing this.
You can use an NMG to generate a harness that uses these sources of truth to constrain its outputs, so that your prompting doesn't have to do the heavy lifting.
Not sure if that still counts as vibe coding, my only point is that you don't then need additional tests for your generated harness.
Most new stuff still has to integrate with existing systems, consume existing data, and maintain baseline assumptions about existing telemetry.
Well for those things, I google it or write it myself. It's only the tests that were previously not economical to run at all that I'm using AI for.
But none of that would justify the use of "handcuffs". I'm worried about people taking the threat of such attacks too lightly. I think it's important that we maintain an appropriately paranoid posture regarding these things.
Also, Adrian Tchaikovsky's "Service Model" is great fun, and its plot is echoing around in my mind presently.
There's a lot to talk about re: the likelihood, efficacy, and defense against attacks of that nature--but all of that would distract from my point. I needed an arbitrary bad thing that's obviously worth preventing as a stand-in.
The likelihood of that happening is below acceptable tolerances.
Sure, it's possible that there's a bug in the generated AST-checker which then translates to non-equivalent Rust being generated. But the generated code would have to compile, lint, and pass all of the same tests that the C++ preimage did (more handcuffs; not all handcuffs are tests).
Here's a good example: ladybird.org/posts/adopti...
They generated a test harness which failed unless there was AST-level equivalence between the C++ code and the Rust code. Then they used the AST-checker as handcuffs to ensure that their generated Rust code was equivalent.
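The "handcuffs" pattern is easy to illustrate in miniature. The Ladybird case compared C++ against Rust, which needs a real cross-language translator; the sketch below is just the single-language version of the same idea in Python, using the stdlib `ast` module. The function name and sample snippets are mine, not from the post: the harness parses both sources and fails closed unless the trees match, so formatting noise is ignored but any semantic drift trips the check.

```python
import ast

def ast_equivalent(src_a: str, src_b: str) -> bool:
    """Fail-closed handcuffs: True only if both snippets parse
    to structurally identical ASTs. Anything that doesn't parse
    counts as non-equivalent."""
    try:
        return ast.dump(ast.parse(src_a)) == ast.dump(ast.parse(src_b))
    except SyntaxError:
        return False

# Formatting differences vanish at the AST level...
assert ast_equivalent("x=1+2", "x = 1 + 2")
# ...but semantic differences do not.
assert not ast_equivalent("x = 1 + 2", "x = 2 + 1")
```

The point is that the generator's output is gated by a mechanical check rather than by trust in the prompt: the generated code can be arbitrarily wrong, but it can't get past the harness unless the trees agree.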
Humans engaging in prompt injection attacks, seeding malicious training data, or tampering at the API boundary between myself and the model would be the malicious ones.
As for the "handcuffs" usage, it's a potentially malicious object, like a robot that might try to stab me.
I get that restraints don't summon warm fuzzy vibes, but this is a situation where keeping humans safe means not worrying about whether the robot consents to being tied up.
The idea that someone understands the whole thing has been a lie for decades. There's room for rigorous empiricism here.