The undisputed king of progressive!
Yeah, the weirdest part about that post is that Bluesky seems pretty reliable, on both web and iOS.
Important dogfooding milestone - editing my first actual video in my own (as yet unnamed) video editor.
Am I the only one who thinks these two could be brothers? It's uncanny how similar they look, just slightly different haircuts and lighting.
(my wife thinks I'm crazy because I don't want to raise and graze).
"Rest and vest" means joining BigCo and cruising along easily while you wait for your options to vest.
"Raise and graze" is a thing now too - raising startup capital when the faucet is open due to an AI boom and just pay yourself a comfortable salary for a number of years.
I need to move away from this kind of work ASAP, but right now I need it to pay the bills!
Working by the hour/day is rarely a problem. It's the "milestone" contracts that are a PITA, because no-one ever wants to be the one to actually sign off that a milestone is complete, so payments can get delayed by weeks or even months from when the work is actually done.
Most of my income comes from contract/consulting work, which is a mixed bag. The freedom and flexibility are great, but the downside is that I spend ~20% of my time on admin (accounting, tax, gently trying to persuade people to pay their invoices, the list goes on).
I've replicated this setup with glfw and the timing is now much more reliable (only occasionally losing between 1 and 3 frames per second, which is what I'd expect with a simple timer).
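For anyone wanting to reproduce the measurement outside Flutter, a bare GLFW loop like this shows the idea. Just a sketch: the 60 Hz target and the 1.5x lateness threshold are my own illustrative choices, not anything from the embedder.

/* Sketch: logging frame-to-frame intervals in a bare GLFW loop. */
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void) {
  if (!glfwInit()) return 1;
  GLFWwindow *win = glfwCreateWindow(640, 480, "frame timing", NULL, NULL);
  if (!win) { glfwTerminate(); return 1; }
  glfwMakeContextCurrent(win);
  glfwSwapInterval(1); /* request vsync */

  double last = glfwGetTime();
  while (!glfwWindowShouldClose(win)) {
    glfwSwapBuffers(win);
    glfwPollEvents();

    /* Measure the time since the previous frame. */
    double now = glfwGetTime();
    double ms = (now - last) * 1000.0;
    if (ms > 1.5 * (1000.0 / 60.0)) /* noticeably late for a 60 Hz target */
      printf("late frame: %.2f ms\n", ms);
    last = now;
  }

  glfwDestroyWindow(win);
  glfwTerminate();
  return 0;
}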
Lack of vsync aside, I feel like something isn't quite right with the default GTK embedder.
I'm going to try the glfw embedder to see if that sheds light on things.
This is without any additional rendering: the texture is simply created/colored on the first invocation, then inside populate() I'm measuring the time since the last call.
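For the curious, the measurement looks roughly like this, assuming Flutter's Linux FlTextureGL API. A sketch only: the MyTexture type, the solid-colour 1x1 texture, and the lateness threshold are my own illustrative choices.

/* Sketch: an FlTextureGL subclass that logs the interval between
 * populate() calls. */
#include <flutter_linux/flutter_linux.h>
#include <epoxy/gl.h>

G_DECLARE_FINAL_TYPE(MyTexture, my_texture, MY, TEXTURE, FlTextureGL)

struct _MyTexture {
  FlTextureGL parent_instance;
  GLuint name;          /* GL texture id, created on the first populate() */
  gint64 last_populate; /* monotonic time of the previous call, in us */
};

G_DEFINE_TYPE(MyTexture, my_texture, fl_texture_gl_get_type())

static gboolean my_texture_populate(FlTextureGL *texture, uint32_t *target,
                                    uint32_t *name, uint32_t *width,
                                    uint32_t *height, GError **error) {
  MyTexture *self = MY_TEXTURE(texture);

  /* Measure the time since the previous populate() call. */
  gint64 now = g_get_monotonic_time();
  if (self->last_populate != 0) {
    double ms = (now - self->last_populate) / 1000.0;
    if (ms > 1.5 * (1000.0 / 60.0)) /* noticeably late for a 60 Hz target */
      g_print("late populate: %.2f ms since last call\n", ms);
  }
  self->last_populate = now;

  /* Create and colour the texture once, on the first invocation. */
  if (self->name == 0) {
    uint8_t pixel[4] = {255, 0, 0, 255};
    glGenTextures(1, &self->name);
    glBindTexture(GL_TEXTURE_2D, self->name);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, pixel);
  }

  *target = GL_TEXTURE_2D;
  *name = self->name;
  *width = 1;
  *height = 1;
  return TRUE;
}

static void my_texture_class_init(MyTextureClass *klass) {
  FL_TEXTURE_GL_CLASS(klass)->populate = my_texture_populate;
}

static void my_texture_init(MyTexture *self) {}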
I think this inconsistent timing is definitely the source of the rendering jank I've been seeing.
...which is fine, but as soon as I introduce an external texture, the "populate" call (which I assume marks some kind of dirty flag to trigger a repaint) is wildly variable - regularly losing anywhere from 5 to 15 frames every second.
I've just been doing a little bit more Flutter profiling on Linux. It seems there's actually no support for hardware vsync - everything runs with a single timer. With a fairly vanilla app, this occasionally drops ~1 frame every second...
Oh ok, I'll dig a bit deeper. I've actually already been using it for speech recognition (via API), but I hadn't seen any suggestion that it would be a good candidate for non-speech tasks.
That's just transcription/speech-to-text though, right? It's not designed as a generic pretrained model for fine-tuning for downstream tasks.
I guess I put multimodal encoders into a separate bucket because they're often way too heavy for real-world audio tasks.
However, I didn't actually know that PE-AV also includes PE-A-Frame, which is an audio-text encoder (no vision). Promising! Will look more closely and see how it performs.
NVIDIA have a text-to-speech model (UALM) coming soon that *still* uses a Whisper backbone as feature inputs.
github.com/NVIDIA/audio...
There's been a slight lull in pretrained audio models over the past few years. 2022/2023 saw a flurry of releases (WavLM, HuBERT, CLAP, Whisper, etc.), with not much since then. I think everyone's been focusing on Audio Language Models instead (Qwen-Omni, Moshi, etc.).
I personally know a couple of independent ML researchers who are legitimate, so I wouldn't immediately discount a paper simply because it's not affiliated with a name you recognize. Unfortunately it's a slog, because everything needs to go through a crackpot filter first.
Oh cool, I've always hoped someone would try that. I'm interested to know how much overhead the OS itself adds to CPU inference on small models.
That this is even being *discussed* is irresponsible! Based on my daily experience with Claude, there's a 30% chance it forgets which side it's on and decides to steer the missile straight into the Pentagon!
web.archive.org/web/20260227...
"If an intercontinental ballistic missile was launched at the United States, could the military use Anthropicβs Claude AI system to help shoot it down?
An Anthropic spokesperson [said] the company has agreed to allow Claude to be used for missile defense."
1) Make deal with Department of War to use your thing
2) Department of War wants to use your thing for war
3) Surprised Pikachu face
I'm amazed you think Sam Altman has any credibility whatsoever, the guy is the most transparently deceptive of all of them.
...at this point it probably doesn't matter if they refuse (because OpenAI will just swoop in and do it). To some extent, they have now directly caused the problem they said they were worrying about.
Over the last few years, I've seen Anthropic relax their self-declared safety standards time and time again (and shriek louder and louder about the dangers of superintelligence)...
I don't even trust Claude with unrestricted access to my *hard drive*, the idea that you would hand it control of a *weapons system* is just mind-boggling.
If they had been more honest/restrained about LLM capabilities (and less fixated on a fat DOD contract), they wouldn't be in this position.
It's hard for me to give credit to Anthropic for their response to a problem that they basically created in the first place.
If you're genuinely worried about safety, (a) don't sign a contract with Department of War and (b) don't beat the drum about superintelligence/natsec/foreign threats.
Case in point - this is what greets me on resume after suspend.