What do you mean by "doesn't work", do you have an error message? A crash? I could try myself tomorrow :)
Are you sure it's not just a crude check for the su binary? They say "Should have used Play Integrity like Google did with RCS".
My banking app still works, so fingers crossed.
Before Play services, I used my iPad to approve transactions...
The article talks about "ROMs" in that sense so that most people understand, but it really is about the use of Play Integrity: twitter.com/MishaalRahma...
That it will no longer work on OSes not certified by Google, which unfortunately includes GrapheneOS.
GrapheneOS itself supports RCS without issue, but it's a shame the only available implementation is Google's (or Samsung's).
I've often found RCS buggy (especially with a VPN). That said, I don't know if it's worth the hassle, given that Play Integrity is going to be implemented in Google Messages: 9to5google.com/2024/02/29/g...
yuzu's demise is a huge loss for everyone. Even if you don't use emulation right now, it's part of the sustainability of human culture - let that sink in.
Huge thanks and respect to the yuzu & citra dev team.
Yup! Same.
I have no reason to use Microsoft Edge anymore. Chromium 122 has a built-in V8 security toggle that disables the JIT. As of the latest stable, they even provide a fallback implementation for WASM, akin to Edge's Drumbrake.
Also, I've been meaning to try to build AOSP (and more specifically GrapheneOS) on that thing.
Problem(s): AOSP can't be built on arm64 hosts, nor on macOS.
So maybe a Rosetta-enabled Linux VM could work. While Rosetta has good performance, I still expect a measurable impact. We'll see...
And a few other background apps (OrbStack). Don't think 32GB RAM would've been an issue except for the LLM inference. But I'm not actually using any LLM for anything other than testing.
Got this machine mainly for Blender.
M3 Max 16C is truly a beast. The base configuration (48GB RAM) can handle, at the same time: a Windows 11 VM, Edge with 20 tabs (including Discord/Element/YouTube), and VSCode, all while doing LLM inference with a pretty good model (Mixtral 8x7B is my favorite atm).
Without breaking a sweat.
Yes, melatonin has been helpful. It's just that sometimes it makes getting out of bed even harder.
Really wish I could sleep like a normal person. But no, I have to stay awake every night.
daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/
Great article. Again, LLMs are powerful tools that show no real signs of intelligence to me. Believing otherwise is a major pitfall when using them.
Anyone else doing LLM inference on their MacBook Pro? They seem to be pretty good devices for that (NOT training) thanks to unified memory (VRAM can usually take up to 70% of the total memory).
I can run Mixtral 8x7B (Q5) at around 27 t/s, which seems pretty fast for a laptop.
Going from a 2018 i7 MacBook Pro to an M3 Max blew my mind.
But it's not even the performance that surprised me. It's the battery life. It feels good to have a real laptop!
`void foo(T const t[const static 1])`
foo() is a C function that takes an immutable pointer that points to at least one immutable value (non-NULL).
This is what I often want, but man that is UNREADABLE.
TIL about "t ptr[static 1]" as a parameter instead of "t* ptr" to explicitly tell the compiler ptr should never be NULL.
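A minimal sketch (hypothetical names) of what that buys you: with `[static 1]` in the parameter declaration, GCC/Clang can warn at the call site when a literal NULL is passed.

```c
#include <stdio.h>

/* `buf` must point to at least one int: callers may not pass NULL. */
void print_first(const int buf[static 1])
{
    printf("%d\n", buf[0]);
}

int main(void)
{
    int values[] = {42, 7};
    print_first(values);    /* OK: points to at least one element */
    /* print_first(NULL);      GCC/Clang typically warn here (e.g. -Wnonnull) */
    return 0;
}
```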
Integrating clang-tidy into my pipeline has really helped me produce (hopefully) better C code. Many compiler flags are also helpful, albeit annoying sometimes, but that just means I was used to doing something the wrong way.
I only know that most of the RCS infrastructure used today runs on Google servers (even many carriers rely on Google's Jibe Cloud)
I've spent the entire weekend trying to make my latest C project as "robust" as possible: sanitizers, hardened compilation flags, static analysis...
My conclusion: C is one of humanity's biggest mistakes.
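To give an idea of what this tooling is there for, here's a tiny hypothetical sketch (not from my project): built with something like `cc -Wall -Wextra -fsanitize=address,undefined`, the sanitizer aborts at the overflow instead of letting it silently corrupt the heap.

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *name = malloc(8);
    if (!name)
        return 1;

    /* 13 bytes (with the terminator) copied into an 8-byte buffer.
       Plain C compiles this without complaint; AddressSanitizer reports
       a heap-buffer-overflow at runtime, and static analyzers or
       fortified builds may flag it as well. */
    strcpy(name, "Hello, world");

    free(name);
    return 0;
}
```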
Apparently Apple wants the standard itself to adopt E2EE, so there's hope
I'm used to that so it's fine, but yeah, AZERTY isn't the best layout for programming for sure
I'd say it's not even relevant to the carrier. It's not part of the standard, so it's up to whoever implements the messaging app that supports RCS. And Apple might just roll their own RCS server?
Well, at least you have encryption in transit. That's still better than SMS, though it's a very low bar.
My 2018 MBP has a Radeon 550X, I think. My GTX 1070 desktop was a lot better/faster, if that's any indication. But it's decent for light Blender scenes/renders.
It's great that Apple plans to support RCS, but you should know: E2EE isn't part of the standard
I've been learning Blender recently. Now I see the value of having a decent GPU for my laptop. Intel MacBooks had trash GPUs tbh.
For instance on Mac AZERTY, getting "[" takes three keys: shift + option + 5 (and "]" is similar). Cumbersome sometimes...