There you are!
Reading Bee Speaker now. Irae may be a predecessor of Worsel...
Now we need the librarian.
Very nice! And the MLX community has released a version that will run on your 64 gig Mac.
Simon, IIRC you have a 64 GB Mac. As you may know, it's possible to run OSS 120B on that, using the Unsloth Q2 quant. Thanks for your wonderful posts!
Ethan, I appreciate your posts enough that I will rejoin LinkedIn to continue seeing them regularly. And of course I read and enjoy your newsletter.
Peter Kenny narrating Iain M. Banks’ writing is luscious, and not to be missed.
Please keep posting here.
What a cool idea! I'm trying it with a Project Gutenberg book and have asked for a text-based adventure.
Thanks for Moonbound!
No, haven't tried that.
Maybe a smaller model with a large context window (haven't tried this myself). One example: Llama 3.1 8B has a 128K context window.
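A rough way to sanity-check whether a long context fits in memory is to size the KV cache. This is a back-of-the-envelope sketch, assuming Llama 3.1 8B's published architecture (32 layers, 8 KV heads via grouped-query attention, head dimension 128) and an fp16 cache; the function name is mine:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_ctx, bytes_per_elem=2):
    """Approximate KV-cache size: 2x (keys and values) per layer per token."""
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem

# Llama 3.1 8B at the full 128K (131,072-token) context, fp16 cache
full_ctx = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, n_ctx=131_072)
print(f"KV cache at 128K context: {full_ctx / 2**30:.0f} GiB")  # → 16 GiB
```

So on top of the model weights, the cache alone wants roughly 16 GiB at full context (less with a quantized cache or a shorter window), which is why a 64 GB Mac handles this comfortably while smaller machines may not.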
Similar experience, gave up on it.
I deeply appreciate your contributions to a better life and better health for many people. Regarding world hunger, ChatGPT has some advice for you. chatgpt.com/share/67b9e9...
Thanks for the nudge on MLX models, Simon. They seem to be more memory efficient on Macs than GGUF versions.
I like how one can tick a box in LM Studio search and find only MLX models. I haven't tried the option Simon mentions.
A chain-of-thought model, it seems, from the name.
New: The largest medical A.I. randomized controlled trial yet performed, enrolling >100,000 women undergoing mammography screening
The use of AI led to 29% higher detection of cancer, no increase in false positives, and reduced workload compared with radiologists without AI thelancet.com/journals/lan...
wrote about Deep Research, which is very, very good at doing nuanced and complex research.
It is also the first narrow agent that can do sophisticated and likely quite economically valuable work, which tells us something important about the future. open.substack.com/pub/oneusefu...
Indeed. The Viture Pro does as you suggest. So all one needs is a lightweight display in glasses.
"Let there be pudding!" I'm in for the pudding potluck.
Thanks, looking forward to the GGUFs
that's pretty awful
Any thoughts on the 70B model, quantized?
This weekend I wrote a post on which AI to use right now (at least for general, individual users). Model strength may matter less to most users than the capabilities of the apps and the other features that each model includes. It is a little complicated.
www.oneusefulthing.org/p/which-ai-t...
A fine book, worthy of a reread now and then.
I'm almost unwilling to inflict my questions on it
Love your creative prompts and the results
Russell Goose, I believe