🎶 New release of Diamond ONNX, the Clojure ONNX model runtime integration. Now works out of the box on all OSes, CPUs, and GPUs! github.com/uncomplicate...
Can we now load and run inference on real-deal models, such as the open LLMs from Hugging Face? Let's find out with Gemma 3!
dragan.rocks/articles/25/...
#Clojure #AI #LLMs
🥳🥳 New release of Diamond ONNX Runtime for Clojure. 0.21.0 is on @clojars
Run your AI and ML models on CPU or GPU.
github.com/uncomplicate...
Clojure Runs ONNX AI Models Now - Join the AI fun! dragan.rocks/articles/25/...
Join the AI fun directly from Clojure. Although (sadly) Clojure has not found its way into the big-guns AI arena, it is a very capable technology for integrating stuff into real-world applications!
Clojure Tensor and Deep Learning library Deep Diamond now works on Apple M CPU. Check out the fresh #Apple Silicon support in 0.36.1!
Please RT this message so Clojurists actually hear about it :)
#Clojure #AI #Pytorch
github.com/uncomplicate/deep-diamond
Clojure fast matrix library Neanderthal has just been updated with a native Apple silicon engine.
Please check out the new release, 0.54.0, on Clojars.
#Java #AI #Clojure #CUDA #Apple
neanderthal.uncomplicate.org
For all programmers thinking that they'll leave the "boring tasks" to AI code assistants while they just do the "creative parts": you won't even be able to get to the creative parts, let alone solve them...
arxiv.org/abs/2506.08872
Fast matrices and number crunching are now available on Apple Silicon #MacOS. Check out the newest snapshots of Neanderthal on Clojars! Add 0.54.0-SNAPSHOT to your project.clj and you're ready to go!
github.com/uncomplicate...
#Clojure #NumPy #CUDA
So, we created SQL so "analysts" could query the DB and we could get rid of programmers. They delegated it to programmers anyway. Now they've created AI that writes SQL to query the DB for "analysts". Guess who's going to be stuck writing shitty AI prompts. news.ycombinator.com/item?id=4400...
Damn, this is the clearest evidence yet of my "AI-powered Dunning-Kruger" hypothesis: that AI proponents are only bullish about AI for work they don't actually know or understand.
We need to put the punk back in cyber.
Apple M CPU Accelerate backend implemented for #Clojure Neanderthal! Now you have 3 superfast native CPU and 2 GPU choices when crunching numbers on the JVM!
Still available as snapshots on github.com/uncomplicate/neanderthal
(waiting for upstream releases). Thank you clojuriststogether.org
Vectors/matrices/tensors really are economies of scale at work! Don't process individual elements in your own loops; use the built-in operations to process the whole structure in one call. CPU, GPU, CUDA, etc.
aiprobook.com
#Clojure #PyTorch #programming
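The vectorized idea above can be sketched with Neanderthal (the namespaces and functions follow its published API; treat this as an illustration under those assumptions, not a benchmark):

```clojure
;; Assumes the uncomplicate/neanderthal dependency on the classpath.
(require '[uncomplicate.neanderthal.core :refer [dot]]
         '[uncomplicate.neanderthal.native :refer [dv]])

;; Element by element, in your own loop:
(defn dot-loop ^double [^doubles xs ^doubles ys]
  (areduce xs i acc 0.0 (+ acc (* (aget xs i) (aget ys i)))))

;; The whole structure in one optimized call:
(let [x (dv 1 2 3)
      y (dv 4 5 6)]
  (dot x y)) ;; => 32.0
```

The single `dot` call hands the whole vector to a tuned native backend, which is where the economy of scale comes from.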
I love physical books too!
I can't provide those for my books due to logistical issues, but I don't have anything against you printing the PDF (no DRM) and binding it in hardcover (if such shops are available in your area).
If you're a dev who's felt like most ML/math content talks over your head, this is for you.
You can preview the books or support the work on my site. ❤️
👉 aiprobook.com
Or just retweet this thread so others can find it.
They've helped hundreds of devs actually get backprop, eigenvalues, gradient descent, and more, without needing a PhD or pretending math is magic.
A few chapters are even free at aiprobook.com if you want to explore. Lots of content is available as blog articles.
That feedback led me to create two books:
📖 Deep Learning for Programmers
📖 Linear Algebra for Programmers
They're built entirely on the intuition that if you can code, you can understand math.
Code-first, jargon-free, honest.
One day, a blog post of mine made the front page of Hacker News.
It didn't break my server, but it was read by many people.
That gave me a signal: there's a hunger out there among programmers for hands-on, code-first explanations of "scary" math concepts.
10y ago, many programmers were frustrated trying to understand how Deep Learning worked under the hood. Every resource was either:
Way too theoretical
Or shallow "framework tutorials"
I started writing blog posts at dragan.rocks just to explain things to my past self.
I've spent many years building HPC and ML libraries, and writing 2 books that teach Linear Algebra and Deep Learning to actual programmers (not math PhDs).
Here's how I went from writing my first blog post to building a following, front-paging Hacker News, and publishing useful books. 🧵👇
Thank you!
I really don't know, as I don't get any contact details from Patreon. I am surprised that they close accounts for such reasons. The only thing that I can suggest is to try with a more traditional email, such as gmail...
The best programmers aren't good because they know more.
They're good because they ask:
"What's really going on here?"
Programmers treat linear algebra like magic.
But it's not magic.
It's code. It's vectors. It's yours to master.
Linear Algebra for Programmers shows you how, with zero fluff.
If you write code, you need this book.
👉 aiprobook.com/numerical-li...
#DevLife #MachineLearning #AI #Coding
📖 If you're a programmer struggling with math, Linear Algebra for Programmers by Dragan Djuric is the book you didn't know you needed.
📢 No fluff. Just the math that powers ML, graphics, and more, explained in code.
👉 Get smarter where it counts: aiprobook.com/numerical-li...
#AI
A lot of the code that makes PyTorch useful might already be in Deep Diamond. There's no need to create a full PyTorch port, just an integration of the most useful parts of libtorch into Clojure.
That's all right. I might do the PyTorch part, and other people will do some other pieces.
Nothing stops us from doing the same deep integration with ONNX, of course. Or any other runner. But, as I understand it, the selling point of ONNX is portability, not performance. The reason I would go with PyTorch is that most models are developed in PyTorch anyway, so there's less friction there...
This would be orthogonal: it's about production use of the models. You could collaborate with Python colleagues in whatever way you collaborate now. The point is that when there is a model (public or private) that you want to build your application on, you can run it from the JVM, without Python.
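The run-it-from-the-JVM point can be sketched with ONNX Runtime's official Java API, called from Clojure via plain interop. Here "model.onnx" and the input name "input" are hypothetical placeholders for whatever your exported model actually defines:

```clojure
;; Assumes the com.microsoft.onnxruntime/onnxruntime dependency on the classpath.
(import '[ai.onnxruntime OrtEnvironment OnnxTensor])

(let [env (OrtEnvironment/getEnvironment)
      ;; "model.onnx" is a placeholder path to an exported ONNX model.
      session (.createSession env "model.onnx")
      ;; A 1x3 float input; real models define their own shapes and names
      ;; (inspect them with (.getInputNames session)).
      input (OnnxTensor/createTensor env (into-array [(float-array [1.0 2.0 3.0])]))]
  (with-open [results (.run session {"input" input})]
    ;; First output as a plain Java array, ready for further JVM processing.
    (.getValue (.get results 0))))
```

No Python process is involved anywhere in that pipeline; a library like Diamond ONNX would presumably wrap calls of this kind in a more idiomatic Clojure API.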