Masterful sequence of quotes in Mike and Ike which I never noticed before
@bgavran
I'm building neural networks that generate provably correct code, and the software infrastructure for training them. Recently experimenting with TensorType: https://github.com/bgavran/TensorType www.brunogavranovic.com
never, ever, ever, ever accept "how will you pay for it?" as an argument against social programs.
RE: https://mathstodon.xyz/@julesh/116103778140182700
Some more work I've been a part of:
New blog post!
Autodiff through function types: Categorical semantics, the ultimate backpropagator
https://julesh.com/posts/2026-02-20-categorical-semantics-ultimate-backpropagator.html
The transition from “AI can’t do novel science” to “of course AI does novel science” will be like every other similar AI transition.
First the over-enthusiastic claims that are debunked, then smart people use AI to help them, then AI starts to do more of the work, then minor discoveries, & then…
there is a widespread belief among people with even a little technical and scientific savvy that alchemy was a bunch of hooey: that alchemists never published their experiments, that each of them had to discover on their own not to drink mercury, etc.
but this is urban legend!
This is not what is happening at all.
The amount of misinformation on Bluesky about AI is insane, and it keeps promising that AI is all hype that will go away soon.
A really dangerous position that cedes all AI policy and decisions about how it will be used to others.
Also Futurism is clickbait
That's not what ndarrays are: they are homogeneous arrays of elements: numpy.org/devdocs/refe...
"it's impossible to reproduce ndarray in a type safe way." What part of it is impossible to reproduce?
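To make the "type safe" part of that question concrete, here is a minimal, hypothetical sketch (my own illustration, not the TensorType API) of a tensor over nested Python lists whose shape is checked at construction:

```python
# Hypothetical sketch (not the TensorType API): a shape-checked tensor
# over nested Python lists. Ragged data is rejected up front, which is
# one informal reading of reproducing ndarray "in a type safe way".

class CheckedTensor:
    """Stores rectangular nested-list data together with its inferred shape."""

    def __init__(self, data):
        self.shape = self._infer_shape(data)
        self.data = data

    def _infer_shape(self, data):
        if not isinstance(data, list):
            return ()  # a scalar has the empty shape
        subshapes = [self._infer_shape(x) for x in data]
        if len(set(subshapes)) > 1:
            raise ValueError("ragged data: not a rectangular tensor")
        return (len(data),) + (subshapes[0] if subshapes else ())
```

For example, `CheckedTensor([[1, 2], [3, 4]]).shape` is `(2, 2)`, while `CheckedTensor([[1], [2, 3]])` raises `ValueError`.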
This is a catch-22 problem: nobody is working on non-cubical tensors because they're slow, and they're slow because nobody is working on them.
One of the goals of TensorType is to try out machine learning with them. If something new works, that'll be good incentive to work on making it fast
Indeed, the eventual plan is to use mutable arrays, and to either write a new backend in part or leverage existing ones for fast tensor contractions.
The plan is to first make it correct, then make it fast
Even more, building things concretely ended up facilitating research: I now understand that a tensor is simply a composition of containers, and that this perspective is *enough* to get us everything that NumPy gives us.
It's just a matter of implementing it.
I'm quite proud of how far I've been able to get with TensorType: https://github.com/bgavran/TensorType
What started out as a casual "I wonder if I can implement type-safe tensors" question has now evolved into a fully-fledged library
Come and visit me at the poster session at #QIP26 today! :)))
If you're interested in some of the design choices behind TensorType, have a look at the great blog post that @Andrev just posted:
types.pl/@Andrev/1159...
TLDR; Tensors in NumPy are secretly built out of the composition product of containers
Dear USAans, ICE will not let you complete an electoral process which might result in a government that might hold them to account. If you want your democracy back, you have to get rid of them *first*. Abolish ICE, you say? How, I ask?
On the drive home I was idly thinking about what changes I'd make, if I could, to our system of governance after this administration is - ideally - gone.
I suppose in no particular order, here is a list of what I'd do, sorted by the mechanism for doing it.
If you've been curious what I've been up to, the recently published report from GLAIVE reveals a part of it:
https://glaive-research.org/2025/12/08/q4-report.html
And if I got it right,
∂List = List × List -> two lists
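The standard reading of ∂List ≅ List × List is via one-hole contexts: pointing at one element of a list splits it into the prefix and suffix around the hole. A small illustrative sketch:

```python
def one_hole_contexts(xs):
    """Each way to focus one position of xs yields (prefix, focus, suffix);
    the context alone is the pair (prefix, suffix), i.e. two lists."""
    return [(xs[:i], xs[i], xs[i + 1:]) for i in range(len(xs))]
```

For example, `one_hole_contexts([1, 2, 3])` gives `[([], 1, [2, 3]), ([1], 2, [3]), ([1, 2], 3, [])]`: in each case the context is exactly a pair of lists.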
Different products of containers, with examples on the List container:
List ⊗ List -> rectangular array
List ∘ List -> ragged array
List × List -> two lists
List + List -> a boolean value and a list
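To make the list above concrete, here is how each product can be encoded on plain Python lists; the encodings are my own illustration, not TensorType's.

```python
# List ⊗ List: a rectangular array (every row shares the same length)
tensor_prod = [[1, 2, 3], [4, 5, 6]]

# List ∘ List: a ragged array (rows of arbitrary lengths)
composition = [[1], [2, 3, 4], []]

# List × List: simply a pair of lists
cartesian = ([1, 2], [3, 4, 5])

# List + List: a boolean tag selecting which of the two lists you have
coproduct = (True, [1, 2])
```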
I should also say that C needs to be a decidable container, meaning that the domain/codomain of ∂ is likely muddled even further?
But surely there is a (largest) subcategory of Cont for which the derivative is well-defined. Is it known what that subcategory is?
This is because its action on morphisms cannot be defined for an arbitrary lens (to see this, take the unique lens I -> 1, where I is the unit of the tensor product of containers, and 1 is the terminal container. Then the set of lenses ∂I -> ∂1 is empty.)
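Spelling out that counterexample, under the usual container conventions (writing $S \triangleleft P$ for a container with shapes $S$ and positions $P$; the notation is my assumption, since the post doesn't fix one):

```latex
% Unit of the tensor product and terminal container:
I = 1 \triangleleft 1, \qquad 1 = 1 \triangleleft 0
% The derivative's shapes are \sum_{s \in S} P(s), so:
\partial I = 1 \triangleleft 0, \qquad \partial 1 = 0 \triangleleft 0
% A lens (A \triangleleft B) \to (C \triangleleft D) requires a forward
% shape map A \to C; here that would be a map 1 \to 0, which does not
% exist, hence \mathbf{Cont}(\partial I, \partial 1) = \varnothing.
```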
Remember kids, theoretical physicists have irreparably ruined public trust in science by saying they *hope* we *could* find BSM evidence at the LHC, but Elon gets to predict his products will "do the sci-fi" and be wrong 100 times per year for profit, because he's just "giving them hope"
Had an absolutely fantastic time in Scotland. It's been great seeing a lot of friendly faces, and making exciting research progress with Glaive
London ---🚅---> Glasgow
I do get a burst of inspiration every now and then. But these are few and far between.
Quite hoping it comes back: those were good days, and I often think about how I wouldn't have become the mathematician and computer scientist I am today if it were not for them
Is there a container interpretation in the case of an arbitrary category S?
Containers and quotient containers can be interpreted as choices of a category S with some constraints on it, and a functor S -> Set.
For an ordinary container, S has to be a discrete category, i.e. a set.
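As a concrete check of the discrete case, here is a finite sketch of a container's extension (the `Set → Set` functor it denotes), instantiated at the List container; this is my own illustration, not any library's API:

```python
from itertools import product

def extension(shapes, positions, X):
    """Finite fragment of a container's extension: all pairs
    (shape s, labelling of positions(s) by elements of X),
    i.e. elements of the coproduct over s of X^{positions(s)}."""
    return [(s, labelling)
            for s in shapes
            for labelling in product(X, repeat=len(positions(s)))]

# The List container: shapes are lengths n, positions(n) = {0, ..., n-1}.
# Truncated to lengths 0..2 over X = {'a', 'b'}, this enumerates exactly
# the lists over X of length at most 2.
small_lists = extension(range(3), lambda n: range(n), ['a', 'b'])
```

There are 1 + 2 + 4 = 7 such lists, matching the 7 pairs the sketch produces.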