
Acer

@acerfur

Pure mathematics student at Cambridge. Loves analytic NT. | πŸ‡¬πŸ‡§πŸ‡΅πŸ‡Ή He/Him | 21 Bi/Demi | SFW

268
Followers
548
Following
85
Posts
17.08.2023
Joined

Latest posts by Acer @acerfur

oh- would

01.03.2026 20:46 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I also dislike that I can't make private accounts on this platform lol

19.02.2026 08:02 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

I think about Move 37 often. It will be so incredible one day when (hopefully safely!) we can get the models to come up with these creative breakthroughs in scientific domains.

A novel cancer drug, an RTP superconductor, a proof of RH, etc., etc. would all be a net benefit to humanity.

18.02.2026 10:21 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Guh I hate that hardly any of the accounts I follow and interact with are on this platform. Would have abandoned Twitter a long time ago if not for that…

18.02.2026 10:13 πŸ‘ 3 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

Would an eventual LLM-generated correct proof of the Riemann hypothesis suffice to be impressed?

18.02.2026 09:37 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I mean it is impressive because it’s work that would be suitable for a human postdoc to write up and publish. Only a year ago we certainly didn’t have models that could do work of such sophistication.

18.02.2026 09:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I don’t really agree with this. The reasoning models have proven capable of deductive reasoning, combining established results and ideas in novel ways to form a new result (e.g. the recent ErdΕ‘s problems results). They have yet to prove capable of forming a novel, useful, non-trivial concept though.

18.02.2026 09:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Won’t announce this on my Twitter for a while, but I’ll be working at OpenAI in San Francisco during the summer.

If any oomfs there are interested in meeting up, lmk!

16.02.2026 02:35 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Anyhow, we were originally going to classify this as a human-AI collaboration result, prior to the new model variant. Once it was able to replicate the proof, we moved it to an autonomous AI result.

02.02.2026 23:43 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I then also formalised my corrected proof in Lean 4. Eventually, though, a newer model variant was able to reproduce the proof independently, except it made a mistake in the proof of Lemma 2: it assumed strict inequalities, despite nothing stopping the b_k from being one apart.

02.02.2026 23:43 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

It was LaTeX. The proof in the Aletheia ErdΕ‘s problems paper is in fact my proof. We tried a few different Deep Think variants; none produced a fully correct proof, but one gave something close enough that I could see how to fix it into a correct proof.

02.02.2026 23:43 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Irrationality of rapidly converging series: a problem of ErdΕ‘s and Graham Answering a question of ErdΕ‘s and Graham, we show that the double exponential growth condition $\limsup_{n\to\infty}a_n^{1/Ο•^n}=\infty$ for a monotonically increasing sequence of positive integers $\{...

My collaborators and I later generalised Aletheia’s solution to 1051 to give this paper that I’m quite proud of. I think there’s room to push the results further but we haven’t explored it yet. arxiv.org/abs/2601.21442

02.02.2026 23:40 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Post image

Thanks! We tried to be very careful in the announcement of these results. I was pushing for several caveats in this work hah. I like its solution for 1051. I felt confident enough to write this paragraph because it seemed to be in another class from the LLM proofs of ErdΕ‘s problems so far.

02.02.2026 23:38 πŸ‘ 16 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0
ErdΕ‘s Problems Blog - A retrospective on problem 728 and the use of AI on ErdΕ‘s problems

A new blog post by @acerfur.bsky.social describing his experience as a pioneer of using AI tools to solve ErdΕ‘s problems:

www.erdosproblems.com/forum/thread...

26.01.2026 08:17 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

AI is capable now of generating new interesting mathematics.

But it's much easier for it to generate plausible-sounding nonsense.

I am concerned that the latter, copied and promoted by users with no understanding of the mathematics, is going to drown out the former.

14.01.2026 09:43 πŸ‘ 6 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Chat do I go to San Francisco to work on reasoning for mathematics at OpenAI

12.01.2026 02:33 πŸ‘ 5 πŸ” 0 πŸ’¬ 4 πŸ“Œ 0
ErdΕ‘s Problem #728

Sure thing. It’s all here, say:

www.erdosproblems.com/728

12.01.2026 01:04 πŸ‘ 6 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

Yes I know what epistemic means. Regardless, people more familiar with the subject of mathematics than you have confirmed that these are novel proofs. Perhaps put your ego and denialist cope in check. No need to be condescending.

12.01.2026 00:50 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

>it has no epistemic certainty it is a proof or not

FWIW actually this is not true of GPT-5.2. It is generally highly cautious and will not say it has a proof unless it’s very confident all the logical deductions correctly follow. It will more often than not know its limitations and happily concede.

12.01.2026 00:34 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

One of the big challenges now in using AI for mathematics is the credit/attribution problem. AI has a tendency to use observations/techniques without giving credit as to where it 'learnt' about them (mainly because it's forgotten itself).

11.01.2026 08:53 πŸ‘ 3 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

Elaborate?

11.01.2026 05:05 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

You’re saying this like humans get things right all the time and that we don’t make mistakes or lie lol

11.01.2026 04:21 πŸ‘ 6 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Yeah pretty much

10.01.2026 22:41 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Ok good, I interpreted it that way as well. GPT-5.2 Pro seems to have given a positive answer to that at the end of the conversation link I sent, which looks like a plausible sketch to me, but I only looked at it very briefly. Will try to get Aristotle to autoformalise it when I’m back home.

10.01.2026 22:38 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Indeed yeah this is what I’m currently exploring. Waiting on others to chime in on how to best interpret the intent of the others.

10.01.2026 22:22 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

>can’t prove a thing
>proves open ErdΕ‘s problems as true

Interesting

10.01.2026 22:17 πŸ‘ 8 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

All well and good, but even if it just predicts the next token, I’ll happily take an LLM-generated proof of the Riemann hypothesis!

10.01.2026 22:01 πŸ‘ 10 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I agree, but if we want these models to have a crack at the hardest problems then they need to be able to develop new theory. When will a model be capable of doing what Galois, Ramsey, or Grothendieck did and starting a whole new field of maths (or developing one with a novel concept like schemes)?

10.01.2026 21:54 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

All good! Glad this is interesting to you! Big fan of your comics :)

10.01.2026 21:46 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

and yet… they can exceed some humans. My opinion is that if it can imitate reasoning well enough that it would pass as human reasoning, then we should give it the benefit of the doubt and say it *is* reasoning, Γ  la Turing’s test.

10.01.2026 21:46 πŸ‘ 12 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0