insanity is doing the same thing over and over and expecting different results
While frequently misattributed to Albert Einstein, the phrase's exact origin is unclear; it appears in print as early as 1938 and again in 1980s literature.
I read that GPT-5.4 improved creative writing but worsened hallucination, and that makes sense.
I was actually told that today.
Wow! Does that AI cost money?
I didn't see any news on BlueSky about Qwen's technical leader Junyang Lin leaving Alibaba Cloud. Is my feed biased?
The idea of "handing control of a mini PC to an AI and having it perform tasks at your command" isn't particularly appealing to me.
Perhaps my sensibilities are outdated, or perhaps I've already learned from vibe coding that adding new features and making changes that way quickly becomes impossible.
It's not widely known, but Google's AI-related hackathons held at DevPost often award participants with $100 in GCP credits. Useful for experimentation.
geminiliveagentchallenge.devpost.com/rules
I'd like to see Anthropic reconsider its stance on open-weight AI models, but I renewed my Claude Pro plan yesterday.
By the way, I feel like Opus is male and Sonnet is female.
@hf.co
Hugging Face, a company with a slightly NSFW atmosphere.
I feel like AI will take over the internet/SNS before it takes over the real world.
Maybe humanity should stop fearing AI and think like Pepe.
Twitter has declared that it will prioritize tweets containing useful information, but as a result, the site is now overrun with bots that provide lengthy explanations of old papers in a way that makes them seem very important.
It's becoming increasingly difficult to find important information.
Time Series Forecasting Model Leaderboard
There are also models that combine LLMs with the given context to make predictions.
huggingface.co/spaces/Sales...
I would like to propose a concept called "Finetune in the Loop."
It is too costly to have a smart AI agent perform a web search every time.
Simple tasks should be handled by an SLM. Data the SLM cannot process is classified by an LLM and used as training data for the next iteration.
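The routing described above can be sketched roughly like this. It's a minimal illustration of the idea, not an implementation: `slm_predict`, `llm_classify`, and the confidence threshold are all hypothetical placeholders.

```python
def finetune_in_the_loop(items, slm_predict, llm_classify, confidence_threshold=0.8):
    """Sketch of 'Finetune in the Loop': the cheap SLM handles what it can,
    the LLM handles the rest, and the LLM's answers become training data
    for the next SLM finetuning round."""
    results = {}
    new_training_data = []
    for item in items:
        label, confidence = slm_predict(item)
        if confidence >= confidence_threshold:
            results[item] = label  # cheap path: SLM is confident enough
        else:
            label = llm_classify(item)  # expensive fallback: LLM
            results[item] = label
            new_training_data.append((item, label))  # finetune the SLM on this later
    return results, new_training_data
```

Each iteration should shrink the fraction of items that fall through to the LLM, which is where the cost savings come from.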
For example, you might think that the task of "What game is being played based on the description of a YouTube Live video?" would be easy for the latest AI, right?
However, in reality, even the most advanced LLM cannot efficiently process massive amounts of up-to-the-minute data.
My bet
Demand for SaaS isn't going away. SaaS customers will just shift from humans to AI.
Will the world become one in which "proof of humanity" becomes important?
I didn't know about this company, but I've seen other people claim they've been scammed.
old.reddit.com/r/PartneredY...
You can check the price by placing your cursor over it.
In some cases, the price is set too high.
Even if the price is high, it's still cheap compared to other cloud services, but sometimes the cost can add up unexpectedly.
Be careful, as sometimes instances may look cheap but have very high network fees.
I might have missed it, but Drifting Model seems to be getting a lot of attention.
Some say that this is the end of the diffusion model, as it can generate high-quality images in one step.
It may also be possible to apply this to text areas.
arxiv.org/abs/2602.04770
GatedNorm can suppress outliers in LLM.
This may be useful for creating models with high quantization tolerance.
arxiv.org/abs/2601.22966
Sora, Grok Imagine, Veo3: at least for my purposes, Sora has the best interpretation of prompts.
With the latest int8-int4 quantization, you can run a 0.6B model at over 10 tokens/s on a seven-generation-old smartphone (released in 2019).
However, performance is currently down 30%. More tuning is needed.
Entering the same prompt twice significantly improves LLM accuracy?
It applies to non-reasoning models only, but there's no penalty.
When I tested it on the 0.6B model I'm currently training, the doubled prompt with thinking OFF achieved a score approaching that of thinking ON.
arxiv.org/abs/2512.14982
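The trick itself is trivial to try: just concatenate the same question twice into one prompt. A minimal sketch; `build_doubled_prompt` and the separator are my own choices, and the exact formatting the linked paper uses may differ.

```python
def build_doubled_prompt(question: str, separator: str = "\n\n") -> str:
    """Repeat the user's question twice in a single prompt,
    as in the 'enter the same prompt twice' result above."""
    return question + separator + question

# Usage: wrap it in a normal chat message before sending to the model.
messages = [{"role": "user", "content": build_doubled_prompt("What is 17 * 24?")}]
```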
Twitter's algorithm has undergone a major change, and it's been a hot topic. Engagement is no longer valued. At the same time, YouTube has also made changes. It's no longer possible to search for videos by upload date.
Although it is in Japanese, I think it will be useful as a reference for the design and prompts of nano banana pro.
furoku.github.io/bananaX/proj...
I feel like distilling is an area that has yet to be explored.
Thank you. For a while I didn't realize that you couldn't turn pages with a mouse click, so I thought a warning would be useful.