Claude expands its context window to 1 million tokens, at the same price as before
#ShoperGamer #ClaudeAi #Ai #ContextWindows #Feed
#AIBreakthroughs
🤖 Agentic AI: Models act with native computer skills.
📚 Context Windows: Instant access to vast data stores.
💊 Drug Design: AI predicts protein folds, cuts costs.
#AIBreakthroughs #AgenticAI #ContextWindows #DrugDesign
How AI coding agents work—and what to remember if you use them https://arstechni.ca #largelanguagemodels #softwaredevelopment #machinelearning #contextwindows #Programming #ClaudeCode #vibecoding #agenticAI #Anthropic #AIagents #AIcoding #Biz&IT #AIwork #openai #Codex #AI
Engineers are walking a tightrope between ultra‑short prompts and flooding LLMs with context. The new method aims to curb hallucinations while keeping chat assistants & code generators sharp. Curious? Dive in! #PromptEngineering #ContextWindows #Hallucination
🔗 aidailypost.com/news/enginee...
AGENTIC IMPRESSIONS Agent Memory Systems: Beyond Context Windows with Shereen Bellamy
🚀 Agentic Impressions Episode 2 is LIVE! Build AI agents with persistent memory that never
forget and self-learning capabilities using SQLite + AGNTCY protocols.
Watch now: cs.co/633237ZhjJ
#DevNet #AgenticAI #NetworkAutomation #AGNTCY #ContextWindows
To be truly economically useful, LLMs will likely need "continual learning": the ability to keep acquiring knowledge over time. For example, it's crucial for helping AI systems learn from mistakes or develop research intuitions. But current LLMs don't have much of a "memory" that they can retain over long chats or across multiple user interactions. Part of the issue is that LLM context windows aren't long enough to support much continual learning. For example, if you record work history using screenshots, 1 million tokens of context is only enough for AI agents to do computer tasks for up to half an hour - not nearly enough to acquire much tacit knowledge. But we can do a lot more with longer contexts: with 10 million tokens we get around six hours of computer use, and with 10 billion tokens this becomes eight months! More optimistically, if text and audio tokens alone are sufficient to represent work experience, even ~40 million tokens might be enough to acquire multiple months' worth of work experience.
But if these longer contexts are available, models can learn from previous examples in their context window. For instance, reasoning models have demonstrated some ability to correct their own mistakes in their chain of thought, and retaining these learned corrections in-context could help the model solve problems in the future. This broad approach of "continual learning with giant context windows and in-context learning" has been raised several times. For example, Aman Sanger alludes to this in a discussion with Cursor's team, and Andrej Karpathy has also sketched out how this might work on X:
The huge potential implications of long-context inference - Epoch AI epochai.substack.com/p/the-huge-potential-imp... #AI #ContextWindows (interesting)
Here is why it pays to learn the nuts and bolts of #AI parts and nomenclature:
Prompt: Can you provide an audit list of my prompts for this chat?
AI: No
Prompt: What about the #contextwindows ?
AI: Oh, hey, I can totally do that. Sorry, I was wrong.
Name your (#LLM) demons to own them.
Text Shot: let’s briefly recap some of the ways long contexts can fail:
- Context Poisoning: When a hallucination or other error makes it into the context, where it is repeatedly referenced.
- Context Distraction: When a context grows so long that the model over-focuses on the context, neglecting what it learned during training.
- Context Confusion: When superfluous information in the context is used by the model to generate a low-quality response.
- Context Clash: When you accrue new information and tools in your context that conflict with other information in the prompt.
How to Fix Your Context www.dbreunig.com/2025/06/26/how-to-fix-yo... #AI #ContextWindows
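The linked post surveys fixes for these failure modes. A minimal sketch of one common mitigation - trimming the oldest turns to stay under a token budget - might look like this (the word-count heuristic is a crude stand-in; real systems would use the model's own tokenizer):

```python
# Sketch of one mitigation for overlong contexts: keep the system prompt,
# drop the oldest turns once a running token estimate exceeds a budget.

def estimate_tokens(text: str) -> int:
    # Rough heuristic (an assumption): ~1.3 tokens per whitespace-separated word.
    return int(len(text.split()) * 1.3)

def trim_context(system_prompt: str, turns: list[str], budget: int) -> list[str]:
    """Return the most recent turns that fit under `budget` tokens,
    always reserving room for the system prompt."""
    remaining = budget - estimate_tokens(system_prompt)
    kept: list[str] = []
    for turn in reversed(turns):          # walk newest-first
        cost = estimate_tokens(turn)
        if cost > remaining:
            break
        kept.append(turn)
        remaining -= cost
    return list(reversed(kept))           # restore chronological order
```

Trimming by recency helps mainly against context distraction; it does nothing for poisoning, since a hallucination in a retained turn keeps getting referenced.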
[Image: token window - orange square representing an ungrounded sequence, blue squares representing a grounded sequence]
Doing Real Work With LLMs: How to Manage Context www.jonstokes.com/p/doing-real-work-with-l... (this is so good) #AI #prompting #ContextWindows #search
Bigger isn’t always better: Examining the business case for multi-million token LLMs venturebeat.com/ai/bigger-isnt-always-be... #AI #ContextWindows (interesting thoughts on large context windows)
New AI Model "Thinks" Without Using a Single Token youtu.be/ZLtXXFcHNOU?... #AI #LLM #ImplicitReasoning #LatentSpace #Tokens #ChainOfThought #Reasoning #ContextWindows #TestTimeComputation #RecurrentDepthApproach #ThinkingInContinuousSpace #Verbalizing
Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach arxiv.org/abs/2502.05171 #AI #LLM #ImplicitReasoning #LatentSpace #Tokens #ChainOfThought #Reasoning #ContextWindows #TestTimeComputation #RecurrentDepthApproach #ThinkingInContinuousSpace #Verbalizing
Why It Matters
Context windows enable AI to process lengthy or intricate inputs, analyze structured data, and maintain continuity over multiple interactions. They’re foundational for applications in NLP, coding, and beyond.
#TermOfTheWeek #AI #ContextWindows #LLMs #DataProcessing #Technology
Text Shot: MiniMax-Text-o1, is of particular note for enabling up to 4 million tokens in its context window — equivalent to a small library’s worth of books. The context window is how much information the LLM can handle in one input/output exchange, with words and concepts represented as numerical “tokens,” the LLM’s own internal mathematical abstraction of the data it was trained on. And, while Google previously led the pack with its Gemini 1.5 Pro model and 2 million token context window, MiniMax remarkably doubled that.
MiniMax unveils its own open source LLM with industry-leading 4M token context venturebeat.com/ai/minimax-unveils-its-o... #AI #ContextWindows
Text Shot: Just two years later, Google introduced a new version of Gemini that featured a context window of two million tokens. It took four years for the language models to increase their long-term memory by a factor of a thousand. But their short-term memory made a comparable improvement in just two years. Anyone who tells you that language models have plateaued since the introduction of ChatGPT is not paying attention to what has happened with the context window. And it turns out that many of the legitimate criticisms that were leveled against language models during the first wave of hype about them were unwittingly responding to how narrow the context window was in those early days.
You Exist In The Long Context https://thelongcontext.com/ #AI #ContextWindows
Text Shot: We have recently trained our first 100M token context model: LTM-2-mini. 100M tokens equals ~10 million lines of code or ~750 novels.
100M Token Context Windows https://magic.dev/blog/100m-token-context-windows #AI #ContextWindows
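The equivalences in the LTM-2-mini quote imply simple conversion factors - roughly 10 tokens per line of code and ~133k tokens per novel (both back-of-envelope assumptions, not figures from the post):

```python
# Sanity-checking the 100M-token equivalences quoted above.
CONTEXT = 100_000_000

# "~10 million lines of code" implies ~10 tokens per line (assumption).
TOKENS_PER_CODE_LINE = 10
lines_of_code = CONTEXT // TOKENS_PER_CODE_LINE

# "~750 novels" implies ~133k tokens per novel - consistent with a
# ~100k-word novel at ~1.3 tokens per word (assumption).
tokens_per_novel = CONTEXT // 750

print(f"{lines_of_code:,} lines of code, ~{tokens_per_novel:,} tokens per novel")
```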