NEW POST
Fragments: the future of senior developers, junior developers, more on cognitive debt, DevEx versus AgentEx, the role of IDEs, consequences of task switching in supervisory programming
martinfowler.com/fragments/20...
As a Chinese person, I see this more as a triumph of open-source models surpassing all closed ones. What's even more interesting is that DeepSeek's sponsor is a quantitative fund company. Rumors suggest that they've made a fortune by shorting Nvidia.
Don't forget you can pin lists.
Pin this DDD list if you want to easily browse content from DDD folks like @vaughnvernon.bsky.social, @suksr.bsky.social, @ruthmalan.bsky.social, @heimeshoff.bsky.social and many more.
bsky.app/profile/did:...
#ddDesign #domainDrivenDesign #ddd
True... It's phenomenal!
Local communities are hosting their own talks, both in-person and online. @chrissimon.au from DDD Australia in Sydney delivered an insightful session on Modular Monoliths & Microservices using Kruchten's 4+1 Views, complete with online participants joining the discussion! #GDDDD
The first direction could lead to AGI, but whether the ROI is reasonable is doubtful. The second direction cannot lead us to AGI, but it can help AI companies find scenarios that generate value.
The other is to establish the context of language in more focused scenarios, which is the current focus of many companies' efforts in vertical AI applications.
So the future of AI may go in two directions: one is to establish a general context model, restoring language and text to the context in which they occurred, allowing AI to understand the world precisely.
🧵 LLMs have already reached the limit of expressing all possible meanings in written language, and even adding more training data would not uncover more potential meanings from the text. #ContextMatters
Just as even the best parrot remains a parrot without the general intelligence of a human being, LLMs may be hitting the ceiling of literal language itself, rather than a ceiling created by insufficient data. #ContextMatters
LLMs are essentially "next word prediction" machines. This lack of context leads LLMs to hallucinations, where the model generates text that is out of step with human common sense. #ContextMatters
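A minimal sketch of the "next word prediction" idea described above, using simple bigram counts in place of a neural network (the corpus and function names are illustrative, not from any real model):

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on vastly more text and learns
# a neural network instead of raw co-occurrence counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

The model only knows which word tends to follow which; it has no grounding in the situation the text came from, which is the gap the thread above calls "context".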
Hello Bluesky!