In the 2000s: Gran Turismo 5, Minecraft.
Before that: Lotus Challenge (apparently its real name was Lotus III: The Ultimate Challenge). :)
Gemini 3, as of 29.11.2025, same prompt. Seems to be the best of the three.
ChatGPT, as of 29.11.2025, same prompt.
+9 months on, and DeepSeek has shown almost no improvement.
I haven't come across it myself, but it may have been designed that way so that you notice it was left on (and feel motivated to turn it off). Lighting a red LED while it's on (rather than ever turning the lights fully off) seems like it would have been both a practical and complaint-free solution.
Impressive capability. On mobile, though, I'd prefer it to jump to the start of its response; otherwise I have to scroll back each time.
One of the simpler features still missing is "search across all AI assistants" for past sessions.
I'm not a GSM expert, but it's wise not to put much trust in SMS or network/system messages. And if an expert shows up saying "don't worry, that can't happen" or the like, don't believe them either :)
The key point is changing a vehicle from a boat to an airplane, forgetting to update the data file to include the required parameters... and not catching it before release. It looks like an interface (data and code) management problem.
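One cheap guard against this class of bug is validating each vehicle's data file against the parameters its type requires, at load or build time rather than after release. A minimal sketch, with hypothetical vehicle types and parameter names (the real game's schema is unknown to me):

```python
# Required parameters per vehicle type; all names here are made up
# for illustration, not taken from any real game's data format.
REQUIRED = {
    'boat': {'hull_drag', 'max_speed'},
    'airplane': {'wing_area', 'max_speed', 'stall_speed'},
}

def validate(vehicle):
    """Fail fast if a vehicle's data is missing parameters its type requires."""
    missing = REQUIRED[vehicle['type']] - vehicle.keys()
    if missing:
        raise ValueError(f"{vehicle['name']}: missing {sorted(missing)}")

# A vehicle changed from boat to airplane, but its data file kept
# the old boat parameters:
v = {'name': 'swift', 'type': 'airplane', 'hull_drag': 0.3, 'max_speed': 900}
try:
    validate(v)
except ValueError as e:
    print(e)  # caught in a build check, not by players
```

A check like this, run over all data files in CI, would have flagged the missing airplane parameters long before release.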
The AI tools are able to generate parts and assemble them into an assembly within a few minutes, via Python scripting inside FreeCAD.
The only problem: most of the time the AI doesn't know what it's building (this image is an NC plotter designed by DeepSeek; the parts do have meaningful names, though).
Book-covering machines came out in recent years, and so bookshops, at an affordable price, took the place of that family-time tradition.
Good to know... it's become the "Selpak" of zippers in recent years (a brand name used generically).
Conclusion:
1. DeepSeek worked great, though it displayed the board as text, which is still a lot better than no display at all.
2. ChatGPT worked fabulously; with very little guidance, it took only seconds to improve the graphic.
3. Gemini parsed the file but failed, and admitted failing, at displaying the board.
Gemini responded with "here's the board," but nothing was displayed. It took some convincing to get it to try drawing with text, and even then it displayed only one stone on the board. When I prompted it to show all 20 moves, it couldn't, and gave up with an apology.
ChatGPT successfully drew the board with graphics. The only problem was that the white stones were hard to see on the white background. When reminded, it successfully changed the background to light brown (my suggestion). Note that the row and column labels are not displayed.
DeepSeek drew the board as text, but at the 10th move overall, not at usernickname's 10th move.
Test 2: can you draw the board at the 10th move of usernickname?
ChatGPT identified and parsed the file correctly and described the game's settings, with a summary of the significant ones.
Gemini misidentified the file as a TsumeGo file (another Go-related website). In fact, the file's source (KGS) was already written inside the file, and DeepSeek identified it successfully.
DeepSeek identified and parsed the file correctly and described ALL of the game's settings.
This time, I tested the three AI tools with an SGF file. (SGF is a file format that describes Go matches.) The results were as follows:
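For context, extracting the moves from an SGF record takes only a few lines of Python. A minimal sketch (capture logic is omitted, and coordinates are assumed to be standard two-letter SGF values; the sample record is invented):

```python
import re

def parse_sgf_moves(sgf_text):
    """Extract (color, coord) move pairs from an SGF game record.
    SGF coords are two lowercase letters, e.g. 'pd'; an empty
    value like B[] is a pass (returned as None)."""
    moves = []
    for color, coord in re.findall(r';\s*([BW])\[([a-s]{0,2})\]', sgf_text):
        moves.append((color, coord or None))
    return moves

def board_at(moves, n, size=19):
    """Text rendering of the board after the first n moves
    (captures ignored in this sketch)."""
    grid = [['.'] * size for _ in range(size)]
    for color, coord in moves[:n]:
        if coord:
            col = ord(coord[0]) - ord('a')
            row = ord(coord[1]) - ord('a')
            grid[row][col] = 'X' if color == 'B' else 'O'
    return '\n'.join(' '.join(r) for r in grid)

sample = "(;GM[1]FF[4]SZ[19]PB[usernickname]PW[opponent];B[pd];W[dp];B[pq];W[dd])"
moves = parse_sgf_moves(sample)
print(board_at(moves, 4))
```

Filtering `moves` by color before slicing is all it takes to show "usernickname's 10th move" rather than the 10th move overall, which is the distinction DeepSeek missed below.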
3 is usually a good intro for newbies; the main drawback is that the tech stack changes rapidly, so the content will age quickly.
I gave Gemini extra chances and prompts to come up with working code. But somehow it couldn't!
example prompt: "now write the complete and functional code to compare all methods"
and its code included a line like:
# ... (function remains exactly the same as the previous response) ...
:((
My overall review:
1) DeepSeek performs best at understanding the context and coming up with working code and ideas.
2) ChatGPT can get things done, but it didn't come up with metrics when comparing the different methods, and it ran out of credits.
3) Gemini... seems to have fallen behind the competition.
DeepSeek managed to write the code, though it used a deprecated method at first. When I reported the error, it suggested removing that method, and the comparison worked on the first try. The comparison included metrics such as detection time, matching time, and number of matches.
ChatGPT ran out of credits for its best model but still managed to write the code; however, it too used a deprecated method. It then corrected it (after a few tries) and got all the methods working, but didn't report any comparable findings.
Gemini failed awfully again, this time not at the algorithm level: it made a syntax error while printing something I hadn't even asked for.
After this, I asked all three AI systems to extend the code to compare several alternative methods (to be suggested by the AI itself) and to report the results.
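The comparison I asked for boils down to a small benchmarking harness: run each method, record detection time, matching time, and match count. A minimal, library-free sketch of that harness (the detect/match callables here are trivial stand-ins; in the actual test the methods were presumably real feature detectors, e.g. from OpenCV):

```python
import time

def benchmark(methods, data):
    """Time each (detect, match) pair and collect comparable metrics."""
    results = {}
    for name, (detect, match) in methods.items():
        t0 = time.perf_counter()
        features = detect(data)
        t1 = time.perf_counter()
        matches = match(features, features)  # self-match, for the sketch
        t2 = time.perf_counter()
        results[name] = {
            'detection_time_s': t1 - t0,
            'matching_time_s': t2 - t1,
            'num_matches': len(matches),
        }
    return results

# Stand-in "methods": real code would plug in actual detectors/matchers.
methods = {
    'every_3rd': (lambda d: d[::3], lambda a, b: [p for p in a if p in b]),
    'every_5th': (lambda d: d[::5], lambda a, b: [p for p in a if p in b]),
}
report = benchmark(methods, list(range(100)))
for name, metrics in report.items():
    print(name, metrics)
```

The point of the structure is that the metrics are collected uniformly per method, which is exactly the reporting ChatGPT skipped and DeepSeek delivered.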