@aslandizaji
Artificial Intelligence, Machine Learning, Neuroscience, Complex Systems, Economics. PhD Student at the University of Tehran. Cofounder: @AutocurriculaLab, @NeuroAILab, @LangTechAI. https://sites.google.com/a/umich.edu/aslansdizaji/
This project was inspired by a course from @DeepLearningAI. I would like to thank them too!
Check it out: github.com/aslansd/nvid... Built on top of NVIDIA NeMo's framework-agnostic agent runtime and LangGraph's workflow engine for stateful agent execution.
Would love your feedback and contributions!
Just open-sourced a Deep Research Agent built using NVIDIA NeMo Agent Toolkit (NAT) + LangGraph Deep Agent Framework!
This project demonstrates how to orchestrate multi-agent workflows for complex research tasks, combining stateful planning, iterative exploration, and result synthesis in a seamless pipeline. It's designed for extensible, production-ready AI research automation with modern agent tooling.
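The pipeline described above (stateful planning, iterative exploration, result synthesis) can be sketched framework-free. This is a toy illustration, not the repo's actual API: names like `plan`, `explore`, and `synthesize` are illustrative, and the real project delegates all of this to NeMo Agent Toolkit and LangGraph.

```python
# Minimal sketch of a stateful plan -> explore -> synthesize pipeline.
# All names are illustrative stand-ins for the real agent runtime.

def plan(question):
    """Break the research question into ordered sub-tasks."""
    return [f"background of {question}", f"recent work on {question}"]

def explore(task, state):
    """One exploration step; results accumulate in shared state."""
    finding = f"notes on '{task}'"          # stand-in for a web search call
    state["findings"].append(finding)
    return state

def synthesize(state):
    """Merge accumulated findings into a final answer."""
    return " | ".join(state["findings"])

def run(question):
    state = {"findings": []}                # the 'stateful' part of the pipeline
    for task in plan(question):
        state = explore(task, state)
    return synthesize(state)

print(run("agent runtimes"))
```

The point of the shared `state` dict is that every step reads and extends the same record, which is what makes the workflow traceable end to end.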
Built upon the original ADK sample: [https://github.com/google/adk-samples/tree/main/python/agents/machine-learning-engineering]. Full Colab notebook & Gradio UI included. Check it out: [https://github.com/aslansd/MLE-Agent-GoogleADK-Gradio] #MLE #MultiAgent #GenAI
Just released my MLE-STAR-inspired Machine Learning Engineering Agent! It uses the Google ADK to orchestrate Initialization, Refinement, and Ensemble agents, training SOTA models driven by search and targeted code refinement.
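The three-stage loop above can be mimicked in a few lines of plain Python. This is a toy sketch under loose assumptions: the "models" here are just validation-score numbers, whereas the real agent trains actual models through the Google ADK.

```python
# Toy sketch of the MLE loop: Initialization -> Refinement -> Ensemble.
# Scores stand in for trained models; all numbers are illustrative.

import random

def initialization_agent(seed):
    """Propose a few candidate solutions (here: random starting scores)."""
    random.seed(seed)
    return [random.uniform(0.5, 0.7) for _ in range(3)]

def refinement_agent(scores, rounds=2):
    """Each round, improve the weakest candidate (targeted refinement)."""
    for _ in range(rounds):
        i = scores.index(min(scores))
        scores[i] = min(1.0, scores[i] + 0.1)
    return scores

def ensemble_agent(scores):
    """Combine candidates; here a simple averaging ensemble."""
    return sum(scores) / len(scores)

candidates = initialization_agent(seed=0)
refined = refinement_agent(candidates)
print(f"ensemble score: {ensemble_agent(refined):.3f}")
```

The design choice worth noting is that refinement always targets the weakest candidate, which is the "targeted code refinement" idea in miniature.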
Finally, this app was inspired by two courses from @DeepLearningAI and @LangChainAI Academy. I would like to thank them!
Check it out here:
GitHub (code): github.com/aslansd/deep...
Streamlit app (try it out): d3yjnms2ch6yxcthmrvqnn.streamlit.app
Would love feedback from researchers, builders, and anyone interested in multi-agent systems!
The results?
• Final answers with explanations
• Automatically generated charts
• Full trace of every step the system took
It's like having a research assistant that works systematically and shows its work!
My app combines all of this into a multi-agent framework:
A Planner breaks down your research question
Specialized agents (web researcher, chart generator, summarizer) handle subtasks
Everything is orchestrated into a transparent workflow you can trace
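That planner/worker split can be sketched framework-free. The agent names match the post, but the routing rule and return values are illustrative; the real app wires this up with LangGraph Deep Agents.

```python
# Sketch: a planner decomposes the question; specialized agents handle
# subtasks; every step is recorded in a trace you can inspect.

def planner(question):
    """Decompose the research question into (agent, subtask) pairs."""
    return [
        ("web_researcher", f"search: {question}"),
        ("chart_generator", f"chart: {question}"),
        ("summarizer", f"summarize: {question}"),
    ]

# Stand-ins for the specialized agents (real ones call tools and LLMs).
AGENTS = {
    "web_researcher": lambda t: f"[sources for '{t}']",
    "chart_generator": lambda t: f"[chart spec for '{t}']",
    "summarizer": lambda t: f"[summary of '{t}']",
}

def orchestrate(question):
    trace = []                              # the transparent workflow record
    for agent_name, subtask in planner(question):
        result = AGENTS[agent_name](subtask)
        trace.append({"agent": agent_name, "task": subtask, "result": result})
    return trace

for step in orchestrate("GPU market trends"):
    print(step["agent"], "->", step["result"])
```

Keeping the trace as plain data is what makes the "full trace of every step" output cheap to render in the UI.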
Traditional LLM agents can be "shallow": they just loop through tools and struggle with long, complex tasks.
Deep Agents (a new feature in LangGraph) bring:
✅ Task planning (TODOs)
✅ Sub-agent delegation
✅ Context offloading to files
✅ Robust reasoning prompts
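Three of those ingredients (TODOs, delegation, file offloading) can be imitated in plain Python. This is a toy, not LangGraph's actual Deep Agents API; every name here is hypothetical.

```python
# Toy imitation of three Deep Agents ingredients: a TODO list,
# sub-agent delegation, and context offloading to a scratch file.

import json
import os
import tempfile

def sub_agent(task):
    """Stand-in for a delegated LLM call handling one sub-task."""
    return f"done: {task}"

def deep_agent(goal):
    todos = [f"{goal}: step {i}" for i in (1, 2)]   # task planning (TODOs)
    results = []
    for todo in todos:
        results.append(sub_agent(todo))             # sub-agent delegation
    # Context offloading: persist intermediate results outside the prompt.
    scratch = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
    json.dump(results, scratch)
    scratch.close()
    return scratch.name, results

path, results = deep_agent("literature review")
print(results)
os.unlink(path)
```

Offloading to a file matters because it keeps long intermediate context out of the model's prompt window, which is exactly what makes deep tasks tractable.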
It's a Streamlit app powered by LangGraph Deep Agents that can take your research question, plan a workflow, fetch data, generate charts, and explain results step by step.
Excited to share something I've been building: the Multi-Agent Deep Research Assistant!
GitHub: github.com/aslansd/deep...
App: d3yjnms2ch6yxcthmrvqnn.streamlit.app
Big thanks to @DeepLearningAI & @LangChainAI Academy for the resources that made this possible.
Multi-Modal RAG App (Streamlit + Ollama)
Built a lightweight Retrieval-Augmented Generation (RAG) system that processes both text + image docs. Users can load files, build a vector store, and run retrieval-grounded QA.
github.com/aslansd/mult...
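The retrieval core of a RAG app like this can be sketched with a toy vector store. The real app uses proper embeddings via Ollama; this sketch substitutes bag-of-words vectors and cosine similarity, so every class and function name here is illustrative.

```python
# Toy vector store: bag-of-words vectors + cosine-similarity retrieval.

import math
from collections import Counter

def embed(text):
    """Stand-in embedding: word-count vector (real apps use model embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((embed(text), text))

    def query(self, question, k=1):
        """Return the k documents most similar to the question."""
        ranked = sorted(self.docs,
                        key=lambda d: cosine(embed(question), d[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = VectorStore()
store.add("LangGraph builds stateful agent workflows")
store.add("Ollama serves local large language models")
print(store.query("which tool serves local models"))
```

Retrieval-grounded QA then amounts to passing `store.query(question)` to the LLM as context alongside the question itself.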
React Native + Expo Go App for Open Deep Research
Brought the same framework to mobile, enabling research automation on the go.
github.com/aslansd/open... expo.dev/accounts/asa...
Streamlit App for Open Deep Research
Adapted LangChain's Open Deep Research (LangGraph) into a Streamlit app to support advanced research workflows: ingestion, retrieval, multi-modal analysis, and context-aware Q&A.
github.com/aslansd/open...
rneaovknvzddyykhkeu2et.streamlit.app
Over the past two weeks at LangTechAI, I built 3 generative AI apps on top of the LangChain / LangGraph frameworks, inspired by courses from @DeepLearningAI & @LangChainAI Academy.
Feel free to try any of the apps above; I'd be happy to receive feedback.
These three projects would not have been possible without the online courses offered by DeepLearningAI and LangChain Academy; I would like to thank them, as well as CrewAI, Gradio, Ollama, and TogetherAI.
In the third project, I extended one of the LangChain apps, Ollama Deep Researcher, a local web research assistant built upon the multi-agent framework of LangGraph that uses any LLM hosted by Ollama. Given a topic, it generates a web search query, gathers web search results (via seven different search APIs: DuckDuckGo, Tavily, Perplexity, Linkup, Exa, ArXiv, and PubMed), summarises the results, reflects on the summary to identify knowledge gaps, generates a new search query to address those gaps, and repeats the search to improve the summary for a user-defined number of cycles. It then provides the user with a final markdown summary listing all sources used.
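That reflect-and-refine loop can be sketched in a few lines of plain Python. The `search`, `summarise`, and `reflect` functions are stubs; in the real app, search hits the APIs listed above and reflection is done by the Ollama-hosted LLM.

```python
# Sketch of the iterative research loop:
# search -> summarise -> reflect on gaps -> new query, for N cycles.

def search(query):
    """Stand-in for a web-search API call."""
    return [f"result for '{query}'"]

def summarise(summary, results):
    """Fold new results into the running summary."""
    return summary + " ".join(results) + " "

def reflect(summary, cycle):
    """Stand-in for an LLM spotting a knowledge gap and proposing a query."""
    return f"follow-up question {cycle}"

def research(topic, cycles=3):
    summary, query = "", topic
    sources = []
    for cycle in range(1, cycles + 1):
        results = search(query)
        sources.extend(results)
        summary = summarise(summary, results)
        query = reflect(summary, cycle)
    # Final markdown summary with the sources used.
    return f"# {topic}\n\n{summary.strip()}\n\nSources: {len(sources)}"

print(research("local LLM agents", cycles=2))
```

The user-defined `cycles` parameter is what bounds the loop, so cost stays predictable no matter how many gaps reflection keeps finding.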
In the second project, I built a Dungeon game simulating a fantasy world composed of kingdoms, towns, characters, and inventories, powered by one of the LLMs provided by TogetherAI, with Gradio as the user interface. Again, all the code and results are provided in one notebook.
In the first project, I simulated an environment similar to a startup with three cofounders: the first is more technical, the second more product-oriented, and the third more business-oriented. These three cofounders brainstorm about various topics given by the user for their startup, each drawing on their own expertise. For this purpose, I used the multi-agent framework of CrewAI, combining it with LangChain, Gradio, and one of the open-source LLMs served by Ollama. All the code and results are provided in a notebook.
Here are the three projects I have done during the past few months under LangTechAI:
github.com/aslansd/star...
github.com/aslansd/dung...
github.com/aslansd/olla...
Extra Day 26
Extra Day 25
Extra Day 24