I built "npm audit" for AI agents

Source: DEV Community
I was adding MCP tools to a project when I realized something uncomfortable: I had no idea what the code I was installing could actually do. The README said "connects Claude to Blender." What it didn't say was that one of the registered tools passes a raw string parameter to Python's exec() with no builtin restriction. The LLM doesn't get "Blender API access." It gets full Python execution on the host machine.

I wanted a way to know this before running the code. So I built one.

## What reachscan does

reachscan is a static analysis CLI for Python and TypeScript/JavaScript AI agent codebases. Point it at a repo, a PyPI package, or an MCP endpoint, and it tells you:

- What the code can do (shell exec, file access, network calls, credential access, dynamic code execution)
- Which of those capabilities the LLM can actually trigger (reachability analysis)
- The exact call path from the LLM entry point to the dangerous code

```shell
pip install reachscan

# Scan a GitHub repo
reachscan https://github.com/user/r
```
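To make the exec() problem concrete, here is a hedged sketch of the anti-pattern described above: an MCP-style tool handler that forwards a raw string to exec(). The function name and docstring are illustrative, not taken from any real Blender MCP server.

```python
# Hypothetical sketch of the anti-pattern: a tool handler that hands
# an LLM-supplied string straight to exec(). Names are illustrative.
def run_python(code: str) -> str:
    """Tool handler advertised as 'runs a script in Blender'."""
    # exec() with no restricted globals: the model-supplied string can
    # import os, read files, open sockets -- full host-level execution.
    exec(code)
    return "ok"

# Anything the model emits becomes host-level Python:
run_python("import os; print(os.listdir('.'))")
```

The README-level description ("Blender API access") says nothing about this; only reading the tool's source, or scanning it, reveals the real capability.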
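The capability-detection half can be sketched with Python's `ast` module. This is a minimal illustration of the idea, not reachscan's actual implementation: walk the syntax tree and flag calls whose names grant dangerous capabilities (the `DANGEROUS` and `SHELL` name sets here are my own illustrative choices).

```python
import ast

# Illustrative capability buckets -- a real scanner's lists are larger
# and also resolve which module a name actually comes from.
DANGEROUS = {"exec", "eval", "compile"}          # dynamic code execution
SHELL = {"system", "popen", "run", "call"}       # shell execution

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line, capability) findings for one Python source string."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both bare names (exec) and attributes (os.system).
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in DANGEROUS:
                findings.append((node.lineno, f"dynamic-exec:{name}"))
            elif name in SHELL:
                findings.append((node.lineno, f"shell:{name}"))
    return findings

sample = "import os\nos.system('ls')\nexec(payload)\n"
print(scan(sample))  # → [(2, 'shell:system'), (3, 'dynamic-exec:exec')]
```

Matching on names alone over-approximates (a local function called `run` would be flagged), which is the usual static-analysis trade-off: err toward false positives rather than miss a real sink.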
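The reachability half can be sketched the same way, under heavy assumptions: build a call graph from function definitions in a single file, then ask whether a dangerous sink is reachable from an LLM-facing entry point. A real analysis resolves imports, methods, and dynamic dispatch across files; this toy version handles only direct calls to bare names.

```python
import ast
from collections import defaultdict, deque

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each function name to the bare names it calls (intra-file only)."""
    graph = defaultdict(set)
    tree = ast.parse(source)
    for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                graph[fn.name].add(node.func.id)
    return graph

def reachable(graph: dict[str, set[str]], entry: str, target: str) -> bool:
    """BFS from an entry point: is the target callee on any call path?"""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(graph.get(fn, ()))
    return False

src = """
def tool_entry(arg):      # registered with the MCP server
    helper(arg)

def helper(arg):
    exec(arg)             # dangerous sink

def unused():
    eval('1')
"""
g = call_graph(src)
print(reachable(g, "tool_entry", "exec"))  # → True
print(reachable(g, "unused", "exec"))      # → False
```

This is the distinction that matters: `eval` exists in the file, but only `exec` sits on a path the LLM can actually trigger, and the BFS also yields that path for the report.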