Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, SageMaker, HuggingFace, vLLM, NVIDIA NIM]
Multiple reports of LiteLLM logging or mishandling API keys and environment secrets. Review the published security advisories before pinning a version.
Source: This report is generated on demand by querying the GitHub API for repository metadata and published security advisories, then cross-referencing our curated database of known supply-chain incidents (xz-utils, event-stream, ua-parser-js, colors.js, tj-actions, Codecov, and more). Results are cached for 24 hours. We do not scan repository code or dependencies — for that, see Snyk or Socket. Verdict: CLEAN ≥ 80, CAUTION 50–79, COMPROMISED < 50.
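The score-to-verdict bands above can be sketched as a simple threshold function. This is an illustrative sketch of the stated thresholds only, not the scanner's actual implementation; the function name is hypothetical.

```python
def verdict(score: int) -> str:
    """Map a 0-100 trust score to the report's verdict band.

    Thresholds as stated in the report:
      CLEAN >= 80, CAUTION 50-79, COMPROMISED < 50.
    Illustrative only; not the tool's real code.
    """
    if score >= 80:
        return "CLEAN"
    if score >= 50:
        return "CAUTION"
    return "COMPROMISED"


print(verdict(85))  # CLEAN
```

Note that the bands are contiguous: a score of exactly 80 is CLEAN and exactly 50 is CAUTION, so every integer score maps to exactly one verdict.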