Tools: ghsafe: Because That "Job Opportunity" Repo Could Be a Trap


Source: Dev.to

*This is a submission for the GitHub Copilot CLI Challenge.*

## What I Built

### The Story That Started It All

A few weeks ago, my friend Muhammad Khamzah (a fullstack developer) shared a chilling experience on LinkedIn. Someone had approached him there offering a job opportunity. The conversation seemed legit: it moved from LinkedIn to email, and eventually they asked him to check out a project on GitHub.

But something felt off. The repository had a `.vscode` folder committed to it, something that's rarely pushed unless there's a reason. When he dug deeper (still on GitHub, before even cloning), he found hidden malicious scripts tucked away in the code. The scripts were cleverly concealed: you couldn't see them unless you scrolled horizontally on GitHub.

The malware was designed to:

- 🎯 Run automatically when you open the project folder in VS Code (via `.vscode/tasks.json` with `runOn: folderOpen`)
- πŸ’€ Execute without `npm install`; just opening the folder was enough
- 🌐 Work on all operating systems: Windows, macOS, and Linux
- πŸ•΅οΈ Deploy via Vercel, making the payload look like a normal serverless function

Other developers who fell for similar scams lost thousands of dollars in cryptocurrency. The attackers stole wallet files, SSH keys, browser-saved passwords, and API tokens, all from a single "job opportunity." You can read his original post here.

This story hit hard. Not every developer is as careful as Khamzah. So I built ghsafe, a CLI tool that scans any GitHub repository for malicious patterns before you clone and run it.

### Introducing ghsafe πŸ›‘οΈ

ghsafe acts as your first line of defense. It analyzes code for malicious patterns across seven categories and produces a beautiful, color-coded terminal report with a risk score from 0 to 100, code previews of suspicious lines, and a clear verdict: βœ… SAFE, ⚠️ SUSPICIOUS, or 🚨 DANGEROUS. For even deeper analysis, you can enable the `--ai` flag to send findings to GitHub Models for contextual threat assessment, staying fully within the GitHub ecosystem.
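Under the hood, GitHub Models exposes an OpenAI-compatible API, so choosing a provider is mostly configuration. Here is a hypothetical sketch of that selection logic; the function name and return shape are my assumptions, and while the base URL is GitHub Models' documented OpenAI-compatible inference endpoint, ghsafe's actual wiring may differ:

```typescript
// Hypothetical provider selection: prefer GitHub Models, fall back to OpenAI.
// The endpoint below is GitHub Models' OpenAI-compatible inference URL;
// this is an illustrative sketch, not ghsafe's actual code.
interface ProviderConfig {
  baseURL?: string; // undefined means the OpenAI SDK's default endpoint
  apiKey: string;
}

function selectProvider(env: Record<string, string | undefined>): ProviderConfig {
  if (env.GITHUB_TOKEN) {
    // GitHub Models: same request format as OpenAI, GitHub's infrastructure.
    return { baseURL: "https://models.inference.ai.azure.com", apiKey: env.GITHUB_TOKEN };
  }
  if (env.OPENAI_API_KEY) {
    return { apiKey: env.OPENAI_API_KEY };
  }
  throw new Error("--ai requires GITHUB_TOKEN or OPENAI_API_KEY");
}
```

Because both providers accept the same chat-completions request shape, the rest of the analysis code never needs to know which one is active.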
It also supports OpenAI as a fallback.

πŸ”— GitHub Repository: https://github.com/depapp/ghsafe

The tool shows exactly where the suspicious code is, with line numbers and code previews, and provides an AI analysis with a clear recommendation. Exit codes make it CI/CD-friendly: 0 = safe, 1 = suspicious, 2 = dangerous.

## My Experience with GitHub Copilot CLI

Building ghsafe in a single day would not have been possible without GitHub Copilot CLI. Here's how it supercharged my development.

### πŸ—οΈ Project Scaffolding

I used Copilot CLI to help me scaffold the entire project structure, from the TypeScript config and tsup bundler setup to the Commander.js CLI framework. Instead of copying boilerplate from old projects, I described what I needed and Copilot CLI generated the right configurations.

### πŸ” Writing Detection Rules

This was where Copilot CLI truly shined. Writing 35+ regex-based detection rules across 7 categories is tedious and error-prone. I described the malicious patterns I wanted to detect in natural language, like "detect when code reads SSH keys from the filesystem" or "find Base64-encoded strings longer than 80 characters", and Copilot CLI helped me craft precise regex patterns and structure them into the rule engine.

### 🎨 Rich Terminal UI

Building the beautiful terminal output with chalk, boxen, ora, and cli-table3 required getting a lot of formatting details right. Copilot CLI helped me compose the risk-bar visualization, severity color coding, and the boxed report layout, turning a plain-text scanner into something that looks and feels professional.

### πŸ€– AI Integration

Integrating the GitHub Models API for the optional deep-analysis feature was a natural fit: it uses the same OpenAI SDK format but runs through GitHub's infrastructure, keeping the entire tool within the GitHub ecosystem. Copilot CLI helped me structure the provider-selection logic (GitHub Models as primary, OpenAI as fallback) and craft the right system prompt for the security-analysis context.

### πŸ’‘ The Verdict

GitHub Copilot CLI felt like pair programming with a security-savvy senior developer.
It didn't just autocomplete; it understood the intent behind what I was building and helped me move from idea to working product in record time. The entire tool, with its 35+ detection rules, rich terminal UI, AI integration, and documentation, was built in a single focused session.

### Scanning a malicious repository

```bash
# Scan a suspicious repo before you run it
npx ghsafe scan https://github.com/suspicious-user/totally-legit-project
```

### Additional features

```bash
# Scan a local directory
npx ghsafe scan ./path/to/project

# Enable AI-powered deep analysis (GitHub Models, recommended)
export GITHUB_TOKEN=your-github-token
npx ghsafe scan https://github.com/user/repo --ai

# Alternative: use OpenAI directly
export OPENAI_API_KEY=your-openai-key
npx ghsafe scan https://github.com/user/repo --ai

# JSON output for CI/CD pipelines
npx ghsafe scan ./project --json
```

If this tool helps even one developer avoid a phishing repo, it's worth it. Stay safe out there. πŸ›‘οΈ
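For readers curious how the "run on folder open" trick works: VS Code tasks can declare `runOn: folderOpen`, so a committed `.vscode/tasks.json` like the hypothetical one below fires as soon as the folder is opened (subject to VS Code's workspace-trust and automatic-tasks settings), with no `npm install` needed. The label and command here are made up for illustration:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Install dependencies",
      "type": "shell",
      "command": "node .vscode/update.js",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

An innocent-looking label like "Install dependencies" is exactly why this file deserves scrutiny before you open a stranger's repo in your editor.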
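To make the rule-engine idea concrete, here is a minimal sketch of what a regex-based rule and scanner can look like. The rule IDs, categories, and exact patterns are illustrative assumptions, not ghsafe's actual rules:

```typescript
// Minimal sketch of a regex-based detection rule engine.
// Rule names and patterns are illustrative, not ghsafe's real rule set.
interface Rule {
  id: string;
  category: string;
  severity: "low" | "medium" | "high";
  pattern: RegExp;
  description: string;
}

const rules: Rule[] = [
  {
    id: "ssh-key-read",
    category: "credential-theft",
    severity: "high",
    pattern: /\.ssh[\/\\]id_(rsa|ed25519|ecdsa)/,
    description: "Reads SSH private keys from the filesystem",
  },
  {
    id: "long-base64",
    category: "obfuscation",
    severity: "medium",
    pattern: /["'][A-Za-z0-9+\/]{80,}={0,2}["']/,
    description: "Base64-like string longer than 80 characters",
  },
];

interface Finding {
  ruleId: string;
  line: number;
  preview: string;
}

// Scan one file's source text line by line, recording every rule match
// with its line number and a short code preview for the report.
function scanSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const rule of rules) {
      if (rule.pattern.test(text)) {
        findings.push({ ruleId: rule.id, line: i + 1, preview: text.trim().slice(0, 80) });
      }
    }
  });
  return findings;
}
```

Each finding carries enough context (rule, line, preview) to render the "exactly where the suspicious code is" report described above.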
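The final verdict boils down to a simple score-to-verdict mapping. In this sketch the 30/70 thresholds are my illustrative assumption; only the exit codes (0 = safe, 1 = suspicious, 2 = dangerous) come from the tool's documented behavior:

```typescript
// Map a 0-100 risk score to a verdict and a CI/CD-friendly exit code.
// NOTE: the 30/70 cutoffs are illustrative guesses, not ghsafe's actual thresholds.
type Verdict = "SAFE" | "SUSPICIOUS" | "DANGEROUS";

function verdictFor(score: number): Verdict {
  if (score >= 70) return "DANGEROUS"; // 🚨
  if (score >= 30) return "SUSPICIOUS"; // ⚠️
  return "SAFE"; // βœ…
}

// Exit codes as documented: 0 = safe, 1 = suspicious, 2 = dangerous,
// so a CI job can fail the build on anything non-safe.
const EXIT_CODES: Record<Verdict, number> = { SAFE: 0, SUSPICIOUS: 1, DANGEROUS: 2 };
```

In a pipeline, `npx ghsafe scan . --json` plus a non-zero exit code is enough to block a merge on a suspicious dependency.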