# GitGuard: The AI Safety Net for Your Repository

Source: Dev.to

*This is a submission for the GitHub Copilot CLI Challenge.*

## What I Built

GitGuard is a command-line tool designed to bring "psychological safety" to Git operations. It acts as an intelligent firewall between your natural-language intent and the actual execution of Git commands.

We've all been there: staring at the terminal, sweating before hitting Enter on a `git reset --hard`, or Googling "how to undo last commit without losing files" for the 100th time. GitGuard solves this by using the GitHub Copilot CLI as a translation engine, but with a twist: it wraps the AI suggestions in a safety layer.

## How It Works

- **Translate:** You type what you want to do in plain English (e.g., "undo last commit but keep changes").
- **Analyze:** GitGuard consults Copilot to generate the correct Git command.
- **Classify Risk:** Before showing you the command, GitGuard's internal Risk Classifier analyzes it (using regex patterns) to detect destructive operations: deletions, force pushes, history rewrites.
- **Verify:** It presents the command with a clear explanation and a risk level (🟢 LOW, 🔴 HIGH).
- **Refine:** If the command isn't quite right, you can conversationally refine it (e.g., "add the force flag") without restarting.

GitGuard is built 100% in Kotlin, leveraging the robust JVM ecosystem while providing a modern terminal UX. You can find the full source code and installation instructions here:

👉 GitHub Repository: yorky47/git-guard

## Handling Dangerous Operations

Here is GitGuard protecting the user from a high-risk operation: it correctly identifies `git reset --hard` as dangerous and requires explicit confirmation.

## My Experience with GitHub Copilot CLI

Building GitGuard was a unique experience because I didn't just use Copilot to write the code; I used the Copilot CLI as the core engine of my application.

## The Challenge: Taming the LLM

The biggest challenge was making the output of a Large Language Model (LLM) deterministic enough for a CLI tool.
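The regex-based Risk Classifier mentioned above is the deterministic counterpart to the LLM. A minimal sketch of how such a classifier could look — the pattern list and names here are my illustration, not GitGuard's actual rules:

```kotlin
// Hypothetical sketch of a regex-based risk classifier.
// Pattern list and names are illustrative assumptions.
enum class RiskLevel { LOW, HIGH }

object RiskClassifier {
    // Patterns for destructive operations: deletions, force pushes,
    // and history rewrites.
    private val highRisk = listOf(
        Regex("""reset\s+--hard"""),          // discards working-tree changes
        Regex("""push\s+.*(--force|-f)\b"""), // rewrites remote history
        Regex("""clean\s+-[a-z]*f"""),        // deletes untracked files
        Regex("""branch\s+.*-D\b"""),         // force-deletes a branch
        Regex("""filter-branch""")            // rewrites history
    )

    fun classify(command: String): RiskLevel =
        if (highRisk.any { it.containsMatchIn(command) }) RiskLevel.HIGH
        else RiskLevel.LOW
}

fun main() {
    println(RiskClassifier.classify("git reset --hard HEAD~1")) // HIGH
    println(RiskClassifier.classify("git status"))              // LOW
}
```

Because the classifier runs on the generated command string rather than the user's prompt, it stays effective no matter how the natural-language request was phrased.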
The standard `gh copilot suggest` output is designed for humans to read, not for software to parse.

## The Solution: Strict Prompt Engineering

To solve this, I implemented a strict prompt-engineering technique within the `CopilotService`. I force the CLI to act as a JSON API:

```kotlin
// From CopilotService.kt
private val TOOM_PROMPT =
    "Act:GitGuard Task:Intent2Git Output:JSON_Format:{command,explanation} Rules:StrictJSON,NoThinking,NoLogs"
```

This allows GitGuard to parse the response reliably, extracting the command for execution and the explanation for the UI, while filtering out the "noise" typically associated with CLI output.

## Impact on Development

Using the Copilot CLI extension expedited the core logic significantly. Instead of building a complex NLP model to understand Git intent, I could rely on Copilot's vast knowledge of Git syntax. This let me focus on the application logic: the risk classification system, the terminal UI, and the safety guardrails.

GitGuard proves that the GitHub Copilot CLI isn't just a helper tool for developers; it's a powerful backend API that enables a new class of intelligent developer tools.
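With the model pinned to that `{command, explanation}` contract, the parsing side can stay small. A hypothetical sketch of extracting the pair from raw CLI output — the helper and its regexes are my illustration, not GitGuard's actual code:

```kotlin
// Hypothetical parser for the strict-JSON contract described above.
// The data class and extraction logic are illustrative assumptions.
data class Suggestion(val command: String, val explanation: String)

fun parseSuggestion(raw: String): Suggestion? {
    // Grab the first {...} object, ignoring any banner text or log
    // noise the CLI may print around it.
    val json = Regex("\\{[^{}]*\\}").find(raw)?.value ?: return null
    // Pull a quoted string field out of the JSON object.
    fun field(name: String): String? =
        Regex("\"$name\"\\s*:\\s*\"([^\"]*)\"").find(json)?.groupValues?.get(1)
    val command = field("command") ?: return null
    val explanation = field("explanation") ?: return null
    return Suggestion(command, explanation)
}

fun main() {
    val raw = "Welcome to Copilot CLI\n" +
        "{\"command\": \"git reset --soft HEAD~1\", " +
        "\"explanation\": \"Undo the last commit but keep the changes staged\"}"
    println(parseSuggestion(raw)?.command) // git reset --soft HEAD~1
}
```

In a real implementation a proper JSON library (e.g., kotlinx.serialization) would be the safer choice; the point of the sketch is that a strict output contract turns free-form LLM text into something trivially machine-readable.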