# AI-SLOP Detector v2.7.0 — Why We Built a Linter We Actually Use
2026-02-11
## The Problem (and the Example)

```python
"""
Enterprise-grade data processing pipeline with fault-tolerant
error handling and production-ready scalability features.
"""
import torch
import numpy as np
from typing import Optional, Dict, List


def process_data(data: Optional[Dict[str, List[float]]]) -> Dict:
    """Process input data through the optimized pipeline.

    This function implements a production-ready data processing
    pipeline with enterprise-grade error handling, fault-tolerant
    retry mechanisms, and scalable architecture patterns.

    Args:
        data: Input data dictionary with feature vectors.

    Returns:
        Processed results with quality metrics.
    """
    if data is None:
        return {}
    return {k: v for k, v in data.items()}
```

This is what passing lint can look like and still be slop: beautifully formatted, professionally worded, zero lint errors, and almost zero substance. Look closer:

- `torch` / `numpy` are imported and never used
- 13 lines of docstring for 3 lines of logic (4.3x inflation)
- claims like "enterprise-grade" / "production-ready" with no evidence
- the function is basically a dictionary identity operation dressed in corporate language

No syntax problems. But traditional linters check form. AI-SLOP Detector checks substance.

## The Context (Why This Became a Survival Tool)

Flamehaven builds high-complexity systems where "code that looks correct" isn't just annoying; it's expensive. Especially in RexSyn, our BioAI / regulated reasoning pipeline, slop is not a cosmetic issue:

- hallucinated dependencies can break audit replay
- inflated narratives can hide no-op logic
- "maturity claims" can pass review because they sound professional

I'm a solo developer. One AI partner. No large team to catch "beautifully wrong" code in review. So I stopped hoping reviews would save me. I built an editor-level gate.

## v2.7.0 (Core Value)

Three weeks ago, v2.6.3 introduced consent-based complexity: intentional complexity should be documented, not punished. v2.7.0 fixes a different gap. Previously, the VS Code extension surfaced only ~40% of the CLI's intelligence. Now it surfaces ~95%. Not "more rules." More visibility where developers actually work.

## Why We Update This Regularly

This is not a hobby project. This is the gate we use before our own modules enter:

- SIDRCE audits
- Drift monitors
- Constitutional quality checks

If AI slop passes, our governance fails. That's why releases follow internal system pressure, not marketing calendars. RexSyn evolves. New modules expose new failure modes. Governance gets stricter. The detector adapts. If it breaks our workflow, it gets fixed the same day.

## Feature Details (What v2.7.0 Surfaces in VS Code)

### 1) Docstring Inflation (line-level)

Every over-documented function gets flagged with the exact ratio and location:
```text
Docstring inflation: process_data (13 doc / 3 impl = 4.3x)
```
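A doc-to-implementation ratio like the 4.3x above can be approximated with Python's `ast` module. This is a minimal sketch of the idea, not the detector's actual heuristic; the function name and the example source are illustrative:

```python
import ast


def doc_to_impl_ratio(source: str) -> dict:
    """Return {function_name: docstring lines / implementation lines}."""
    ratios = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            doc_lines = len(doc.splitlines()) if doc else 0
            # Skip the docstring statement itself when counting implementation.
            body = node.body[1:] if doc else node.body
            impl_lines = sum(stmt.end_lineno - stmt.lineno + 1 for stmt in body)
            if impl_lines:
                ratios[node.name] = round(doc_lines / impl_lines, 1)
    return ratios


src = '''
def process_data(data):
    """Line one.
    Line two.
    Line three.
    Line four.
    Line five.
    Line six.
    """
    if data is None:
        return {}
    return dict(data)
'''
print(doc_to_impl_ratio(src))  # -> {'process_data': 2.0}
```

The real detector also maps ratios to severities (critical/warning); this sketch only computes the raw number.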
"production-ready" claim lacks evidence: error_handling, logging, tests Enter fullscreen mode Exit fullscreen mode CODE_BLOCK:
"production-ready" claim lacks evidence: error_handling, logging, tests CODE_BLOCK:
"production-ready" claim lacks evidence: error_handling, logging, tests CODE_BLOCK:
### 3) Hallucinated Dependencies (Problems panel)

Not just "unused imports." More like: "AI added this because it sounds relevant."

```text
Hallucinated dependency: "torch" — imported but serves no verified purpose
```
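The first half of this check reduces to unused-import analysis, which `ast` makes straightforward. This sketch covers only that half; the "serves no verified purpose" part of the real diagnostic goes further than name usage:

```python
import ast


def unused_imports(source: str) -> list:
    """Report imported modules whose bound names are never referenced."""
    tree = ast.parse(source)
    imported, used = {}, set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # `import numpy as np` binds "np"; `import torch` binds "torch".
                imported[alias.asname or alias.name.split(".")[0]] = alias.name
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = alias.name
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return [module for name, module in imported.items() if name not in used]


src = "import torch\nimport numpy as np\nx = np.zeros(3)\n"
print(unused_imports(src))  # -> ['torch']
```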
### 4) Pattern Issues + Actionable Fixes

Instead of vague warnings, pattern issues now include a fix path:

```text
Excessive try/except with bare except clause
Suggestion: Catch specific exceptions (ValueError, KeyError) instead of bare except
```

### 5) Debounced Lint-on-Type (1500ms)

Lint-on-type no longer triggers analysis on every keystroke. It runs after you pause typing for 1500ms, so governance doesn't fight flow.

### 6) LDR Grade in the Status Bar (quick definition)

LDR (Logic Density Ratio) is a lightweight signal for how much real logic exists relative to scaffolding: docs, wrappers, decorative structure. In v2.7.0, the LDR letter grade is visible via the status bar tooltip. Sometimes you don't need a report. You need a glance.

## Before / After (Extension Coverage)

Coverage: ~40% → ~95% of CLI output. The hard part isn't detection. It's surfacing the right signal at the right time.

## Technical Structure (How CLI and VS Code Stay Aligned)

- the CLI emits a structured diagnostics report
- the extension maps it into:
  - Problems panel (line-level)
  - status bar summaries (LDR / gate)
  - inline highlights (ratios, evidence gaps)
- debounce is applied before analysis triggers
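The debounce applied before analysis triggers is a standard pattern: each new event cancels the pending timer, so the callback fires only after a quiet window. The extension's actual implementation isn't shown here; this is a minimal Python sketch of the idea:

```python
import threading
import time


class Debouncer:
    """Run `callback` only after `delay` seconds with no new trigger() calls."""

    def __init__(self, delay: float, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None

    def trigger(self, *args):
        if self._timer is not None:
            self._timer.cancel()  # a newer event resets the clock
        self._timer = threading.Timer(self.delay, self.callback, args)
        self._timer.start()


# Four rapid "keystrokes": only the last one survives the debounce window.
results = []
debounced = Debouncer(0.1, results.append)  # 0.1s for the demo; the extension uses 1500ms
for ch in "slop":
    debounced.trigger(ch)
time.sleep(0.3)
print(results)  # -> ['p']
```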
## Install

### CLI (PyPI)

```bash
# install
pip install ai-slop-detector

# single file
slop-detector mycode.py

# project scan
slop-detector --project ./src

# CI/CD
slop-detector --project . --ci-mode hard --ci-report

# JSON output
slop-detector mycode.py --json --output report.json
```
### VS Code

```text
ext install Flamehaven.vscode-slop-detector
```

Marketplace: https://marketplace.visualstudio.com/items?itemName=Flamehaven.vscode-slop-detector
## Repo Embed (dev.to Liquid Tag)

If you want the repo card to render inline, embed the repository with dev.to's Liquid tag. The card shows:

**flamehaven01 / AI-SLOP-Detector** — Stop shipping AI slop. Detects empty functions, fake documentation, and inflated comments in AI-generated code. Production-ready. Production-grade static analyzer for detecting AI-generated code quality issues with evidence-based validation. Detects six critical categories of AI-generated code problems with actionable, context-aware questions.

Quick Navigation: 🚀 Quick Start • ✨ What's New • 🏗️ Architecture • 📊 Core Features • ⚙️ Configuration • 🔧 CLI Usage • 🚦 CI/CD Integration • 👨‍💻 Development

### Quick Start

```bash
# Install from PyPI
pip install ai-slop-detector

# Analyze a single file
slop-detector mycode.py

# Scan entire project
slop-detector --project ./src

# CI/CD integration (soft mode: PR comments only)
slop-detector --project ./src --ci-mode soft --ci-report

# CI/CD integration (hard mode: fail build on issues)
slop-detector --project ./src --ci-mode hard --ci-report

# Generate JSON report
slop-detector mycode.py --json --output report.json
```

### What's New in v2.7.0 (VS Code Extension Upgrade)

- Docstring Inflation Diagnostics: line-level detection of over-documented functions with severity mapping (critical/warning)
- Evidence-Based Claim Validation: flags unjustified jargon claims lacking…

## Question for Readers

When you review AI-generated code, what's the hardest category of slop to catch? For us, it's evidence-free claims: code that says "production-ready" but has no tests, no logging, and no real error handling. Perfect form. Missing substance. What patterns are you seeing in your codebase?

## Links

- GitHub: https://github.com/flamehaven01/AI-SLOP-Detector
- PyPI: https://pypi.org/project/ai-slop-detector/
- VS Code Marketplace: https://marketplace.visualstudio.com/items?itemName=Flamehaven.vscode-slop-detector
- Previous post (v2.6.3): https://dev.to/flamehaven01/ai-slop-detector-v263-is-live-now-on-vs-code-3oj4
Tags: how-to, tutorial, guide, ai, git, github