Superpowers: The Technology to "Persuade" AI Agents, or Why Psychological Principles Change Code Quality
Posted on Jan 30 • Originally published at blog.tumf.dev on 2026-01-17
Original article (Japanese): superpowers: AIエージェントを「説得」する技術──心理学の原理がコード品質を変える理由
Have you ever thought, "I want AI agents to strictly adhere to TDD"?
AI coding agents like Claude Code and Cursor are convenient, but they tend to skip skills and best practices when under pressure. Like humans, they make expedient, "human-like" decisions, such as not writing tests or skipping debugging steps because of time constraints.
Superpowers is a skill framework that tackles this problem using principles from psychology. Building on research showing that the persuasion principles outlined in Robert Cialdini's "Influence: The Psychology of Persuasion" are also effective on LLMs, it implements a mechanism to "enforce skills" in AI agents.
Superpowers is a skill library for AI agents developed by Jesse Vincent (obra). It has gained 26.3k stars on GitHub and is growing rapidly.
Supported agents: Claude Code, OpenAI Codex, OpenCode
What is most interesting about the design of superpowers is that it intentionally incorporates psychological principles of persuasion.
Robert Cialdini proposed principles of persuasion that shape human decision-making in his book "Influence: The Psychology of Persuasion" (later adding the Unity principle, for seven in total):

• Reciprocity
• Commitment and Consistency
• Social Proof
• Liking
• Authority
• Scarcity
• Unity
Jesse Vincent referenced the paper "Call Me a Jerk: Persuading AI to Comply with Objectionable Requests," published by a research team that included Cialdini himself, which found that these principles are also effective on LLMs.
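To make the idea concrete, here is a minimal, hypothetical sketch of how persuasion-framed rules could be prepended to an agent's task prompt. This is not the actual superpowers implementation; the dictionary, function name, and wording are all illustrative assumptions.

```python
# Illustrative sketch only (NOT the superpowers codebase): framing skill rules
# with Cialdini-style persuasion principles before handing a task to an agent.

# Hypothetical framings for a TDD skill, one per persuasion principle.
CIALDINI_FRAMINGS = {
    "authority": "TDD discipline requires a failing test before any production code.",
    "commitment": "You have already committed to TDD; restate that commitment before coding.",
    "social_proof": "Experienced engineers on this team always write the test first.",
    "scarcity": "Skipping the red-green-refactor loop is not an available option.",
}

def build_skill_prompt(task: str, principles: list[str]) -> str:
    """Prepend persuasion-framed rules to a task description."""
    rules = [CIALDINI_FRAMINGS[p] for p in principles]
    return "\n".join(["# Skill: test-driven-development", *rules, "", f"Task: {task}"])

prompt = build_skill_prompt("Add an email validator", ["authority", "commitment"])
print(prompt)
```

The point of the sketch is the framing, not the plumbing: the same rule ("write the test first") can be stated as an authority claim, a prior commitment, or social proof, and the cited research suggests the choice of framing affects how reliably an LLM complies.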
Source: Dev.to