```
ai-agent/
├── app/                   # Core application logic
└── skills/                # Skill definitions
    ├── fix-bug/
    │   └── SKILL.md
    ├── draft-release-note/
    │   └── SKILL.md
    ├── analyze-root-cause/
    │   └── SKILL.md
    └── write-tests/
        └── SKILL.md
```
The trigger chain is wired up between Jira and Jenkins:

- In Jira Automation, monitor the issue comment section
- When someone @-mentions your target user (e.g., copilot), immediately fire a webhook
- In Jenkins, create a Pipeline Job and configure the Generic Webhook Trigger to receive requests from Jira Automation
- When the webhook is triggered, the Pipeline launches your backend Application
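To make the handoff concrete, the Automation rule's web request could post a JSON body like the one below. The field names are an assumption for this sketch; {{issue.key}} and {{comment.body}} are Jira Automation smart values:

```json
{
  "issue_key": "{{issue.key}}",
  "comment_body": "{{comment.body}}"
}
```

On the Jenkins side, the Generic Webhook Trigger plugin can extract these fields with JSONPath expressions (e.g., $.issue_key) and expose them to the Pipeline as variables.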
The two top-level directories split the responsibilities:

- app/ directory: manages the overall application logic, namely receiving webhook requests, parsing user intent, dispatching Skills, and returning results.
- skills/ directory: each Skill is a subdirectory named after the skill, containing a SKILL.md file (capitalized). This file describes, in natural language, what the Skill does, what it takes as input, and what it produces as output. For example, draft-release-note/SKILL.md might contain: "Based on recent commit records and Jira issue lists, generate a draft release note in Markdown format." A fuller sketch of such a file follows below.
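Purely for illustration, a fleshed-out draft-release-note/SKILL.md could look like this; the section layout is an assumption, not a required schema:

```markdown
# draft-release-note

## Purpose
Based on recent commit records and Jira issue lists, generate a draft
release note in Markdown format.

## Input
- project: the Jira project key (e.g., PROJ)
- version: the release version to summarize

## Output
A Markdown document with sections for Features, Fixes, and Known Issues.

## Boundaries
- Do not fabricate changes that are not present in the commits or issues.
- Output suggestions only; do not modify any code.
```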
This layout keeps the system easy to extend:

- Decoupled capabilities: adding a new Skill just means creating a new subdirectory and SKILL.md file under skills/, with no changes to the core Application code.
- Version-controlled: Skill files live in a Git repository with full change history and review mechanisms.
- Easy collaboration: anyone on the team can submit a new Skill or improve an existing one, without waiting on a developer's schedule.
Day-to-day usage is then a single round trip:

- In any Jira issue's comment section, @-mention your configured user and describe what you need. For example: @copilot generate the release notes for this version
- Jira Automation detects the @-mention event and sends a webhook request
- Jenkins receives the request and starts the Application
- The Application parses user intent and matches it to the appropriate Skill (e.g., draft-release-note)
- The AI executes the task according to the Skill definition, then writes the result back as a Jira comment
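The dispatch step deserves a concrete sketch. The Python below is a minimal, assumption-laden outline of what app/ might do: load_skills, match_skill, and call_model are hypothetical names, the intent matching is deliberately naive, and only the Jira comment endpoint is a real API:

```python
import os
from pathlib import Path

import requests

# Required environment: JIRA_BASE_URL, JIRA_USER, JIRA_TOKEN
JIRA_BASE = os.environ["JIRA_BASE_URL"]
JIRA_AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])


def load_skills(skills_dir: str = "skills") -> dict[str, str]:
    """Map each skill name to the text of its SKILL.md."""
    return {
        p.parent.name: p.read_text(encoding="utf-8")
        for p in Path(skills_dir).glob("*/SKILL.md")
    }


def match_skill(user_input: str, skills: dict[str, str]) -> str:
    """Naive intent matching: pick the skill whose name appears in the
    input. A real implementation might ask the model to choose."""
    for name in skills:
        if name.replace("-", " ") in user_input.lower():
            return name
    return "analyze-root-cause"  # arbitrary fallback for the sketch


def call_model(prompt: str) -> str:
    """Placeholder for whatever LLM client you actually use."""
    raise NotImplementedError


def post_comment(issue_key: str, body: str) -> None:
    """Write the result back as a comment (Jira Cloud REST API v2)."""
    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/comment",
        json={"body": body},
        auth=JIRA_AUTH,
        timeout=30,
    )
    resp.raise_for_status()


def handle_webhook(payload: dict) -> None:
    """End-to-end: webhook payload in, Jira comment out."""
    skills = load_skills()
    skill = match_skill(payload["comment_body"], skills)
    result = call_model(skills[skill] + "\n\n" + payload["comment_body"])
    post_comment(payload["issue_key"], result)
```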
Not every step needs the model, though; the deterministic glue stays as ordinary code:

- Pulling issue lists from the Jira API, filtering fields, formatting output → plain script.
- Parsing webhook requests, extracting key fields, assembling prompts → preprocessing script.
- Writing AI results back as Jira comments → still a script.
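The "plain script" tier really is plain. For example, pulling an issue list is one REST call; the JQL and field selection below are invented for illustration:

```python
import os

import requests

JIRA_BASE = os.environ["JIRA_BASE_URL"]
JIRA_AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])


def fetch_issues(project: str, version: str) -> list[dict]:
    """List issues fixed in a given version via the Jira search API."""
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={
            "jql": f'project = {project} AND fixVersion = "{version}"',
            "fields": "summary,issuetype",
        },
        auth=JIRA_AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issues"]


if __name__ == "__main__":
    for issue in fetch_issues("PROJ", "1.4.0"):
        print(f'{issue["key"]}: {issue["fields"]["summary"]}')
```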
A few principles keep the Skill library from sprawling:

- Right-size the granularity. A Skill does one thing, but that thing should be general enough. For example, analyze-build-failure should work across multiple projects, not be hardcoded to one repository.
- Parameterize. Use input parameters (project name, version number, date range) to adapt to different scenarios, rather than creating a separate Skill for each one.
- Composable. Complex tasks can be built by chaining multiple base Skills. A "weekly release report" might be composed of collect-commits + draft-release-note, instead of writing yet another Skill; see the sketch after this list.
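That chaining can stay trivially simple. In this sketch, run_skill is a hypothetical helper (assumed to live in app/) that loads a SKILL.md, substitutes parameters into the prompt, and calls the model:

```python
def run_skill(name: str, **params) -> str:
    """Hypothetical helper: load skills/<name>/SKILL.md, fill in params,
    call the model, and return its text output."""
    raise NotImplementedError


def weekly_release_report(project: str, version: str) -> str:
    """A 'weekly release report' chained from two base Skills."""
    commits = run_skill("collect-commits", project=project, days=7)
    return run_skill(
        "draft-release-note", project=project, version=version, commits=commits
    )
```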
And when writing the SKILL.md itself:

- Explicit input/output definitions. Every Skill must clearly state what it receives (fields, format) and what it produces (text, structure). Don't write "analyze the problem"; write "Receive a Jira Issue Key, read its Description and the last 10 Comments, output a root cause analysis in Chinese, under 200 words."
- Provide examples. Include an input/output example in SKILL.md. The AI understands your expectations much more accurately with a concrete sample.
- Define boundaries. Explicitly tell the AI what NOT to do. For example: "Do not modify original code, only output suggestions", "Do not fabricate information not present in the issue." A Skill without boundaries gives the AI too much room to improvise, and the results are often unreliable.
- Test iteratively. Run the same set of test cases against a Skill repeatedly, comparing output quality after each edit to SKILL.md. This is essentially unit testing, except the subject under test is the AI's prompt. A minimal harness is sketched after this list.
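Here is what such a harness can look like, assuming the hypothetical run_skill helper from earlier and crude substring checks as the pass criterion (a second model acting as judge is a common upgrade):

```python
# Assumes the hypothetical run_skill(name, **params) helper sketched above.

# Invented test cases: each pairs Skill inputs with substrings the
# output is expected to contain.
CASES = [
    ({"issue_key": "PROJ-101"}, ["root cause"]),
    ({"issue_key": "PROJ-102"}, ["build"]),
]


def test_skill(name: str) -> None:
    """Re-run the same cases after every SKILL.md edit and compare."""
    for params, expected in CASES:
        output = run_skill(name, **params)
        for fragment in expected:
            assert fragment in output, (
                f"{name} regressed on {params}: missing {fragment!r}"
            )


if __name__ == "__main__":
    test_skill("analyze-root-cause")
```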
Every invocation should be logged with at least these fields:

- timestamp: request time
- skill_name: the matched Skill
- model: the model used
- input_tokens / output_tokens / total_tokens
- latency_ms: total time (network + inference)
- user_input: a summary of the original user input (for traceability)
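Concretely, a single invocation record serialized as a JSON line might look like this (all values invented):

```python
import json
import time

record = {
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "skill_name": "draft-release-note",
    "model": "your-model-id",
    "input_tokens": 1840,
    "output_tokens": 420,
    "total_tokens": 2260,
    "latency_ms": 5310,
    "user_input": "@copilot generate the release notes for this version",
}
print(json.dumps(record, ensure_ascii=False))
```

One JSON line per request keeps the log trivially greppable and easy to aggregate for cost tracking.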