# I built Skillradar: find the right agent skill by describing your task (2.5k+ indexed)
2026-02-04
admin
I’m experimenting with a semantic search workflow for discovering agent skills from natural-language task descriptions. Many skill lists are still keyword-based, which makes it hard to compare similar skills before trying them. I indexed ~2.5k skills and use semantic retrieval to surface candidates for a given scenario.

## 1. Website mode (baseline semantic search)

You can type a scenario like:

> “I’d like to conduct a market analysis”

…and get a ranked list of candidate skills. You can click a skill card to view details and inspect its SKILL.md / manifest.

## 2. Agent-native mode: let an agent turn vague prompts into structured search queries

This is the part I personally use the most. Instead of going to a website and trying to craft the “right keywords”, I use an agent-side helper (a small “discover” prompt) to convert a vague request into a search goal plus keywords, then query the index. This fits CLI-style agent workflows. After installation, the agent can:

- Ask a couple of simple questions (e.g., install scope/path)
- Let you describe your scenario in plain English, even if it’s abstract, vague, or messy
- Run discover-skills to translate that into a structured search (task goal + keywords), query the index, and return candidates with short match reasons

Here’s an example with a very vague need:

> “I have a bunch of meeting notes scattered everywhere and I want to organize them better. Is there a skill for that?”

The agent turns it into a query + keywords, retrieves candidates, and suggests what to install next.

## Question (Embeddings for skill retrieval)

I’d love advice on how you’d embed and index a SKILL.md-style skill definition for semantic retrieval. Right now I’m thinking about embedding each skill from multiple “views” (e.g., what it does, use cases, inputs/outputs, examples, constraints), but I’m not fully sure what structure works best.

- How would you chunk/structure SKILL.md (by section, by template fields, or by examples)?
- Single vector per skill vs. multi-vector per section/view, and how do you aggregate scores at query time?
- Which fields usually move retrieval quality most (examples, tools/actions, constraints, tags, or “when not to use”)?
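To make the agent-native flow concrete, here is a minimal sketch of the structured search object the “discover” step could emit. The `SkillQuery` schema and field names are my assumption, not Skillradar’s actual format, and the field values are hand-written stand-ins for what the agent would produce from the vague request above:

```python
from dataclasses import dataclass


@dataclass
class SkillQuery:
    """Structured search derived from a vague natural-language request."""
    task_goal: str
    keywords: list[str]

    def to_search_text(self) -> str:
        # Flatten goal + keywords into one string for embedding/retrieval
        return self.task_goal + " " + " ".join(self.keywords)


# Hand-written example (in the real flow, the agent's "discover" prompt
# would fill these fields from the user's messy request):
q = SkillQuery(
    task_goal="organize scattered meeting notes into a consistent structure",
    keywords=["meeting notes", "note organization", "tagging", "summarization"],
)
print(q.to_search_text())
```

Keeping the goal and keywords as separate fields (rather than one free-text query) also lets you weight them differently at retrieval time, or show the keywords back to the user as “match reasons”.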
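On the multi-view question, one way to prototype “one vector per view, aggregate at query time” is to embed each view separately and score a skill by its best-matching view (max-aggregation). This is a toy sketch: a hashed bag-of-words embedder stands in for a real embedding model so the snippet runs standalone, and all skill/view names are made up:

```python
import hashlib
import math


def embed(text, dim=256):
    """Toy stand-in for a sentence-embedding model: hashed bag-of-words,
    L2-normalized so dot product == cosine similarity."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))


def index_skill(skill):
    """Multi-vector indexing: one embedding per 'view' of the skill."""
    return {view: embed(text) for view, text in skill["views"].items()}


def search(query, indexed, top_k=3):
    """Score each skill by its best-matching view (max-aggregation),
    and report which view matched, as a cheap 'match reason'."""
    q = embed(query)
    scored = []
    for name, views in indexed.items():
        best_view, best = max(
            ((v, cosine(q, vec)) for v, vec in views.items()),
            key=lambda t: t[1],
        )
        scored.append((name, round(best, 3), best_view))
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:top_k]


skills = [
    {"name": "meeting-notes-organizer",
     "views": {"what_it_does": "organize and tag meeting notes into folders",
               "use_cases": "scattered meeting notes cleanup summarize action items"}},
    {"name": "market-research",
     "views": {"what_it_does": "conduct market analysis and competitor research",
               "use_cases": "market sizing competitor comparison reports"}},
]
indexed = {s["name"]: index_skill(s) for s in skills}
print(search("organize my scattered meeting notes", indexed))
```

Max-aggregation keeps a skill retrievable even when only one view (say, a single example) matches the query; mean-aggregation is the usual alternative and tends to favor skills where every view is on-topic. Which works better probably depends on how noisy the individual SKILL.md sections are.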
Tags: how-to, tutorial, guide, ai