Scoring Methodology

How we evaluate every tool in the index — transparent, consistent, and opinionated.

Overall Score: 0–100

Every tool gets an overall score from 0 to 100, computed from four weighted dimensions. Scores are updated weekly as new data comes in from GitHub, npm, and manual review.

  • Security: 30%
  • Utility: 30%
  • Maintenance: 25%
  • Uniqueness: 15%
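The weighting above amounts to a simple weighted sum. As a minimal sketch (the weights and the 0–100 sub-score scale come from this page; the function and the example sub-scores are illustrative, not the site's actual code):

```python
# Published dimension weights; each sub-score is assumed to be on a 0-100 scale.
WEIGHTS = {
    "security": 0.30,
    "utility": 0.30,
    "maintenance": 0.25,
    "uniqueness": 0.15,
}

def overall_score(subscores: dict[str, float]) -> float:
    """Combine per-dimension sub-scores into a single 0-100 overall score."""
    return round(sum(WEIGHTS[dim] * subscores[dim] for dim in WEIGHTS), 1)

# Hypothetical tool: secure and useful, but stale and unoriginal.
print(overall_score({"security": 90, "utility": 80, "maintenance": 40, "uniqueness": 30}))
# → 65.5
```

Because the weights sum to 1.0, a tool scoring 100 on every dimension lands at exactly 100 overall.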

Score Thresholds

  • 80–100 (Excellent): Battle-tested, well-maintained, trusted by many
  • 60–79 (Good): Solid choice for most use cases
  • 40–59 (Fair): Works but has notable gaps or risks
  • 0–39 (Poor): Significant issues; use with caution
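The bands are simple range checks. A hypothetical helper (labels and cutoffs taken from the table above; the function itself is not the site's code):

```python
def score_label(score: float) -> str:
    """Map a 0-100 overall score to its threshold band."""
    if score >= 80:
        return "Excellent"
    if score >= 60:
        return "Good"
    if score >= 40:
        return "Fair"
    return "Poor"

print(score_label(72))  # → Good
```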

🛡️ 1. Security (30%)

We evaluate code audit status, permissions requested, data handling practices, official/verified status, and trust signals. Tools that request broad filesystem or network access without clear justification score lower. Official tools from established companies get a trust bonus. Security-audited tools are flagged and rewarded.

Key signals

  • Permissions scope (minimal vs broad)
  • Official / verified status
  • Security audit status
  • Data handling practices
  • Trust signals (verified publisher, signed releases)

2. Utility (30%)

How useful is this tool in practice? We look at feature completeness, real-world adoption (GitHub stars, npm downloads), platform support, and how well it solves its stated problem. A tool that does one thing really well can score higher than a Swiss Army knife that does everything poorly.

Key signals

  • Feature completeness
  • GitHub stars and forks
  • npm weekly downloads
  • Platform and OS support
  • User adoption and community size

🔄 3. Maintenance (25%)

Is this tool actively maintained? We check commits in the last 30 days, issue response time, release frequency, and whether the repo is archived. Tools with no commits in 90+ days are flagged as "stale." Active maintenance is essential — an abandoned tool with security issues won't get patched.

Key signals

  • Commits in last 30 days
  • Issue response time
  • Release frequency
  • Archive / abandoned status
  • Bus factor (number of contributors)

💎 4. Uniqueness (15%)

Does this tool solve something others don't? Or is it one of fifty identical "weather API wrapper" MCP servers? We reward tools that carve out a genuinely unique niche, offer differentiated approaches, or combine capabilities in novel ways. Cookie-cutter wrappers score low here.

Key signals

  • Number of alternatives solving the same problem
  • Differentiated approach or architecture
  • Novel capability combination
  • Niche specialization

Data Sources

GitHub API: Stars, forks, commits, issues, contributors, archived status, pushed_at, license, language. Updated weekly.
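Those fields map onto GitHub's public "get a repository" REST endpoint. A sketch of pulling and trimming that payload (the field names are the API's own; the helper functions are illustrative, not this index's collector):

```python
import json
import urllib.request

# Repo-level fields this index tracks, as named by the GitHub REST API.
FIELDS = ("stargazers_count", "forks_count", "archived", "pushed_at", "language")

def extract_repo_stats(repo_json: dict) -> dict:
    """Keep only the tracked fields from a /repos/{owner}/{repo} payload."""
    stats = {field: repo_json.get(field) for field in FIELDS}
    stats["license"] = (repo_json.get("license") or {}).get("spdx_id")
    return stats

def fetch_repo_stats(owner: str, repo: str) -> dict:
    """Fetch repo metadata from the public GitHub API (unauthenticated, rate-limited)."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return extract_repo_stats(json.load(resp))
```

Commit activity and issue counts live behind separate endpoints, so a real collector would make several calls per repository.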

npm Registry: Weekly downloads, latest version, publish date. Updated weekly for tools with npm packages.

awesome-mcp-servers: Community-curated list with 1,300+ entries. Scraped for discovery and categorization.

Manual enrichment: Top tools receive a manual review covering install commands, config snippets, permissions analysis, compatibility testing, and editorial commentary.

Community submissions: Anyone can submit a tool at /submit. Submissions are reviewed within 48 hours.

Our Transparency Policy

Every score is backed by observable data. We don't accept payment for higher scores. If you think a tool is mis-scored, contact us at hello@skillsindex.dev with evidence and we'll re-evaluate. Editor's picks are subjective — they reflect our opinion on what's genuinely best for most users.