OpenClaw Skills You Can Actually Trust
Every skill is analyzed before it's listed
The problem is real
Open skill marketplaces list skills without vetting them. Security researchers have already found hundreds of malicious skills hiding stealers, reverse shells, and credential exfiltration behind professional-looking documentation.
Fake prerequisites
Skills look legitimate but include a "Prerequisites" section that tells you to download malware or pipe a script from a random URL into your terminal.
Credential theft
Malicious skills exfiltrate API keys, wallet credentials, and bot tokens to attacker-controlled servers. Your agent already has access to these—a bad skill just forwards them.
Virus scans are security theater
Skills are markdown files—the malware isn't in the file, it's in the instructions the file tells your agent to execute. Scanning a .md for virus signatures is meaningless. What matters is analyzing what the skill tells your agent to do.
What makes ClawSkills different
Six things that separate ClawSkills from open marketplaces. All are enforced automatically—no skill is listed without passing every one.
Analyzed before listing
Every skill passes analysis before it goes live. Fail = not listed. No "report and remove" — dangerous skills never reach users.
Pinned dependencies
Every install command specifies an exact version. No npm install pkg without a version. No typosquatting via unpinned packages.
Immutable versions
Once published, a version cannot be changed. No bait-and-switch — publishing a clean version to pass review, then swapping in malicious content later.
Publisher verified
GitHub authentication required. No anonymous uploads or throwaway accounts. Every skill is tied to a real identity.
Full transparency
Every domain, filesystem path, environment variable, and install command the skill touches is extracted and disclosed in the signed receipt.
Dual-layer analysis
Static pattern matching and LLM behavioral analysis both run independently. Both must pass. If the LLM is unavailable, the skill fails — fail-closed, not fail-open.
How ClawSkills.io works
For users
Browse or search
Find skills by name, tag, or category. Everything listed has already passed analysis.
Check the receipt
See exactly what a skill installs, which domains it contacts, and what files it touches — before you install.
Install with confidence
Copy the install command. Every listed skill has deterministic, pinned dependencies.
For publishers
Sign in and upload
Authenticate with GitHub and submit your SKILL.md from the dashboard. No anonymous uploads.
Your skill is analyzed
The analyzer runs static pattern matching and LLM behavioral analysis, extracts domains, paths, and installs, and gives you a detailed pass/warn/fail report.
All checks pass — listed
Once every check passes, your skill goes live with a signed, immutable receipt. Users see exactly what's in it.
Publishing your first skill?
Use the publishing instructions for complete author guidance, then start from the SKILL.md template.
Where ClawSkills breaks the attack chain
Recent skill supply chain attacks didn't use exploits or sandbox escapes. They relied on trust failures in the publishing and distribution layer—the part ClawSkills controls.
Popularity = trust
Users assume top-downloaded skills are safe. Attackers game stars and download counts.
ClawSkills replaces crowd trust with explicit trust signals: verified publisher identity, signed receipts, immutable hashes. Popularity is secondary.
Docs treated as safe
Professional-looking documentation includes links to attacker-controlled pages or "prerequisite" install commands that deliver malware.
ClawSkills treats documentation as a security boundary. If a skill's docs say "run this curl | bash," it never passes review. Every domain and install command is extracted and surfaced.
Fake dependency injection
Skills introduce a "required" package (like a typosquatted library) that looks normal but contains a payload.
Every dependency must be pinned to an exact version. Unpinned installs, undocumented dependencies, and install-time shell commands are blocked.
Users run commands they didn't author
Users copy-paste setup commands from docs into their terminal. The malware runs with full user trust.
Skills are installed through auditable, declarative flows — not arbitrary shell commands. The receipt shows exactly what will be executed before you run anything.
What ClawSkills is (and isn't)
ClawSkills secures the publishing and distribution layer—the part where trust is established, documentation is reviewed, and dangerous patterns are caught before a skill ever reaches a user. It is not a runtime sandbox, an execution environment, or an antivirus. Runtime isolation and secure execution are the domain of OpenClaw itself. ClawSkills ensures that by the time a skill reaches your agent, it's already been analyzed, its behavior is transparent, and the obviously dangerous stuff has been blocked at the gate.
How the analyzer actually works
No black box. Here's exactly what happens when a skill is submitted.
Hash the artifact
A SHA-256 hash of the entire uploaded file is computed immediately. This becomes the immutable anchor—the receipt, the allowlist entry, and any future audit all reference this exact hash.
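As a minimal sketch of what this anchoring means in practice (the helper name is illustrative, not the registry's actual code):

```python
import hashlib

def artifact_sha256(data: bytes) -> str:
    # One SHA-256 digest over the raw uploaded bytes. The receipt,
    # allowlist entries, and audits all key off this exact hex string.
    return hashlib.sha256(data).hexdigest()

digest = artifact_sha256(b"# My Skill\nInstall: pip install requests==2.32.3\n")
# Changing a single byte of the artifact produces a completely different digest.
```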
Build the file manifest
Every file in the archive is catalogued: path, size in bytes, and individual SHA-256 hash. Path traversal attacks (.., absolute paths, /../) are rejected. Max 1,000 files, 10 MB per file.
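A sketch of the kind of path check involved, assuming POSIX-style archive paths (the function and limits below mirror the rules described, not the registry's internal code):

```python
import posixpath

MAX_FILES = 1000
MAX_FILE_BYTES = 10 * 1024 * 1024  # 10 MB per file

def is_safe_path(path: str) -> bool:
    # Reject absolute paths and anything that normalizes to outside
    # the archive root (.., /../, and friends).
    if path.startswith(("/", "\\")):
        return False
    return ".." not in posixpath.normpath(path).split("/")
```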
Static security scan
Every text file is scanned with pattern matching across 20+ threat categories—pipe-to-shell, encoded payloads, reverse shells, persistence mechanisms, credential exfiltration, privilege escalation, supply chain manipulation, container escapes, and more. Any match blocks listing.
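To give a feel for this layer, here is a toy version with three of the categories — the real scanner covers 20+ and its patterns are more thorough:

```python
import re

# Illustrative subset of threat-category patterns.
PATTERNS = {
    "pipe-to-shell": re.compile(r"curl\s+[^\n|]*\|\s*(ba)?sh"),
    "encoded-payload": re.compile(r"base64\s+(-d|--decode)"),
    "reverse-shell": re.compile(r"/dev/tcp/"),
}

def scan(text: str) -> list[str]:
    # Return every category that matched; any match blocks listing.
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```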
LLM behavioral analysis
The skill is sent to an LLM that reads every instruction and evaluates what the skill actually tells an agent to do—credential forwarding, programmatic code execution, network calls to untrusted destinations, prompt injection, and other behaviors that static patterns can't catch. This is the primary security gate. If the LLM is unavailable, the skill fails—fail-closed, not fail-open.
Extract external interfaces
Every URL, domain, filesystem path, install command, and environment variable (API keys, tokens, secrets) is extracted from the skill. Both static extraction and LLM extraction run independently and results are merged. Everything is deduplicated and disclosed in the signed receipt.
Pass or fail
The static scanner and LLM analyzer are independent gates—both must pass. If either flags a finding with severity fail, the skill is not listed. The publisher gets a detailed report with every finding, the file path, and the exact line number.
Generate and sign the receipt
Everything—artifact hash, file manifest, security findings, extracted domains, paths, installs, and badge results—is assembled into a receipt, signed with the registry's ed25519 key, and stored immutably. This receipt is what users see, what allowlists reference, and what makes the whole process auditable.
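Conceptually, the signing step looks like this sketch using the `cryptography` library — the key, receipt fields, and canonicalization shown are assumptions for illustration; the registry's private key never leaves its servers:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def canonical_bytes(receipt: dict) -> bytes:
    # Canonical JSON (sorted keys, no whitespace) so signing and
    # verification operate on identical bytes.
    return json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()

# Hypothetical key for demonstration only.
registry_key = Ed25519PrivateKey.generate()
receipt = {"schema": "clawskills.receipt.v1", "passed": True}
signature = registry_key.sign(canonical_bytes(receipt))

# Anyone holding the registry's public key can check the signature;
# tampering with any field raises InvalidSignature.
registry_key.public_key().verify(signature, canonical_bytes(receipt))
```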
Every release gets a Vetted Receipt
A signed, immutable record of exactly what's in a skill release. No trust required—verify it yourself.
```json
{
  "schema": "clawskills.receipt.v1",
  "skillName": "slack",
  "version": "1.2.0",
  "publisherHandle": "acme-tools",
  "artifactSha256": "a3f8c2d9...",
  "passed": true,
  "badges": { "quickstart": true, "guardrails": true },
  "extraction": {
    "domains": ["api.slack.com"],
    "paths": ["~/.config/slack-skill"]
  },
  "securityFindings": { "total": 0, "fails": 0, "warns": 0 },
  "manifest": [
    { "path": "SKILL.md", "size": 3201, "sha256": "c5d6e7..." }
  ]
}
```
Why this matters
When a skill says it's for Slack, you can verify that it only contacts api.slack.com—not some unknown server. When it says it needs one package, you can confirm that's all it installs.
Immutable and signed
Receipts are signed with the registry's ed25519 key and cannot be modified after creation. The content hash anchors the receipt to the exact artifact that was analyzed.
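Checking the content-hash anchor yourself takes only the standard library — a sketch, assuming the receipt JSON is at hand and the artifact has been downloaded:

```python
import hashlib
import json

def artifact_matches_receipt(receipt_json: str, artifact: bytes) -> bool:
    # The receipt anchors the exact bytes that were analyzed;
    # a single-byte change in the artifact breaks the match.
    receipt = json.loads(receipt_json)
    return hashlib.sha256(artifact).hexdigest() == receipt["artifactSha256"]
```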
Team allowlists reference these
Organizations publish signed policies listing approved releases by their content hash. Your team references one URL—only those exact, receipted versions are allowed.
Create team allowlists
Your team uses skills with real credentials—Slack tokens, API keys, database access. An allowlist lets you say "only these exact, vetted skill versions are approved" at a single URL your whole team references.
```json
{
  "schema": "clawskills.policy.v1",
  "orgSlug": "acme",
  "version": 3,
  "releases": [
    { "skillName": "slack", "version": "1.2.0", "artifactSha256": "a3f8c2d9..." },
    { "skillName": "github", "version": "2.0.1", "artifactSha256": "b7e4f1a0..." }
  ]
}
```
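Enforcing a policy like the one above is a straightforward lookup — a sketch with placeholder digests (real entries carry full 64-character SHA-256 hashes):

```python
def is_approved(policy: dict, skill: str, version: str, sha256: str) -> bool:
    # A release is approved only when name, version, AND content hash all
    # match an entry — a changed artifact fails even at the same version.
    return any(
        r["skillName"] == skill
        and r["version"] == version
        and r["artifactSha256"] == sha256
        for r in policy["releases"]
    )

policy = {
    "schema": "clawskills.policy.v1",
    "releases": [
        # Placeholder digest for illustration.
        {"skillName": "slack", "version": "1.2.0", "artifactSha256": "a3f8c2d9"},
    ],
}
```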
One URL, one source of truth
Your org publishes a signed policy at a stable endpoint like /org/acme/policy/latest.json. Everyone on the team references the same URL—no spreadsheets, no Slack messages, no guessing.
Pinned to exact hashes
Each approved skill is referenced by its content hash. If the artifact changes by a single byte, the hash won't match. Your team only runs exactly what was reviewed.
Versioned and auditable
Every policy update creates a new version. Need to know what was approved last quarter? Check /org/acme/policy/v2.json. The full history is always available.
From "ban all skills" to "approved set"
Without an allowlist, teams either accept all risk or ban skills entirely. An allowlist gives you the middle ground: a curated, audited set of skills your team can use with real credentials.
Common Questions
Skills are just markdown files. Can't you just read them?
You can, but that's exactly the problem. Skills are markdown that instructs an AI agent what to do — including what to install, what URLs to fetch, and what files to access. A skill that looks like clean documentation can contain a "Prerequisites" section that tells your agent to pipe a script from an attacker's server into your terminal. The analyzer catches these patterns automatically so you don't have to audit every line yourself.
How is this different from VirusTotal or star ratings?
VirusTotal scans files for known virus signatures — useless for skills, which are markdown. The malware isn't in the file, it's in the instructions the file tells your agent to execute. Stars and download counts are trivially gameable. ClawSkills runs both static pattern matching and LLM behavioral analysis to evaluate what the skill actually tells your agent to do: what it installs, what domains it contacts, what paths it touches. If it contains dangerous patterns — or the LLM flags suspicious behavior — it never gets listed.
What does "vetted" actually mean?
A vetted skill has passed every automated check: static pattern matching across 20+ threat categories (pipe-to-shell, reverse shells, credential exfiltration, privilege escalation, and more) plus LLM behavioral analysis that evaluates what the skill actually tells an agent to do. The publisher is GitHub-authenticated, all dependencies are pinned to exact versions, and every domain, filesystem path, and install command is extracted into a signed receipt so you can see exactly what the skill does.
My agent has access to API keys, credentials, and private data. How does this help?
That's exactly why this matters. Your agent follows skill instructions with full access to your environment. A malicious skill can exfiltrate your .env file, your wallet keys, or your bot tokens to an attacker's server. ClawSkills surfaces every domain a skill contacts and every path it touches in the receipt, so you can verify a Slack skill only talks to api.slack.com — not some unknown webhook.
What about teams using skills with real credentials?
Organizations can publish a signed allowlist policy listing exactly which skill versions (by content hash) are approved. The policy lives at a stable URL your whole team references. If a skill isn't on the list, it doesn't get installed. This is how you go from "we should probably ban skills entirely" to "we have an approved, audited set."
Does this guarantee a skill is safe?
No — and we're upfront about that. ClawSkills secures the publishing and distribution layer: it catches dangerous patterns, surfaces what a skill does, and blocks the obviously bad stuff before it's listed. It does not provide runtime sandboxing, execution isolation, or behavioral monitoring — that's the domain of OpenClaw itself. ClawSkills can't catch malicious logic hidden inside a skill's runtime behavior or delayed logic bombs. But it breaks the attack chain where recent real-world attacks actually succeeded: weaponized docs, fake prerequisites, unpinned dependencies, and piped shell commands.
What are the badges?
Badges are optional — they're not required to be listed, but they reward publishers who go further. The Quick Start badge means the skill provides clear Install, Configure, and Verify instructions so agents know exactly how to get started. The Guardrails badge means it includes security guidance like least privilege, sandboxing, and rate limits. Both are detected automatically. Users can search and filter by badge to find skills that meet a higher standard.
What is a Vetted Receipt?
A receipt is a signed JSON document generated by the analyzer for every listed skill. It contains the artifact hash, a manifest of every file with its own hash, every domain and filesystem path the skill references, every install command, security findings, and badge results. It's signed with the registry's ed25519 key and cannot be modified after creation. Receipts are what make ClawSkills auditable — you don't have to trust us, you can verify the receipt yourself.
How do I publish a skill?
Sign in with GitHub, upload your SKILL.md from the dashboard. The analyzer runs static pattern matching and LLM behavioral analysis instantly, giving you a pass/warn/fail report with specific fixes. Address any blockers, resubmit, and your skill goes live with a signed, immutable receipt.
Is ClawSkills free?
Yes. Browsing, installing, and publishing skills is free. Team features like org allowlists are also free in v1.