Every skill your agents use has been parsed, scanned, sandboxed, and approved by a human. No supply chain surprises. No silent privilege escalation.
Most platforms let agents call anything
Any tool. Any time. Any permissions. The agent decides what to call. One compromised or malicious skill in the ecosystem can exfiltrate data, send emails as your users, delete files, or make API calls to external services — silently.
The supply chain risk is real
AI plugin ecosystems are exactly like npm: large, unvetted, and capable of hiding malicious payloads in useful-looking packages. EnGenAI treats every skill like a software supply chain risk — because it is one.
EnGenAI's Skill Engine is the firewall between your agents and the tool ecosystem. Every capability is verified before it can run.
Every skill is defined in a single, structured SKILL.md file. Machine-parseable. Human-readable. Unambiguous. Rejected immediately if incomplete.
```yaml
# SKILL: search_web
version: 1.2.0
description: |
  Searches the web using a given query and returns the top N results.
  Does not follow links, execute scripts, or persist data.
parameters:
  query:
    type: string
    required: true
    max_length: 512
  limit:
    type: integer
    default: 5
    max: 20
permissions:
  - network.read
safety_constraints:
  - No writes to external services
  - No credential access
  - No filesystem access
examples:
  - query: "FastAPI authentication middleware"
    limit: 5
```

No skill runs until it has passed every stage. Rejection at any stage produces a detailed report. The process cannot be bypassed, shortened, or self-approved.
**Parse.** SKILL.md is submitted and parsed for name, description, parameter schema, required permissions, usage examples, and safety constraints. Malformed or incomplete? Rejected immediately with a structured error.
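The parse stage can be sketched in a few lines. This is a minimal illustration, not EnGenAI's actual parser: the field names come from the example manifest above, while the `validate_manifest` helper and its error format are assumptions.

```python
# Hypothetical sketch of the parse stage: a manifest missing any
# required field is rejected with a structured list of errors.
REQUIRED_FIELDS = {"name", "version", "description", "parameters",
                   "permissions", "safety_constraints", "examples"}

def validate_manifest(manifest: dict) -> list[str]:
    """Return structured errors; an empty list means the parse stage passes."""
    errors = [f"missing field: {f}"
              for f in sorted(REQUIRED_FIELDS - manifest.keys())]
    # Every declared parameter must carry an explicit type.
    for pname, spec in manifest.get("parameters", {}).items():
        if "type" not in spec:
            errors.append(f"parameter '{pname}' has no declared type")
    return errors
```

An incomplete submission such as `validate_manifest({"name": "x", "version": "1.0.0"})` returns one error per missing field rather than failing silently.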
**Vet.** Static analysis asks: does the skill request excessive permissions? Does its behaviour match its stated description? An automated security scan checks for data-exfiltration patterns, privilege escalation, and unsafe API calls.
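One family of checks in this stage can be sketched as pattern matching against the declared permission set. The patterns and permission names below are illustrative assumptions, not EnGenAI's real rule set:

```python
import re

# Hypothetical static-analysis pass: flag source patterns whose implied
# capability is not declared in the skill's manifest.
RISK_PATTERNS = {
    "network.write": re.compile(r"requests\.(post|put|delete)"),
    "filesystem.write": re.compile(r"\bopen\([^)]*['\"]w"),
    "credentials": re.compile(r"(AWS_SECRET|api_key|password)", re.I),
}

def scan(source: str, declared: set[str]) -> list[str]:
    findings = []
    for perm, pattern in RISK_PATTERNS.items():
        if pattern.search(source) and perm not in declared:
            findings.append(f"uses capability '{perm}' without declaring it")
    return findings
```

A skill that declares only `network.read` but calls `requests.post(...)` is flagged for undeclared `network.write` behaviour.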
**Sandbox.** The skill is executed in an isolated container: no network access, no filesystem writes, no IPC, resource-limited. Observed behaviour must match the declared spec; unexpected side effects cause immediate rejection.
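EnGenAI's sandbox is container-based; the following is only a minimal sketch of the shape of such a run, using an isolated interpreter with an empty environment and a hard wall-clock cap. The `sandbox_run` helper is an assumption for illustration:

```python
import subprocess
import sys

def sandbox_run(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    # NOT a real container: -I runs Python in isolated mode, env={} strips
    # inherited variables, and timeout enforces a wall-clock limit. A real
    # sandbox would also block network, filesystem writes, and IPC.
    return subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True, text=True, env={}, timeout=timeout,
    )
```

The vetting pipeline would then compare the captured output and side effects against the declared spec before passing the skill to human review.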
**Approve.** Human review gate. An engineer at EnGenAI, or your organisation's admin, reviews the sandbox execution report and approves or rejects. Sandbox results, the permissions diff, and a risk score are all presented.
**Install.** Published to ClawHub, EnGenAI's internal skill registry: versioned, signed, and available for agents to discover and use. Revocation is possible at any time.
Rejection at any stage produces a structured report: which stage failed, why, and what changes would allow approval. Skills can be revised and resubmitted. All submissions and their outcomes are logged.
Three levels: ALLOW (auto-execute), ASK (require confirmation), DENY (blocked at platform level). Agents cannot escalate their own permissions. Full stop.
| Permission Level | What It Means | Example Skills |
|---|---|---|
| ALLOW | Skill executes automatically without confirmation. The skill has been pre-approved for autonomous use within defined parameters. | `read_file`, `search_web`, `run_tests`, `list_directory` |
| ASK | Skill requires explicit user confirmation before execution. The agent pauses, presents the intent, and waits for approval. Timeout results in cancellation. | `send_email`, `create_pr`, `deploy_to_staging`, `write_to_db` |
| DENY | Skill is blocked entirely. Cannot be executed regardless of agent instruction or user approval. Hard policy enforcement at the platform level. | `delete_production`, `shell_exec`, `exfiltrate_data`, `self_modify` |
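The three levels in the table reduce to a small dispatch rule. This is a minimal sketch under stated assumptions: the `POLICY` map, `dispatch` helper, and the convention that `confirm()` returns `None` on timeout are all illustrative, not EnGenAI's API:

```python
from enum import Enum

class Level(Enum):
    ALLOW = "allow"
    ASK = "ask"
    DENY = "deny"

# Illustrative policy table; real policies live at the platform level.
POLICY = {"search_web": Level.ALLOW,
          "send_email": Level.ASK,
          "delete_production": Level.DENY}

def dispatch(skill: str, confirm) -> str:
    """confirm() returns True/False, or None on timeout (treated as cancel)."""
    level = POLICY.get(skill, Level.DENY)   # unknown skills default to DENY
    if level is Level.DENY:
        return "blocked"                    # no instruction or approval overrides this
    if level is Level.ASK:
        if not confirm():                   # rejection or timeout cancels the call
            return "cancelled"
    return "executed"
```

Note the fail-closed default: a skill absent from the policy table is treated as DENY, never ALLOW.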
Skills start with minimal permissions and the narrowest possible scope. Users can explicitly grant additional permissions after reviewing the skill's sandbox report. Trust is earned, not assumed.
Agents cannot grant themselves permissions. They cannot request, approve, or modify their own permission set. All permission changes require a human acting outside the agent runtime.
The Principle of Least Privilege

Every skill has exactly the permissions it needs to do its job. Nothing more.

This isn't just a policy. It's enforced at the platform level: a skill declared with `permissions: [read_file]` physically cannot call `send_email`, regardless of what its code attempts.
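One way such a gate can work is to hand the skill not raw capabilities but a gateway bound to its declared permission set. A minimal sketch, assuming a hypothetical `make_gateway` helper and an illustrative tool-to-capability map:

```python
def make_gateway(declared: frozenset[str]):
    # Illustrative mapping from tool calls to required capabilities.
    capability_of = {"read_file": "filesystem.read",
                     "send_email": "network.write"}

    def call(tool: str, *args):
        needed = capability_of[tool]
        if needed not in declared:
            # The undeclared call never reaches the platform.
            raise PermissionError(f"{tool} requires '{needed}', which is not declared")
        return f"{tool} ok"   # placeholder for the real dispatch
    return call
```

A gateway built with `frozenset({"filesystem.read"})` serves `read_file` but raises on `send_email`, no matter what the skill's code attempts.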
Approved skills are published to ClawHub, EnGenAI's internal skill registry. Every skill in ClawHub has passed all five vetting stages and has been approved by a human reviewer.
Build once, reuse everywhere
Teams share vetted skills across projects within your organisation.
Version controlled
Every skill version is retained. Agents pin to specific versions. Breaking changes cannot silently affect running agents.
Instant revocation
A skill can be yanked from ClawHub immediately. All agents using it are blocked from that skill at next invocation.
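Version pinning and instant revocation together can be sketched as a registry keyed on exact (name, version) pairs. The `ClawHub` class below is an illustrative model, not the real registry's API:

```python
class ClawHub:
    """Toy model of a versioned skill registry with revocation."""

    def __init__(self):
        self._skills = {}    # (name, version) -> manifest; versions are immutable
        self._revoked = set()

    def publish(self, name: str, version: str, manifest: dict) -> None:
        self._skills[(name, version)] = manifest

    def yank(self, name: str, version: str) -> None:
        # Takes effect at the next invocation: resolution starts failing.
        self._revoked.add((name, version))

    def resolve(self, name: str, version: str) -> dict:
        key = (name, version)   # agents pin exact versions, never "latest"
        if key in self._revoked:
            raise LookupError(f"{name}@{version} has been revoked")
        return self._skills[key]
```

Because agents resolve a pinned version on every invocation, yanking a version blocks all of its users at their next call without touching the agents themselves.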
5 stages before any skill runs
Parse → Vet → Sandbox → Approve → Install
3 permission levels
ALLOW / ASK / DENY
0 self-escalations possible
Platform-enforced. Not policy-enforced.
Skills give agents capabilities. The Canvas gives you visibility into how they use them — in real time, as it happens.