Practical guide to building ADK agents with Skills and progressive disclosure
A well-built AI agent does more than just follow instructions. It needs to load knowledge on demand and, in some cases, even create new instructions on its own. This is exactly where Google’s Agent Development Kit (ADK) and its SkillToolset come in.
Instead of trying to stuff all knowledge into the system prompt, ADK works with the concept of Skills: specialization blocks that the agent can list, load, and combine dynamically. With the right setup, the agent itself can generate a new Skill at runtime, save the content, and immediately start using this new knowledge.
The flow is simple:
- generate the Skill;
- load the Skill;
- use the Skill for the user’s task.
It does not matter whether you want a security checklist, a compliance audit routine, or a data pipeline validator: the architecture is the same. The only difference is the type of instruction you package inside each Skill.
The problem with monolithic prompts
A lot of people still build agents with a single giant prompt. They cram everything into one block: compliance rules, style guides, API docs, troubleshooting manuals, code standards, internal policies, and so on.
This can work as a quick fix when the agent has two or three very simple functions. But as soon as you start scaling to ten, fifteen, or twenty different tasks, this model falls apart. Every LLM call starts carrying thousands of instruction tokens, even when the user asks a trivial question that does not depend on 90% of that context.
The Agent Skills specification solves this with an architecture pattern called progressive disclosure. Instead of dumping everything at once, it splits knowledge loading into three well-defined levels:
- L1 – Metadata (~100 tokens per Skill): the Skill’s name and description. This level is loaded at initialization, for all Skills, and works like a menu the agent checks to decide what is relevant for each user request.
- L2 – Instructions (< 5,000 tokens): the Skill’s full body, with detailed steps and workflow. This is only loaded when the agent explicitly activates that Skill.
- L3 – Resources (on demand): external files such as style guides, API specs, reference attachments, and long examples. These are only added to context when the Skill itself asks for them via the resource loading tool.
In practice, an agent with 10 Skills stops loading something like 10,000 tokens in a single mega prompt and instead works with about 1,000 tokens of L1 metadata at startup, calling L2 and L3 only when needed. That is roughly a 90% reduction in base context per LLM call.
In ADK, this pattern is implemented by the SkillToolset class, which automatically generates three tools aligned with these levels:
- list_skills – L1, lists the available Skills;
- load_skill – L2, loads a Skill’s body;
- load_skill_resource – L3, loads external Skill resources.
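To make the flow concrete, a single request typically walks down the levels like this (a hedged sketch: the tool names are the ones SkillToolset generates, but the calls and return payloads shown are illustrative):

# L1: the agent scans the metadata menu loaded at startup
list_skills()
# -> [{"name": "seo-checklist", "description": "SEO optimization checklist..."}, ...]

# L2: only the relevant Skill's full instructions are pulled into context
load_skill("seo-checklist")

# L3: heavier references are fetched only if the instructions ask for them
load_skill_resource("seo-checklist", "references/style-guide.md")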
Pattern 1: Inline skills (the sticky note in the code)
The first pattern is the simplest one: a Skill defined directly in Python code as an object, with name, description, and instructions. It works well for small, stable rules that almost never change.
A classic example is an SEO checklist Skill for reviewing blog posts:
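# `models` here refers to ADK's skill model classes (Skill, Frontmatter);
# the exact import path depends on your ADK version, so check the skills docs.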
seo_skill = models.Skill(
frontmatter=models.Frontmatter(
name="seo-checklist",
description="SEO optimization checklist for blog posts. Covers title tags, meta descriptions, heading structure, and readability.",
),
instructions=(
"When optimizing a blog post for SEO, check each item:\n"
"1. Title: 50-60 chars, primary keyword near the start\n"
"2. Meta description: 150-160 chars, includes a call-to-action\n"
"3. Headings: H2/H3 hierarchy, keywords in 2-3 headings\n"
"4. First paragraph: Primary keyword in first 100 words\n"
"5. Images: Alt text with keywords, compressed, descriptive names\n"
"Review the content against each item and suggest improvements."
),
)
In this format:
- frontmatter (name and description) becomes L1, always visible as part of the Skill list;
- instructions become L2, only loaded when the agent decides it needs that Skill.
If the user says something like “Review my blog post for SEO”, the agent queries L1, identifies that the seo-checklist Skill is relevant, calls load_skill, and starts applying the checklist step by step to the provided content.
Pattern 2: File-based skills (the organized reference folder)
Inline skills go a long way, but they start to feel limited when a capability needs supporting documentation: long guides, API specs, internal playbooks, and so on. That is where the second pattern comes in: the file-based Skill.
Each Skill lives in its own directory, with a central SKILL.md file and optional subfolders for references, assets, or scripts. The minimal structure looks like this:
skills/blog-writer/
├── SKILL.md # L2: Instructions
└── references/
└── style-guide.md # L3: Loaded on demand
The SKILL.md file starts with YAML frontmatter followed by a Markdown body with detailed instructions. The files inside references/ hold the heavier knowledge, such as full style guides or long specifications.
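A minimal SKILL.md for this structure might look like the sketch below. The frontmatter fields (name, description) come from the agentskills.io specification; the body content here is purely illustrative:

---
name: blog-writer
description: Writes structured technical blog posts following the team style guide.
---

# Blog Writer

When asked to write a blog post:
1. Read references/style-guide.md for tone and formatting rules.
2. Draft an outline with an H2/H3 heading hierarchy before writing prose.
3. Keep paragraphs short and include one concrete example per section.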
The Skill is loaded by ADK in a very straightforward way:
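# load_skill_from_dir ships with ADK's skills support; its exact import path
# may vary across ADK versions.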
blog_writer_skill = load_skill_from_dir(
pathlib.Path(__file__).parent / "skills" / "blog-writer"
)
When the agent activates this Skill, it pulls L2 from SKILL.md. If, in the instructions, there is a step telling it to read the style guide, the agent will call load_skill_resource and fetch the references/style-guide.md file only at that moment.
This separation is powerful for two reasons:
- it keeps L2 lean and focused on the action flow;
- it removes huge documents from the main prompt, putting them behind L3, accessed on demand.
As a bonus, any agent compatible with the agentskills.io specification can consume the same folder. You write it once and reuse it across multiple agents.
Pattern 3: External skills (importing community repositories)
The third pattern is basically the natural evolution of the previous one. The architecture is the same as a file-based Skill, but instead of writing SKILL.md from scratch, you download a ready-made Skill from an external repository.
A well-known example is the community-curated awesome-claude-skills repository. You copy the Skill you are interested in into your project directory and load it with the same call:
content_researcher_skill = load_skill_from_dir(
pathlib.Path(__file__).parent / "skills" / "content-research-writer"
)
For ADK, it does not matter whether SKILL.md was written by you or by the community. The Agent Skills specification defines a universal directory format. If the folder follows that format, load_skill_from_dir just works.
Google itself publishes official ADK development Skills in the same pattern, installable via the command line, for example with:
npx skills add google/adk-docs -y -g
With that, you can combine:
- Skills you wrote yourself;
- internal company Skills;
- open Skills from the community and from Google.
These first three patterns cover everything that already exists: what you create, what you import from files, and what you download from third parties. One step is still missing: letting the agent create its own Skills.
Pattern 4: Meta skill (the runtime Skill factory)
The fourth pattern closes the loop. Here you create a meta skill: a Skill whose purpose is to generate new Skills, writing complete SKILL.md files from natural language requirements.
This makes the agent self-extensible. When a need arises that no existing Skill covers well, the agent can read the specification, generate a new ADK-compatible Skill, and immediately start using this new capability.
This meta skill is usually defined as an inline Skill, but with a twist: it comes with a set of L3 resources, including:
- the official agentskills.io specification text;
- a well-formed Skill example that serves as a template.
It looks roughly like this:
skill_creator = models.Skill(
frontmatter=models.Frontmatter(
name="skill-creator",
description=(
"Creates new ADK-compatible skill definitions from requirements."
" Generates complete SKILL.md files following the Agent Skills"
" specification at agentskills.io."
),
),
instructions=(
"When asked to create a new skill, generate a complete SKILL.md file.\n\n"
"Read `references/skill-spec.md` for the format specification.\n"
"Read `references/example-skill.md` for a working example.\n\n"
"Follow these rules:\n"
"1. Name must be kebab-case, max 64 characters\n"
"2. Description must be under 1024 characters\n"
"3. Instructions should be clear, step-by-step\n"
"4. Reference files in references/ for detailed domain knowledge\n"
"5. Keep SKILL.md under 500 lines, put details in references/\n"
"6. Output the complete file content the user can save directly\n"
),
resources=models.Resources(
references={
"skill-spec.md": "# Agent Skills Specification (agentskills.io)...",
"example-skill.md": "# Example: Code Review Skill...",
}
),
)
The resources field uses models.Resources to embed the specification and a working example as logical L3 files. When the agent calls load_skill_resource("skill-creator", "references/skill-spec.md"), it receives the full spec content and follows it to generate the new SKILL.md.
Important best practice: as tempting as it is to automate everything, it is worth keeping someone in the loop to review generated Skills. Treat each new SKILL.md like a code dependency: review it, test it, and only then put it into production. ADK itself offers evaluation mechanisms to validate Skill behavior before you roll it out broadly.
The Skill factory in action
Picture this request: the user says:
I need a Skill to review Python code for security vulnerabilities.
The agent activates skill-creator, reads the L3 resources with the spec and example, and generates a complete SKILL.md with:
- a valid kebab-case name;
- instructions organized by risk type (input validation, authentication, encryption, etc.);
- an output format based on severity (low, medium, high).
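Sketched against the spec, the generated file might start like this (illustrative content, not real model output):

---
name: python-security-reviewer
description: Reviews Python code for security vulnerabilities, organized by risk type, reporting findings by severity.
---

# Python Security Reviewer

When reviewing Python code:
1. Check input validation (injection, path traversal, unsafe deserialization).
2. Check authentication and session handling.
3. Check encryption (weak algorithms, hard-coded secrets).
Report each finding with a severity of low, medium, or high.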
This new Skill follows the same agentskills.io specification. Result: it does not just work in ADK, but in any other agent compatible with the format, such as:
- Gemini CLI;
- Claude Code;
- Cursor;
- and dozens of other tools that have already adopted the pattern.
Tying it all together with SkillToolset
Once you have your Skills defined (inline, file-based, external, and the meta skill), the final step is to package everything into a SkillToolset and hand it to the agent.
from google.adk.agents import Agent  # ADK's core agent class
# SkillToolset is part of ADK's skills support; its exact import path
# may vary across ADK versions.

skill_toolset = SkillToolset(
skills=[seo_skill, blog_writer_skill, content_researcher_skill, skill_creator]
)
root_agent = Agent(
model="gemini-2.5-flash",
name="blog_skills_agent",
description="A blog-writing agent powered by reusable skills.",
instruction=(
"You are a blog-writing assistant with specialized skills.\n"
"Load relevant skills to get detailed instructions.\n"
"Use load_skill_resource to access reference materials.\n"
"Follow each skill's step-by-step instructions.\n"
"Always explain which skill you're using and why."
),
tools=[skill_toolset],
)
In this example:
- seo_skill handles the SEO checklist;
- blog_writer_skill defines how to structure the text;
- content_researcher_skill takes care of content research;
- skill_creator is the factory, ready to create new Skills on demand.
If the user asks something like “Create a Skill to write technical blog introductions”, the agent uses the meta skill, generates a new SKILL.md, and replies with the content ready to be saved under skills/blog-intro-writer/SKILL.md. In the next session, you can load that directory with load_skill_from_dir and treat this Skill like any other, as the sketch below shows.
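A minimal sketch, reusing names already defined in this article (the blog-intro-writer path matches the example above):

# Next session: load the generated skill like any other file-based skill
blog_intro_skill = load_skill_from_dir(
    pathlib.Path(__file__).parent / "skills" / "blog-intro-writer"
)

# ...and register it alongside the existing skills
skill_toolset = SkillToolset(
    skills=[seo_skill, blog_writer_skill, content_researcher_skill,
            skill_creator, blog_intro_skill]
)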
Under the hood, SkillToolset follows the same progressive disclosure model, automatically generating:
- list_skills – L1, always injected;
- load_skill – L2, called when needed;
- load_skill_resource – L3, for specific references.
Final tips for designing useful Skills
- Invest in the description. The description field is, in practice, the API documentation the LLM sees at L1. Descriptions like “SEO optimization checklist for blog posts” help the agent understand when to activate the Skill; vague descriptions leave the model guessing.
- Start inline, move to files when it makes sense. You do not need to create a directory and a full spec for everything. If the Skill fits in a few lines and does not require external references, keep it inline. Migrate to the file-based format when you need to reuse it across multiple agents or include larger L3 documentation.
- Treat generated Skills like production code. What the meta skill generates is, effectively, part of your agent’s behavior. It is worth reviewing it, validating it in real scenarios, and, if possible, writing automated tests with ADK’s evaluation tools before putting it on a critical path.
In the end, the ADK approach to Skills and SkillToolset is simple: instead of one huge, hard-to-maintain prompt, you get a set of modular knowledge blocks, loaded on demand, that can be written by you, by the community, or by the agent itself at runtime. Fewer tokens, more control, and an agent architecture much closer to modern software engineering best practice.
