LLM Agent Integration¶
DV Flow Manager provides built-in support for Large Language Model (LLM) agents, enabling AI assistants to discover, understand, and work with DFM flows effectively.
Overview¶
The goal of LLM integration is to enable AI agents to:
Discover available DFM capabilities, packages, tasks, and types
Understand DFM’s dataflow-based build system paradigm
Generate correct flow.yaml configurations
Debug and modify existing flows with minimal hallucination
Execute builds and simulations via the dfm CLI
Run tasks dynamically from within LLM-driven Agent tasks
The dfm --help Output¶
The dfm --help command displays a short description of DFM and an absolute
path to the root skill.md file. This enables LLM agents to immediately locate
the comprehensive documentation:
$ dfm --help
usage: dfm [-h] [--log-level {NONE,INFO,DEBUG}] [-D NAME=VALUE] {graph,run,show,...} ...
DV Flow Manager (dfm) - A dataflow-based build system for silicon design and verification.
positional arguments:
{graph,run,show,...}
graph Generates the graph of a task
run run a flow
show Display and search packages, tasks, types, and tags
...
options:
-h, --help show this help message and exit
--log-level {NONE,INFO,DEBUG}
Configures debug level [INFO, DEBUG]
-D NAME=VALUE Parameter override; may be used multiple times
For LLM agents: See the skill file at: /absolute/path/to/site-packages/dv_flow/mgr/share/skill.md
The skill path is always an absolute path to ensure LLM agents can reliably locate and read the file regardless of the current working directory.
Parameter Overrides¶
DFM supports runtime parameter overrides via the -D option and -P parameter
file option. This allows you to customize task and package parameters without
modifying the flow definition.
Command Line Overrides (-D)¶
The -D option accepts parameter overrides in several forms:
# Package-level parameter (no dots)
dfm run build -D timeout=300
# Task parameter with leaf name (applies to any task with this param)
dfm run build -D top=counter
# Task-qualified parameter (specific task)
dfm run build -D build.top=counter
# Package-qualified parameter (fully qualified)
dfm run build -D myproject.build.top=counter
# Multiple overrides
dfm run build -D top=counter -D debug=true
Type Coercion:
Parameter values are automatically converted to the correct type:
List types: Single values become single-element lists
-D top=counter → ["counter"]
-D include=*.sv → ["*.sv"]
Boolean types: String values are converted to booleans
-D debug=true → True
-D debug=1 → True
-D debug=false → False
Integer types: Strings are parsed as integers
-D count=42 → 42
String types: Values remain as strings
-D msg=hello → "hello"
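For example, assuming a build task that declares a list-typed top, a boolean debug, and an integer count parameter (all hypothetical), a single invocation can mix the three and each value is coerced to its declared type:
# Hypothetical parameters; each value is coerced to the declared type
dfm run build -D top=counter -D debug=true -D count=42
# top   -> ["counter"] (list)
# debug -> True        (bool)
# count -> 42          (int)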
JSON Parameter File (-P)¶
For complex parameter types (nested dicts, lists of objects), use a JSON parameter file or inline JSON string:
From File:
dfm run build -P params.json
Inline JSON String (convenient for LLMs):
# Pass JSON directly as a string
dfm run build -P '{"tasks": {"build": {"top": ["counter", "decoder"]}}}'
# Complex nested structure
dfm run build -P '{"tasks": {"build": {"defines": {"DEBUG": 1}}}}'
JSON Format:
{
"package": {
"timeout": 300,
"verbose": true
},
"tasks": {
"build": {
"top": ["counter", "decoder"],
"defines": {
"DEBUG": 1,
"VERBOSE": true
},
"options": {
"opt_level": 2,
"warnings": ["all", "error"]
}
},
"myproject.test": {
"iterations": 1000
}
}
}
Override Precedence:
When both -D and -P are used, command-line -D options take precedence:
# params.json sets top=["default"]
# This overrides it with top=["counter"]
dfm run build -P params.json -D top=counter
# Also works with inline JSON
dfm run build -P '{"tasks": {"build": {"top": ["default"]}}}' -D top=counter
Parameter Resolution Order¶
Parameters are resolved in this priority order (highest to lowest):
Qualified task override: -D pkg.task.param=value
Task-qualified override: -D task.param=value
Leaf parameter override: -D param=value (matches any task with this param)
Package parameter override: -D param=value (if package has this param)
Default value: From the flow definition
This allows you to:
Override specific task parameters with qualified names
Apply global overrides to all matching tasks with leaf names
Maintain backward compatibility with package-level parameters
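For illustration, suppose a hypothetical flow defines build and lint tasks that both declare a top parameter:
# Leaf override: applies to every task that declares 'top'
dfm run build lint -D top=counter
# Task-qualified override: applies only to the 'build' task
dfm run build lint -D build.top=counter
# Fully qualified override: applies only to 'myproject.build'
dfm run build lint -D myproject.build.top=counter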
The dfm show skills Command¶
The dfm show skills command lists and queries skills defined as DataSet types
tagged with std.AgentSkillTag. This provides programmatic access to package
capabilities for LLM agents.
Basic Usage¶
# List all skills from loaded packages
dfm show skills
# JSON output for programmatic use
dfm show skills --json
# Filter by package
dfm show skills --package hdlsim.vlt
# Show full skill documentation for a specific skill
dfm show skills hdlsim.vlt.AgentSkill --full
# Search skills by keyword
dfm show skills --search verilator
Example Output¶
$ dfm show skills
hdlsim.AgentSkill - Configure and run HDL simulations with various simulators
hdlsim.vlt.AgentSkill - Compile and run simulations with Verilator
hdlsim.vlt.VerilatorTraceSkill - Configure FST/VCD waveform tracing in Verilator
hdlsim.vcs.AgentSkill - Compile and run simulations with Synopsys VCS
$ dfm show skills --json
{
"skills": [
{
"name": "hdlsim.AgentSkill",
"package": "hdlsim",
"skill_name": "hdl-simulation",
"desc": "Configure and run HDL simulations with various simulators",
"is_default": true
},
{
"name": "hdlsim.vlt.AgentSkill",
"package": "hdlsim.vlt",
"skill_name": "verilator-simulation",
"desc": "Compile and run simulations with Verilator",
"is_default": true
}
]
}
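Because the output is plain JSON, agents and scripts can post-process it with standard tools. A small sketch, assuming jq is installed:
# List each skill's name and description
dfm show skills --json | jq -r '.skills[] | "\(.name): \(.desc)"'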
Skill Definition¶
Skills are defined as DataSet types tagged with std.AgentSkillTag:
package:
name: hdlsim.vlt
types:
- name: AgentSkill
uses: std.DataSet
tags:
- std.AgentSkillTag
with:
name:
type: str
value: "verilator-simulation"
desc:
type: str
value: "Compile and run simulations with Verilator"
skill_doc:
type: str
value: |
# Verilator Simulation
## Quick Start
```yaml
imports:
- name: hdlsim.vlt
as: sim
tasks:
- name: build
uses: sim.SimImage
needs: [rtl]
with:
top: [my_module]
```
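Once the package is loaded, the skill defined above can be verified with the discovery commands from the previous section:
# Confirm the skill is visible and read its full documentation
dfm show skills --package hdlsim.vlt
dfm show skills hdlsim.vlt.AgentSkill --full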
LLM Call Interface (Server Mode)¶
When running inside an LLM-driven std.Agent task, the dfm command
automatically connects to the parent DFM session via a Unix socket server.
This enables LLMs to execute tasks that share resources with the parent session.
How It Works¶
When dfm run starts, a command server is created on a Unix socket
The socket path is set in the DFM_SERVER_SOCKET environment variable
Child processes (like LLM assistants) detect this variable
The dfm command runs in client mode, forwarding requests to the server
Tasks execute within the parent session's context
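A sketch of how an agent-side shell script might use this (the task name and parameter are illustrative):
# If DFM_SERVER_SOCKET is set, dfm runs in client mode and forwards the request
if [ -n "$DFM_SERVER_SOCKET" ]; then
    dfm run build -D top=counter
else
    echo "Not running inside a DFM session" >&2
fi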
Benefits¶
Resource Sharing: Respects parent session's parallelism limits (-j)
State Consistency: Sees outputs from tasks already completed in the session
Cache Sharing: Uses the same memento cache for incremental builds
Unified Logging: All task output appears in the parent session’s logs
Commands Available in Server Mode¶
When DFM_SERVER_SOCKET is set, the following commands work:
# Execute tasks via parent session
dfm run task1 task2
dfm run task1 -D param=value
dfm run task1 --timeout 300
# Query project state
dfm show tasks
dfm show task my_project.build
dfm context --json
# Validate configuration
dfm validate
# Health check
dfm ping
Example: LLM Generating and Compiling RTL¶
When an LLM running inside a std.Agent task needs to compile generated code:
# 1. LLM creates RTL files
cat > counter.sv << 'EOF'
module counter(input clk, rst_n, output logic [7:0] count);
always_ff @(posedge clk or negedge rst_n)
if (!rst_n) count <= 0;
else count <= count + 1;
endmodule
EOF
# 2. Run compilation via parent DFM session with parameter override
# The 'top' parameter is a list, so string "counter" becomes ["counter"]
dfm run hdlsim.vlt.SimImage -D top=counter
# Alternative: fully qualified task parameter
dfm run hdlsim.vlt.SimImage -D hdlsim.vlt.SimImage.top=counter
# Alternative: using a task-qualified name
dfm run SimImage -D SimImage.top=counter
# 3. Output is JSON with status, outputs, and markers
# {"status": 0, "outputs": [...], "markers": []}
All server mode commands return JSON responses:
Success Response:
{
"status": 0,
"outputs": [
{
"task": "hdlsim.vlt.SimImage",
"changed": true,
"output": [
{
"type": "hdlsim.SimImage",
"exe_path": "/path/to/rundir/Vtop"
}
]
}
],
"markers": []
}
Error Response:
{
"status": 1,
"outputs": [],
"markers": [
{
"task": "hdlsim.vlt.SimImage",
"msg": "Compilation failed: syntax error",
"severity": "error"
}
]
}
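A minimal sketch of checking a response from a script, assuming the JSON response is written to stdout and jq is available:
result=$(dfm run hdlsim.vlt.SimImage -D top=counter)
if [ "$(echo "$result" | jq -r '.status')" = "0" ]; then
    echo "Compilation succeeded"
else
    # Print the error markers reported by the failing task
    echo "$result" | jq -r '.markers[] | "\(.severity): \(.msg)"'
fi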
Agent-Friendly Discovery¶
JSON Output for dfm show¶
The dfm show commands support --json output for programmatic consumption:
# List packages as JSON
dfm show packages --json
# Get task details as JSON
dfm show task std.FileSet --json
# Show project structure as JSON
dfm show project --json
# List available skills
dfm show skills --json
# Get full project context
dfm context --json
This enables agents to query DFM metadata and construct correct configurations.
The dfm context Command¶
The dfm context command provides comprehensive project information in a
single JSON output, ideal for LLM consumption:
$ dfm context --json
{
"project": {
"name": "my_project",
"root_dir": "/path/to/project",
"rundir": "/path/to/rundir"
},
"tasks": [
{"name": "my_project.build", "scope": "root", "uses": "hdlsim.vlt.SimImage"},
{"name": "my_project.rtl", "scope": "local", "uses": "std.FileSet"}
],
"types": [...],
"skills": [...]
}
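Rather than re-reading flow files, an agent can pull specific fields out of this structure. A sketch, assuming jq is available:
# List the task names known to the project
dfm context --json | jq -r '.tasks[].name'
# Show which base task each one uses
dfm context --json | jq -r '.tasks[] | "\(.name) uses \(.uses)"'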
The dfm agent Command¶
The dfm agent command launches an AI assistant with comprehensive DV Flow
context derived from your project. This command enables interactive agent sessions
with automatic context injection from agent-related tasks.
Basic Usage¶
# Launch agent with default assistant
dfm agent
# Launch with specific tasks providing context
dfm agent MySkill MyPersona MyReference
# Launch with specific assistant and model
dfm agent -a copilot -m gpt-4 MySkill
# Output context as JSON (debugging)
dfm agent --json MySkill
Command Options¶
dfm agent [OPTIONS] [TASKS...]
Positional Arguments:
tasks Task references to use as context (skills, personas, tools, references)
Options:
-a, --assistant Specify assistant (copilot, codex, mock)
-m, --model Specify the AI model to use
--config-file FILE Output assistant config file for debugging
--json Output context as JSON instead of launching
--clean Clean rundir before executing tasks
--ui MODE Select UI mode (log, progress, progressbar, tui)
-c, --config Specifies active configuration for root package
-D NAME=VALUE Parameter overrides
How It Works¶
Context Collection: The command evaluates specified tasks (and their dependencies)
Resource Extraction: Extracts agent resources (skills, personas, tools, references)
Prompt Generation: Builds a comprehensive system prompt with project context
Assistant Launch: Launches the AI assistant with the generated context
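Because the generated context can be large, the --json and --config-file options are useful for inspecting what will be sent to the assistant before launching an interactive session (task names here are illustrative):
# Capture the generated context for review instead of launching the assistant
dfm agent --json MySkill MyPersona > agent_context.json
# Write the assistant configuration file for debugging
dfm agent --config-file assistant_config.json MySkill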
Agent Resource Types¶
DV Flow provides four standard agent resource types that can be used with dfm agent:
- AgentSkill
Defines a capability or skill that the AI agent can use. Skills typically document how to accomplish specific tasks within your project.
Uses: std.AgentSkillTag
Common uses: Task documentation, API references, workflow guides
- AgentPersona
Defines a role or personality for the AI agent to adopt during interaction.
Uses: std.AgentPersonaTag
Fields: persona (str) - Description of the persona
Common uses: Domain expert roles, coding styles, interaction modes
- AgentTool
Specifies external tools or MCP servers that the agent can invoke.
Uses: std.AgentToolTag
Fields: command (str), args (list), url (str)
Common uses: External APIs, command-line tools, MCP servers
- AgentReference
Provides reference documentation or materials for the agent to consult.
Uses: std.AgentReferenceTag
Common uses: Project documentation, specifications, examples
All resource types (except AgentPersona) inherit from std.AgentResource which provides:
files (list) - List of files to include in the resource
content (str) - Inline content for the resource
urls (list) - URLs pointing to external resources
Example: Defining Agent Resources¶
AgentSkill Example:
package:
name: my_project
types:
- name: SimulationSkill
uses: std.AgentSkill
with:
files: ["docs/simulation_guide.md"]
content: |
# Simulation Guide
To run simulations, use the sim.SimRun task...
tasks:
- name: SimSkill
uses: SimulationSkill
AgentPersona Example:
tasks:
- name: HardwareExpertPersona
uses: std.AgentPersona
with:
persona: |
You are an expert hardware verification engineer with 20 years
of experience in RTL design and SystemVerilog. You prefer
structured, defensive coding practices.
AgentTool Example:
tasks:
- name: WaveformViewer
uses: std.AgentTool
with:
command: "gtkwave"
args: ["--script"]
AgentReference Example:
tasks:
- name: ProjectSpec
uses: std.AgentReference
with:
files: ["specs/project_requirements.md"]
urls: ["https://example.com/api-docs"]
Using Agent Resources¶
Once defined, use agent resources by referencing them in the dfm agent command:
# Launch agent with simulation skill and hardware expert persona
dfm agent SimSkill HardwareExpertPersona
# Include project specifications as reference
dfm agent SimSkill ProjectSpec
# Provide multiple resources
dfm agent SimSkill HardwareExpertPersona ProjectSpec WaveformViewer
The agent command will:
Resolve all task references
Execute tasks to evaluate their parameters
Load file contents and fetch URLs
Inject all context into the AI assistant’s system prompt
Launch an interactive session
This enables the AI assistant to have deep understanding of your project’s capabilities, constraints, and domain-specific knowledge.
Integration with AI Assistants¶
GitHub Copilot CLI¶
# Get skill path and read documentation
dfm --help
# Then read the skill.md file at the displayed path
# Use show commands for discovery
dfm show skills --json
dfm show packages --json
# Inside an Agent task, execute tasks via server
dfm run build_task
ChatGPT / Claude¶
When working with conversational AI:
Run dfm --help and share the skill.md content
Use dfm show skills to list available capabilities
Use dfm show task <name> --json for specific task details
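A simple way to gather that material for pasting into a conversation (file names are arbitrary):
# Collect discovery output into files that can be shared with the assistant
dfm show skills --json > skills.json
dfm show task hdlsim.vlt.SimImage --json > simimage.json
dfm context --json > context.json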
Example Agent Workflows¶
Project Initialization¶
User prompt: “Create a DFM flow for simulating my counter.sv with Verilator”
Agent workflow:
# 1. Get DFM skill path from help
dfm --help
# Read the skill.md file at the absolute path shown
# 2. Find Verilator package
dfm show packages --search vlt --json
# 3. Get SimImage details
dfm show task hdlsim.vlt.SimImage --json
# 4. Generate flow.yaml
# (Agent creates file)
# 5. Validate
dfm validate flow.yaml
# 6. Run
dfm run build
Generated flow.yaml:
package:
name: counter_sim
imports:
- name: hdlsim.vlt
as: sim
tasks:
- name: rtl
uses: std.FileSet
with:
type: systemVerilogSource
include: "counter.sv"
- name: build
uses: sim.SimImage
needs: [rtl]
with:
top: [counter]
- name: run
uses: sim.SimRun
needs: [build]
Dynamic Code Generation and Verification¶
User prompt: “Generate a counter module and verify it compiles”
Agent workflow (inside an Agent task):
# 1. Generate the RTL
cat > counter.sv << 'EOF'
module counter #(parameter WIDTH=8) (
input logic clk,
input logic rst_n,
output logic [WIDTH-1:0] count
);
always_ff @(posedge clk or negedge rst_n)
if (!rst_n) count <= '0;
else count <= count + 1'b1;
endmodule
EOF
# 2. Run compilation through parent session
dfm run hdlsim.vlt.SimImage -D top=counter
# 3. Check if compilation succeeded
# (Parse JSON response)
# 4. Write result file for Agent task
cat > result.json << 'EOF'
{
"status": 0,
"changed": true,
"output": [
{"type": "std.FileSet", "basedir": ".", "files": ["counter.sv"]}
],
"markers": []
}
EOF
Adding UVM Support¶
User prompt: “Add UVM support to my simulation”
Agent workflow:
# 1. Understand current project
dfm show project --json
# 2. Identify current simulator
# (Parse output to find hdlsim.vlt)
# 3. Get SimLibUVM info
dfm show task hdlsim.vlt.SimLibUVM --json
# 4. Modify flow.yaml to add SimLibUVM
# 5. Validate changes
dfm validate flow.yaml
Debugging Build Failures¶
User prompt: “My build is failing with ‘module not found’”
Agent workflow:
# 1. Get project structure
dfm show project --json
# 2. Get build task with dependencies
dfm show task build --needs --json
# 3. List files in rtl task
dfm show task rtl --json
# 4. Diagnose and fix
Common Parameter Override Errors¶
When using parameter overrides, you may encounter these errors:
Unknown Parameter:
$ dfm run build -D invalid_param=value
Error: Parameter 'invalid_param' not found in task 'myproject.build'
Available parameters: [include, type, base]
Solution: Check available parameters with dfm show task build
Type Mismatch:
$ dfm run build -D count=abc
Error: Cannot convert 'abc' to int for parameter 'count'
Solution: Provide correct type or use -P params.json for complex types
Complex Type from CLI:
$ dfm run build -D config={"key": "value"}
Error: Parameter 'config' requires complex type. Use -P/--param-file with JSON.
Solution: Use JSON parameter file for dicts, nested structures:
# params.json: {"tasks": {"build": {"config": {"key": "value"}}}}
dfm run build -P params.json
Best Practices¶
Start with help: Run dfm --help to get the skill.md path
Use skills for discovery: dfm show skills lists package capabilities
Use JSON output: The --json flag enables programmatic parsing
Use context command: dfm context --json provides complete project state
Parameter overrides: Use -D for simple values, -P for complex types
Verify task parameters: Check available params with dfm show task <name>
Type awareness: Remember that single values become lists for list-type params
Verify suggestions: Always review AI-generated configurations
Report issues: If AI consistently misunderstands, the skill docs may need updates
Without proper context, AI assistants may suggest incorrect syntax or non-existent
features. With the skill.md documentation and dfm show skills, assistants can:
Understand DFM’s unique dataflow model
Suggest appropriate standard library tasks
Help with package organization
Debug flow definition issues
Propose DFM-specific best practices
Execute tasks dynamically inside Agent tasks
Override task parameters at runtime
Enabling LLM Support in Your Project¶
For detailed instructions on enabling LLM agent support in your own projects, including creating AGENTS.md files and defining custom skills, see the LLM Integration guide.