
Conversation

@O1ahmad O1ahmad commented Dec 9, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Added comprehensive command-line interface metadata for 40+ development tools, including ansible, kubernetes, docker, and database utilities, enabling enhanced help text, autocompletion support, and integrated tool documentation.
  • Chores

    • Expanded CLI tool specification database with structured metadata for version information, options, and subcommands.

✏️ Tip: You can customize this high-level summary in your review settings.

coderabbitai bot commented Dec 9, 2025

Walkthrough

A bulk addition of more than 60 static JSON data files documenting the command-line interface surface of various developer tools. Each file captures structured metadata, including commands, subcommands, options, default values, and help text, in a consistent JSON schema across the cncf_cli_tools and dev_tools directories.

Changes

Cohort / File(s) Summary
CNCF CLI Tools
data/results/cncf_cli_tools/yq-latest.json
yq CLI metadata (v4.49.2) including root, completion, eval, and eval-all commands with extensive options and help text.
Ansible Tools
data/results/dev_tools/ansible-latest.json, data/results/dev_tools/ansible-config-latest.json, data/results/dev_tools/ansible-doc-latest.json, data/results/dev_tools/ansible-galaxy-latest.json, data/results/dev_tools/ansible-playbook-latest.json, data/results/dev_tools/ansible-vault-latest.json
Ansible CLI suite metadata (v2.18.6) for core tools, including subcommands, global options, help text, and version info across ansible, config, doc, galaxy, playbook, and vault.
Container & Infrastructure Tools
data/results/dev_tools/benthos-4.16-cgo.json, data/results/dev_tools/caddy-latest.json, data/results/dev_tools/dive-latest.json, data/results/dev_tools/skopeo-v1.20.0.json, data/results/dev_tools/syft-v1.34.1-debug.json
Container, proxy, and image inspection CLI metadata for benthos (v4.16.0), caddy (v2.10.2), dive (0.13.1), skopeo (1.20.0), and syft (1.34.1) with nested subcommands and comprehensive option definitions.
Security & Compliance Tools
data/results/dev_tools/checkov-latest.json, data/results/dev_tools/hadolint-latest.json, data/results/dev_tools/nuclei-v3.6-amd64.json, data/results/dev_tools/subfinder-v2.7.0.json, data/results/dev_tools/tfsec-v1.26.3-amd64.json
Security scanning and compliance CLI metadata for checkov (3.2.494), hadolint (2.14.0), nuclei (3.6.0), subfinder (2.7.0), and tfsec (1.26.3) with detailed flag and option documentation.
Programming Language & Runtime CLIs
data/results/dev_tools/go-latest.json, data/results/dev_tools/java-latest.json, data/results/dev_tools/node.js-25.2.1-alpine.json, data/results/dev_tools/php-8.5-rc-zts.json, data/results/dev_tools/python-latest.json, data/results/dev_tools/ruby-3.2-slim-trixie.json, data/results/dev_tools/rustc-latest.json
CLI metadata for programming runtimes: go, java (21.0.9), node.js (25.2.1), php (8.5 RC), python (3.14.1), ruby (3.2.9), and rustc (1.91.1) with comprehensive option and version information.
Build & Deployment Tools
data/results/dev_tools/infracost-ci-0.10.5.json, data/results/dev_tools/packer-latest.json
Build and infrastructure-as-code tool metadata for infracost (0.10.5) and packer (1.14.3) including nested subcommands and option specifications.
Database & Message Queue CLIs
data/results/dev_tools/elasticsearch-9.2.2.json, data/results/dev_tools/kibana-9.2.2.json, data/results/dev_tools/logstash-9.2.2.json, data/results/dev_tools/mariadb-latest.json, data/results/dev_tools/memcached-latest.json, data/results/dev_tools/minio-latest.json, data/results/dev_tools/mongosh-latest.json, data/results/dev_tools/mysql-latest.json, data/results/dev_tools/redis-cli-latest.json
Elastic Stack, database, cache, and object storage CLI metadata for elasticsearch/kibana/logstash (9.2.2), mariadb, memcached (1.6.39), minio, mongosh (2.5.9), mysql (9.5.0), and redis-cli (8.4.0).
Monitoring & Kubernetes Tools
data/results/dev_tools/grafana-server-latest.json, data/results/dev_tools/k9s-latest.json, data/results/dev_tools/promtail-main-581519e.json
Monitoring and Kubernetes CLI metadata for grafana-server (12.3.0), k9s (0.50.16), and promtail with nested subcommands and comprehensive option sets.
General-Purpose & Network Tools
data/results/dev_tools/curl-latest.json, data/results/dev_tools/lazydocker-latest.json, data/results/dev_tools/nginx-latest.json, data/results/dev_tools/nmap-7.95-r9-linux-i386.json, data/results/dev_tools/psql-latest.json
Utility CLI metadata for curl, lazydocker, nginx, nmap (7.95), and psql (18.1) with options, help text, and version information.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Notes:

  • The changes consist of ~63 homogeneous static JSON data files following an identical schema (subcommands, options, name, raw_help_text, version).
  • Each file contains only declarative CLI metadata with no executable logic or functional code changes.
  • Review effort is minimized due to consistent structure and repetitive pattern across all additions.
  • Spot-check recommendations (see the sketch after this list):
    • Verify a sample of files (5-10) for correct schema structure and version accuracy.
    • Ensure no unintended duplicates or file overwrites from previous versions.
    • Confirm that raw_help_text fields accurately reflect CLI output for each tool version.
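A minimal spot-check sketch for the first two bullets, assuming the files live under data/results/ and use the top-level keys named in the notes above (subcommands, options, name, raw_help_text, version); the helper name is illustrative and this is not part of the PR's parser code:

```python
import json
import random
from pathlib import Path

REQUIRED_KEYS = {"subcommands", "options", "name", "raw_help_text", "version"}

def spot_check(root: str = "data/results", sample_size: int = 10) -> None:
    """Sample a handful of result files and report obvious schema problems."""
    files = sorted(Path(root).rglob("*.json"))
    for path in random.sample(files, min(sample_size, len(files))):
        data = json.loads(path.read_text())
        missing = REQUIRED_KEYS - data.keys()
        if missing:
            print(f"{path}: missing top-level keys {sorted(missing)}")
        # Duplicate flags at the root level would hint at an overwrite or merge mistake.
        flags = [o.get("option") or o.get("shortcut") for o in data.get("options", [])]
        dupes = {f for f in flags if f and flags.count(f) > 1}
        if dupes:
            print(f"{path}: duplicate option entries {sorted(dupes)}")

if __name__ == "__main__":
    spot_check()
```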

Poem

🐰 CLI scrolls unfurl, a burrow of tools
JSON dreams dance through data pools
Sixty paths mapped, no logic to test
Metadata bounces—this diff's a sweet rest! 🎉

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title check (✅ Passed): The title 'feat: add initial parser v2 and example CLI parsing results' accurately summarizes the main changes: adding a new parser version (v2) and example CLI parsing results across multiple tool data files.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch ahmad/add_results_and_v2_parser

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

🧹 Nitpick comments (1)
data/results/dev_tools/nmap-7.95-r9-linux-i386.json (1)

1-450: Data structure is solid.

Well-organized CLI metadata. The aliases field on line 446 is present here but absent in some other files—consider standardizing the schema across all new data files for consistency.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 828dc2c and bd60cf6.

📒 Files selected for processing (43)
  • data/results/cncf_cli_tools/yq-latest.json (1 hunks)
  • data/results/dev_tools/ansible-config-latest.json (1 hunks)
  • data/results/dev_tools/ansible-doc-latest.json (1 hunks)
  • data/results/dev_tools/ansible-galaxy-latest.json (1 hunks)
  • data/results/dev_tools/ansible-latest.json (1 hunks)
  • data/results/dev_tools/ansible-playbook-latest.json (1 hunks)
  • data/results/dev_tools/ansible-vault-latest.json (1 hunks)
  • data/results/dev_tools/benthos-4.16-cgo.json (1 hunks)
  • data/results/dev_tools/caddy-latest.json (1 hunks)
  • data/results/dev_tools/checkov-latest.json (1 hunks)
  • data/results/dev_tools/curl-latest.json (1 hunks)
  • data/results/dev_tools/dive-latest.json (1 hunks)
  • data/results/dev_tools/elasticsearch-9.2.2.json (1 hunks)
  • data/results/dev_tools/go-latest.json (1 hunks)
  • data/results/dev_tools/grafana-server-latest.json (1 hunks)
  • data/results/dev_tools/hadolint-latest.json (1 hunks)
  • data/results/dev_tools/infracost-ci-0.10.5.json (1 hunks)
  • data/results/dev_tools/java-latest.json (1 hunks)
  • data/results/dev_tools/k9s-latest.json (1 hunks)
  • data/results/dev_tools/kibana-9.2.2.json (1 hunks)
  • data/results/dev_tools/lazydocker-latest.json (1 hunks)
  • data/results/dev_tools/logstash-9.2.2.json (1 hunks)
  • data/results/dev_tools/mariadb-latest.json (1 hunks)
  • data/results/dev_tools/memcached-latest.json (1 hunks)
  • data/results/dev_tools/minio-latest.json (1 hunks)
  • data/results/dev_tools/mongosh-latest.json (1 hunks)
  • data/results/dev_tools/mysql-latest.json (1 hunks)
  • data/results/dev_tools/nginx-latest.json (1 hunks)
  • data/results/dev_tools/nmap-7.95-r9-linux-i386.json (1 hunks)
  • data/results/dev_tools/node.js-25.2.1-alpine.json (1 hunks)
  • data/results/dev_tools/nuclei-v3.6-amd64.json (1 hunks)
  • data/results/dev_tools/packer-latest.json (1 hunks)
  • data/results/dev_tools/php-8.5-rc-zts.json (1 hunks)
  • data/results/dev_tools/promtail-main-581519e.json (1 hunks)
  • data/results/dev_tools/psql-latest.json (1 hunks)
  • data/results/dev_tools/python-latest.json (1 hunks)
  • data/results/dev_tools/redis-cli-latest.json (1 hunks)
  • data/results/dev_tools/ruby-3.2-slim-trixie.json (1 hunks)
  • data/results/dev_tools/rustc-latest.json (1 hunks)
  • data/results/dev_tools/skopeo-v1.20.0.json (1 hunks)
  • data/results/dev_tools/subfinder-v2.7.0.json (1 hunks)
  • data/results/dev_tools/syft-v1.34.1-debug.json (1 hunks)
  • data/results/dev_tools/tfsec-v1.26.3-amd64.json (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
data/results/dev_tools/redis-cli-latest.json

[medium] 312-313: Basic Auth Credentials

(CKV_SECRET_4)


[medium] 360-361: Basic Auth Credentials

(CKV_SECRET_4)

🔇 Additional comments (23)
data/results/dev_tools/ansible-galaxy-latest.json (1)

1-29: Data file looks good.

Standard CLI metadata structure with consistent schema. No issues detected.

data/results/dev_tools/redis-cli-latest.json (1)

239-243: Static analysis warnings are false positives.

Lines 312–313 and 360–361 document the credential parameter format for redis-cli's -u URI option (part of the tool's public CLI interface). Checkov flags "Basic Auth Credentials" because the help text includes the format redis://user:password@host:port/dbnum, but this is documentation of a command-line parameter, not actual stored credentials. No action required.

Also applies to: 360-361

data/results/dev_tools/ansible-playbook-latest.json (1)

1-260: Data file is well-structured.

Standard CLI metadata schema applied consistently. All fields properly populated.

data/results/dev_tools/tfsec-v1.26.3-amd64.json (1)

1-261: Properly structured data file.

CLI metadata follows consistent pattern with aliases field included. All fields well-populated.

data/results/dev_tools/curl-latest.json (1)

91-103: Version field structure differs from other files.

The version field is an object with component versions (curl, libcurl, OpenSSL, etc.) rather than a simple string. This is semantically reasonable for curl but creates a schema inconsistency across the dataset. Verify that downstream parsing code handles both version field types (string vs. object) correctly.
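A minimal sketch of how downstream code could tolerate both shapes; the helper name and the preferred-key list are assumptions for illustration, not the project's parser API:

```python
from typing import Any, Optional

def primary_version(version_field: Any, preferred_keys=("curl", "mariadb_version")) -> Optional[str]:
    """Collapse either a plain version string or a component map into one string."""
    if version_field is None or isinstance(version_field, str):
        return version_field
    if isinstance(version_field, dict):
        for key in preferred_keys:  # e.g. curl-latest.json, mariadb-latest.json
            if key in version_field:
                return str(version_field[key])
        values = list(version_field.values())
        return str(values[0]) if values else None
    return str(version_field)
```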

data/results/dev_tools/dive-latest.json (1)

2-507: Nested subcommand structure is well-organized.

This file demonstrates the richer hierarchical schema needed for tools with subcommands. Each nested level (build, completion with bash/fish/powershell/zsh children, version) properly includes name, description, options, and subcommands fields. Structure is internally consistent and comprehensive.

data/results/dev_tools/mariadb-latest.json (1)

413-417: Version field uses object structure.

Like the curl file, the version field is an object containing component information (mariadb_version, client_version, commit_sha) rather than a simple version string. Ensure downstream tools handle this schema variant correctly.

data/results/dev_tools/hadolint-latest.json (1)

96-100: Option and shortcut appear reversed.

The option field contains -c while shortcut contains --config. Typically, the longer form should be the primary option and the shorter form should be the shortcut. This pattern may be consistent across the dataset, but verify it aligns with your schema intent.
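If the intent is to keep the long form in option, a small check along these lines (illustrative only, using the field names shown in these files) would surface reversed pairs like this one:

```python
def looks_reversed(opt: dict) -> bool:
    """True when the short flag sits in "option" and the long flag in "shortcut"."""
    option = opt.get("option") or ""
    shortcut = opt.get("shortcut") or ""
    return option.startswith("-") and not option.startswith("--") and shortcut.startswith("--")

assert looks_reversed({"option": "-c", "shortcut": "--config"})
assert not looks_reversed({"option": "--config", "shortcut": "-c"})
```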

data/results/dev_tools/memcached-latest.json (1)

1-1: Data file structure is well-formed.

The memcached CLI metadata is properly structured with consistent schema, comprehensive options, and aligned help text. No concerns identified.

data/results/dev_tools/php-8.5-rc-zts.json (1)

1-1: Data file is properly structured.

The PHP CLI metadata is well-formed with consistent schema, including the optional aliases field and null version (appropriate for RC builds). All options are properly documented.

data/results/dev_tools/psql-latest.json (1)

68-80: Multiple --help entries with different value variants.

Lines 68–80 show three entries for the --help option with different value and shortcut fields. This correctly represents psql's behavior where --help accepts optional arguments (options, commands, variables). If this pattern is intentional across your dataset, it's fine; otherwise, consider whether a single option with multiple variants is better modeled differently.
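If a single record per flag is preferred later, one hedged alternative is a variants list on a single --help entry; the shape below is purely illustrative (the variants field and the short form shown are not part of the current schema):

```python
# Hypothetical modeling; "variants" is not a field in the current schema.
help_option = {
    "option": "--help",
    "shortcut": "-?",             # illustrative short form
    "description": "show help, then exit",
    "variants": [
        {"value": None},          # plain --help
        {"value": "commands"},    # --help=commands
        {"value": "options"},     # --help=options
        {"value": "variables"},   # --help=variables
    ],
}
```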

data/results/dev_tools/ruby-3.2-slim-trixie.json (1)

1-1: Data file is well-structured.

The Ruby CLI metadata is properly formatted with consistent schema across all options, including the optional aliases field. The raw_help_text is comprehensive and accurately represents the CLI surface.

data/results/dev_tools/python-latest.json (1)

1-1: Data file is well-formed.

The Python CLI metadata follows the schema correctly with all options properly documented and aligned with the raw help text.

data/results/dev_tools/benthos-4.16-cgo.json (1)

307-311: Template command appears incomplete or placeholder-only.

The benthos template subcommand (lines 307–311) has empty subcommands, empty options, and an empty description field. This may be intentional for a stub command, but verify whether the help metadata was captured correctly or whether this command should be treated as a placeholder.

data/results/dev_tools/packer-latest.json (1)

14-22: Missing description field in packer build subcommand options.

Multiple options are missing the required description field (e.g., lines 14-16, 19-22). All options should include this field for consistency and to enable proper help/documentation generation.

         {
           "option": "--debug",
           "shortcut": "-debug",
+          "description": "Debug mode enabled for builds.",
+          "value": null,
+          "default": null
         },
         {
           "option": "--except",
           "shortcut": "-except",
           "description": "Run all builds and post-processors other than these.",
           "value": "foo,bar,baz"
+          "default": null
         },

Likely an incorrect or invalid review comment.

data/results/dev_tools/elasticsearch-9.2.2.json (1)

1-134: Approve: Well-structured data artifact.

The Elasticsearch CLI metadata is consistently formatted with proper field structure and descriptions that align with the raw help text. Global and subcommand options are appropriately separated, and the version metadata is clearly captured.

data/results/dev_tools/mysql-latest.json (1)

1-232: Approve: Comprehensive CLI metadata capture.

The MySQL client options are extensively documented with 227+ options properly catalogued. Each option includes description, value type, and default information where applicable. The data structure is consistent and the raw help text is comprehensive.

data/results/dev_tools/checkov-latest.json (1)

1-406: Approve: Well-organized tool with nested subcommand structure.

Checkov metadata is properly structured with root options and a well-defined checkov list subcommand. The 189+ options are comprehensively captured, with descriptions that match the usage documentation. Field variations (e.g., some options lack value or default fields) are appropriate for flag-style options that don't take values.

data/results/dev_tools/minio-latest.json (1)

1-129: Approve: Clean and accurate MinIO CLI metadata.

The MinIO server configuration is well-captured with root options and a subcommand structure. Default values (e.g., :9000 for address, /root/.minio/certs for certs-dir) are properly documented. The raw help text is comprehensive and aligns with the structured option data.

data/results/dev_tools/rustc-latest.json (1)

1-197: Approve: Accurate Rust compiler metadata.

Rustc options are comprehensively captured with proper value type information (e.g., <LINT>, <OPT>=<VALUE>). Both long and short form options are appropriately documented. The descriptions align well with the detailed raw help text, and the 1.91.1 version metadata is current.

data/results/dev_tools/nuclei-v3.6-amd64.json (1)

1-1016: Approve: Comprehensive security scanner metadata.

The Nuclei vulnerability scanner is extensively documented with 1000+ options covering all major functionality areas (targets, templates, filtering, output, configurations, etc.). The option structure is consistent and descriptions are clear. The raw help text includes helpful section organization that mirrors the tool's complexity.

data/results/dev_tools/node.js-25.2.1-alpine.json (1)

1-720: Approve structure with note on version metadata.

The Node.js CLI options are comprehensively documented with 708+ entries. The simpler field structure (option and description) is appropriate for this data format. Environment variable aliases (NO_COLOR, NODE_DISABLE_COLORS) are properly captured. However, the version field is set to null despite the filename indicating version 25.2.1, which suggests a data capture gap.
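A rough cross-check for that kind of gap, assuming the version hint embedded in the file name is meaningful (an inference from the names in this PR, not a documented convention):

```python
import json
import re
from pathlib import Path

VERSION_IN_NAME = re.compile(r"(\d+\.\d+(?:\.\d+)?)")

def version_gaps(root: str = "data/results"):
    """Yield files whose name hints at a version but whose version field is empty."""
    for path in Path(root).rglob("*.json"):
        match = VERSION_IN_NAME.search(path.stem)
        if not match:
            continue  # e.g. "-latest" files carry no version hint
        recorded = json.loads(path.read_text()).get("version")
        if not recorded:
            yield path, match.group(1)

for path, hinted in version_gaps():
    print(f"{path}: filename hints {hinted} but version field is empty")
```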

data/results/dev_tools/go-latest.json (1)

1-556: Approve: Well-structured Go CLI with excellent nested subcommand handling.

The Go tool is comprehensively represented with all 18 subcommands properly documented. The complex nested structure of go telemetry (with off/local/on variants and their combinations) is correctly captured, demonstrating good handling of hierarchical CLI designs. All options include consistent field structure with descriptions matching the raw help text. The version field is null, consistent with the version capture pattern across other files.

Comment on lines +1 to +910
{
"subcommands": [
{
"subcommands": [],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for completion"
},
{
"option": "-C",
"shortcut": "--colors",
"description": "force print with colors"
},
{
"option": "",
"shortcut": "--csv-auto-parse",
"description": "parse CSV YAML/JSON values",
"default": "true"
},
{
"option": "",
"shortcut": "--csv-separator",
"description": "CSV Separator character",
"default": ","
},
{
"option": "",
"shortcut": "--debug-node-info",
"description": "debug node info"
},
{
"option": "-e",
"shortcut": "--exit-status",
"description": "set exit status if there are no matches or null or false is returned"
},
{
"option": "",
"shortcut": "--expression",
"description": "forcibly set the expression argument. Useful when yq argument detection thinks your expression is a file."
},
{
"option": "",
"shortcut": "--from-file",
"description": "Load expression from specified file."
},
{
"option": "-f",
"shortcut": "--front-matter",
"description": "(extract|process) first input as yaml front-matter. Extract will pull out the yaml content, process will run the expression against the yaml content, leaving the remaining data intact"
},
{
"option": "",
"shortcut": "--header-preprocess",
"description": "Slurp any header comments and separators before processing expression.",
"default": "true"
},
{
"option": "-I",
"shortcut": "--indent",
"description": "sets indent level for output",
"default": "2"
},
{
"option": "-i",
"shortcut": "--inplace",
"description": "update the file in place of first file given."
},
{
"option": "-p",
"shortcut": "--input-format",
"description": "[auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|lua|l|ini|i] parse format for input.",
"default": "\"auto\""
},
{
"option": "",
"shortcut": "--lua-globals",
"description": "output keys as top-level global variables"
},
{
"option": "",
"shortcut": "--lua-prefix",
"description": "prefix",
"default": "\"return \""
},
{
"option": "",
"shortcut": "--lua-suffix",
"description": "suffix",
"default": "\";\n\""
},
{
"option": "",
"shortcut": "--lua-unquoted",
"description": "output unquoted string keys (e.g. {foo=\"bar\"})"
},
{
"option": "-M",
"shortcut": "--no-colors",
"description": "force print with no colors"
},
{
"option": "-N",
"shortcut": "--no-doc",
"description": "Don't print document separators (---)"
},
{
"option": "-0",
"shortcut": "--nul-output",
"description": "Use NUL char to separate values. If unwrap scalar is also set, fail if unwrapped scalar contains NUL char."
},
{
"option": "-n",
"shortcut": "--null-input",
"description": "Don't read input, simply evaluate the expression given. Useful for creating docs from scratch."
},
{
"option": "-o",
"shortcut": "--output-format",
"description": "[auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|shell|s|lua|l|ini|i] output format type.",
"default": "\"auto\""
},
{
"option": "-P",
"shortcut": "--prettyPrint",
"description": "pretty print, shorthand for '... style = \"\"'"
},
{
"option": "",
"shortcut": "--properties-array-brackets",
"description": "use [x] in array paths (e.g. for SpringBoot)"
},
{
"option": "",
"shortcut": "--properties-separator",
"description": "separator to use between keys and values",
"default": "\" = \""
},
{
"option": "",
"shortcut": "--security-disable-env-ops",
"description": "Disable env related operations."
},
{
"option": "",
"shortcut": "--security-disable-file-ops",
"description": "Disable file related operations (e.g. load)"
},
{
"option": "-s",
"shortcut": "--split-exp",
"description": "print each result (or doc) into a file named (exp). [exp] argument must return a string. You can use $index in the expression as the result counter. The necessary directories will be created."
},
{
"option": "",
"shortcut": "--split-exp-file",
"description": "Use a file to specify the split-exp expression."
},
{
"option": "",
"shortcut": "--string-interpolation",
"description": "Toggles strings interpolation of \\(exp)",
"default": "true"
},
{
"option": "",
"shortcut": "--tsv-auto-parse",
"description": "parse TSV YAML/JSON values",
"default": "true"
},
{
"option": "-r",
"shortcut": "--unwrapScalar",
"description": "unwrap scalar, print the value with no quotes, colors or comments.",
"default": "true"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "verbose mode"
},
{
"option": "",
"shortcut": "--xml-attribute-prefix",
"description": "prefix for xml attributes",
"default": "\"+@\""
},
{
"option": "",
"shortcut": "--xml-content-name",
"description": "name for xml content (if no attribute name is present).",
"default": "\"+content\""
},
{
"option": "",
"shortcut": "--xml-directive-name",
"description": "name for xml directives (e.g. <!DOCTYPE thing cat>)",
"default": "\"+directive\""
},
{
"option": "",
"shortcut": "--xml-keep-namespace",
"description": "enables keeping namespace after parsing attributes",
"default": "true"
},
{
"option": "",
"shortcut": "--xml-proc-inst-prefix",
"description": "prefix for xml processing instructions (e.g. <?xml version=\"1\"?>)",
"default": "\"+p_\""
},
{
"option": "",
"shortcut": "--xml-raw-token",
"description": "enables using RawToken method instead Token. Commonly disables namespace translations. See https://pkg.go.dev/encoding/xml#Decoder.RawToken for details.",
"default": "true"
},
{
"option": "",
"shortcut": "--xml-skip-directives",
"description": "skip over directives (e.g. <!DOCTYPE thing cat>)"
},
{
"option": "",
"shortcut": "--xml-skip-proc-inst",
"description": "skip over process instructions (e.g. <?xml version=\"1\"?>)"
},
{
"option": "",
"shortcut": "--xml-strict-mode",
"description": "enables strict parsing of XML. See https://pkg.go.dev/encoding/xml for more details."
},
{
"option": "",
"shortcut": "--yaml-fix-merge-anchor-to-spec",
"description": "Fix merge anchor to match YAML spec.",
"default": "true"
}
],
"aliases": [
"completion",
"shell-completion"
],
"name": "yq completion",
"raw_help_text": "To load completions:\n\nBash:\n\n$ source <(yq completion bash)\n\n# To load completions for each session, execute once:\nLinux:\n $ yq completion bash > /etc/bash_completion.d/yq\nMacOS:\n $ yq completion bash > /usr/local/etc/bash_completion.d/yq\n\nZsh:\n\n# If shell completion is not already enabled in your environment you will need\n# to enable it. You can execute the following once:\n\n$ echo \"autoload -U compinit; compinit\" >> ~/.zshrc\n\n# To load completions for each session, execute once:\n$ yq completion zsh > \"${fpath[1]}/_yq\"\n\n# You will need to start a new shell for this setup to take effect.\n\nFish:\n\n$ yq completion fish | source\n\n# To load completions for each session, execute once:\n$ yq completion fish > ~/.config/fish/completions/yq.fish\n\nUsage:\n yq completion [bash|zsh|fish|powershell]\n\nAliases:\n completion, shell-completion\n\nFlags:\n -h, --help help for completion\n\nGlobal Flags:\n -C, --colors force print with colors\n --csv-auto-parse parse CSV YAML/JSON values (default true)\n --csv-separator char CSV Separator character (default ,)\n --debug-node-info debug node info\n -e, --exit-status set exit status if there are no matches or null or false is returned\n --expression string forcibly set the expression argument. Useful when yq argument detection thinks your expression is a file.\n --from-file string Load expression from specified file.\n -f, --front-matter string (extract|process) first input as yaml front-matter. Extract will pull out the yaml content, process will run the expression against the yaml content, leaving the remaining data intact\n --header-preprocess Slurp any header comments and separators before processing expression. (default true)\n -I, --indent int sets indent level for output (default 2)\n -i, --inplace update the file in place of first file given.\n -p, --input-format string [auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|lua|l|ini|i] parse format for input. (default \"auto\")\n --lua-globals output keys as top-level global variables\n --lua-prefix string prefix (default \"return \")\n --lua-suffix string suffix (default \";\\n\")\n --lua-unquoted output unquoted string keys (e.g. {foo=\"bar\"})\n -M, --no-colors force print with no colors\n -N, --no-doc Don't print document separators (---)\n -0, --nul-output Use NUL char to separate values. If unwrap scalar is also set, fail if unwrapped scalar contains NUL char.\n -n, --null-input Don't read input, simply evaluate the expression given. Useful for creating docs from scratch.\n -o, --output-format string [auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|shell|s|lua|l|ini|i] output format type. (default \"auto\")\n -P, --prettyPrint pretty print, shorthand for '... style = \"\"'\n --properties-array-brackets use [x] in array paths (e.g. for SpringBoot)\n --properties-separator string separator to use between keys and values (default \" = \")\n --security-disable-env-ops Disable env related operations.\n --security-disable-file-ops Disable file related operations (e.g. load)\n --shell-key-separator string separator for shell variable key paths (default \"_\")\n -s, --split-exp string print each result (or doc) into a file named (exp). [exp] argument must return a string. You can use $index in the expression as the result counter. 
The necessary directories will be created.\n --split-exp-file string Use a file to specify the split-exp expression.\n --string-interpolation Toggles strings interpolation of \\(exp) (default true)\n --tsv-auto-parse parse TSV YAML/JSON values (default true)\n -r, --unwrapScalar unwrap scalar, print the value with no quotes, colors or comments. Defaults to true for yaml (default true)\n -v, --verbose verbose mode\n --xml-attribute-prefix string prefix for xml attributes (default \"+@\")\n --xml-content-name string name for xml content (if no attribute name is present). (default \"+content\")\n --xml-directive-name string name for xml directives (e.g. <!DOCTYPE thing cat>) (default \"+directive\")\n --xml-keep-namespace enables keeping namespace after parsing attributes (default true)\n --xml-proc-inst-prefix string prefix for xml processing instructions (e.g. <?xml version=\"1\"?>) (default \"+p_\")\n --xml-raw-token enables using RawToken method instead Token. Commonly disables namespace translations. See https://pkg.go.dev/encoding/xml#Decoder.RawToken for details. (default true)\n --xml-skip-directives skip over directives (e.g. <!DOCTYPE thing cat>)\n --xml-skip-proc-inst skip over process instructions (e.g. <?xml version=\"1\"?>)\n --xml-strict-mode enables strict parsing of XML. See https://pkg.go.dev/encoding/xml for more details.\n --yaml-fix-merge-anchor-to-spec Fix merge anchor to match YAML spec. Will default to true in late 2025",
"description": "Generate the autocompletion script for the specified shell"
},
{
"subcommands": [],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for eval"
},
{
"option": "-C",
"shortcut": "--colors",
"description": "force print with colors"
},
{
"option": "--csv-auto-parse",
"description": "parse CSV YAML/JSON values",
"default": "true"
},
{
"option": "--csv-separator",
"description": "CSV Separator character",
"default": ","
},
{
"option": "--debug-node-info",
"description": "debug node info"
},
{
"option": "-e",
"shortcut": "--exit-status",
"description": "set exit status if there are no matches or null or false is returned"
},
{
"option": "--expression",
"description": "forcibly set the expression argument. Useful when yq argument detection thinks your expression is a file."
},
{
"option": "--from-file",
"description": "Load expression from specified file."
},
{
"option": "-f",
"shortcut": "--front-matter",
"description": "(extract|process) first input as yaml front-matter. Extract will pull out the yaml content, process will run the expression against the yaml content, leaving the remaining data intact"
},
{
"option": "--header-preprocess",
"description": "Slurp any header comments and separators before processing expression.",
"default": "true"
},
{
"option": "-I",
"shortcut": "--indent",
"description": "sets indent level for output",
"default": "2"
},
{
"option": "-i",
"shortcut": "--inplace",
"description": "update the file in place of first file given."
},
{
"option": "-p",
"shortcut": "--input-format",
"description": "[auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|lua|l|ini|i] parse format for input.",
"default": "auto"
},
{
"option": "--lua-globals",
"description": "output keys as top-level global variables"
},
{
"option": "--lua-prefix",
"description": "prefix",
"default": "return "
},
{
"option": "--lua-suffix",
"description": "suffix",
"default": ";\n"
},
{
"option": "--lua-unquoted",
"description": "output unquoted string keys (e.g. {foo=\"bar\"})"
},
{
"option": "-M",
"shortcut": "--no-colors",
"description": "force print with no colors"
},
{
"option": "-N",
"shortcut": "--no-doc",
"description": "Don't print document separators (---)"
},
{
"option": "-0",
"shortcut": "--nul-output",
"description": "Use NUL char to separate values. If unwrap scalar is also set, fail if unwrapped scalar contains NUL char."
},
{
"option": "-n",
"shortcut": "--null-input",
"description": "Don't read input, simply evaluate the expression given. Useful for creating docs from scratch."
},
{
"option": "-o",
"shortcut": "--output-format",
"description": "[auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|shell|s|lua|l|ini|i] output format type.",
"default": "auto"
},
{
"option": "-P",
"shortcut": "--prettyPrint",
"description": "pretty print, shorthand for '... style = \"\"'"
},
{
"option": "--properties-array-brackets",
"description": "use [x] in array paths (e.g. for SpringBoot)"
},
{
"option": "--properties-separator",
"description": "separator to use between keys and values",
"default": " = "
},
{
"option": "--security-disable-env-ops",
"description": "Disable env related operations."
},
{
"option": "--security-disable-file-ops",
"description": "Disable file related operations (e.g. load)"
},
{
"option": "--shell-key-separator",
"description": "separator for shell variable key paths",
"default": "_"
},
{
"option": "-s",
"shortcut": "--split-exp",
"description": "print each result (or doc) into a file named (exp). [exp] argument must return a string. You can use $index in the expression as the result counter. The necessary directories will be created."
},
{
"option": "--split-exp-file",
"description": "Use a file to specify the split-exp expression."
},
{
"option": "--string-interpolation",
"description": "Toggles strings interpolation of \\(exp)",
"default": "true"
},
{
"option": "--tsv-auto-parse",
"description": "parse TSV YAML/JSON values",
"default": "true"
},
{
"option": "-r",
"shortcut": "--unwrapScalar",
"description": "unwrap scalar, print the value with no quotes, colors or comments.",
"default": "true"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "verbose mode"
},
{
"option": "--xml-attribute-prefix",
"description": "prefix for xml attributes",
"default": "+@"
},
{
"option": "--xml-content-name",
"description": "name for xml content (if no attribute name is present).",
"default": "+content"
},
{
"option": "--xml-directive-name",
"description": "name for xml directives (e.g. <!DOCTYPE thing cat>)",
"default": "+directive"
},
{
"option": "--xml-keep-namespace",
"description": "enables keeping namespace after parsing attributes",
"default": "true"
},
{
"option": "--xml-proc-inst-prefix",
"description": "prefix for xml processing instructions (e.g. <?xml version=\"1\"?>)",
"default": "+p_"
},
{
"option": "--xml-raw-token",
"description": "enables using RawToken method instead Token. Commonly disables namespace translations. See https://pkg.go.dev/encoding/xml#Decoder.RawToken for details.",
"default": "true"
},
{
"option": "--xml-skip-directives",
"description": "skip over directives (e.g. <!DOCTYPE thing cat>)"
},
{
"option": "--xml-skip-proc-inst",
"description": "skip over process instructions (e.g. <?xml version=\"1\"?>)"
},
{
"option": "--xml-strict-mode",
"description": "enables strict parsing of XML. See https://pkg.go.dev/encoding/xml for more details."
},
{
"option": "--yaml-fix-merge-anchor-to-spec",
"description": "Fix merge anchor to match YAML spec.",
"default": "true"
}
],
"aliases": [
"eval",
"e"
],
"name": "yq eval",
"raw_help_text": "yq is a portable command-line data file processor (https://github.com/mikefarah/yq/) \nSee https://mikefarah.gitbook.io/yq/ for detailed documentation and examples.\n\n## Evaluate Sequence ##\nThis command iterates over each yaml document from each given file, applies the \nexpression and prints the result in sequence.\n\nUsage:\n yq eval [expression] [yaml_file1]... [flags]\n\nAliases:\n eval, e\n\nExamples:\n\n# Reads field under the given path for each file\nyq e '.a.b' f1.yml f2.yml \n\n# Prints out the file\nyq e sample.yaml \n\n# Pipe from STDIN\n## use '-' as a filename to pipe from STDIN\ncat file2.yml | yq e '.a.b' file1.yml - file3.yml\n\n# Creates a new yaml document\n## Note that editing an empty file does not work.\nyq e -n '.a.b.c = \"cat\"' \n\n# Update a file in place\nyq e '.a.b = \"cool\"' -i file.yaml \n\n\nFlags:\n -h, --help help for eval\n\nGlobal Flags:\n -C, --colors force print with colors\n --csv-auto-parse parse CSV YAML/JSON values (default true)\n --csv-separator char CSV Separator character (default ,)\n --debug-node-info debug node info\n -e, --exit-status set exit status if there are no matches or null or false is returned\n --expression string forcibly set the expression argument. Useful when yq argument detection thinks your expression is a file.\n --from-file string Load expression from specified file.\n -f, --front-matter string (extract|process) first input as yaml front-matter. Extract will pull out the yaml content, process will run the expression against the yaml content, leaving the remaining data intact\n --header-preprocess Slurp any header comments and separators before processing expression. (default true)\n -I, --indent int sets indent level for output (default 2)\n -i, --inplace update the file in place of first file given.\n -p, --input-format string [auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|lua|l|ini|i] parse format for input. (default \"auto\")\n --lua-globals output keys as top-level global variables\n --lua-prefix string prefix (default \"return \")\n --lua-suffix string suffix (default \";\\n\")\n --lua-unquoted output unquoted string keys (e.g. {foo=\"bar\"})\n -M, --no-colors force print with no colors\n -N, --no-doc Don't print document separators (---)\n -0, --nul-output Use NUL char to separate values. If unwrap scalar is also set, fail if unwrapped scalar contains NUL char.\n -n, --null-input Don't read input, simply evaluate the expression given. Useful for creating docs from scratch.\n -o, --output-format string [auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|shell|s|lua|l|ini|i] output format type. (default \"auto\")\n -P, --prettyPrint pretty print, shorthand for '... style = \"\"'\n --properties-array-brackets use [x] in array paths (e.g. for SpringBoot)\n --properties-separator string separator to use between keys and values (default \" = \")\n --security-disable-env-ops Disable env related operations.\n --security-disable-file-ops Disable file related operations (e.g. load)\n --shell-key-separator string separator for shell variable key paths (default \"_\")\n -s, --split-exp string print each result (or doc) into a file named (exp). [exp] argument must return a string. You can use $index in the expression as the result counter. 
The necessary directories will be created.\n --split-exp-file string Use a file to specify the split-exp expression.\n --string-interpolation Toggles strings interpolation of \\(exp) (default true)\n --tsv-auto-parse parse TSV YAML/JSON values (default true)\n -r, --unwrapScalar unwrap scalar, print the value with no quotes, colors or comments. Defaults to true for yaml (default true)\n -v, --verbose verbose mode\n --xml-attribute-prefix string prefix for xml attributes (default \"+@\")\n --xml-content-name string name for xml content (if no attribute name is present). (default \"+content\")\n --xml-directive-name string name for xml directives (e.g. <!DOCTYPE thing cat>) (default \"+directive\")\n --xml-keep-namespace enables keeping namespace after parsing attributes (default true)\n --xml-proc-inst-prefix string prefix for xml processing instructions (e.g. <?xml version=\"1\"?>) (default \"+p_\")\n --xml-raw-token enables using RawToken method instead Token. Commonly disables namespace translations. See https://pkg.go.dev/encoding/xml#Decoder.RawToken for details. (default true)\n --xml-skip-directives skip over directives (e.g. <!DOCTYPE thing cat>)\n --xml-skip-proc-inst skip over process instructions (e.g. <?xml version=\"1\"?>)\n --xml-strict-mode enables strict parsing of XML. See https://pkg.go.dev/encoding/xml for more details.\n --yaml-fix-merge-anchor-to-spec Fix merge anchor to match YAML spec. Will default to true in late 2025",
"description": "(default) Apply the expression to each document in each yaml file in sequence"
},
{
"subcommands": [],
"options": [
{
"option": "--help",
"shortcut": "-h",
"description": "help for eval-all"
},
{
"option": "--colors",
"shortcut": "-C",
"description": "force print with colors"
},
{
"option": "--csv-auto-parse",
"description": "parse CSV YAML/JSON values (default true)"
},
{
"option": "--csv-separator",
"description": "CSV Separator character (default ,)",
"value": "char"
},
{
"option": "--debug-node-info",
"description": "debug node info"
},
{
"option": "--exit-status",
"shortcut": "-e",
"description": "set exit status if there are no matches or null or false is returned"
},
{
"option": "--expression",
"description": "forcibly set the expression argument. Useful when yq argument detection thinks your expression is a file.",
"value": "string"
},
{
"option": "--from-file",
"description": "Load expression from specified file.",
"value": "string"
},
{
"option": "--front-matter",
"shortcut": "-f",
"description": "(extract|process) first input as yaml front-matter. Extract will pull out the yaml content, process will run the expression against the yaml content, leaving the remaining data intact",
"value": "string"
},
{
"option": "--header-preprocess",
"description": "Slurp any header comments and separators before processing expression. (default true)"
},
{
"option": "--indent",
"shortcut": "-I",
"description": "sets indent level for output (default 2)",
"value": "int"
},
{
"option": "--inplace",
"shortcut": "-i",
"description": "update the file in place of first file given."
},
{
"option": "--input-format",
"shortcut": "-p",
"description": "[auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|lua|l|ini|i] parse format for input. (default \"auto\")",
"value": "string"
},
{
"option": "--lua-globals",
"description": "output keys as top-level global variables"
},
{
"option": "--lua-prefix",
"description": "prefix (default \"return \")",
"value": "string"
},
{
"option": "--lua-suffix",
"description": "suffix (default \";\n\")",
"value": "string"
},
{
"option": "--lua-unquoted",
"description": "output unquoted string keys (e.g. {foo=\"bar\"})"
},
{
"option": "--no-colors",
"shortcut": "-M",
"description": "force print with no colors"
},
{
"option": "--no-doc",
"shortcut": "-N",
"description": "Don't print document separators (---)"
},
{
"option": "--nul-output",
"shortcut": "-0",
"description": "Use NUL char to separate values. If unwrap scalar is also set, fail if unwrapped scalar contains NUL char."
},
{
"option": "--null-input",
"shortcut": "-n",
"description": "Don't read input, simply evaluate the expression given. Useful for creating docs from scratch."
},
{
"option": "--output-format",
"shortcut": "-o",
"description": "[auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|shell|s|lua|l|ini|i] output format type. (default \"auto\")",
"value": "string"
},
{
"option": "--prettyPrint",
"shortcut": "-P",
"description": "pretty print, shorthand for '... style = \"\"'"
},
{
"option": "--properties-array-brackets",
"description": "use [x] in array paths (e.g. for SpringBoot)"
},
{
"option": "--properties-separator",
"description": "separator to use between keys and values (default \" = \")",
"value": "string"
},
{
"option": "--security-disable-env-ops",
"description": "Disable env related operations."
},
{
"option": "--security-disable-file-ops",
"description": "Disable file related operations (e.g. load)"
},
{
"option": "--shell-key-separator",
"description": "separator for shell variable key paths (default \"_\")",
"value": "string"
},
{
"option": "--split-exp",
"shortcut": "-s",
"description": "print each result (or doc) into a file named (exp). [exp] argument must return a string. You can use $index in the expression as the result counter. The necessary directories will be created.",
"value": "string"
},
{
"option": "--split-exp-file",
"description": "Use a file to specify the split-exp expression.",
"value": "string"
},
{
"option": "--string-interpolation",
"description": "Toggles strings interpolation of \\(exp) (default true)"
},
{
"option": "--tsv-auto-parse",
"description": "parse TSV YAML/JSON values (default true)"
},
{
"option": "--unwrapScalar",
"shortcut": "-r",
"description": "unwrap scalar, print the value with no quotes, colors or comments. Defaults to true for yaml (default true)"
},
{
"option": "--verbose",
"shortcut": "-v",
"description": "verbose mode"
},
{
"option": "--xml-attribute-prefix",
"description": "prefix for xml attributes (default \"+@\")",
"value": "string"
},
{
"option": "--xml-content-name",
"description": "name for xml content (if no attribute name is present). (default \"+content\")",
"value": "string"
},
{
"option": "--xml-directive-name",
"description": "name for xml directives (e.g. <!DOCTYPE thing cat>) (default \"+directive\")",
"value": "string"
},
{
"option": "--xml-keep-namespace",
"description": "enables keeping namespace after parsing attributes (default true)"
},
{
"option": "--xml-proc-inst-prefix",
"description": "prefix for xml processing instructions (e.g. <?xml version=\"1\"?>) (default \"+p_\")",
"value": "string"
},
{
"option": "--xml-raw-token",
"description": "enables using RawToken method instead Token. Commonly disables namespace translations. See https://pkg.go.dev/encoding/xml#Decoder.RawToken for details. (default true)"
},
{
"option": "--xml-skip-directives",
"description": "skip over directives (e.g. <!DOCTYPE thing cat>)"
},
{
"option": "--xml-skip-proc-inst",
"description": "skip over process instructions (e.g. <?xml version=\"1\"?>)"
},
{
"option": "--xml-strict-mode",
"description": "enables strict parsing of XML. See https://pkg.go.dev/encoding/xml for more details."
},
{
"option": "--yaml-fix-merge-anchor-to-spec",
"description": "Fix merge anchor to match YAML spec. Will default to true in late 2025"
}
],
"aliases": [
"eval-all",
"ea"
],
"name": "yq eval-all",
"raw_help_text": "yq is a portable command-line data file processor (https://github.com/mikefarah/yq/) \nSee https://mikefarah.gitbook.io/yq/ for detailed documentation and examples.\n\n## Evaluate All ##\nThis command loads _all_ yaml documents of _all_ yaml files and runs expression once\nUseful when you need to run an expression across several yaml documents or files (like merge).\nNote that it consumes more memory than eval.\n\nUsage:\n yq eval-all [expression] [yaml_file1]... [flags]\n\nAliases:\n eval-all, ea\n\nExamples:\n\n# Merge f2.yml into f1.yml (in place)\nyq eval-all --inplace 'select(fileIndex == 0) * select(fileIndex == 1)' f1.yml f2.yml\n## the same command and expression using shortened names:\nyq ea -i 'select(fi == 0) * select(fi == 1)' f1.yml f2.yml\n\n\n# Merge all given files\nyq ea '. as $item ireduce ({}; . * $item )' file1.yml file2.yml ...\n\n# Pipe from STDIN\n## use '-' as a filename to pipe from STDIN\ncat file2.yml | yq ea '.a.b' file1.yml - file3.yml\n\n\nFlags:\n -h, --help help for eval-all\n\nGlobal Flags:\n -C, --colors force print with colors\n --csv-auto-parse parse CSV YAML/JSON values (default true)\n --csv-separator char CSV Separator character (default ,)\n --debug-node-info debug node info\n -e, --exit-status set exit status if there are no matches or null or false is returned\n --expression string forcibly set the expression argument. Useful when yq argument detection thinks your expression is a file.\n --from-file string Load expression from specified file.\n -f, --front-matter string (extract|process) first input as yaml front-matter. Extract will pull out the yaml content, process will run the expression against the yaml content, leaving the remaining data intact\n --header-preprocess Slurp any header comments and separators before processing expression. (default true)\n -I, --indent int sets indent level for output (default 2)\n -i, --inplace update the file in place of first file given.\n -p, --input-format string [auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|lua|l|ini|i] parse format for input. (default \"auto\")\n --lua-globals output keys as top-level global variables\n --lua-prefix string prefix (default \"return \")\n --lua-suffix string suffix (default \";\\n\")\n --lua-unquoted output unquoted string keys (e.g. {foo=\"bar\"})\n -M, --no-colors force print with no colors\n -N, --no-doc Don't print document separators (---)\n -0, --nul-output Use NUL char to separate values. If unwrap scalar is also set, fail if unwrapped scalar contains NUL char.\n -n, --null-input Don't read input, simply evaluate the expression given. Useful for creating docs from scratch.\n -o, --output-format string [auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|shell|s|lua|l|ini|i] output format type. (default \"auto\")\n -P, --prettyPrint pretty print, shorthand for '... style = \"\"'\n --properties-array-brackets use [x] in array paths (e.g. for SpringBoot)\n --properties-separator string separator to use between keys and values (default \" = \")\n --security-disable-env-ops Disable env related operations.\n --security-disable-file-ops Disable file related operations (e.g. load)\n --shell-key-separator string separator for shell variable key paths (default \"_\")\n -s, --split-exp string print each result (or doc) into a file named (exp). [exp] argument must return a string. You can use $index in the expression as the result counter. 
The necessary directories will be created.\n --split-exp-file string Use a file to specify the split-exp expression.\n --string-interpolation Toggles strings interpolation of \\(exp) (default true)\n --tsv-auto-parse parse TSV YAML/JSON values (default true)\n -r, --unwrapScalar unwrap scalar, print the value with no quotes, colors or comments. Defaults to true for yaml (default true)\n -v, --verbose verbose mode\n --xml-attribute-prefix string prefix for xml attributes (default \"+@\")\n --xml-content-name string name for xml content (if no attribute name is present). (default \"+content\")\n --xml-directive-name string name for xml directives (e.g. <!DOCTYPE thing cat>) (default \"+directive\")\n --xml-keep-namespace enables keeping namespace after parsing attributes (default true)\n --xml-proc-inst-prefix string prefix for xml processing instructions (e.g. <?xml version=\"1\"?>) (default \"+p_\")\n --xml-raw-token enables using RawToken method instead Token. Commonly disables namespace translations. See https://pkg.go.dev/encoding/xml#Decoder.RawToken for details. (default true)\n --xml-skip-directives skip over directives (e.g. <!DOCTYPE thing cat>)\n --xml-skip-proc-inst skip over process instructions (e.g. <?xml version=\"1\"?>)\n --xml-strict-mode enables strict parsing of XML. See https://pkg.go.dev/encoding/xml for more details.\n --yaml-fix-merge-anchor-to-spec Fix merge anchor to match YAML spec. Will default to true in late 2025",
"description": "Loads _all_ yaml documents of _all_ yaml files and runs expression once"
}
],
"options": [
{
"option": "-C",
"shortcut": "--colors",
"description": "force print with colors"
},
{
"option": "--csv-auto-parse",
"description": "parse CSV YAML/JSON values (default true)"
},
{
"option": "--csv-separator",
"description": "CSV Separator character (default ,)",
"value": "char"
},
{
"option": "--debug-node-info",
"description": "debug node info"
},
{
"option": "-e",
"shortcut": "--exit-status",
"description": "set exit status if there are no matches or null or false is returned"
},
{
"option": "--expression",
"description": "forcibly set the expression argument. Useful when yq argument detection thinks your expression is a file.",
"value": "string"
},
{
"option": "--from-file",
"description": "Load expression from specified file.",
"value": "string"
},
{
"option": "-f",
"shortcut": "--front-matter",
"description": "(extract|process) first input as yaml front-matter. Extract will pull out the yaml content, process will run the expression against the yaml content, leaving the remaining data intact",
"value": "string"
},
{
"option": "--header-preprocess",
"description": "Slurp any header comments and separators before processing expression. (default true)"
},
{
"option": "-h",
"shortcut": "--help",
"description": "help for yq"
},
{
"option": "-I",
"shortcut": "--indent",
"description": "sets indent level for output (default 2)",
"value": "int"
},
{
"option": "-i",
"shortcut": "--inplace",
"description": "update the file in place of first file given."
},
{
"option": "-p",
"shortcut": "--input-format",
"description": "[auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|lua|l|ini|i] parse format for input. (default \"auto\")",
"value": "string"
},
{
"option": "--lua-globals",
"description": "output keys as top-level global variables"
},
{
"option": "--lua-prefix",
"description": "prefix (default \"return \")",
"value": "string"
},
{
"option": "--lua-suffix",
"description": "suffix (default \";\n\")",
"value": "string"
},
{
"option": "--lua-unquoted",
"description": "output unquoted string keys (e.g. {foo=\"bar\"})"
},
{
"option": "-M",
"shortcut": "--no-colors",
"description": "force print with no colors"
},
{
"option": "-N",
"shortcut": "--no-doc",
"description": "Don't print document separators (---)"
},
{
"option": "-0",
"shortcut": "--nul-output",
"description": "Use NUL char to separate values. If unwrap scalar is also set, fail if unwrapped scalar contains NUL char."
},
{
"option": "-n",
"shortcut": "--null-input",
"description": "Don't read input, simply evaluate the expression given. Useful for creating docs from scratch."
},
{
"option": "-o",
"shortcut": "--output-format",
"description": "[auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|shell|s|lua|l|ini|i] output format type. (default \"auto\")",
"value": "string"
},
{
"option": "-P",
"shortcut": "--prettyPrint",
"description": "pretty print, shorthand for '... style = \"\"'"
},
{
"option": "--properties-array-brackets",
"description": "use [x] in array paths (e.g. for SpringBoot)"
},
{
"option": "--properties-separator",
"description": "separator to use between keys and values (default \" = \")",
"value": "string"
},
{
"option": "--security-disable-env-ops",
"description": "Disable env related operations."
},
{
"option": "--security-disable-file-ops",
"description": "Disable file related operations (e.g. load)"
},
{
"option": "-s",
"shortcut": "--split-exp",
"description": "print each result (or doc) into a file named (exp). [exp] argument must return a string. You can use $index in the expression as the result counter. The necessary directories will be created.",
"value": "string"
},
{
"option": "--split-exp-file",
"description": "Use a file to specify the split-exp expression.",
"value": "string"
},
{
"option": "--string-interpolation",
"description": "Toggles strings interpolation of \\(exp) (default true)"
},
{
"option": "--tsv-auto-parse",
"description": "parse TSV YAML/JSON values (default true)"
},
{
"option": "-r",
"shortcut": "--unwrapScalar",
"description": "unwrap scalar, print the value with no quotes, colors or comments. Defaults to true for yaml (default true)"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "verbose mode"
},
{
"option": "-V",
"shortcut": "--version",
"description": "Print version information and quit"
},
{
"option": "--xml-attribute-prefix",
"description": "prefix for xml attributes (default \"+@\")",
"value": "string"
},
{
"option": "--xml-content-name",
"description": "name for xml content (if no attribute name is present). (default \"+content\")",
"value": "string"
},
{
"option": "--xml-directive-name",
"description": "name for xml directives (e.g. <!DOCTYPE thing cat>) (default \"+directive\")",
"value": "string"
},
{
"option": "--xml-keep-namespace",
"description": "enables keeping namespace after parsing attributes (default true)"
},
{
"option": "--xml-proc-inst-prefix",
"description": "prefix for xml processing instructions (e.g. <?xml version=\"1\"?>) (default \"+p_\")",
"value": "string"
},
{
"option": "--xml-raw-token",
"description": "enables using RawToken method instead Token. Commonly disables namespace translations. See https://pkg.go.dev/encoding/xml#Decoder.RawToken for details. (default true)"
},
{
"option": "--xml-skip-directives",
"description": "skip over directives (e.g. <!DOCTYPE thing cat>)"
},
{
"option": "--xml-skip-proc-inst",
"description": "skip over process instructions (e.g. <?xml version=\"1\"?>)"
},
{
"option": "--xml-strict-mode",
"description": "enables strict parsing of XML. See https://pkg.go.dev/encoding/xml for more details."
},
{
"option": "--yaml-fix-merge-anchor-to-spec",
"description": "Fix merge anchor to match YAML spec. Will default to true in late 2025"
}
],
"aliases": [],
"name": "yq",
"raw_help_text": "yq is a portable command-line data file processor (https://github.com/mikefarah/yq/) \nSee https://mikefarah.gitbook.io/yq/ for detailed documentation and examples.\n\nUsage:\n yq [flags]\n yq [command]\n\nExamples:\n\n# yq tries to auto-detect the file format based off the extension, and defaults to YAML if it's unknown (or piping through STDIN)\n# Use the '-p/--input-format' flag to specify a format type.\ncat file.xml | yq -p xml\n\n# read the \"stuff\" node from \"myfile.yml\"\nyq '.stuff' < myfile.yml\n\n# update myfile.yml in place\nyq -i '.stuff = \"foo\"' myfile.yml\n\n# print contents of sample.json as idiomatic YAML\nyq -P -oy sample.json\n\n\nAvailable Commands:\n completion Generate the autocompletion script for the specified shell\n eval (default) Apply the expression to each document in each yaml file in sequence\n eval-all Loads _all_ yaml documents of _all_ yaml files and runs expression once\n help Help about any command\n\nFlags:\n -C, --colors force print with colors\n --csv-auto-parse parse CSV YAML/JSON values (default true)\n --csv-separator char CSV Separator character (default ,)\n --debug-node-info debug node info\n -e, --exit-status set exit status if there are no matches or null or false is returned\n --expression string forcibly set the expression argument. Useful when yq argument detection thinks your expression is a file.\n --from-file string Load expression from specified file.\n -f, --front-matter string (extract|process) first input as yaml front-matter. Extract will pull out the yaml content, process will run the expression against the yaml content, leaving the remaining data intact\n --header-preprocess Slurp any header comments and separators before processing expression. (default true)\n -h, --help help for yq\n -I, --indent int sets indent level for output (default 2)\n -i, --inplace update the file in place of first file given.\n -p, --input-format string [auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|lua|l|ini|i] parse format for input. (default \"auto\")\n --lua-globals output keys as top-level global variables\n --lua-prefix string prefix (default \"return \")\n --lua-suffix string suffix (default \";\\n\")\n --lua-unquoted output unquoted string keys (e.g. {foo=\"bar\"})\n -M, --no-colors force print with no colors\n -N, --no-doc Don't print document separators (---)\n -0, --nul-output Use NUL char to separate values. If unwrap scalar is also set, fail if unwrapped scalar contains NUL char.\n -n, --null-input Don't read input, simply evaluate the expression given. Useful for creating docs from scratch.\n -o, --output-format string [auto|a|yaml|y|json|j|props|p|csv|c|tsv|t|xml|x|base64|uri|toml|shell|s|lua|l|ini|i] output format type. (default \"auto\")\n -P, --prettyPrint pretty print, shorthand for '... style = \"\"'\n --properties-array-brackets use [x] in array paths (e.g. for SpringBoot)\n --properties-separator string separator to use between keys and values (default \" = \")\n --security-disable-env-ops Disable env related operations.\n --security-disable-file-ops Disable file related operations (e.g. load)\n --shell-key-separator string separator for shell variable key paths (default \"_\")\n -s, --split-exp string print each result (or doc) into a file named (exp). [exp] argument must return a string. You can use $index in the expression as the result counter. 
The necessary directories will be created.\n --split-exp-file string Use a file to specify the split-exp expression.\n --string-interpolation Toggles strings interpolation of \\(exp) (default true)\n --tsv-auto-parse parse TSV YAML/JSON values (default true)\n -r, --unwrapScalar unwrap scalar, print the value with no quotes, colors or comments. Defaults to true for yaml (default true)\n -v, --verbose verbose mode\n -V, --version Print version information and quit\n --xml-attribute-prefix string prefix for xml attributes (default \"+@\")\n --xml-content-name string name for xml content (if no attribute name is present). (default \"+content\")\n --xml-directive-name string name for xml directives (e.g. <!DOCTYPE thing cat>) (default \"+directive\")\n --xml-keep-namespace enables keeping namespace after parsing attributes (default true)\n --xml-proc-inst-prefix string prefix for xml processing instructions (e.g. <?xml version=\"1\"?>) (default \"+p_\")\n --xml-raw-token enables using RawToken method instead Token. Commonly disables namespace translations. See https://pkg.go.dev/encoding/xml#Decoder.RawToken for details. (default true)\n --xml-skip-directives skip over directives (e.g. <!DOCTYPE thing cat>)\n --xml-skip-proc-inst skip over process instructions (e.g. <?xml version=\"1\"?>)\n --xml-strict-mode enables strict parsing of XML. See https://pkg.go.dev/encoding/xml for more details.\n --yaml-fix-merge-anchor-to-spec Fix merge anchor to match YAML spec. Will default to true in late 2025\n\nUse \"yq [command] --help\" for more information about a command.",
"version": "v4.49.2"
}

⚠️ Potential issue | 🟠 Major

Schema inconsistency: Option structure varies between subcommands.

The yq-latest.json file exhibits inconsistent data structure across subcommands:

  1. Reversed option/shortcut mapping: The completion subcommand has cases where option is empty and the actual flag is in shortcut (e.g., lines 17, 23, 29). The eval and eval-all subcommands use the opposite mapping (e.g., line 477: "option": "--help" vs. "shortcut": "-h").

  2. Missing value fields: Many options in the eval and eval-all subcommands lack the value field entirely, while the completion subcommand includes it.

  3. Inconsistent field presence: Some options have a shortcut field; others don't (compare line 263 --csv-auto-parse in eval with line 17 --csv-auto-parse in completion).

This inconsistency violates schema uniformity and will break downstream consumers expecting a consistent structure. Ensure all options follow a uniform schema: {option, shortcut, description, value?, default?} with consistent field semantics across all subcommands.

🤖 Prompt for AI Agents
In data/results/cncf_cli_tools/yq-latest.json (around lines 1-910) the options
objects are inconsistent across subcommands (some entries put the long flag in
"shortcut" and leave "option" empty, many entries omit the "value" key, and
presence of "shortcut"/"default" varies); normalize every option object to the
schema { "option": "<long form or canonical flag>", "shortcut": "<short form or
empty/null if none>", "description": "<text>", "value": "<type or null,
optional>", "default": "<value or null, optional>" } by: 1) moving any long-form
flags currently in "shortcut" into "option" and ensure "shortcut" contains only
the single-char short flag (or null if none); 2) add missing "value" keys where
other subcommands include them (use null or the correct type string when none
applies) and ensure "default" is present when documented; 3) remove empty string
values for "option"/"shortcut" and replace with null; 4) apply this
normalization to all subcommand option lists, then validate the file against the
expected JSON schema and run a JSON linter to ensure uniformity.
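
A minimal normalization sketch along these lines (not part of the PR; the file path and helper names are illustrative, and the swap heuristic assumes the misplaced flags are long-form strings):

import json

def normalize_option(opt):
    option = opt.get("option") or None
    shortcut = opt.get("shortcut") or None
    # If the long flag ended up in "shortcut" while "option" is empty, swap it back.
    if option is None and shortcut and shortcut.startswith("--"):
        option, shortcut = shortcut, None
    return {
        "option": option,
        "shortcut": shortcut,
        "description": opt.get("description"),
        "value": opt.get("value") or None,      # missing or "" becomes null
        "default": opt.get("default") or None,  # missing or "" becomes null
    }

def normalize_command(cmd):
    cmd["options"] = [normalize_option(o) for o in cmd.get("options", [])]
    for sub in cmd.get("subcommands", []):
        normalize_command(sub)
    return cmd

with open("data/results/cncf_cli_tools/yq-latest.json") as fh:
    doc = normalize_command(json.load(fh))
print(json.dumps(doc, indent=2))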

Comment on lines +77 to +86
"options": [
{
"option": "--help",
"shortcut": "-h",
"description": "help for build-info"
}
],
"name": "caddy build-info",
"raw_help_text": "Prints information about this build\n\nUsage:\n caddy build-info [flags]\n\nFlags:\n -h, --help help for build-info\n\nFull documentation is available at:\nhttps://caddyserver.com/docs/command-line",
"description": "Prints information about this build"

⚠️ Potential issue | 🟡 Minor

Missing required fields in build-info help option.

Lines 82–87 show a help option for the caddy build-info subcommand that is missing the value and default fields, whereas all other option entries include these fields (even if null). For consistency and schema compliance, these fields should be added.

Apply this diff to add missing fields:

        {
          "option": "--help",
          "shortcut": "-h",
-         "description": "help for build-info"
+         "description": "help for build-info",
+         "value": null,
+         "default": null
        }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"options": [
{
"option": "--help",
"shortcut": "-h",
"description": "help for build-info"
}
],
"name": "caddy build-info",
"raw_help_text": "Prints information about this build\n\nUsage:\n caddy build-info [flags]\n\nFlags:\n -h, --help help for build-info\n\nFull documentation is available at:\nhttps://caddyserver.com/docs/command-line",
"description": "Prints information about this build"
"options": [
{
"option": "--help",
"shortcut": "-h",
"description": "help for build-info",
"value": null,
"default": null
}
],
🤖 Prompt for AI Agents
In data/results/dev_tools/caddy-latest.json around lines 77 to 86, the "options"
entry for the "caddy build-info" subcommand is missing the required "value" and
"default" fields; update that option object to include "value": null and
"default": null (matching other option entries) so the JSON schema stays
consistent and other tools consuming this file don't fail.
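
A quick consistency check in the same spirit (a sketch, not from the PR; the expected key set is an assumption based on the review comments) would surface entries like this one before they land:

import json

EXPECTED = {"option", "shortcut", "description", "value", "default"}

def check(cmd, path="caddy"):
    for i, opt in enumerate(cmd.get("options", [])):
        missing = EXPECTED - opt.keys()
        if missing:
            print(f"{path} option[{i}] {opt.get('option')}: missing {sorted(missing)}")
    for sub in cmd.get("subcommands", []):
        check(sub, f"{path} > {sub.get('name', '?')}")

with open("data/results/dev_tools/caddy-latest.json") as fh:
    check(json.load(fh))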

Comment on lines +138 to +170
"shortcut": "",
"description": "Azure DevOps access token",
"value": "string",
"default": ""
},
{
"option": "--behavior",
"shortcut": "",
"description": "Behavior when posting comment, one of: update (default), new, delete-and-new",
"value": "string",
"default": "update"
},
{
"option": "--dry-run",
"shortcut": "",
"description": "Generate comment without actually posting to Azure Repos",
"value": "",
"default": ""
},
{
"option": "--help",
"shortcut": "-h",
"description": "help for azure-repos",
"value": "",
"default": ""
},
{
"option": "--path",
"shortcut": "-p",
"description": "Path to Infracost JSON files, glob patterns need quotes",
"value": "stringArray",
"default": ""
},

⚠️ Potential issue | 🟠 Major

Inconsistent shortcut field representation in infracost comment subcommands.

Options use the empty string "" for missing shortcuts (lines 138, 140, 147, etc.), whereas other files use null. This inconsistency could cause parsing issues. Standardize on null for missing shortcut values:

           {
             "option": "--azure-access-token",
-            "shortcut": "",
+            "shortcut": null,
             "description": "Azure DevOps access token",
             "value": "string",
-            "default": ""
+            "default": null
           },

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In data/results/dev_tools/infracost-ci-0.10.5.json around lines 138 to 170,
several option entries use the empty string "" for missing shortcut values
(e.g., lines 138, 140, 147) which is inconsistent with other files that use JSON
null; update those entries to use null (not the string "null") for shortcut when
there is no shortcut to standardize the representation and avoid parsing issues.
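
A sketch of that standardization (illustrative only; it rewrites "" to null for the shortcut and default fields everywhere in the file and leaves other fields untouched):

import json

def scrub(node):
    if isinstance(node, dict):
        for key in ("shortcut", "default"):
            if node.get(key) == "":
                node[key] = None
        for child in node.values():
            scrub(child)
    elif isinstance(node, list):
        for child in node:
            scrub(child)

with open("data/results/dev_tools/infracost-ci-0.10.5.json") as fh:
    doc = json.load(fh)
scrub(doc)
print(json.dumps(doc, indent=2))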

Comment on lines +858 to +887
"options": [
{
"option": "--log-level",
"shortcut": "-",
"description": "Log level (trace, debug, info, warn, error, fatal)",
"value": "string",
"default": null
},
{
"option": "--no-color",
"shortcut": null,
"description": "Turn off colored output",
"value": null,
"default": null
},
{
"option": "--help",
"shortcut": "-h",
"description": "help for infracost",
"value": null,
"default": null
},
{
"option": "--version",
"shortcut": "-v",
"description": "version for infracost",
"value": null,
"default": null
}
],

⚠️ Potential issue | 🟠 Major

Suspicious shortcut value at line 861.

The --log-level option has "shortcut": "-" which appears to be an error. A single dash is not a valid shortcut format; it should be either null or a proper short option like "-l". This likely needs correction:

     {
       "option": "--log-level",
-      "shortcut": "-",
+      "shortcut": null,
       "description": "Log level (trace, debug, info, warn, error, fatal)",
       "value": "string",
       "default": null
     },
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"options": [
{
"option": "--log-level",
"shortcut": "-",
"description": "Log level (trace, debug, info, warn, error, fatal)",
"value": "string",
"default": null
},
{
"option": "--no-color",
"shortcut": null,
"description": "Turn off colored output",
"value": null,
"default": null
},
{
"option": "--help",
"shortcut": "-h",
"description": "help for infracost",
"value": null,
"default": null
},
{
"option": "--version",
"shortcut": "-v",
"description": "version for infracost",
"value": null,
"default": null
}
],
"options": [
{
"option": "--log-level",
"shortcut": null,
"description": "Log level (trace, debug, info, warn, error, fatal)",
"value": "string",
"default": null
},
{
"option": "--no-color",
"shortcut": null,
"description": "Turn off colored output",
"value": null,
"default": null
},
{
"option": "--help",
"shortcut": "-h",
"description": "help for infracost",
"value": null,
"default": null
},
{
"option": "--version",
"shortcut": "-v",
"description": "version for infracost",
"value": null,
"default": null
}
],
🤖 Prompt for AI Agents
In data/results/dev_tools/infracost-ci-0.10.5.json around lines 858 to 887 the
`--log-level` option has an invalid `"shortcut": "-"`; replace that value with
either a proper short flag (e.g. `"-l"`) if a shortcut is intended, or set the
value to `null` if there should be no shortcut, and ensure the JSON remains
valid (update only the `"shortcut"` field for the `--log-level` entry).
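
A small lint along these lines (a sketch, not from the PR) would flag the bare "-": it accepts null or a single short flag such as "-l" and reports anything else:

import json
import re

SHORT_FLAG = re.compile(r"^-[A-Za-z0-9]$")

def lint(cmd, path="infracost"):
    for opt in cmd.get("options", []):
        shortcut = opt.get("shortcut")
        if isinstance(shortcut, str) and shortcut and not SHORT_FLAG.match(shortcut):
            print(f"{path}: {opt.get('option')} has suspicious shortcut {shortcut!r}")
    for sub in cmd.get("subcommands", []):
        lint(sub, f"{path} > {sub.get('name', '?')}")

with open("data/results/dev_tools/infracost-ci-0.10.5.json") as fh:
    lint(json.load(fh))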

Comment on lines +51 to +61
"options": [
{
"option": "--help",
"shortcut": "-h",
"description": "help for powershell"
},
{
"option": "--no-descriptions",
"shortcut": null,
"description": "disable completion descriptions"
}

⚠️ Potential issue | 🟠 Major

Inconsistent option schema in k9s completion powershell subcommand.

The options at lines 51-61 are missing value and default fields, whereas all other options in the file include these fields (set to null when not applicable). This schema inconsistency could cause downstream parsing issues.

         {
           "option": "--help",
           "shortcut": "-h",
           "description": "help for powershell"
+          "value": null,
+          "default": null
         },
         {
           "option": "--no-descriptions",
           "shortcut": null,
           "description": "disable completion descriptions"
+          "value": null,
+          "default": null
         }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In data/results/dev_tools/k9s-latest.json around lines 51 to 61, the "options"
entries for the powershell completion subcommand are missing the "value" and
"default" fields which other option objects include; add "value": null and
"default": null to each of these option objects so they match the file's schema
and avoid downstream parsing errors.

Comment on lines +296 to +348
"subcommands": [],
"options": [
{
"option": "-except=foo,bar,baz",
"description": "Validate all builds other than these.",
"value": "foo,bar,baz"
},
{
"option": "-evaluate-datasources",
"description": "Evaluate data sources during validation (HCL2 only, may incur costs); Defaults to false.",
"value": null
},
{
"option": "-ignore-prerelease-plugins",
"description": "Disable the loading of prerelease plugin binaries (x.y.z-dev).",
"value": null
},
{
"option": "-machine-readable",
"description": "Produce machine-readable output.",
"value": null
},
{
"option": "-no-warn-undeclared-var",
"description": "Disable warnings for user variable files containing undeclared variables.",
"value": null
},
{
"option": "-syntax-only",
"description": "Only check syntax. Do not verify config of the template.",
"value": null
},
{
"option": "-use-sequential-evaluation",
"description": "Fallback to using a sequential approach for local/datasource evaluation.",
"value": null
},
{
"option": "-var 'key=value'",
"description": "Variable for templates, can be used multiple times.",
"value": "'key=value'"
},
{
"option": "-var-file=path",
"description": "JSON or HCL2 file containing user variables, can be used multiple times.",
"value": "path"
},
{
"option": "-only=foo,bar,baz",
"description": "Validate only these builds.",
"value": "foo,bar,baz"
}
],

⚠️ Potential issue | 🟠 Major

Inconsistent field presence in packer validate subcommand options.

Several options in the validate subcommand (lines 298-346) are missing the shortcut field, whereas other options include it. Additionally, some options lack value and default fields. Ensure all options follow the complete schema: option, shortcut, description, value, default.

🤖 Prompt for AI Agents
In data/results/dev_tools/packer-latest.json around lines 296 to 348, several
entries in the "options" array for the "validate" subcommand are missing the
required schema fields (shortcut, value, default) or have inconsistent presence
across options; update each option object in this block to include the full
schema: ensure every object has "option", "shortcut" (use null or appropriate
short flag if none), "description", "value" (use null or the existing value),
and "default" (use null or a sensible default boolean/string), keeping existing
textual values unchanged and only adding the missing fields to match the other
option objects.

Comment on lines +701 to +704
"version": {
"version_number": "main-581519e",
"commit_sha": "581519e3"
}

⚠️ Potential issue | 🟠 Major

Schema inconsistency: Version field type differs from other files.

The version field in promtail-main-581519e.json (lines 701–704) is an object {version_number, commit_sha}, while all other files in this PR use a string type (e.g., "2.18.6", "v4.49.2").

This type inconsistency will break downstream consumers expecting a uniform schema. Either:

  • Convert this to a string representation (e.g., "main-581519e"), or
  • Update all files to use a consistent object structure if metadata like commit SHA is needed.

For consistency with the rest of the PR, recommend using a string format.

🤖 Prompt for AI Agents
In data/results/dev_tools/promtail-main-581519e.json around lines 701–704, the
version field is an object with keys version_number and commit_sha while other
files use a string; change the version field to a string matching the pattern
used elsewhere (e.g., "main-581519e") so the schema is consistent—either replace
the object with the version_number string or, if commit SHA must be preserved,
move it to a separate metadata field but keep version as a plain string.
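
A sketch of the string-form fix (illustrative; the separate commit_sha key is an assumption, not part of the existing schema):

import json

with open("data/results/dev_tools/promtail-main-581519e.json") as fh:
    doc = json.load(fh)

version = doc.get("version")
if isinstance(version, dict):
    # Keep "version" as a plain string, e.g. "main-581519e", like the other files.
    doc["version"] = version.get("version_number")
    if version.get("commit_sha"):
        doc["commit_sha"] = version["commit_sha"]  # hypothetical metadata field

print(json.dumps(doc, indent=2))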

Comment on lines +6 to +19
{
"option": "--additional-tag",
"shortcut": "-a",
"description": "additional tags (supports docker-archive)",
"value": "strings",
"default": null
},
{
"option": "--all",
"shortcut": "-a",
"description": "Copy all images if SOURCE-IMAGE is a list",
"value": null,
"default": null
},

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

skopeo copy command --additional-tag --all flags shortcut -a

💡 Result:

  • --additional-tag : add extra tag names for the destination (used e.g. with docker-archive). [1][2]

  • --all, -a: if the source is a manifest list (multi-arch/index), copy all images/instances in the list (and the list itself) instead of only the image that matches the current OS/arch. -a is the short alias for --all. [1][2]

Sources: skopeo copy man pages. [1] [2]


Fix incorrect shortcut assignment for --additional-tag.

The shortcut -a should only map to --all (line 18), not --additional-tag (line 9). According to skopeo documentation, -a is the short alias exclusively for the --all flag. Remove "shortcut": "-a" from the --additional-tag option definition (line 9).

🤖 Prompt for AI Agents
In data/results/dev_tools/skopeo-v1.20.0.json around lines 6 to 19, the
--additional-tag entry incorrectly lists "shortcut": "-a"; remove that shortcut
from the --additional-tag object (either delete the "shortcut" property or set
it to null) so that the "-a" short alias remains only on the --all option on
line 18.

Comment on lines +372 to +373
"option": "-h, --help",
"shortcut": "-h",

⚠️ Potential issue | 🟠 Major

Inconsistent and reversed shortcut representations.

Several options show inconsistent or non-standard shortcut patterns:

  • Line 372-373: "option": "-h, --help" with "shortcut": "-h" (redundant)
  • Line 507-508: "option": "-f, --format" with "shortcut": "-f" (redundant)
  • Line 620-621: "option": "-h" with "shortcut": "--help" (reversed convention - short form in option, long form in shortcut)

The convention across all files should be: "option" contains the primary/long form, "shortcut" contains the short form. Standardize to avoid parser ambiguity.

Also applies to: 507-508, 620-621

🤖 Prompt for AI Agents
In data/results/dev_tools/skopeo-v1.20.0.json around lines 372-373 (and also fix
similar cases at 507-508 and 620-621), the "option" and "shortcut" fields are
inconsistent or reversed; update each entry so "option" holds the primary/long
form (e.g. "--help", "--format") and "shortcut" holds the short/single-dash form
(e.g. "-h", "-f"); ensure no duplication (do not repeat the short form inside
"option") and validate the three corrected entries follow this convention so
parsers consume long form from "option" and short form from "shortcut".

Comment on lines +1 to +813
{
"subcommands": [
{
"subcommands": [],
"options": [
{
"option": "--base-path",
"description": "base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory"
},
{
"option": "--enrich",
"description": "enable package data enrichment from local and online sources (options: all, golang, java, javascript)"
},
{
"option": "--exclude",
"description": "exclude paths from being scanned using a glob expression"
},
{
"option": "--from",
"description": "specify the source behavior to use (e.g. docker, registry, oci-dir, ...)"
},
{
"option": "-h, --help",
"description": "help for attest"
},
{
"option": "-k, --key",
"description": "the key to use for the attestation"
},
{
"option": "-o, --output",
"description": "report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-json])"
},
{
"option": "--override-default-catalogers",
"description": "set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)"
},
{
"option": "--parallelism",
"description": "number of cataloger workers to run in parallel"
},
{
"option": "--platform",
"description": "an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')"
},
{
"option": "-s, --scope",
"description": "selection of layers to catalog, options=[squashed all-layers deep-squashed] (default \"squashed\")"
},
{
"option": "--select-catalogers",
"description": "add, remove, and filter the catalogers to be used"
},
{
"option": "--source-name",
"description": "set the name of the target being analyzed"
},
{
"option": "--source-supplier",
"description": "the organization that supplied the component, which often may be the manufacturer, distributor, or repackager"
},
{
"option": "--source-version",
"description": "set the version of the target being analyzed"
},
{
"option": "-c, --config",
"description": "syft configuration file(s) to use"
},
{
"option": "--profile",
"description": "configuration profiles to use"
},
{
"option": "-q, --quiet",
"description": "suppress all logging output"
},
{
"option": "-v, --verbose",
"description": "increase verbosity (-v = info, -vv = debug)"
}
],
"aliases": [],
"name": "syft attest",
"raw_help_text": "Generate a packaged-based Software Bill Of Materials (SBOM) from a container image as the predicate of an in-toto attestation that will be uploaded to the image registry\n\nUsage:\n syft attest --output [FORMAT] <IMAGE> [flags]\n\nExamples:\n syft attest --output [FORMAT] alpine:latest defaults to using images from a Docker daemon. If Docker is not present, the image is pulled directly from the registry\n\n You can also explicitly specify the scheme to use:\n syft attest docker:yourrepo/yourimage:tag explicitly use the Docker daemon\n syft attest podman:yourrepo/yourimage:tag explicitly use the Podman daemon\n syft attest registry:yourrepo/yourimage:tag pull image directly from a registry (no container runtime required)\n syft attest docker-archive:path/to/yourimage.tar use a tarball from disk for archives created from \"docker save\"\n syft attest oci-archive:path/to/yourimage.tar use a tarball from disk for OCI archives (from Skopeo or otherwise)\n syft attest oci-dir:path/to/yourimage read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)\n syft attest singularity:path/to/yourimage.sif read directly from a Singularity Image Format (SIF) container on disk\n\n\nFlags:\n --base-path string base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory\n --enrich stringArray enable package data enrichment from local and online sources (options: all, golang, java, javascript)\n --exclude stringArray exclude paths from being scanned using a glob expression\n --from stringArray specify the source behavior to use (e.g. docker, registry, oci-dir, ...)\n -h, --help help for attest\n -k, --key string the key to use for the attestation\n -o, --output stringArray report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-json])\n --override-default-catalogers stringArray set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)\n --parallelism int number of cataloger workers to run in parallel\n --platform string an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')\n -s, --scope string selection of layers to catalog, options=[squashed all-layers deep-squashed] (default \"squashed\")\n --select-catalogers stringArray add, remove, and filter the catalogers to be used\n --source-name string set the name of the target being analyzed\n --source-supplier string the organization that supplied the component, which often may be the manufacturer, distributor, or repackager\n --source-version string set the version of the target being analyzed\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "Generate an SBOM as an attestation for the given [SOURCE] container image"
},
{
"subcommands": [
{
"subcommands": [],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for list"
},
{
"option": "-o",
"shortcut": "--output",
"description": "format to output the cataloger list (available: table, json)",
"value": "string"
},
{
"option": "--override-default-catalogers",
"description": "override the default catalogers with an expression",
"value": "stringArray",
"default": "[all]"
},
{
"option": "--select-catalogers",
"description": "select catalogers with an expression",
"value": "stringArray"
},
{
"option": "-s",
"shortcut": "--show-hidden",
"description": "show catalogers that have been de-selected"
},
{
"option": "-c",
"shortcut": "--config",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "-q",
"shortcut": "--quiet",
"description": "suppress all logging output"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "increase verbosity (-v = info, -vv = debug)",
"value": "count"
}
],
"aliases": [],
"name": "syft cataloger list",
"raw_help_text": "List available catalogers\n\nUsage:\n syft cataloger list [OPTIONS] [flags]\n\nFlags:\n -h, --help help for list\n -o, --output string format to output the cataloger list (available: table, json)\n --override-default-catalogers stringArray override the default catalogers with an expression (default [all])\n --select-catalogers stringArray select catalogers with an expression\n -s, --show-hidden show catalogers that have been de-selected\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "List available catalogers"
}
],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for cataloger"
},
{
"option": "-c",
"shortcut": "--config",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"shortcut": "",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "-q",
"shortcut": "--quiet",
"description": "suppress all logging output"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "increase verbosity (-v = info, -vv = debug)",
"value": "count"
}
],
"aliases": [],
"name": "syft cataloger",
"raw_help_text": "Show available catalogers and configuration\n\nUsage:\n syft cataloger [command]\n\nAvailable Commands:\n list List available catalogers\n\nFlags:\n -h, --help help for cataloger\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)\n\nUse \"syft cataloger [command] --help\" for more information about a command.",
"description": "Show available catalogers and configuration"
},
{
"subcommands": [
{
"subcommands": [],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for bash"
},
{
"option": "--no-descriptions",
"description": "disable completion descriptions"
},
{
"option": "-c",
"shortcut": "--config",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "-q",
"shortcut": "--quiet",
"description": "suppress all logging output"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "increase verbosity",
"value": "count"
}
],
"aliases": [],
"name": "syft completion bash",
"raw_help_text": "Generate the autocompletion script for the bash shell.\n\nThis script depends on the 'bash-completion' package.\nIf it is not installed already, you can install it via your OS's package manager.\n\nTo load completions in your current shell session:\n\n\tsource <(syft completion bash)\n\nTo load completions for every new session, execute once:\n\n#### Linux:\n\n\tsyft completion bash > /etc/bash_completion.d/syft\n\n#### macOS:\n\n\tsyft completion bash > $(brew --prefix)/etc/bash_completion.d/syft\n\nYou will need to start a new shell for this setup to take effect.\n\nUsage:\n syft completion bash\n\nFlags:\n -h, --help help for bash\n --no-descriptions disable completion descriptions\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "Generate the autocompletion script for bash"
},
{
"subcommands": [],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for fish"
},
{
"option": "--no-descriptions",
"description": "disable completion descriptions"
},
{
"option": "-c",
"shortcut": "--config",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "-q",
"shortcut": "--quiet",
"description": "suppress all logging output"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "increase verbosity",
"value": "count"
}
],
"aliases": [],
"name": "syft completion fish",
"raw_help_text": "Generate the autocompletion script for the fish shell.\n\nTo load completions in your current shell session:\n\n\tsyft completion fish | source\n\nTo load completions for every new session, execute once:\n\n\tsyft completion fish > ~/.config/fish/completions/syft.fish\n\nYou will need to start a new shell for this setup to take effect.\n\nUsage:\n syft completion fish [flags]\n\nFlags:\n -h, --help help for fish\n --no-descriptions disable completion descriptions\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "Generate the autocompletion script for fish"
},
{
"subcommands": [],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for powershell"
},
{
"option": "--no-descriptions",
"description": "disable completion descriptions"
},
{
"option": "-c",
"shortcut": "--config",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "-q",
"shortcut": "--quiet",
"description": "suppress all logging output"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "increase verbosity (-v = info, -vv = debug)",
"value": "count"
}
],
"aliases": [],
"name": "syft completion powershell",
"raw_help_text": "Generate the autocompletion script for powershell.\n\nTo load completions in your current shell session:\n\n\tsyft completion powershell | Out-String | Invoke-Expression\n\nTo load completions for every new session, add the output of the above command\nto your powershell profile.\n\nUsage:\n syft completion powershell [flags]\n\nFlags:\n -h, --help help for powershell\n --no-descriptions disable completion descriptions\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "Generate the autocompletion script for powershell"
},
{
"subcommands": [],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for zsh"
},
{
"option": "--no-descriptions",
"description": "disable completion descriptions"
},
{
"option": "-c",
"shortcut": "--config",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "-q",
"shortcut": "--quiet",
"description": "suppress all logging output"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "increase verbosity",
"value": "count"
}
],
"aliases": [],
"name": "syft completion zsh",
"raw_help_text": "Generate the autocompletion script for the zsh shell.\n\nIf shell completion is not already enabled in your environment you will need\nto enable it. You can execute the following once:\n\n\techo \"autoload -U compinit; compinit\" >> ~/.zshrc\n\nTo load completions in your current shell session:\n\n\tsource <(syft completion zsh)\n\nTo load completions for every new session, execute once:\n\n#### Linux:\n\n\tsyft completion zsh > \"${fpath[1]}/_syft\"\n\n#### macOS:\n\n\tsyft completion zsh > $(brew --prefix)/share/zsh/site-functions/_syft\n\nYou will need to start a new shell for this setup to take effect.\n\nUsage:\n syft completion zsh [flags]\n\nFlags:\n -h, --help help for zsh\n --no-descriptions disable completion descriptions\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "Generate the autocompletion script for zsh"
}
],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for completion"
},
{
"option": "-c",
"shortcut": "--config",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"shortcut": null,
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "-q",
"shortcut": "--quiet",
"description": "suppress all logging output"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "increase verbosity",
"value": "count"
}
],
"aliases": [],
"name": "syft completion",
"raw_help_text": "Generate the autocompletion script for syft for the specified shell.\nSee each sub-command's help for details on how to use the generated script.\n\nUsage:\n syft completion [command]\n\nAvailable Commands:\n bash Generate the autocompletion script for bash\n fish Generate the autocompletion script for fish\n powershell Generate the autocompletion script for powershell\n zsh Generate the autocompletion script for zsh\n\nFlags:\n -h, --help help for completion\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)\n\nUse \"syft completion [command] --help\" for more information about a command.",
"description": "Generate the autocompletion script for the specified shell"
},
{
"subcommands": [
{
"subcommands": [],
"options": [
{
"option": "--all",
"shortcut": null,
"description": "include every file extension supported",
"value": null,
"default": null
},
{
"option": "-h",
"shortcut": "--help",
"description": "help for locations",
"value": null,
"default": null
},
{
"option": "--config",
"shortcut": "-c",
"description": "syft configuration file(s) to use",
"value": "stringArray",
"default": null
},
{
"option": "--profile",
"shortcut": null,
"description": "configuration profiles to use",
"value": "stringArray",
"default": null
},
{
"option": "--quiet",
"shortcut": "-q",
"description": "suppress all logging output",
"value": null,
"default": null
},
{
"option": "--verbose",
"shortcut": "-v",
"description": "increase verbosity (-v = info, -vv = debug)",
"value": "count",
"default": null
}
],
"aliases": [],
"name": "syft config locations",
"raw_help_text": "shows all locations and the order in which syft will look for a configuration file\n\nUsage:\n syft config locations [flags]\n\nFlags:\n --all include every file extension supported\n -h, --help help for locations\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "shows all locations and the order in which syft will look for a configuration file"
}
],
"options": [
{
"option": "-h",
"shortcut": "--help",
"description": "help for config"
},
{
"option": "--load",
"description": "load and validate the syft configuration"
},
{
"option": "-c",
"shortcut": "--config",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "-q",
"shortcut": "--quiet",
"description": "suppress all logging output"
},
{
"option": "-v",
"shortcut": "--verbose",
"description": "increase verbosity",
"value": "count"
}
],
"aliases": [],
"name": "syft config",
"raw_help_text": "show the syft configuration\n\nUsage:\n syft config [flags]\n syft config [command]\n\nAvailable Commands:\n locations shows all locations and the order in which syft will look for a configuration file\n\nFlags:\n -h, --help help for config\n --load load and validate the syft configuration\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)\n\nUse \"syft config [command] --help\" for more information about a command.",
"description": "show the syft configuration"
},
{
"subcommands": [],
"options": [
{
"option": "--file",
"shortcut": "",
"description": "file to write the default report output to (default is STDOUT)",
"value": "string",
"default": "STDOUT"
},
{
"option": "--help",
"shortcut": "-h",
"description": "help for convert",
"value": "",
"default": ""
},
{
"option": "--output",
"shortcut": "-o",
"description": "report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])",
"value": "stringArray",
"default": "[syft-table]"
},
{
"option": "--template",
"shortcut": "-t",
"description": "specify the path to a Go template file",
"value": "string",
"default": ""
},
{
"option": "--config",
"shortcut": "-c",
"description": "syft configuration file(s) to use",
"value": "stringArray",
"default": ""
},
{
"option": "--profile",
"shortcut": "",
"description": "configuration profiles to use",
"value": "stringArray",
"default": ""
},
{
"option": "--quiet",
"shortcut": "-q",
"description": "suppress all logging output",
"value": "",
"default": ""
},
{
"option": "--verbose",
"shortcut": "-v",
"description": "increase verbosity (-v = info, -vv = debug)",
"value": "count",
"default": ""
}
],
"aliases": [],
"name": "syft convert",
"raw_help_text": "[Experimental] Convert SBOM files to, and from, SPDX, CycloneDX and Syft's format. For more info about data loss between formats see https://github.com/anchore/syft/wiki/format-conversion\n\nUsage:\n syft convert [SOURCE-SBOM] -o [FORMAT] [flags]\n\nExamples:\n syft convert img.syft.json -o spdx-json convert a syft SBOM to spdx-json, output goes to stdout\n syft convert img.syft.json -o cyclonedx-json=img.cdx.json convert a syft SBOM to CycloneDX, output is written to the file \"img.cdx.json\"\n syft convert - -o spdx-json convert an SBOM from STDIN to spdx-json\n\n\nFlags:\n --file string file to write the default report output to (default is STDOUT) (DEPRECATED: use: --output FORMAT=PATH)\n -h, --help help for convert\n -o, --output stringArray report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])\n -t, --template string specify the path to a Go template file\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "Convert between SBOM formats"
},
{
"subcommands": [],
"options": [
{
"option": "--help",
"shortcut": "-h",
"description": "help for login"
},
{
"option": "--password",
"shortcut": "-p",
"description": "Password",
"value": "string"
},
{
"option": "--password-stdin",
"description": "Take the password from stdin"
},
{
"option": "--username",
"shortcut": "-u",
"description": "Username",
"value": "string"
},
{
"option": "--config",
"shortcut": "-c",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "--quiet",
"shortcut": "-q",
"description": "suppress all logging output"
},
{
"option": "--verbose",
"shortcut": "-v",
"description": "increase verbosity",
"value": "count"
}
],
"aliases": [],
"name": "syft login",
"raw_help_text": "Log in to a registry\n\nUsage:\n syft login [OPTIONS] [SERVER] [flags]\n\nExamples:\n # Log in to reg.example.com\n syft login reg.example.com -u AzureDiamond -p hunter2\n\nFlags:\n -h, --help help for login\n -p, --password string Password\n --password-stdin Take the password from stdin\n -u, --username string Username\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "Log in to a registry"
},
{
"subcommands": [],
"options": [
{
"option": "--base-path",
"description": "base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory"
},
{
"option": "--enrich",
"description": "enable package data enrichment from local and online sources (options: all, golang, java, javascript)"
},
{
"option": "--exclude",
"description": "exclude paths from being scanned using a glob expression"
},
{
"option": "--file",
"description": "file to write the default report output to (default is STDOUT) (DEPRECATED: use: --output FORMAT=PATH)"
},
{
"option": "--from",
"description": "specify the source behavior to use (e.g. docker, registry, oci-dir, ...)"
},
{
"option": "-h, --help",
"description": "help for scan"
},
{
"option": "-o, --output",
"description": "report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])"
},
{
"option": "--override-default-catalogers",
"description": "set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)"
},
{
"option": "--parallelism",
"description": "number of cataloger workers to run in parallel"
},
{
"option": "--platform",
"description": "an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')"
},
{
"option": "-s, --scope",
"description": "selection of layers to catalog, options=[squashed all-layers deep-squashed] (default \"squashed\")"
},
{
"option": "--select-catalogers",
"description": "add, remove, and filter the catalogers to be used"
},
{
"option": "--source-name",
"description": "set the name of the target being analyzed"
},
{
"option": "--source-supplier",
"description": "the organization that supplied the component, which often may be the manufacturer, distributor, or repackager"
},
{
"option": "--source-version",
"description": "set the version of the target being analyzed"
},
{
"option": "-t, --template",
"description": "specify the path to a Go template file"
},
{
"option": "-c, --config",
"description": "syft configuration file(s) to use"
},
{
"option": "--profile",
"description": "configuration profiles to use"
},
{
"option": "-q, --quiet",
"description": "suppress all logging output"
},
{
"option": "-v, --verbose",
"description": "increase verbosity (-v = info, -vv = debug)"
}
],
"aliases": [],
"name": "syft scan",
"raw_help_text": "Generate a packaged-based Software Bill Of Materials (SBOM) from container images and filesystems\n\nUsage:\n syft scan [SOURCE] [flags]\n\nExamples:\n syft scan alpine:latest a summary of discovered packages\n syft scan alpine:latest -o json show all possible cataloging details\n syft scan alpine:latest -o cyclonedx show a CycloneDX formatted SBOM\n syft scan alpine:latest -o cyclonedx-json show a CycloneDX JSON formatted SBOM\n syft scan alpine:latest -o spdx show a SPDX 2.3 Tag-Value formatted SBOM\n syft scan alpine:latest -o spdx@2.2 show a SPDX 2.2 Tag-Value formatted SBOM\n syft scan alpine:latest -o spdx-json show a SPDX 2.3 JSON formatted SBOM\n syft scan alpine:latest -o spdx-json@2.2 show a SPDX 2.2 JSON formatted SBOM\n syft scan alpine:latest -vv show verbose debug information\n syft scan alpine:latest -o template -t my_format.tmpl show a SBOM formatted according to given template file\n\n Supports the following image sources:\n syft scan yourrepo/yourimage:tag defaults to using images from a Docker daemon. If Docker is not present, the image is pulled directly from the registry.\n syft scan path/to/a/file/or/dir a Docker tar, OCI tar, OCI directory, SIF container, or generic filesystem directory\n\n You can also explicitly specify the scheme to use:\n syft scan docker:yourrepo/yourimage:tag explicitly use the Docker daemon\n syft scan podman:yourrepo/yourimage:tag explicitly use the Podman daemon\n syft scan registry:yourrepo/yourimage:tag pull image directly from a registry (no container runtime required)\n syft scan docker-archive:path/to/yourimage.tar use a tarball from disk for archives created from \"docker save\"\n syft scan oci-archive:path/to/yourimage.tar use a tarball from disk for OCI archives (from Skopeo or otherwise)\n syft scan oci-dir:path/to/yourimage read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)\n syft scan singularity:path/to/yourimage.sif read directly from a Singularity Image Format (SIF) container on disk\n syft scan dir:path/to/yourproject read directly from a path on disk (any directory)\n syft scan file:path/to/yourproject/file read directly from a path on disk (any single file)\n\n\nFlags:\n --base-path string base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory\n --enrich stringArray enable package data enrichment from local and online sources (options: all, golang, java, javascript)\n --exclude stringArray exclude paths from being scanned using a glob expression\n --file string file to write the default report output to (default is STDOUT) (DEPRECATED: use: --output FORMAT=PATH)\n --from stringArray specify the source behavior to use (e.g. docker, registry, oci-dir, ...)\n -h, --help help for scan\n -o, --output stringArray report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])\n --override-default-catalogers stringArray set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)\n --parallelism int number of cataloger workers to run in parallel\n --platform string an optional platform specifier for container image sources (e.g. 
'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')\n -s, --scope string selection of layers to catalog, options=[squashed all-layers deep-squashed] (default \"squashed\")\n --select-catalogers stringArray add, remove, and filter the catalogers to be used\n --source-name string set the name of the target being analyzed\n --source-supplier string the organization that supplied the component, which often may be the manufacturer, distributor, or repackager\n --source-version string set the version of the target being analyzed\n -t, --template string specify the path to a Go template file\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "Generate an SBOM"
},
{
"subcommands": [],
"options": [
{
"option": "--help",
"shortcut": "-h",
"description": "help for version"
},
{
"option": "--output",
"shortcut": "-o",
"description": "the format to show the results (allowable: [text json])",
"value": "string",
"default": "text"
},
{
"option": "--config",
"shortcut": "-c",
"description": "syft configuration file(s) to use",
"value": "stringArray"
},
{
"option": "--profile",
"description": "configuration profiles to use",
"value": "stringArray"
},
{
"option": "--quiet",
"shortcut": "-q",
"description": "suppress all logging output"
},
{
"option": "--verbose",
"shortcut": "-v",
"description": "increase verbosity (-v = info, -vv = debug)",
"value": "count"
}
],
"aliases": [],
"name": "syft version",
"raw_help_text": "show version information\n\nUsage:\n syft version [flags]\n\nFlags:\n -h, --help help for version\n -o, --output string the format to show the results (allowable: [text json]) (default \"text\")\n\nGlobal Flags:\n -c, --config stringArray syft configuration file(s) to use\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -v, --verbose count increase verbosity (-v = info, -vv = debug)",
"description": "show version information"
}
],
"options": [
{
"option": "--base-path",
"description": "base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory"
},
{
"option": "-c, --config",
"description": "syft configuration file(s) to use"
},
{
"option": "--enrich",
"description": "enable package data enrichment from local and online sources (options: all, golang, java, javascript)"
},
{
"option": "--exclude",
"description": "exclude paths from being scanned using a glob expression"
},
{
"option": "--file",
"description": "file to write the default report output to (default is STDOUT) (DEPRECATED: use: --output FORMAT=PATH)"
},
{
"option": "--from",
"description": "specify the source behavior to use (e.g. docker, registry, oci-dir, ...)"
},
{
"option": "-h, --help",
"description": "help for syft"
},
{
"option": "-o, --output",
"description": "report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])"
},
{
"option": "--override-default-catalogers",
"description": "set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)"
},
{
"option": "--parallelism",
"description": "number of cataloger workers to run in parallel"
},
{
"option": "--platform",
"description": "an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')"
},
{
"option": "--profile",
"description": "configuration profiles to use"
},
{
"option": "-q, --quiet",
"description": "suppress all logging output"
},
{
"option": "-s, --scope",
"description": "selection of layers to catalog, options=[squashed all-layers deep-squashed] (default \"squashed\")"
},
{
"option": "--select-catalogers",
"description": "add, remove, and filter the catalogers to be used"
},
{
"option": "--source-name",
"description": "set the name of the target being analyzed"
},
{
"option": "--source-supplier",
"description": "the organization that supplied the component, which often may be the manufacturer, distributor, or repackager"
},
{
"option": "--source-version",
"description": "set the version of the target being analyzed"
},
{
"option": "-t, --template",
"description": "specify the path to a Go template file"
},
{
"option": "-v, --verbose",
"description": "increase verbosity (-v = info, -vv = debug)"
},
{
"option": "--version",
"description": "version for syft"
}
],
"aliases": [],
"name": "syft",
"raw_help_text": "Generate a packaged-based Software Bill Of Materials (SBOM) from container images and filesystems\n\nUsage:\n syft [SOURCE] [flags]\n syft [command]\n\nExamples:\n syft scan alpine:latest a summary of discovered packages\n syft scan alpine:latest -o json show all possible cataloging details\n syft scan alpine:latest -o cyclonedx show a CycloneDX formatted SBOM\n syft scan alpine:latest -o cyclonedx-json show a CycloneDX JSON formatted SBOM\n syft scan alpine:latest -o spdx show a SPDX 2.3 Tag-Value formatted SBOM\n syft scan alpine:latest -o spdx@2.2 show a SPDX 2.2 Tag-Value formatted SBOM\n syft scan alpine:latest -o spdx-json show a SPDX 2.3 JSON formatted SBOM\n syft scan alpine:latest -o spdx-json@2.2 show a SPDX 2.2 JSON formatted SBOM\n syft scan alpine:latest -vv show verbose debug information\n syft scan alpine:latest -o template -t my_format.tmpl show a SBOM formatted according to given template file\n\n Supports the following image sources:\n syft scan yourrepo/yourimage:tag defaults to using images from a Docker daemon. If Docker is not present, the image is pulled directly from the registry.\n syft scan path/to/a/file/or/dir a Docker tar, OCI tar, OCI directory, SIF container, or generic filesystem directory\n\n You can also explicitly specify the scheme to use:\n syft scan docker:yourrepo/yourimage:tag explicitly use the Docker daemon\n syft scan podman:yourrepo/yourimage:tag explicitly use the Podman daemon\n syft scan registry:yourrepo/yourimage:tag pull image directly from a registry (no container runtime required)\n syft scan docker-archive:path/to/yourimage.tar use a tarball from disk for archives created from \"docker save\"\n syft scan oci-archive:path/to/yourimage.tar use a tarball from disk for OCI archives (from Skopeo or otherwise)\n syft scan oci-dir:path/to/yourimage read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)\n syft scan singularity:path/to/yourimage.sif read directly from a Singularity Image Format (SIF) container on disk\n syft scan dir:path/to/yourproject read directly from a path on disk (any directory)\n syft scan file:path/to/yourproject/file read directly from a path on disk (any single file)\n\n\nAvailable Commands:\n attest Generate an SBOM as an attestation for the given [SOURCE] container image\n cataloger Show available catalogers and configuration\n completion Generate the autocompletion script for the specified shell\n config show the syft configuration\n convert Convert between SBOM formats\n help Help about any command\n login Log in to a registry\n scan Generate an SBOM\n version show version information\n\nFlags:\n --base-path string base directory for scanning, no links will be followed above this directory, and all paths will be reported relative to this directory\n -c, --config stringArray syft configuration file(s) to use\n --enrich stringArray enable package data enrichment from local and online sources (options: all, golang, java, javascript)\n --exclude stringArray exclude paths from being scanned using a glob expression\n --file string file to write the default report output to (default is STDOUT) (DEPRECATED: use: --output FORMAT=PATH)\n --from stringArray specify the source behavior to use (e.g. 
docker, registry, oci-dir, ...)\n -h, --help help for syft\n -o, --output stringArray report output format (<format>=<file> to output to a file), formats=[cyclonedx-json cyclonedx-xml github-json purls spdx-json spdx-tag-value syft-json syft-table syft-text template] (default [syft-table])\n --override-default-catalogers stringArray set the base set of catalogers to use (defaults to 'image' or 'directory' depending on the scan source)\n --parallelism int number of cataloger workers to run in parallel\n --platform string an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux')\n --profile stringArray configuration profiles to use\n -q, --quiet suppress all logging output\n -s, --scope string selection of layers to catalog, options=[squashed all-layers deep-squashed] (default \"squashed\")\n --select-catalogers stringArray add, remove, and filter the catalogers to be used\n --source-name string set the name of the target being analyzed\n --source-supplier string the organization that supplied the component, which often may be the manufacturer, distributor, or repackager\n --source-version string set the version of the target being analyzed\n -t, --template string specify the path to a Go template file\n -v, --verbose count increase verbosity (-v = info, -vv = debug)\n --version version for syft\n\nUse \"syft [command] --help\" for more information about a command.",
"version": "1.34.1"

⚠️ Potential issue | 🟠 Major

Standardize JSON schema for option objects across all commands.

The option objects in this file use inconsistent schemas that vary across different commands:

  • Simple format (68 occurrences): Only "option" and "description" fields
  • Full format (25 occurrences): "option", "shortcut", "value", "default", and "description"
  • Partial format (4 occurrences): Combinations of the above

Additionally, the "shortcut" field has inconsistent value representations: null, empty strings, populated strings, or completely absent. Option names also vary between combined forms (e.g., "-h, --help") and separated forms (option + shortcut fields).

This schema inconsistency complicates programmatic processing. Establish a canonical option object schema and apply it consistently across all commands—either always include optional fields with null defaults, or always omit them entirely. Separate combined option forms into distinct fields.

🤖 Prompt for AI Agents
In data/results/dev_tools/syft-v1.34.1-debug.json lines 1-813, option objects
use multiple inconsistent schemas (combined "-h, --help" forms, missing
shortcut/value/default fields, null vs empty string) which breaks programmatic
parsing; normalize every option object to a canonical schema with the fields:
"option" (primary long form as string), "shortcut" (single short form or null),
"description" (string), "value" (string or null), and "default" (string or
null); split any combined option strings (e.g. "-h, --help") into
"option":"--help" and "shortcut":"-h", and for objects missing optional fields
add them with null values, ensuring all option objects across the file follow
this exact structure.
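
A sketch of that canonicalization (not part of the PR; the splitting heuristic assumes combined forms always separate the short and long flags with a comma):

import json

def canonical(opt):
    option = (opt.get("option") or "").strip()
    shortcut = opt.get("shortcut") or None
    if "," in option:  # combined form, e.g. "-h, --help"
        first, second = (part.strip() for part in option.split(",", 1))
        option, shortcut = (second, first) if second.startswith("--") else (first, second)
    elif shortcut and shortcut.startswith("--") and not option.startswith("--"):
        option, shortcut = shortcut, option  # reversed convention: swap back
    return {
        "option": option or None,
        "shortcut": shortcut or None,
        "description": opt.get("description"),
        "value": opt.get("value") or None,
        "default": opt.get("default") or None,
    }

def walk(cmd):
    cmd["options"] = [canonical(o) for o in cmd.get("options", [])]
    for sub in cmd.get("subcommands", []):
        walk(sub)
    return cmd

with open("data/results/dev_tools/syft-v1.34.1-debug.json") as fh:
    print(json.dumps(walk(json.load(fh)), indent=2))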

@O1ahmad O1ahmad merged commit cc44ede into main Dec 9, 2025
3 checks passed