[FEATURE]: Optimize prompts using principle constraints and DSL compression #6249

@d0lwl0b

Description

Feature hasn't been suggested before.

  • I have verified this feature I'm about to request hasn't been suggested before.

Describe the enhancement you want to request

Problem

OpenCode's prompts are verbose and sometimes contradictory, causing:

  • Unnecessary token consumption
  • Unpredictable LLM outputs
  • Difficulty maintaining consistency

Core Solution: Two Complementary Approaches

1. Principle-Based Constraints

Instead of describing behaviors explicitly, reference established design principles from LLM training data:

  • UNIX Philosophy (for modular, single-purpose components)
  • KISS Principle (for simplicity)
  • YAGNI (to avoid over-engineering)
  • SOLID (for object-oriented design)

Example:
Instead of: "Make it simple, don't add unnecessary features, focus on one thing..."
Use: "Apply KISS and YAGNI principles."

Benefits:

  • Reduces token count by 60-80%
  • Eliminates instruction conflicts via internally consistent frameworks
  • Leverages LLM's existing knowledge
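As a rough illustration of the first benefit, the sketch below replaces a verbose behavioral instruction with principle references and estimates the savings. Both prompt strings are invented examples (not OpenCode's actual prompts), and the whitespace word count is only a crude proxy for real tokenizer counts.

```python
# Sketch: replacing a verbose behavioral instruction with principle references.
# Both prompt strings are hypothetical examples, not OpenCode's actual prompts.

VERBOSE = (
    "Make it simple, don't add unnecessary features, focus on one thing, "
    "avoid speculative abstractions, and keep each component small and "
    "single-purpose so the design stays easy to reason about."
)

PRINCIPLED = "Apply KISS, YAGNI, and the UNIX Philosophy."

def rough_tokens(text: str) -> int:
    """Crude token estimate (whitespace split); real counts need a tokenizer."""
    return len(text.split())

savings = 1 - rough_tokens(PRINCIPLED) / rough_tokens(VERBOSE)
print(f"approx. token reduction: {savings:.0%}")  # around 76% for this pair
```

For this particular pair the reduction lands in the 60-80% range claimed above, though real savings will vary with the prompt being compressed.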

2. DSL Context Compression

Create structured templates to filter noise in extended conversations:

```
[CONTEXT_SUMMARY]
CORE_ISSUE:: <main problem>
KEY_POINTS:: <bullet points>
ACTION_ITEMS:: <next steps>
[END_SUMMARY]
```

Information Theory Rationale:

  • Acts as entropy-reducing encoder
  • Implements lossy compression preserving semantic essence
  • Filters low-information noise
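The [CONTEXT_SUMMARY] template above can be sketched as a render/parse pair. The field names follow the example; the helper names, regex, and sample values are illustrative, not an existing API.

```python
import re

# Sketch: render a summary into the [CONTEXT_SUMMARY] DSL and parse it back.
# Helper names and sample values are illustrative only.

TEMPLATE = """[CONTEXT_SUMMARY]
CORE_ISSUE:: {core_issue}
KEY_POINTS:: {key_points}
ACTION_ITEMS:: {action_items}
[END_SUMMARY]"""

def render(core_issue, key_points, action_items):
    return TEMPLATE.format(
        core_issue=core_issue,
        key_points="; ".join(key_points),
        action_items="; ".join(action_items),
    )

def parse(block: str) -> dict:
    """Extract FIELD:: value pairs from a summary block."""
    return dict(re.findall(r"^(\w+):: (.*)$", block, flags=re.MULTILINE))

summary = render(
    "prompts are verbose and contradictory",
    ["token waste", "unpredictable outputs"],
    ["audit prompts", "design DSL templates"],
)
print(parse(summary)["CORE_ISSUE"])  # prompts are verbose and contradictory
```

Because the format is machine-parseable, the same template can be validated before it is fed back into the next turn, which is what makes it a lossy-but-structured compressor rather than free-form prose.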

Why This Works

Both methods address root causes:

  • Principles compress complex concepts into single references
  • DSL templates enforce structure, eliminating ambiguity
  • Together they create concise, predictable prompts

Suggested First Steps

  1. Audit current prompts for most redundant sections
  2. Replace verbose descriptions with principle references
  3. Design 2-3 DSL templates for common workflows
  4. Test with critical paths
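Step 1 (the redundancy audit) could start as simply as flagging near-duplicate sentences across prompt files. A minimal sketch using `difflib`; the sample sentences and the similarity threshold are invented, and real prompts would need tuning.

```python
import difflib
from itertools import combinations

# Sketch for step 1 (auditing prompts for redundancy): flag sentence pairs
# that are near-duplicates. Sample sentences are invented; in practice they
# would be extracted from the actual prompt files.

sentences = [
    "Keep your answers short and concise.",
    "Keep answers short and to the point.",
    "Always run the tests before committing.",
]

for a, b in combinations(sentences, 2):
    ratio = difflib.SequenceMatcher(None, a, b).ratio()
    if ratio > 0.6:  # threshold is a guess; tune on real prompts
        print(f"{ratio:.2f}: {a!r} ~ {b!r}")
```

Pairs flagged this way are candidates for step 2: collapsing both variants into a single principle reference.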

This approach transforms prompt engineering from art to science: reducing costs while improving output quality.


Additional Context

The specific principles, paradigms, or conventions mentioned are not exhaustive or exclusive—they are examples of a broader pattern. You can search for and adopt any well-established, widely recognized design principles, methodologies, or standards relevant to your domain (e.g., "separation of concerns," "immutable architecture," "12-factor app"). The key is leveraging consensus-based constraints that exist within the LLM's training corpus. This approach reduces rule conflicts inherent in human language and minimizes token waste caused by over-explanation.

Additionally, custom DSLs can be co-designed with the LLM itself. After several rounds of discussion, you can instruct the LLM to summarize the conversation using a mutually agreed-upon DSL format. This practice effectively filters noise and compresses context, saving tokens for subsequent interactions.
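The second half of that loop, asking the model to compress the conversation into the agreed format, can be sketched as a one-line prompt builder. The prompt wording is illustrative only; the DSL spec reuses the template from above.

```python
# Sketch of the co-design loop's compression step: after agreeing on a DSL
# with the model, ask it to summarize the conversation in that format.
# The prompt wording is illustrative only.

def compression_prompt(dsl_spec: str) -> str:
    return (
        "Summarize our conversation so far using exactly this DSL, "
        "with no prose outside the template:\n\n" + dsl_spec
    )

DSL_SPEC = """[CONTEXT_SUMMARY]
CORE_ISSUE:: <main problem>
KEY_POINTS:: <bullet points>
ACTION_ITEMS:: <next steps>
[END_SUMMARY]"""

print(compression_prompt(DSL_SPEC))
```

The returned summary block then replaces the raw conversation history in subsequent turns, which is where the token savings accrue.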

Metadata

Labels

discussion — Used for feature requests, proposals, ideas, etc.
