Description
Feature hasn't been suggested before.
- I have verified this feature I'm about to request hasn't been suggested before.
Describe the enhancement you want to request
Problem
OpenCode's prompts are verbose and sometimes contradictory, causing:
- Unnecessary token consumption
- Unpredictable LLM outputs
- Difficulty maintaining consistency
Core Solution: Two Complementary Approaches
1. Principle-Based Constraints
Instead of spelling out desired behaviors explicitly, reference established design principles that are already well represented in LLM training data:
- UNIX Philosophy (for modular, single-purpose components)
- KISS Principle (for simplicity)
- YAGNI (to avoid over-engineering)
- SOLID (for object-oriented design)
Example:
Instead of: "Make it simple, don't add unnecessary features, focus on one thing..."
Use: "Apply KISS and YAGNI principles."
Benefits:
- Reduces token count by 60-80%
- Eliminates instruction conflicts via internally consistent frameworks
- Leverages the LLM's existing knowledge
2. DSL Context Compression
Create structured templates to filter noise in extended conversations:
[CONTEXT_SUMMARY]
CORE_ISSUE:: <main problem>
KEY_POINTS:: <bullet points>
ACTION_ITEMS:: <next steps>
[END_SUMMARY]
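A side benefit of a fixed template is that the summary can also be handled mechanically. A minimal sketch of a parser for the block above (the `parseContextSummary` helper is hypothetical; the field names follow the template):

```typescript
// Hypothetical helper: extract the fields of a [CONTEXT_SUMMARY] block.
interface ContextSummary {
  coreIssue: string;
  keyPoints: string;
  actionItems: string;
}

function parseContextSummary(text: string): ContextSummary | null {
  const block = text.match(/\[CONTEXT_SUMMARY\]([\s\S]*?)\[END_SUMMARY\]/);
  if (!block) return null;

  // Each field starts with "NAME::" and runs until the next field or the end.
  const field = (name: string): string => {
    const m = block[1].match(new RegExp(`${name}::([\\s\\S]*?)(?=\\n\\w+::|$)`));
    return m ? m[1].trim() : "";
  };

  return {
    coreIssue: field("CORE_ISSUE"),
    keyPoints: field("KEY_POINTS"),
    actionItems: field("ACTION_ITEMS"),
  };
}
```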
Information Theory Rationale:
- Acts as an entropy-reducing encoder
- Implements lossy compression that preserves the semantic essence
- Filters low-information noise
Why This Works
Both methods address root causes:
- Principles compress complex concepts into single references
- DSL templates enforce structure, eliminating ambiguity
- Together they create concise, predictable prompts
Suggested First Steps
- Audit current prompts for the most redundant sections
- Replace verbose descriptions with principle references
- Design 2-3 DSL templates for common workflows (see the sketch after this list)
- Test the compressed prompts on critical paths
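As referenced in the templates step above, one of the 2-3 DSL formats might look like the following sketch; the workflow and field names are invented for illustration and simply mirror the [CONTEXT_SUMMARY] structure:

```typescript
// Hypothetical second template aimed at a debugging workflow.
// Same conventions as [CONTEXT_SUMMARY]: fixed start/end markers, NAME:: fields.
const BUG_TRIAGE_TEMPLATE = `
[BUG_TRIAGE]
SYMPTOM:: <observed behavior>
SUSPECTED_CAUSE:: <current best hypothesis>
FILES_TOUCHED:: <paths already inspected or edited>
NEXT_CHECK:: <single next verification step>
[END_TRIAGE]
`;
```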
This approach transforms prompt engineering from art to science—reducing costs while improving output quality.
Additional Context
The specific principles, paradigms, or conventions mentioned are not exhaustive or exclusive—they are examples of a broader pattern. You can search for and adopt any well-established, widely recognized design principles, methodologies, or standards relevant to your domain (e.g., "separation of concerns," "immutable architecture," "12-factor app"). The key is leveraging consensus-based constraints that exist within the LLM's training corpus. This approach reduces rule conflicts inherent in human language and minimizes token waste caused by over-explanation.
Additionally, custom DSLs can be co-designed with the LLM itself. After several rounds of discussion, you can instruct the LLM to summarize the conversation using a mutually agreed-upon DSL format. This practice effectively filters noise and compresses context, saving tokens for subsequent interactions.
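A minimal sketch of that final summarization step (the instruction wording and the `buildSummaryRequest` helper are assumptions, not an existing OpenCode API):

```typescript
// Hypothetical helper: ask the model to compress the conversation so far
// into the previously agreed [CONTEXT_SUMMARY] format.
function buildSummaryRequest(): string {
  return [
    "Summarize our conversation so far using exactly this format:",
    "[CONTEXT_SUMMARY]",
    "CORE_ISSUE:: <main problem>",
    "KEY_POINTS:: <bullet points>",
    "ACTION_ITEMS:: <next steps>",
    "[END_SUMMARY]",
    "Output only the summary block, nothing else.",
  ].join("\n");
}
```

The returned string is sent as an ordinary message; the model's reply can then stand in for the older turns in later requests, which is where the token savings come from.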