Author: Marco Siccardi (MSiccDev Software Development)
Purpose: A structured instruction framework for maintaining consistent, context-aware AI collaboration across different LLM providers, projects, and development phases.
This repository provides a comprehensive AI instruction and workspace configuration system designed to enable consistent, context-aware AI collaboration across multiple projects and platforms.
What began as a way to extract and reuse prompts across AI providers has evolved into a sophisticated instruction-based architecture for AI collaboration:
- Not just prompts – These are persistent instruction sets that define working context
- Layered architecture – Personal user context + project-specific instructions create complete AI workspace configurations
- Provider-agnostic – Works seamlessly across different LLM environments
This framework consists of:
- Personal user context instructions – Your professional identity, skills, preferences, and working style
- Project-specific instructions – Scope, tech stack, roles, objectives, and guidelines per project
- Session specification – How AI assistants should maintain and adapt context during work sessions
- Templates – For creating new instruction sets quickly and consistently
All components work seamlessly across different LLM environments (Anthropic Claude, GitHub Copilot, Mistral, Gemini, LM Studio, Ollama, etc.), ensuring that every AI assistant understands your background, working style, and project context without repeated explanations. Please note that results may vary by platform, and you may need to adjust your instructions accordingly.
```
ai-context-kit/
│
├── README.md                                        # This file
├── LICENSE.md                                       # MIT License file
│
├── projects/
│   └── project1_project.instructions.md             # Example project-specific instructions
│
├── prompts/                                         # Provider-agnostic prompt files
│   ├── create-usercontext-instructions.prompt.md    # Generate user context instruction files
│   ├── create-project-instructions.prompt.md        # Generate project instruction files
│   ├── validate-usercontext-instructions.prompt.md  # Validate user context files
│   └── validate-project-instructions.prompt.md      # Validate project files
│
├── specs/
│   └── context_aware_ai_session_spec.md             # Specification for AI session management
│
└── templates/
    ├── usercontext_template.instructions.md         # Canonical v1.2 user context template (authoritative)
    └── project_template.instructions.md             # Canonical v1.2 project template (authoritative)
```
Instructions are persistent context and guidelines that define:
- WHO you are (user context)
- WHAT the project is (project context)
- HOW the AI should behave (roles, phases, preferences)
Prompts are your day-to-day requests within that instructed environment:
- "Create a new API endpoint"
- "Review this code for security issues"
- "Switch to Developer Mode and implement this feature"
This repository provides the instruction layer that makes your prompts more effective.
It also contains a prompt system for creating and validating those instruction files, ensuring they are complete and compliant with the specification.
The templates located in /templates are the single authoritative source for instruction structure:
- `templates/usercontext_template.instructions.md`
- `templates/project_template.instructions.md`
These templates are:
- Fully aligned with the Context-Aware AI Session Flow Specification v1.2
- The exact structures generated by the creation prompts
- The exact structures enforced by the validation prompts
There are no alternate or “light” templates.
If a file validates successfully, it is structurally correct by definition.
User context instructions are your foundational AI context and include:
- Professional background and current role
- Technical skills and expertise areas
- Active projects and goals
- Preferred working style and communication preferences
- Constraints and limitations
Purpose: Serves as the base instruction layer that AI assistants load first to understand who you are and how you work globally.
Project instructions are project-specific instruction sets that define:
- Project scope and objectives
- Technology stack and architecture
- Recommended AI roles (Architect, Developer, Designer, etc.)
- Default work phases (Planning, Implementation, Debugging, Review)
- Project-specific constraints and guidelines
Purpose: Provides focused, project-specific instructions that layer on top of your user context to create complete working context.
The session specification defines a structured approach to AI collaboration that manages context dynamically:
| Element | Description | Example Values |
|---|---|---|
| User Context | Your identity, skills, and preferences | Defined in your user context instructions |
| Project | Active domain or codebase | "Mobile UI app", "Backend API" |
| Role/Mode | AI's cognitive stance | Architect, Developer, Designer, Reviewer |
| Phase | Current work stage | Planning, Implementation, Debugging, Review |
| Output Style | Response verbosity | Step-by-step, Minimal code, Annotated |
| Tone | Communication voice | Analytical, Direct, Encouraging |
| Interaction Mode | AI proactivity level | Advisory, Pair-programming, Driver |
Purpose: Ensures AI behavior adapts appropriately as you move through different stages of work.
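For illustration only (the field names below simply mirror the table above; the authoritative element definitions live in the session specification and templates), such a session state could be modeled as a small data structure:

```python
# Illustrative sketch: session state elements mirroring the table above.
# Field names and defaults are assumptions, not part of the specification.
from dataclasses import dataclass


@dataclass
class SessionState:
    user_context: str = "yourname_usercontext.instructions.md"  # base instruction layer
    project: str = "Mobile UI app"
    role: str = "Architect"             # Architect, Developer, Designer, Reviewer
    phase: str = "Planning"             # Planning, Implementation, Debugging, Review
    output_style: str = "Step-by-step"  # Step-by-step, Minimal code, Annotated
    tone: str = "Analytical"            # Analytical, Direct, Encouraging
    interaction_mode: str = "Advisory"  # Advisory, Pair-programming, Driver


state = SessionState(role="Developer", phase="Implementation")
print(state)
```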
This repository relies on stable, predictable file paths so that instructions, specifications, prompts, and validators can reference each other safely.
The following paths are considered canonical:
- `templates/` – Canonical instruction templates (spec v1.2)
- `prompts/` – Instruction creation and validation prompts
- `specs/context_aware_ai_session_spec.md` – Authoritative specification (v1.2+)
- `projects/` – Project-specific instruction files
- Root `README.md` – Human-facing entry point and workflow documentation
- Do not rename or move these directories without updating:
  - README references
  - specification cross-references
  - validation prompts
- Instruction files should reference the specification by relative path, not URL
- Validators and generators assume these paths by convention
If paths must change, update the specification and README first, then adjust prompts and validators accordingly.
1. Create Your User Context:
   - Manual (canonical): Copy `templates/usercontext_template.instructions.md` (spec v1.2), fill in your details, and save as `yourname_usercontext.instructions.md`
   - AI-Assisted: Use `prompts/create-usercontext-instructions.prompt.md` with your preferred AI assistant for guided creation

2. Create Project Instructions:
   - Manual (canonical): Copy `templates/project_template.instructions.md` (spec v1.2) and define your project
   - AI-Assisted: Use `prompts/create-project-instructions.prompt.md` for guided project setup
   - Save in the `projects/` folder with descriptive names (e.g., `projectname_project.instructions.md`)

3. Validate Your Instructions (Optional but Recommended):
   - Use `prompts/validate-usercontext-instructions.prompt.md` to check your user context file
   - Use `prompts/validate-project-instructions.prompt.md` to check your project files
   - Validation creates a `.validation.md` report with scoring and recommendations

4. Load Into Your AI Environment:
   - See platform-specific instructions below
   - Load your user context instructions as the base context
   - Add the relevant project instructions on top
   - The AI will maintain state across your work session
If you manage your instruction files centrally in this repository, you can link them into a project repo using symlinks.
Example:
```bash
mkdir -p /path/to/your-project/.github/instructions

ln -s /path/to/your-instructions/projects/your_project.instructions.md \
      /path/to/your-project/.github/instructions/your_project.instructions.md

ln -s /path/to/your-instructions/projects/your_project.validation.md \
      /path/to/your-project/.github/instructions/your_project.validation.md
```

Use absolute paths to keep the links stable.
| Platform | Method |
|---|---|
| Anthropic Claude Projects | Paste user context + project instructions into Project Instructions and/or add to Project Knowledge |
| GitHub Copilot (VS Code/IDE) | Create .github/copilot-instructions.md, or add .instructions.md files under .github/instructions/; Copilot reads them automatically |
| LM Studio / Ollama | Save .instructions.md files as system prompts or instruction presets |
| OpenAI ChatGPT | Paste into Custom Instructions (user context) and upload project instructions as file |
| Gemini | Paste into chat or use a system instruction (Gemini API / AI Studio) |
| Local scripts / APIs | Concatenate user context + project instructions when initializing conversations (see the sketch below the table) |
| IDE integrations | Reference .instructions.md files in config or load via custom extensions |
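For the "Local scripts / APIs" row, here is a minimal sketch of the layering idea; the file names and the `{"role": ..., "content": ...}` message format are assumptions you would adapt to your provider's SDK:

```python
# Minimal sketch: layer user context + project instructions into one system prompt.
# File names are placeholders; the message structure follows the common
# chat-completion convention and may differ per provider.
from pathlib import Path

user_context = Path("yourname_usercontext.instructions.md").read_text(encoding="utf-8")
project_instructions = Path("projects/projectname_project.instructions.md").read_text(encoding="utf-8")

# User context is the base layer; project instructions go on top.
system_prompt = f"{user_context}\n\n{project_instructions}"

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Switch to Developer Mode and implement this feature."},
]
# Pass `messages` to your provider's chat API when initializing the conversation.
```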
You can modify session state dynamically using:
- Natural language: "Switch to Developer Mode" or "Move to Implementation Phase"
- Commands: `/ack.mode developer`, `/ack.phase implementation`, `/ack.context` (shows current state); a parsing sketch follows below
- Command namespace: Projects define a namespace prefix to avoid collisions (e.g., `/ack.context`, `/ack.mode developer`)
- Project defaults: Each project can define typical starting configurations
The session flow follows these core principles:
- Determinism: Same context + same query = consistent responses
- Explicitness: AI confirms context changes rather than assuming
- Continuity: Session state persists across conversation turns
- Reversibility: All context changes can be undone
- Transparency: Current context is always visible on request
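As a non-authoritative illustration of the command namespace (assuming the `/ack.` prefix from the examples above), a client-side helper might split such commands like this:

```python
# Illustrative parser for namespaced session commands such as
# "/ack.mode developer", "/ack.phase implementation", or "/ack.context".
# The namespace prefix is project-defined; "ack" is just the example used above.
NAMESPACE = "ack"


def parse_command(text: str):
    """Return (command, argument) for a namespaced command, or None otherwise."""
    prefix = f"/{NAMESPACE}."
    if not text.startswith(prefix):
        return None
    command, _, argument = text[len(prefix):].partition(" ")
    return command, argument.strip()


print(parse_command("/ack.mode developer"))        # ('mode', 'developer')
print(parse_command("/ack.phase implementation"))  # ('phase', 'implementation')
print(parse_command("/ack.context"))               # ('context', '')
```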
The user context file includes both human-readable system instructions and machine-readable JSON metadata, ensuring compatibility with various AI platforms.
Key sections to customize:
- About (role, location, ecosystem preferences)
- Projects (with platforms and status)
- Skills (categorized by domain)
- Goals and Constraints
- Preferred working style
- Current focus areas
Each project instruction set should define:
- Project description and tech stack
- Supported AI roles for this project
- Default role and phase
- Phase-specific guidelines
- Output preferences
- Constraints and special considerations
This repository includes a comprehensive prompt system for creating and validating instruction files:
- `prompts/create-usercontext-instructions.prompt.md`
  - 6-phase guided workflow for creating personal user context files
  - Covers 15 required sections including professional background, technical skills, projects, and preferences
  - Supports complex scenarios: dual professional contexts, 10+ projects, certifications, open source goals
  - Generates both markdown and JSON metadata
  - Privacy-conscious with placeholder support

- `prompts/create-project-instructions.prompt.md`
  - 7-phase guided workflow for creating project instruction files
  - Covers 17 required sections per Context-Aware AI Session Flow Specification v1.2
  - Includes session state model (6 elements) and command reference (7 commands)
  - Defines AI roles, phases, and example task patterns
  - Ensures spec compliance from the start

- `prompts/validate-usercontext-instructions.prompt.md`
  - 5-phase validation workflow with 100-point scoring system
  - Validates YAML frontmatter, all 15 required sections, content completeness, and spec v1.2 compliance
  - Generates a `.validation.md` report with pass/fail status, issues, and recommendations
  - Perfect for self-validation, CI/CD integration, and quality assurance

- `prompts/validate-project-instructions.prompt.md`
  - 5-phase validation workflow with 100-point scoring system
  - Validates YAML frontmatter, all 17 required sections, session state model, and role definitions
  - Generates a `.validation.md` report with detailed findings and example fixes
  - Includes common validation scenarios and troubleshooting
- Copy the prompt content into your preferred AI assistant (Claude, GPT, Gemini, Mistral, etc.)
- Follow the guided workflow - the AI will ask questions and gather information
- Review the generated output - creation prompts produce complete instruction files
- Validate your work - use validation prompts to check for completeness and compliance
- Iterate as needed - validation reports provide specific recommendations
Validation prompts create persistent .validation.md files alongside your instruction files:
- Location: Same directory as the validated file
- Naming: `[filename].validation.md` (e.g., `name_surname_usercontext.validation.md`; see the sketch after this list)
- Overwrite: Each validation replaces the previous report
- Format: Comprehensive markdown report with scoring, issues, and recommendations
- Benefits: Easy review for both humans and LLMs, enables chunk-based processing
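As a small illustration of the naming rule (assuming the `.instructions.md` / `.validation.md` suffix convention described in this README):

```python
# Derive the validation report path that sits next to an instruction file,
# following the "[filename].validation.md" naming rule described above.
from pathlib import Path


def validation_report_path(instruction_file: str) -> Path:
    path = Path(instruction_file)
    return path.with_name(path.name.replace(".instructions.md", ".validation.md"))


print(validation_report_path("projects/projectname_project.instructions.md"))
# projects/projectname_project.validation.md
```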
- File format: UTF-8 Markdown
  - `.instructions.md` for both user context and project files (persistent context and guidelines)
  - Actual prompts/queries are what you ask the AI day-to-day within this instructed environment
- Naming: lowercase with underscores (e.g., `yourname_usercontext.instructions.md`, `projectname_project.instructions.md`)
- Structure: Consistent headings and sections across all files
- Languages: Technical content in English; adapt as needed
- Versioning: Update user context when skills/preferences evolve; update project instructions when phases change
- Discoverability: Semantic file extensions help AI tools identify and load the appropriate instructions automatically (see the sketch after this list)
- Canonical structure: The templates in `/templates` define the only supported instruction structure for spec v1.2
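As a quick, purely illustrative sketch of that discoverability (the directory layout matches this repository; the script itself is not part of the framework):

```python
# Discover instruction files by their semantic suffixes.
from pathlib import Path

root = Path(".")  # your instructions repository

user_contexts = sorted(root.glob("*_usercontext.instructions.md"))
project_files = sorted(root.glob("projects/*_project.instructions.md"))

for instruction_file in user_contexts + project_files:
    print(instruction_file)
```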
- Same quality of AI assistance across different platforms
- No need to re-explain your background repeatedly
- AI understands your context from the start
- Faster onboarding when switching projects
- Less cognitive overhead managing AI interactions
- AI behavior adjusts to your current work phase
- Easy to switch between different roles (planning, coding, reviewing)
- Context evolves with your projects
- Works across multiple LLM providers
- Can be versioned and backed up
- Shareable with team members (with appropriate redactions)
This is a GitHub template repository. Here's how to use it:
1. Use the Template:
   - Click the green "Use this template" button on GitHub
   - Choose "Create a new repository"
   - Give it a name (e.g., `my-ai-instructions` or `ai-workspace-config`)
   - Make it Private (recommended - contains personal information!)
   - GitHub will create a fresh copy for you

2. Customize Your Instance:
   - Clone your new repository locally
   - Start with `templates/usercontext_template.instructions.md`
   - Fill in your professional details, skills, and preferences
   - Save as `yourname_usercontext.instructions.md` in the root
   - Create project instructions from `templates/project_template.instructions.md`
   - Save them in the `projects/` folder

3. Keep It Updated:
   - Update your user context as your skills evolve
   - Add new projects as you start them
   - Version control tracks your AI workspace evolution
- Clean history: Your repository starts fresh without this template's history
- Private by default: Easily make your instance private (recommended)
- No upstream confusion: It's your repository, not a fork
- Your data, your control: Personal instructions stay in your private repo
When the template repository gets improvements, here's how to pull them into your instance:
Option 1: Manual Updates (Recommended)
```bash
# Add the template as a remote (one-time setup)
git remote add template https://github.com/MSiccDev/ai-context-kit.git

# Fetch template updates
git fetch template

# Review what changed in the template
git log template/main

# Cherry-pick specific improvements you want
git cherry-pick <commit-hash>

# Or merge specific files manually
git checkout template/main -- README.md
git checkout template/main -- specs/context_aware_ai_session_spec.md
git checkout template/main -- templates/
```

Option 2: Automated Merge (Use with Caution)

```bash
# Merge all template changes
git merge template/main --allow-unrelated-histories

# Resolve conflicts (protect your personal files!)
# Commit the merge
```

Best Practice:
- Watch/star the template repository to get notified of updates
- Review the CHANGELOG or commit history before updating
- Only pull updates that add value to your workflow
- Always protect your personal instruction files - never overwrite them
What to Update:
- ✅ Template files in `templates/`
- ✅ Specification documents in `specs/`
- ✅ README improvements
- ❌ Your personal `*_usercontext.instructions.md`
- ❌ Your project files in `projects/`
Found a bug or have an improvement to the template itself?
- Create an issue in the original template repository
- Submit a pull request with improvements to:
- Template structure
- Documentation clarity
- Specification enhancements
- Example improvements
Note: Never contribute your personal user context or project files - keep those private!
This project is licensed under the MIT License - see the LICENSE file for details.
Note: While the templates and specifications are open source, your personal user context files should remain private and not be shared without redacting sensitive information.
Marco Siccardi – MSiccDev Software Development
This instruction-based system evolved from the challenge of maintaining consistent AI collaboration across multiple platforms and projects.
The Evolution:
- Phase 1: Started as a way to extract and reuse prompts across AI providers
- Phase 2: Evolved into structured, persistent context management
- Phase 3: Matured into a complete instruction-based architecture for AI workspace configuration
What makes this approach powerful is the shift from treating every AI interaction as isolated to creating persistent, layered instruction sets that transform how AI assistants understand and support your work.
This represents lessons learned from extensive work with various LLM providers (Anthropic Claude, GitHub Copilot, OpenAI, Mistral, Gemini, and local models) and real-world development workflows across multiple projects and domains.
Traditional approach:
- User sends isolated prompts
- AI has no continuity between sessions
- Constant re-explanation of context
- Inconsistent results across providers
Instruction-based approach:
- User loads instruction sets once
- AI maintains persistent understanding
- Context builds and evolves naturally
- Consistent collaboration regardless of provider
This isn't just about efficiency—it's about creating a fundamentally different relationship between developers and AI assistants, where the AI becomes a true collaborative partner rather than a stateless tool.