Description
🧭 Type of Feature
Please select the most appropriate category:
- Enhancement to existing functionality
- New feature or capability
- New MCP-compliant server
- New component or integration
- Developer tooling or test improvement
- Packaging, automation and deployment (ex: pypi, docker, quay.io, kubernetes, terraform)
- Other (please describe below)
🧭 Epic
Title: Add Kubernetes scheduling controls to Helm chart deployments
Goal: Allow operators to control pod placement and scheduling by configuring nodeSelector, tolerations, and affinity (including podAntiAffinity) in all Helm-managed Deployments.
Why now: Many production Kubernetes clusters rely on strict node separation, taints, and topology rules. Without native Helm support, teams are forced into fragile workarounds that undermine reliability and operational discipline.
🙋‍♂️ User Story 1
As a: Platform / Kubernetes operator
I want: To configure nodeSelector and tolerations via values.yaml
So that: I can schedule workloads onto specific node pools and tolerate tainted nodes in a predictable, policy-compliant way.
✅ Acceptance Criteria
Scenario: Configure nodeSelector through Helm values
Given a Helm chart with nodeSelector support
When I set nodeSelector values in values.yaml
Then the rendered Deployment includes the specified nodeSelector
Scenario: Configure tolerations through Helm values
Given a Helm chart with tolerations support
When I define tolerations in values.yaml
Then the rendered Deployment includes the specified tolerations
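For illustration only, a minimal sketch of what an operator might set in values.yaml, assuming the top-level nodeSelector and tolerations keys proposed later in this issue (the node label and taint names here are hypothetical):
nodeSelector:
  node-role.kubernetes.io/worker: "true"   # hypothetical label carried by the target node pool
tolerations:
  - key: "dedicated"                       # hypothetical taint applied to that pool
    operator: "Equal"
    value: "mcp-workloads"
    effect: "NoSchedule"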
🙋‍♂️ User Story 2
As a: Cluster administrator
I want: To define pod affinity and anti-affinity rules via Helm values
So that: I can enforce workload co-location or separation for availability, performance, and fault tolerance.
✅ Acceptance Criteria
Scenario: Configure pod affinity rules
Given a Helm chart with affinity support
When I define podAffinity rules in values.yaml
Then the rendered Deployment includes the configured affinity rules
Scenario: Configure pod anti-affinity rules
Given a Helm chart with anti-affinity support
When I define podAntiAffinity rules in values.yaml
Then the rendered Deployment enforces pod separation as specified
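As a sketch only, an anti-affinity rule that spreads replicas across nodes could be expressed like this under the proposed affinity value; the pod label in the selector is an assumption and would need to match whatever labels the chart actually applies to its pods:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: my-mcp-server   # assumed label; use the chart's real pod labels
          topologyKey: kubernetes.io/hostname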
📐 Design Sketch (optional)
flowchart TD
A[values.yaml] --> B[Helm Template Rendering]
B --> C[Deployment Manifest]
C --> D[nodeSelector & tolerations]
C --> E[affinity & antiAffinity]
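The rendered Deployment (node C above) would then carry the scheduling fields directly in its pod spec, roughly as in this abbreviated excerpt (values left empty and container details omitted for brevity):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      nodeSelector: {}   # filled from .Values.nodeSelector
      tolerations: []    # filled from .Values.tolerations
      affinity: {}       # filled from .Values.affinity (including podAntiAffinity)
      containers: []     # existing container definition, unchanged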
🧩 Proposed values.yaml Additions (Example)
nodeSelector: {}
tolerations: []
affinity: {}
These values should be passed through verbatim to .spec.template.spec for all relevant workload resources (e.g., Deployment, StatefulSet, Job, where applicable).
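A non-authoritative sketch of how the chart templates could wire this up, using the common with/toYaml pattern; exact indentation depends on where each chart's pod spec sits:
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}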
🔗 MCP Standards Check
- Change adheres to current MCP specifications
- No breaking changes to existing MCP-compliant integrations
- Feature is additive and fully backward-compatible
🔄 Alternatives Considered
- Maintaining a private fork of the Helm chart
- Using Helm post-renderers or Kustomize overlays
- Manually patching resources after deployment
All alternatives increase long-term maintenance burden and weaken alignment with upstream releases.
📓 Additional Context
This request reflects widely adopted Kubernetes best practices in mature production environments and aligns with expectations for configurable, operator-friendly Helm charts.