
Conversation

@arikalon1
Contributor

Summary

Fixes #495 - SSL certificate verification fails when using Teleport Kubernetes proxy

Problem

When using KRR with Kubernetes clusters accessed through Teleport or similar proxies, SSL certificate verification fails with:

SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: 
unable to get local issuer certificate (_ssl.c:1006)'))

This happens because Teleport kubeconfigs use tls-server-name to specify the SNI hostname for TLS negotiation, which differs from the server URL:

clusters:
- cluster:
    certificate-authority-data: <data>
    server: https://company.teleport.sh:443
    tls-server-name: kube-teleport-proxy-alpn.company.teleport.sh
  name: company.teleport.sh

The CA certificate is valid for the tls-server-name (SNI) hostname, not the server URL hostname.

Solution

Extended the existing config_patch.py (which already handles proxy-url) to also support tls-server-name:

  1. Read tls-server-name from the cluster configuration in kubeconfig
  2. Pass it to the kubernetes client's Configuration as tls_server_name
  3. The kubernetes client then uses this value for SNI during TLS handshake

This follows the same pattern already established for proxy-url support.

Changes

  • robusta_krr/core/integrations/kubernetes/config_patch.py:
    • Added extraction of tls-server-name from kubeconfig cluster config
    • Extended Configuration class to accept tls_server_name parameter
    • Updated _set_config to iterate over both proxy and tls_server_name keys
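The pattern described above can be sketched as follows. This is a minimal illustration, not the actual config_patch.py code: `ClientConfiguration` is a stand-in for the kubernetes client's `Configuration`, and `load_cluster_info` stands in for the patched loader logic.

```python
# Sketch of the unified-loop pattern described above (hypothetical names;
# the real config_patch.py subclasses the kubernetes client's classes).

class ClientConfiguration:
    """Stand-in for kubernetes.client.Configuration."""
    def __init__(self):
        self.proxy = None
        self.tls_server_name = None

def load_cluster_info(cluster: dict, config: ClientConfiguration) -> None:
    # Iterate over both optional kubeconfig keys and copy each one,
    # if present, onto the client configuration.
    key_map = {"proxy-url": "proxy", "tls-server-name": "tls_server_name"}
    for kubeconfig_key, attr in key_map.items():
        if kubeconfig_key in cluster:
            setattr(config, attr, cluster[kubeconfig_key])

cluster = {
    "server": "https://company.teleport.sh:443",
    "tls-server-name": "kube-teleport-proxy-alpn.company.teleport.sh",
}
config = ClientConfiguration()
load_cluster_info(cluster, config)
print(config.tls_server_name)  # kube-teleport-proxy-alpn.company.teleport.sh
```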

Testing

Users with Teleport access can test the fix as follows:

  1. Configure Kubernetes access via Teleport (tsh kube login)
  2. Verify kubectl get pods works
  3. Run krr simple -p <prometheus-url>
  4. Confirm no SSL errors occur

cc @prein (issue reporter who offered to help test)

This fix adds support for the `tls-server-name` field from kubeconfig,
which is required when connecting to Kubernetes clusters through
proxies like Teleport that use SNI-based routing.

When Teleport is used, the server URL hostname differs from the TLS
certificate's expected hostname. The `tls-server-name` field tells
the client which hostname to use for SNI during TLS negotiation.

Without this fix, SSL certificate verification fails because the
client uses the server URL hostname instead of the required SNI name.

Fixes #495
@coderabbitai

coderabbitai bot commented Jan 22, 2026

Walkthrough

Added support for reading and propagating the tls-server-name field from Kubernetes kubeconfig to the client configuration. The Configuration class now accepts and stores a tls_server_name parameter, which is loaded from cluster configuration and applied alongside proxy settings.

Changes

  • TLS server name configuration support — robusta_krr/core/integrations/kubernetes/config_patch.py:
    Extended Configuration.__init__ to accept a tls_server_name parameter and added initialization of self.tls_server_name. Modified _load_cluster_info to extract tls-server-name from the cluster dict. Updated _set_config to propagate both proxy and tls_server_name to the client config via a unified loop pattern.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)
  • Title check — The PR title clearly describes the main change: adding tls-server-name support for Teleport proxies, which is the primary objective of the changeset.
  • Description check — The PR description is comprehensive and directly related to the changeset, explaining the problem, solution, and changes made to fix SSL certificate verification with Teleport proxies.
  • Linked Issues check — The PR directly addresses issue #495 by implementing tls-server-name support in config_patch.py, allowing the kubernetes client to use the correct SNI hostname during TLS negotiation.
  • Out of Scope Changes check — All changes are scoped to config_patch.py and directly address the linked issue; no out-of-scope modifications are present.


@prein

prein commented Jan 23, 2026

Thanks for looking into this. I've tested it in my setup and it is not working; I'm still getting SSL errors. I have run it through my local coding agent to get an explanation for you. See below.

Summary of the Fix Issue

The current fix in commit 252fb24 is incomplete. Here's why:

What the fix does

  1. Reads tls-server-name from kubeconfig in _load_cluster_info()
  2. Stores it as tls_server_name on the Configuration object

What's missing

The kubernetes-client v26.1.0 bundled with KRR doesn't use tls_server_name anywhere. The fix stores the value, but nothing reads it.

For tls_server_name to work, it needs to be passed to urllib3 as server_hostname when creating the SSL connection. This happens in kubernetes/client/rest.py, where urllib3.PoolManager is created; v26.1.0, however, doesn't pass any server_hostname parameter.

What would make it work

Option A: Upgrade the kubernetes dependency to v28+, which has native tls-server-name support (see lines 570-583 in newer versions)

Option B: Patch RESTClientObject in addition to KubeConfigLoader. The REST client would need to:

  1. Accept tls_server_name from configuration
  2. Pass it as server_hostname to urllib3's PoolManager
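Option B could look roughly like the sketch below. This is a hedged illustration of the key-forwarding step only, not the kubernetes client's actual RESTClientObject code; the `FakeConfig` class and `make_pool_manager` helper are hypothetical names, and only the `tls_server_name` attribute is assumed from the configuration object.

```python
import urllib3

def make_pool_manager(configuration) -> urllib3.PoolManager:
    """Build a PoolManager, forwarding tls_server_name as server_hostname."""
    extra_kwargs = {}
    tls_server_name = getattr(configuration, "tls_server_name", None)
    if tls_server_name:
        # urllib3 uses server_hostname on its HTTPS pools for SNI and
        # certificate hostname verification; pass it through only when set.
        extra_kwargs["server_hostname"] = tls_server_name
    # PoolManager stores extra kwargs and applies them to each pool it opens.
    return urllib3.PoolManager(num_pools=4, **extra_kwargs)

class FakeConfig:
    tls_server_name = "kube-teleport-proxy-alpn.company.teleport.sh"

pool = make_pool_manager(FakeConfig())
print(pool.connection_pool_kw.get("server_hostname"))
```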

Verification

  # v26.1.0 has no tls_server_name support:
  grep -n "tls_server_name" .venv/lib/python3.12/site-packages/kubernetes/config/kube_config.py
  # Returns: nothing

  # v28+ has it:
  # Lines 570-571: if 'tls-server-name' in self._cluster:
  #                    self.tls_server_name = self._cluster['tls-server-name']

Recommendation

Upgrading kubernetes-client to v28+ is the cleanest solution: it has native support, and the config_patch.py changes would then work correctly, since the base class would handle passing tls_server_name to the REST client.
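If the upgrade route is taken, the code change might reduce to a dependency bump along these lines (illustrative only; where KRR actually pins kubernetes, and what upper bound is safe alongside its other dependencies, would need checking):

  # requirements-style constraint (hypothetical pin; verify against
  # KRR's actual dependency file before use)
  kubernetes>=28.1.0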
