Conversation

@MadLittleMods commented Dec 31, 2025

Add Prometheus HTTP service discovery endpoint for easy discovery of all workers in Docker image.

Follow-up to #19324

This spawned from wanting to run a load test against the Complement Docker image of Synapse and see metrics from the homeserver.

GET http://<synapse_container>:9469/metrics/service_discovery

[
  {
    "targets": [ "<host>", ... ],
    "labels": {
      "<labelname>": "<labelvalue>", ...
    }
  },
  ...
]

The metrics from each worker can also be accessed via http://<synapse_container>:9469/metrics/worker/<worker_name> which is what the service discovery response points to behind the scenes. This way, you only need to expose a single port (9469) to access all metrics.

Real HTTP service discovery response
[
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19091",
            "job": "event_persister",
            "__metrics_path__": "/metrics/worker/event_persister1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19092",
            "job": "event_persister",
            "__metrics_path__": "/metrics/worker/event_persister2"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19093",
            "job": "background_worker",
            "__metrics_path__": "/metrics/worker/background_worker1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19094",
            "job": "event_creator",
            "__metrics_path__": "/metrics/worker/event_creator1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19095",
            "job": "user_dir",
            "__metrics_path__": "/metrics/worker/user_dir1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19096",
            "job": "media_repository",
            "__metrics_path__": "/metrics/worker/media_repository1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19097",
            "job": "federation_inbound",
            "__metrics_path__": "/metrics/worker/federation_inbound1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19098",
            "job": "federation_reader",
            "__metrics_path__": "/metrics/worker/federation_reader1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19099",
            "job": "federation_sender",
            "__metrics_path__": "/metrics/worker/federation_sender1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19100",
            "job": "synchrotron",
            "__metrics_path__": "/metrics/worker/synchrotron1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19101",
            "job": "client_reader",
            "__metrics_path__": "/metrics/worker/client_reader1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19102",
            "job": "appservice",
            "__metrics_path__": "/metrics/worker/appservice1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19103",
            "job": "pusher",
            "__metrics_path__": "/metrics/worker/pusher1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19104",
            "job": "device_lists",
            "__metrics_path__": "/metrics/worker/device_lists1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19105",
            "job": "device_lists",
            "__metrics_path__": "/metrics/worker/device_lists2"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19106",
            "job": "stream_writers",
            "__metrics_path__": "/metrics/worker/stream_writers1"
        }
    },
    {
        "targets": [
            "host.docker.internal:9469"
        ],
        "labels": {
            "instance": "host.docker.internal:19090",
            "job": "main",
            "__metrics_path__": "/metrics/worker/main"
        }
    }
]
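For reference, this is how an HTTP SD consumer turns an entry into a scrape URL: the `targets` host/port is combined with the `__metrics_path__` label (falling back to `/metrics` when absent), while `instance` and `job` only affect labeling. A minimal sketch, using one entry from the response above:

```python
import json

# One entry from the service discovery response above
entry_json = """
{
    "targets": ["host.docker.internal:9469"],
    "labels": {
        "instance": "host.docker.internal:19091",
        "job": "event_persister",
        "__metrics_path__": "/metrics/worker/event_persister1"
    }
}
"""


def scrape_urls(entry: dict, scheme: str = "http") -> list[str]:
    """Derive the URLs Prometheus would scrape for a single SD entry.

    `__metrics_path__` overrides the default `/metrics` path; the
    `instance`/`job` labels only affect how samples are labeled, not
    which target is scraped.
    """
    path = entry["labels"].get("__metrics_path__", "/metrics")
    return [f"{scheme}://{target}{path}" for target in entry["targets"]]


entry = json.loads(entry_json)
print(scrape_urls(entry))
# -> ['http://host.docker.internal:9469/metrics/worker/event_persister1']
```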

And how it ends up as targets in Prometheus (http://localhost:9090/targets):

[Screenshot: Prometheus /targets page showing the discovered workers]

Testing strategy

  1. Make sure your firewall allows the Docker containers to communicate with the host (host.docker.internal) so they can access the exposed ports of other Docker containers. We want to allow Synapse to access the Prometheus container and Grafana to access the Prometheus container.
    • sudo ufw allow in on docker0 comment "Allow traffic from the default Docker network to the host machine (host.docker.internal)"
    • sudo ufw allow in on br-+ comment "(from Matrix Complement testing) Allow traffic from custom Docker networks to the host machine (host.docker.internal)"
    • Complement firewall docs
  2. Build the Docker image for Synapse: docker build -t matrixdotorg/synapse -f docker/Dockerfile . && docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers . (docs)
  3. Start Synapse:
    docker run -d --name synapse \
       --mount type=volume,src=synapse-data,dst=/data \
       -e SYNAPSE_SERVER_NAME=my.docker.synapse.server \
       -e SYNAPSE_REPORT_STATS=no \
       -e SYNAPSE_ENABLE_METRICS=1 \
       -p 8008:8008 \
       -p 9469:9469 \
       matrixdotorg/synapse-workers:latest
    
    • Also try with workers:
      docker run -d --name synapse \
         --mount type=volume,src=synapse-data,dst=/data \
         -e SYNAPSE_SERVER_NAME=my.docker.synapse.server \
         -e SYNAPSE_REPORT_STATS=no \
         -e SYNAPSE_ENABLE_METRICS=1 \
         -e SYNAPSE_WORKER_TYPES="\
             event_persister:2, \
             background_worker, \
             event_creator, \
             user_dir, \
             media_repository, \
             federation_inbound, \
             federation_reader, \
             federation_sender, \
             synchrotron, \
             client_reader, \
             appservice, \
             pusher, \
             device_lists:2, \
             stream_writers=account_data+presence+receipts+to_device+typing" \
         -p 8008:8008 \
         -p 9469:9469 \
         matrixdotorg/synapse-workers:latest
      
  4. You should be able to see the Prometheus service discovery endpoint at http://localhost:9469/metrics/service_discovery
  5. Create a Prometheus config (prometheus.yml)
    global:
      scrape_interval: 15s
      scrape_timeout: 15s
      evaluation_interval: 15s
    
    scrape_configs:
      - job_name: synapse
        scrape_interval: 15s
        metrics_path: /_synapse/metrics
        scheme: http
        # We set `honor_labels` so that each service can set their own `job` label
        #
        # > honor_labels controls how Prometheus handles conflicts between labels that are
        # > already present in scraped data and labels that Prometheus would attach
        # > server-side ("job" and "instance" labels, manually configured target
        # > labels, and labels generated by service discovery implementations).
        # >
        # > *-- https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config*
        honor_labels: true
        # Use HTTP service discovery
        #
        # Reference:
        #  - https://prometheus.io/docs/prometheus/latest/http_sd/
        #  - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config
        http_sd_configs:
          - url: 'http://localhost:9469/metrics/service_discovery'
  6. Start Prometheus (update the volume bind mount to the config you just saved somewhere):
    docker run \
        --detach \
        --name=prometheus \
        --add-host host.docker.internal:host-gateway \
        -p 9090:9090 \
        -v ~/Documents/code/random/prometheus-config/prometheus.yml:/etc/prometheus/prometheus.yml \
        prom/prometheus
    
  7. Make sure you're seeing some data in Prometheus. On http://localhost:9090/query, search for synapse_build_info
  8. Start Grafana
    docker run -d --name=grafana --add-host host.docker.internal:host-gateway -p 3000:3000 grafana/grafana
    
  9. Visit the Grafana dashboard, http://localhost:3000/ (Credentials: admin/admin)
  10. Connections -> Data Sources -> Add data source -> Prometheus
    • Prometheus server URL: http://host.docker.internal:9090
  11. Import the Synapse dashboard: https://github.com/element-hq/synapse/blob/develop/contrib/grafana/synapse.json

Dev notes

HTTP service discovery references:

Talking about job and instance labels:

Being able to specify metrics_path in service discovery targets:


Pull Request Checklist

  • Pull request is based on the develop branch
  • Pull request includes a changelog file. The entry should:
    • Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from EventStore to EventWorkerStore.".
    • Use markdown where necessary, mostly for code blocks.
    • End with either a period (.) or an exclamation mark (!).
    • Start with a capital letter.
    • Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
  • Code style is correct (run the linters)

Change is now being done in #19325
…rate-config`

This is necessary as the Docker image actually uses `--generate-config`
to generate the main homeserver config. It's only in worker mode
that it uses the other route.
Let `ServerConfig.generate_config_section(...)` figure it out
Comment on lines +346 to +361
NGINX_LOCATION_REGEX_CONFIG_BLOCK = """
location ~* {endpoint} {{
    proxy_pass {upstream};
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
}}
"""
NGINX_LOCATION_EXACT_CONFIG_BLOCK = """
location = {endpoint} {{
    proxy_pass {upstream};
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
}}
"""

Separating the regex-match and exact-match variants is necessary because nginx doesn't allow a URI part in `proxy_pass` (e.g. `proxy_pass http://localhost:19090/_synapse/metrics`) inside a `location` given by a regular expression.

nginx | 2025/12/31 22:58:34 [emerg] 21#21: "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/conf.d/matrix-synapse.conf:732

Comment on lines 685 to +687

  def parse_worker_types(
      requested_worker_types: list[str],
- ) -> dict[str, set[str]]:
+ ) -> list[Worker]:

Previously, this was using a dictionary mapping worker_name -> worker_types.

I refactored these functions to use the parsed Worker type as I now need to access the worker_base_name in the new code in order to set the Prometheus job label correctly.
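As a rough illustration of that refactor (the field names and parsing here are assumptions based on this comment, not the actual Synapse code): a small `Worker` type keeps the instance name and its base name together, so the Prometheus `job` label can be derived from `worker_base_name`:

```python
from dataclasses import dataclass


# Hypothetical sketch of the parsed worker type described above; the real
# fields and parsing in configure_workers_and_start.py may differ.
@dataclass
class Worker:
    worker_name: str       # e.g. "event_persister1"
    worker_base_name: str  # e.g. "event_persister" -> Prometheus `job` label


def parse_worker_types(requested_worker_types: list[str]) -> list[Worker]:
    """Expand entries like "event_persister:2" into numbered Worker instances."""
    workers: list[Worker] = []
    for entry in requested_worker_types:
        base, _, count = entry.partition(":")
        for i in range(1, int(count or 1) + 1):
            workers.append(Worker(worker_name=f"{base}{i}", worker_base_name=base))
    return workers


print(parse_worker_types(["event_persister:2", "pusher"]))
```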

Comment on lines +1076 to +1084

# This allows us to change the `metrics_path` on a per-target basis.
# We want to grab the metrics from our nginx-proxied location (set up
# below).
#
# While there don't seem to be official docs on these special
# labels (`__metrics_path__`, `__scheme__`, `__scrape_interval__`,
# `__scrape_timeout__`), this discussion best summarizes how this
# works: https://github.com/prometheus/prometheus/discussions/13217
"__metrics_path__": f"/metrics/worker/{worker.worker_name}",
@MadLittleMods commented Jan 1, 2026

Made a PR to update the Prometheus docs to make this more clear: prometheus/prometheus#17765
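Putting the special labels together, the SD entry for one worker can be built roughly like this (a simplified sketch, not the actual Synapse code; the host and port values mirror the example response above):

```python
import json


def build_sd_entry(
    worker_name: str,
    worker_base_name: str,
    metrics_port: int,
    nginx_host: str = "host.docker.internal",
    nginx_port: int = 9469,
) -> dict:
    """Build one Prometheus HTTP SD entry for a worker.

    Everything is scraped through the single nginx port; `__metrics_path__`
    routes to the individual worker, while `instance`/`job` keep the
    per-worker identity (which is why `honor_labels` is set in the scrape
    config).
    """
    return {
        "targets": [f"{nginx_host}:{nginx_port}"],
        "labels": {
            "instance": f"{nginx_host}:{metrics_port}",
            "job": worker_base_name,
            "__metrics_path__": f"/metrics/worker/{worker_name}",
        },
    }


entry = build_sd_entry("event_persister1", "event_persister", 19091)
print(json.dumps(entry, indent=2))
```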

Comment on lines 1117 to 1139

# Proxy all of the Synapse metrics endpoints through a central place so that
# people only need to expose the single 9469 port and service discovery can take
# care of the rest: `/metrics/worker/<worker_name>` ->
# http://localhost:19090/_synapse/metrics
#
# Build the nginx location config blocks
metrics_proxy_locations = ""
for worker in requested_workers:
    metrics_proxy_locations += NGINX_LOCATION_EXACT_CONFIG_BLOCK.format(
        endpoint=f"/metrics/worker/{worker.worker_name}",
        upstream=f"http://localhost:{worker_name_to_metrics_port_map[worker.worker_name]}/_synapse/metrics",
    )
# Add the main Synapse process as well
metrics_proxy_locations += NGINX_LOCATION_EXACT_CONFIG_BLOCK.format(
    endpoint="/metrics/worker/main",
    upstream="http://localhost:19090/_synapse/metrics",
)

# Add a nginx server/location to serve the JSON file
nginx_prometheus_metrics_service_discovery = NGINX_PROMETHEUS_METRICS_SERVICE_DISCOVERY.format(
    service_discovery_file_path=PROMETHEUS_METRICS_SERVICE_DISCOVERY_FILE_PATH,
    metrics_proxy_locations=metrics_proxy_locations,
)

If you want to inspect the result of all of this:

  1. Start your Synapse container per the Testing strategy instructions in the readme
  2. Jump into the container: docker exec -it synapse /bin/bash
  3. cat /etc/nginx/conf.d/matrix-synapse.conf
/etc/nginx/conf.d/matrix-synapse.conf
docker exec -it synapse /bin/bash
root@ab1f580e63ac:/# cat /etc/nginx/conf.d/matrix-synapse.conf
# This file contains the base config for the reverse proxy, as part of ../Dockerfile-workers.
# configure_workers_and_start.py uses and amends to this file depending on the workers
# that have been selected.


upstream events {
    server localhost:18009;
    server localhost:18010;

}

upstream event_persister {
    server localhost:18009;
    server localhost:18010;

}

upstream background_worker {
    server localhost:18011;

}

upstream event_creator {
    server localhost:18012;

}

upstream user_dir {
    server localhost:18013;

}

upstream media_repository {
    server localhost:18014;

}

upstream federation_inbound {
    server localhost:18015;

}

upstream federation_reader {
    server localhost:18016;

}

upstream federation_sender {
    server localhost:18017;

}

upstream synchrotron {
    server localhost:18018;

}

upstream client_reader {
    server localhost:18019;

}

upstream appservice {
    server localhost:18020;

}

upstream pusher {
    server localhost:18021;

}

upstream device_lists {
    server localhost:18022;
    server localhost:18023;

}

upstream typing {
    server localhost:18024;

}

upstream to_device {
    server localhost:18024;

}

upstream presence {
    server localhost:18024;

}

upstream account_data {
    server localhost:18024;

}

upstream receipts {
    server localhost:18024;

}


server {
    # Listen on an unoccupied port number
    listen 8008;
    listen [::]:8008;



    server_name localhost;

    # Nginx by default only allows file uploads up to 1M in size
    # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
    client_max_body_size 100M;


    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing {
        proxy_pass http://typing;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/sendToDevice/ {
        proxy_pass http://to_device;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/ {
        proxy_pass http://presence;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/media/ {
        proxy_pass http://media_repository;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_synapse/admin/v1/purge_media_cache$ {
        proxy_pass http://media_repository;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_synapse/admin/v1/room/.*/media.*$ {
        proxy_pass http://media_repository;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_synapse/admin/v1/user/.*/media.*$ {
        proxy_pass http://media_repository;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_synapse/admin/v1/media/.*$ {
        proxy_pass http://media_repository;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_synapse/admin/v1/quarantine_media/.*$ {
        proxy_pass http://media_repository;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/v1/media/.*$ {
        proxy_pass http://media_repository;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/v1/media/.*$ {
        proxy_pass http://media_repository;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* /_matrix/federation/(v1|v2)/send/ {
        proxy_pass http://federation_inbound;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/v1/version$ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/event/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/state/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/state_ids/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/backfill/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/get_missing_events/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/publicRooms {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/query/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/make_join/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/make_leave/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/send_join/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/send_leave/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/v1/make_knock/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/v1/send_knock/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/invite/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/query_auth/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/event_auth/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/v1/timestamp_to_event/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/exchange_third_party_invite/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/user/devices/ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/federation/(v1|v2)/get_groups_publicised$ {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/key/v2/query {
        proxy_pass http://federation_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$ {
        proxy_pass http://user_dir;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/v1/rooms/.*/hierarchy$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(v1|unstable)/rooms/.*/relations/ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/v1/rooms/.*/threads$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/login$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/account/whoami$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/account/deactivate$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/devices(/|$) {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3)/delete_devices$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/versions$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/register$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/register/available$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/auth/.*/fallback/web$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable/.*)/rooms/.*/aliases {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/v1/rooms/.*/timestamp_to_event$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/search {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/user/.*/filter(/|$) {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/password_policy$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/directory/room/.*$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/capabilities$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/notifications$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/device_signing/upload$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/signatures/upload$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/unstable/org.matrix.msc4140/delayed_events(/.*/restart)?$ {
        proxy_pass http://client_reader;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/.*/tags {
        proxy_pass http://account_data;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/.*/account_data {
        proxy_pass http://account_data;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(v2_alpha|r0|v3)/sync$ {
        proxy_pass http://synchrotron;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$ {
        proxy_pass http://synchrotron;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3)/initialSync$ {
        proxy_pass http://synchrotron;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$ {
        proxy_pass http://synchrotron;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/rooms/.*/receipt {
        proxy_pass http://receipts;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(r0|v3|unstable)/rooms/.*/read_markers {
        proxy_pass http://receipts;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact {
        proxy_pass http://event_creator;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send {
        proxy_pass http://event_creator;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$ {
        proxy_pass http://event_creator;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/join/ {
        proxy_pass http://event_creator;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/knock/ {
        proxy_pass http://event_creator;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location ~* ^/_matrix/client/(api/v1|r0|v3|unstable)/profile/ {
        proxy_pass http://event_creator;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }


    # Send all other traffic to the main process
    location ~* ^(\/_matrix|\/_synapse) {

        # note: do not add a path (even a single /) after the port in `proxy_pass`,
        # otherwise nginx will canonicalise the URI and cause signature verification
        # errors.
        proxy_pass http://localhost:8080;

        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host:$server_port;
    }
}


server {
    listen 9469;
    location = /metrics/service_discovery {
        alias /data/prometheus_service_discovery.json;
        default_type application/json;
    }

    # Make the service discovery endpoint easy to find; redirect to the correct spot.
    location = / {
        return 302 /metrics/service_discovery;
    }


    location = /metrics/worker/event_persister1 {
        proxy_pass http://localhost:19091/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/event_persister2 {
        proxy_pass http://localhost:19092/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/background_worker1 {
        proxy_pass http://localhost:19093/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/event_creator1 {
        proxy_pass http://localhost:19094/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/user_dir1 {
        proxy_pass http://localhost:19095/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/media_repository1 {
        proxy_pass http://localhost:19096/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/federation_inbound1 {
        proxy_pass http://localhost:19097/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/federation_reader1 {
        proxy_pass http://localhost:19098/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/federation_sender1 {
        proxy_pass http://localhost:19099/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/synchrotron1 {
        proxy_pass http://localhost:19100/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/client_reader1 {
        proxy_pass http://localhost:19101/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/appservice1 {
        proxy_pass http://localhost:19102/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/pusher1 {
        proxy_pass http://localhost:19103/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/device_lists1 {
        proxy_pass http://localhost:19104/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/device_lists2 {
        proxy_pass http://localhost:19105/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/stream_writers1 {
        proxy_pass http://localhost:19106/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

    location = /metrics/worker/main {
        proxy_pass http://localhost:19090/_synapse/metrics;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }

}
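The `/data/prometheus_service_discovery.json` file served above follows Prometheus's HTTP service discovery response format (a list of target groups, each with `targets` and `labels`). A minimal sketch of how such a file could be generated — the worker names and ports here are illustrative; in the real image the file is written out by `configure_workers_and_start.py` from the generated worker config:

```python
import json

# Illustrative worker -> metrics-port mapping; in the real image this comes
# from the generated worker config, not a hard-coded dict.
WORKER_METRICS_PORTS = {
    "event_persister1": 19091,
    "event_persister2": 19092,
    "background_worker1": 19093,
}

def build_service_discovery(host: str, nginx_port: int = 9469) -> list:
    """Build a Prometheus HTTP SD response: one target group per worker."""
    groups = []
    for worker_name, metrics_port in WORKER_METRICS_PORTS.items():
        groups.append({
            # Prometheus scrapes the single shared nginx port...
            "targets": [f"{host}:{nginx_port}"],
            "labels": {
                # ...but each worker keeps a distinct instance label and
                # metrics path, which nginx proxies to the right worker.
                "instance": f"{host}:{metrics_port}",
                "job": worker_name.rstrip("0123456789"),
                "__metrics_path__": f"/metrics/worker/{worker_name}",
            },
        })
    return groups

print(json.dumps(build_service_discovery("host.docker.internal"), indent=4))
```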

Comment on lines +1117 to +1120
# Proxy all of the Synapse metrics endpoints through a central place so that
# people only need to expose the single 9469 port and service discovery can take
# care of the rest: `/metrics/worker/<worker_name>` ->
# http://localhost:19090/_synapse/metrics
(Something for a follow-up)

Instead of serving this endpoint from the random nginx inside the Docker container, it would be interesting to have this as a built-in endpoint in Synapse itself (/_synapse/metrics/service_discovery).

I think the main problem with this is that the workers don't really know about each other at the moment, so we couldn't populate the response properly. We don't require the instance_map to be exhaustive.

Showing off how instance_map isn't exhaustive
$ docker exec -it synapse /bin/bash
root@ab1f580e63ac:/# cat /conf/workers/shared.yaml
# This file contains the base for the shared homeserver config file between Synapse workers,
# as part of ./Dockerfile-workers.
# configure_workers_and_start.py uses and amends to this file depending on the workers
# that have been selected.


redis:
    enabled: true








enable_media_repo: false
enable_metrics: true
federation_sender_instances:
- federation_sender1
instance_map:
  device_lists1:
    host: localhost
    port: 18022
  device_lists2:
    host: localhost
    port: 18023
  event_persister1:
    host: localhost
    port: 18009
  event_persister2:
    host: localhost
    port: 18010
  main:
    host: 127.0.0.1
    port: 9093
  stream_writers1:
    host: localhost
    port: 18024
listeners:
- bind_address: 127.0.0.1
  port: 9093
  resources:
  - names:
    - replication
  type: http
- bind_addresses:
  - '::'
  port: 8080
  resources:
  - compress: true
    names:
    - client
  - compress: false
    names:
    - federation
  tls: false
  type: http
  x_forwarded: false
- port: 19090
  type: metrics
log_config: /conf/workers/master.log.config
media_instance_running_background_jobs: media_repository1
notify_appservices_from_worker: appservice1
pusher_instances:
- pusher1
run_background_tasks_on: background_worker1
stream_writers:
  account_data:
  - stream_writers1
  device_lists:
  - device_lists1
  - device_lists2
  events:
  - event_persister1
  - event_persister2
  presence:
  - stream_writers1
  receipts:
  - stream_writers1
  to_device:
  - stream_writers1
  typing:
  - stream_writers1
update_user_directory_from_worker: user_dir1

And we'd also have to make some assumptions about worker naming in order to set the job correctly (strip off the trailing number, etc.). It works great here because we generate the worker names and the config in the same place.
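That naming assumption (job = worker name with the trailing instance number stripped) can be sketched as:

```python
import re

def job_from_worker_name(worker_name: str) -> str:
    # Strip the trailing instance number: "event_persister2" -> "event_persister".
    # This only holds because we generate the worker names ourselves; arbitrary
    # user-chosen names (e.g. "worker42b") would need a smarter scheme.
    return re.sub(r"\d+$", "", worker_name)

print(job_from_worker_name("event_persister2"))  # event_persister
print(job_from_worker_name("background_worker1"))  # background_worker
```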

Base automatically changed from madlittlemods/docker-metrics to develop January 1, 2026 20:00
…ervice-discovery

Conflicts:
	docker/README-testing.md
	docker/configure_workers_and_start.py
Discovered while playing with element-hq/synapse-rust-apps#397.

Homerunner will map a random port to 9469 (`0.0.0.0:32939->9469/tcp`),
which means Prometheus won't be able to find it on port 9469 as our
service discovery response says it can. Instead, we just reflect
back whatever host was used to access the service discovery endpoint;
if a client can reach the service discovery endpoint with that host,
it can also reach all of the metrics on the same host.

This also avoids having to hard-code `host.docker.internal`.
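The host-reflection idea can be sketched as follows. This is an illustration only, with a hypothetical `{host}` placeholder — it is not necessarily how the Docker image implements it:

```python
import json

# Hypothetical template with a "{host}" placeholder standing in for whatever
# host:port the client used to reach the service discovery endpoint.
SD_TEMPLATE = [
    {
        "targets": ["{host}"],
        "labels": {
            "job": "main",
            "__metrics_path__": "/metrics/worker/main",
        },
    }
]

def render_service_discovery(request_host: str) -> str:
    # Reflect the Host header back: if the client could reach this endpoint
    # via that host (e.g. a remapped port like 0.0.0.0:32939->9469), it can
    # reach the per-worker metrics paths on the same host too.
    return json.dumps(SD_TEMPLATE).replace("{host}", request_host)

print(render_service_discovery("localhost:32939"))
```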