[WIP/DONOTMERGE] Feature: Embedded TSDB Subsystem #5322
base: v4.0
Conversation
Summary of Changes (Gemini Code Assist)

This pull request significantly enhances ProxySQL's observability capabilities by embedding a Time Series Database directly into the application. This new subsystem allows ProxySQL to autonomously collect, store, and visualize critical runtime metrics and backend health data. The goal is to provide a "batteries-included" monitoring solution, particularly beneficial for smaller deployments, by reducing the reliance on external monitoring infrastructure while offering a native UI and API for data access.
Code Review
This pull request introduces an embedded Time Series Database (TSDB) subsystem to ProxySQL, including documentation, API endpoints, and a basic UI. The TSDB collects metrics from Prometheus and query digests, storing them in raw files. It also includes backend monitoring via TCP probes. However, the write_queue and writer_loop for asynchronous writing are not utilized, leading to a redundant writer thread. The compactor_loop for data retention and compaction is a placeholder, and get_status() returns hardcoded zero values. The include for curl/curl.h is commented out, potentially disabling version checking.
```cpp
std::queue<tsdb_write_request_t> write_queue;
std::mutex queue_mutex;
std::condition_variable queue_cv;
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The write_queue is declared here, and writer_loop is designed to consume from it. However, in lib/ProxySQL_TSDB.cpp, the ProxySQL_TSDB::write method (which is called by sampler_loop and monitor_loop) directly writes to disk and does not push any requests to this queue. This makes the writer_thread effectively idle and the queue-based asynchronous writing mechanism non-functional for actual metric ingestion.
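To make the queue actually carry data, `write()` could enqueue a request and notify the writer thread instead of touching disk itself. Below is a minimal standalone sketch of that producer/consumer shape; the struct fields and the in-memory `persisted` vector are illustrative stand-ins for the PR's `tsdb_write_request_t` and `persist_point()`, not its actual API.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Simplified stand-in for the PR's write request type; fields are assumptions.
struct tsdb_write_request_t {
    std::string name;
    long long timestamp;
    double value;
};

struct QueuedWriter {
    std::queue<tsdb_write_request_t> write_queue;
    std::mutex queue_mutex;
    std::condition_variable queue_cv;
    bool stop_threads = false;
    std::vector<tsdb_write_request_t> persisted; // stands in for disk writes

    // What write() could do instead of writing to disk directly:
    // enqueue the point and wake the writer thread.
    void write(const std::string& name, long long ts, double value) {
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            write_queue.push({name, ts, value});
        }
        queue_cv.notify_one();
    }

    // Mirrors the PR's writer_loop(): drain the queue until asked to stop,
    // finishing any queued requests before exiting.
    void writer_loop() {
        while (true) {
            tsdb_write_request_t req;
            {
                std::unique_lock<std::mutex> lock(queue_mutex);
                queue_cv.wait(lock, [this] { return !write_queue.empty() || stop_threads; });
                if (stop_threads && write_queue.empty()) break;
                req = write_queue.front();
                write_queue.pop();
            }
            persisted.push_back(req); // persist_point(req) in the real code
        }
    }

    void stop() {
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            stop_threads = true;
        }
        queue_cv.notify_all();
    }
};
```

With this shape the writer thread does real work, and shutdown still drains outstanding requests because the loop only exits once `stop_threads` is set and the queue is empty.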
```cpp
void ProxySQL_TSDB::write(const std::string& name, const std::map<std::string, std::string>& labels, long long timestamp, double value) {
	if (!config.enabled) return;
	std::lock_guard<std::mutex> lock(write_mutex);

	// Ensure data directory exists
	struct stat st;
	if (stat(config.data_dir.c_str(), &st) == -1) {
		mkdir(config.data_dir.c_str(), 0755);
	}

	// Basic append-only storage
	std::string filename = config.data_dir + "/raw_" + std::to_string(timestamp / (config.raw_window_minutes * 60 * 1000)) + ".tsdb";
	std::ofstream ofs(filename, std::ios::app | std::ios::binary);
	if (ofs.is_open()) {
		// Simple binary format: [timestamp:8][value:8][name_len:2][name:N][labels_json_len:2][labels_json:M]
		ofs.write((char*)&timestamp, 8);
		ofs.write((char*)&value, 8);
		uint16_t nlen = name.length();
		ofs.write((char*)&nlen, 2);
		ofs.write(name.c_str(), nlen);

		json j_labels = labels;
		std::string s_labels = j_labels.dump();
		uint16_t llen = s_labels.length();
		ofs.write((char*)&llen, 2);
		ofs.write(s_labels.c_str(), llen);
	}
}
```
The write method directly writes data to a raw_XYZ.tsdb file. This bypasses the write_queue and writer_loop mechanism that is set up in ProxySQL_TSDB.hpp and ProxySQL_TSDB.cpp (lines 207-218). This design choice makes the writer_thread redundant and prevents the intended asynchronous processing of write requests, potentially impacting performance and data consistency if not properly managed.
```cpp
void ProxySQL_TSDB::writer_loop() {
	while (!stop_threads) {
		tsdb_write_request_t req;
		{
			std::unique_lock<std::mutex> lock(queue_mutex);
			queue_cv.wait(lock, [this] { return !write_queue.empty() || stop_threads; });
			if (stop_threads && write_queue.empty()) break;
			req = write_queue.front();
			write_queue.pop();
		}
		persist_point(req);
	}
}
```
The writer_loop is designed to process tsdb_write_request_t from write_queue. However, the ProxySQL_TSDB::write method (which is the primary ingestion point for metrics from sampler_loop and monitor_loop) does not push to this queue. As a result, this thread will perpetually wait on queue_cv and never process any data, rendering it ineffective.
```cpp
void ProxySQL_TSDB::compactor_loop() {
	while (!stop_threads) {
		std::this_thread::sleep_for(std::chrono::minutes(10));
	}
}
```
The compactor_loop is currently implemented as a placeholder that simply sleeps. This indicates that the critical functionality for data retention enforcement and file compaction is not yet implemented. Without this, the TSDB could consume excessive disk space and fail to manage historical data effectively.
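One possible shape for the missing retention pass, as a standalone sketch: since `write()` names raw files `raw_<timestamp_ms / (raw_window_minutes * 60 * 1000)>.tsdb`, the window index can be recovered from the filename, and any file whose window ended before the retention horizon can be deleted. Function and parameter names here are assumptions for illustration, not the PR's API.

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Delete raw_<idx>.tsdb files in data_dir whose time window ended before
// (now_ms - retention_hours). Returns the number of files removed.
int enforce_retention(const std::string& data_dir,
                      long long now_ms,
                      int raw_window_minutes,
                      int retention_hours) {
    const long long window_ms = (long long)raw_window_minutes * 60 * 1000;
    const long long horizon_ms = now_ms - (long long)retention_hours * 3600 * 1000;
    int removed = 0;
    for (const auto& entry : fs::directory_iterator(data_dir)) {
        const std::string fname = entry.path().filename().string();
        if (fname.rfind("raw_", 0) != 0) continue;   // not a raw data file
        const size_t dot = fname.rfind(".tsdb");
        if (dot == std::string::npos) continue;
        long long idx;
        try {
            idx = std::stoll(fname.substr(4, dot - 4));
        } catch (...) {
            continue;                                 // malformed file name
        }
        // The window with index idx covers [idx*window_ms, (idx+1)*window_ms).
        const long long window_end_ms = (idx + 1) * window_ms;
        if (window_end_ms < horizon_ms) {
            fs::remove(entry.path());
            ++removed;
        }
    }
    return removed;
}
```

The real `compactor_loop()` would also need to respect `max_disk_mb` and roll raw files up into coarser resolutions, but a periodic call like `enforce_retention(config.data_dir, now_ms, config.raw_window_minutes, config.retention_hours)` inside the existing sleep loop would at least bound disk usage by age.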
```cpp
ProxySQL_TSDB::status_t ProxySQL_TSDB::get_status() {
	return {0, 0, 0};
}
```
```diff
 #include "platform.h"
 #include "microhttpd.h"
-#include "curl/curl.h"
+//#include "curl/curl.h"
```
Pull request overview
This pull request introduces an embedded Time Series Database (TSDB) subsystem to ProxySQL, providing a self-contained observability solution for storing and visualizing metrics without requiring external infrastructure. The PR is marked as WIP/DONOTMERGE, indicating it is not yet ready for production use.
Changes:
- Implementation of core TSDB functionality including metric collection, storage, and querying capabilities
- Integration with ProxySQL's admin interface for configuration management and status reporting
- Addition of HTTP API endpoints and a web-based dashboard UI for data visualization
- Build system updates to compile new TSDB components and supporting documentation
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 28 comments.
| File | Description |
|---|---|
| lib/ProxySQL_TSDB.cpp | Core TSDB implementation with writer, sampler, monitor, and compactor threads |
| include/ProxySQL_TSDB.hpp | Header file defining TSDB classes, structs, and configuration |
| lib/TSDB_UI_html.cpp | Embedded HTML/JavaScript dashboard UI for metric visualization |
| lib/ProxySQL_HTTP_Server.cpp | HTTP API endpoints for TSDB status and query operations |
| lib/ProxySQL_Admin.cpp | Admin interface functions for TSDB variable management |
| lib/Admin_Handler.cpp | Command handlers for LOAD/SAVE TSDB variables |
| include/proxysql_admin.h | Header additions for TSDB admin methods |
| lib/Makefile | Build system updates to include TSDB object files |
| doc/tsdb/*.md | Documentation covering overview, quickstart, metrics catalog, and API endpoints |
```cpp
std::vector<ProxySQL_TSDB::query_result_t> ProxySQL_TSDB::query(const std::string& metric, const std::map<std::string, std::string>& labels, long long from, long long to, int step, const std::string& agg) {
	std::vector<ProxySQL_TSDB::query_result_t> results;
	std::string key = get_series_key(metric, labels);
	std::string file_path = config.data_dir + "/" + key + ".data";
	std::ifstream ifs(file_path, std::ios::binary);
	if (ifs) {
		query_result_t res;
		res.labels = labels;
		tsdb_point_t pt;
		while (ifs.read(reinterpret_cast<char*>(&pt.timestamp), sizeof(pt.timestamp))) {
			ifs.read(reinterpret_cast<char*>(&pt.value), sizeof(pt.value));
			if (pt.timestamp >= from && pt.timestamp <= to) {
				res.points.push_back(pt);
			}
		}
		results.push_back(res);
	}
	return results;
}
```
Copilot AI · Jan 23, 2026
The query function doesn't implement the step parameter for downsampling, and the agg parameter for aggregation is ignored. The function signature accepts these parameters but they're unused. Either implement the functionality or remove the unused parameters to avoid misleading the API consumers.
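Implementing `step`/`agg` could look like the following standalone sketch: bucket the matched points into step-sized windows starting at `from`, then reduce each bucket with the requested aggregator (average by default, maximum for `"max"`). It mirrors the PR's `query()` parameters but is an illustration, not its actual implementation.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <tuple>
#include <vector>

struct tsdb_point_t { long long timestamp; double value; };

// Downsample a series into step_ms buckets anchored at `from`.
// step_ms <= 0 means no downsampling (raw points are returned unchanged).
std::vector<tsdb_point_t> downsample(const std::vector<tsdb_point_t>& points,
                                     long long from, int step_ms,
                                     const std::string& agg) {
    if (step_ms <= 0) return points;
    // bucket index -> (sum, count, max)
    std::map<long long, std::tuple<double, int, double>> buckets;
    for (const auto& p : points) {
        long long b = (p.timestamp - from) / step_ms;
        auto it = buckets.find(b);
        if (it == buckets.end()) {
            buckets.emplace(b, std::make_tuple(p.value, 1, p.value));
        } else {
            std::get<0>(it->second) += p.value;
            std::get<1>(it->second) += 1;
            if (p.value > std::get<2>(it->second)) std::get<2>(it->second) = p.value;
        }
    }
    std::vector<tsdb_point_t> out;
    for (const auto& [b, acc] : buckets) {
        double v = (agg == "max") ? std::get<2>(acc)
                                  : std::get<0>(acc) / std::get<1>(acc); // default: avg
        out.push_back({from + b * step_ms, v});
    }
    return out;
}
```

Inside `query()`, this reduction would run on `res.points` after the range filter, so the API contract (one output point per step) would finally match the signature.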
```cpp
	"monitor_enabled",
	"monitor_interval_seconds",
	"monitor_connect_timeout_ms",
	"monitor_ping_enabled",
	"monitor_max_concurrent_probes",
	// UI
	"ui_enabled",
	"ui_read_only",
```
Inconsistent variable naming between has_variable() and get/set_variable(). The tsdb_variable_names array uses underscores (e.g., "monitor_enabled", "ui_enabled") but the get_variable and set_variable methods check for hyphenated versions (e.g., "monitor-enabled", "ui-enabled"). This means has_variable("monitor_enabled") returns true but get_variable("monitor_enabled") returns NULL. The variable names in tsdb_variable_names should use hyphens to match the get/set implementations.
Suggested change:
```diff
-	"monitor_enabled",
-	"monitor_interval_seconds",
-	"monitor_connect_timeout_ms",
-	"monitor_ping_enabled",
-	"monitor_max_concurrent_probes",
-	// UI
-	"ui_enabled",
-	"ui_read_only",
+	"monitor-enabled",
+	"monitor-interval-seconds",
+	"monitor-connect-timeout-ms",
+	"monitor-ping-enabled",
+	"monitor-max-concurrent-probes",
+	// UI
+	"ui-enabled",
+	"ui-read-only",
```
```javascript
new Chart(ctx, {
	type: 'line',
	data: {
		labels: points.map(p => new Date(p[0]).toLocaleTimeString()),
		datasets: [{
			label: label,
			data: points.map(p => p[1]),
			borderColor: '#2969a5',
			fill: false
		}]
	},
	options: { scales: { yAxes: [{ ticks: { beginAtZero: true } }] } }
});
```
The HTML creates new Chart instances every time fetchData is called, but never clears or destroys previous chart instances. If fetchData is called multiple times (e.g., for refresh/polling), this will create memory leaks as old Chart.js instances accumulate. Store chart instances globally and update existing charts rather than creating new ones, or destroy old instances before creating new ones.
```cpp
long long from = atoll(from_s);
long long to = atoll(to_s);
std::map<std::string, std::string> labels; // TODO: parse labels from query params
auto query_results = GloTSDB->query(metric, labels, from, to, 0, "");
```
Missing null pointer check before calling GloTSDB->query(). If GloTSDB is NULL, this will cause a segmentation fault. Add a null check similar to the one used in the /api/tsdb/status endpoint (line 452).
```cpp
char *query = (char *)malloc(1024);
for (int i=0; tsdb_vars[i] != NULL; i++) {
	char *val = GloTSDB->get_variable(tsdb_vars[i]);
	if (val) {
		const char *prefix = (i < 11) ? "tsdb-" : "";
		sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], val);
		db->execute(query);
```
Potential SQL injection vulnerability. The value from get_variable is directly interpolated into the SQL query using sprintf without escaping. If the value contains single quotes or other SQL special characters, it could break the query or allow injection. Use parameterized queries or properly escape the value before insertion.
Suggested change:
```diff
-char *query = (char *)malloc(1024);
-for (int i=0; tsdb_vars[i] != NULL; i++) {
-	char *val = GloTSDB->get_variable(tsdb_vars[i]);
-	if (val) {
-		const char *prefix = (i < 11) ? "tsdb-" : "";
-		sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], val);
-		db->execute(query);
+/* Helper to escape single quotes in SQL string literals by doubling them. */
+auto escape_sql_string = [](const char *input) -> char * {
+	if (input == NULL) {
+		return NULL;
+	}
+	size_t len = strlen(input);
+	size_t extra = 0;
+	for (size_t i = 0; i < len; i++) {
+		if (input[i] == '\'') {
+			extra++;
+		}
+	}
+	char *out = (char *)malloc(len + extra + 1);
+	if (out == NULL) {
+		return NULL;
+	}
+	size_t j = 0;
+	for (size_t i = 0; i < len; i++) {
+		out[j++] = input[i];
+		if (input[i] == '\'') {
+			out[j++] = '\'';
+		}
+	}
+	out[j] = '\0';
+	return out;
+};
+char *query = (char *)malloc(1024);
+for (int i=0; tsdb_vars[i] != NULL; i++) {
+	char *val = GloTSDB->get_variable(tsdb_vars[i]);
+	if (val) {
+		const char *prefix = (i < 11) ? "tsdb-" : "";
+		char *escaped_val = escape_sql_string(val);
+		if (escaped_val) {
+			sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], escaped_val);
+			db->execute(query);
+			free(escaped_val);
+		}
```
```cpp
 * Currently waits on a condition variable for requests.
 */
void ProxySQL_TSDB::writer_loop() {
	while (!stop_threads) {
		tsdb_write_request_t req;
		{
			std::unique_lock<std::mutex> lock(queue_mutex);
			queue_cv.wait(lock, [this] { return !write_queue.empty() || stop_threads; });
			if (stop_threads && write_queue.empty()) break;
			req = write_queue.front();
			write_queue.pop();
		}
		persist_point(req);
	}
```
The writer_loop thread waits on write_queue but nothing in the codebase pushes to this queue. The write() method writes directly to disk instead of enqueuing requests. This makes the writer_thread effectively a no-op that will block indefinitely. Either write() should enqueue to write_queue instead of writing directly, or writer_loop should be removed if direct writes are intended.
Suggested change:
```diff
- * Currently waits on a condition variable for requests.
- */
-void ProxySQL_TSDB::writer_loop() {
-	while (!stop_threads) {
-		tsdb_write_request_t req;
-		{
-			std::unique_lock<std::mutex> lock(queue_mutex);
-			queue_cv.wait(lock, [this] { return !write_queue.empty() || stop_threads; });
-			if (stop_threads && write_queue.empty()) break;
-			req = write_queue.front();
-			write_queue.pop();
-		}
-		persist_point(req);
-	}
+ *
+ * Currently, all writes are performed synchronously by ProxySQL_TSDB::write()
+ * and nothing enqueues to write_queue. To avoid a thread blocking indefinitely
+ * on an empty queue, this loop is a no-op and exits immediately.
+ */
+void ProxySQL_TSDB::writer_loop() {
+	// No queued writes are used in the current implementation; exit thread.
+	return;
```
```cpp
const char* tsdb_vars[] = {
	"enabled", "data_dir", "retention_hours", "sample_interval_seconds",
	"raw_window_minutes", "rollup_interval_seconds", "max_series", "max_disk_mb",
	"fsync_mode", "digest_mode", "digest_topk",
	"monitor-enabled", "monitor-interval_seconds", "monitor-connect_timeout_ms",
	"monitor-ping_enabled", "monitor-max_concurrent_probes",
	"ui-enabled", "ui-read_only",
	NULL
};
char *query = (char *)malloc(1024);
for (int i=0; tsdb_vars[i] != NULL; i++) {
	char *val = GloTSDB->get_variable(tsdb_vars[i]);
	if (val) {
		const char *prefix = (i < 11) ? "tsdb-" : "";
		sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], val);
```
The logic to determine which variables get the "tsdb-" prefix is fragile and error-prone. It assumes the first 11 variables in the array need the prefix, but this creates a maintenance issue if the array is reordered. Consider using a more explicit approach, such as a struct with name and prefix, or checking the actual variable name to determine if it should have a prefix.
Suggested change:
```diff
-const char* tsdb_vars[] = {
-	"enabled", "data_dir", "retention_hours", "sample_interval_seconds",
-	"raw_window_minutes", "rollup_interval_seconds", "max_series", "max_disk_mb",
-	"fsync_mode", "digest_mode", "digest_topk",
-	"monitor-enabled", "monitor-interval_seconds", "monitor-connect_timeout_ms",
-	"monitor-ping_enabled", "monitor-max_concurrent_probes",
-	"ui-enabled", "ui-read_only",
-	NULL
-};
-char *query = (char *)malloc(1024);
-for (int i=0; tsdb_vars[i] != NULL; i++) {
-	char *val = GloTSDB->get_variable(tsdb_vars[i]);
-	if (val) {
-		const char *prefix = (i < 11) ? "tsdb-" : "";
-		sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], val);
+struct TsdbVarSpec {
+	const char* name;      // internal TSDB variable name
+	const char* db_prefix; // prefix to use for the DB variable_name
+};
+const TsdbVarSpec tsdb_vars[] = {
+	// TSDB variables stored in DB as "tsdb-<name>"
+	{"enabled",                  "tsdb-"},
+	{"data_dir",                 "tsdb-"},
+	{"retention_hours",          "tsdb-"},
+	{"sample_interval_seconds",  "tsdb-"},
+	{"raw_window_minutes",       "tsdb-"},
+	{"rollup_interval_seconds",  "tsdb-"},
+	{"max_series",               "tsdb-"},
+	{"max_disk_mb",              "tsdb-"},
+	{"fsync_mode",               "tsdb-"},
+	{"digest_mode",              "tsdb-"},
+	{"digest_topk",              "tsdb-"},
+	// Monitor variables stored in DB without extra prefix
+	{"monitor-enabled",               ""},
+	{"monitor-interval_seconds",      ""},
+	{"monitor-connect_timeout_ms",    ""},
+	{"monitor-ping_enabled",          ""},
+	{"monitor-max_concurrent_probes", ""},
+	// UI variables stored in DB without extra prefix
+	{"ui-enabled",   ""},
+	{"ui-read_only", ""},
+	{nullptr, nullptr}
+};
+char *query = (char *)malloc(1024);
+for (int i = 0; tsdb_vars[i].name != nullptr; i++) {
+	char *val = GloTSDB->get_variable(tsdb_vars[i].name);
+	if (val) {
+		const char *prefix = tsdb_vars[i].db_prefix ? tsdb_vars[i].db_prefix : "";
+		sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i].name, val);
```
```cpp
if (metric.counter.value != 0) val = metric.counter.value;
else if (metric.gauge.value != 0) val = metric.gauge.value;
else if (metric.untyped.value != 0) val = metric.untyped.value;

if (val != 0 || metric.gauge.value == 0) { // Keep 0 for gauges
```
Logic error in value collection. The condition val != 0 || metric.gauge.value == 0 will always be true when val is 0 and it's a gauge (because the second part evaluates to true). This means all zero-valued counters and untypeds will be skipped. The intent appears to be to keep zeros for gauges only, so the condition should be val != 0 || (val == 0 && metric has gauge type) with proper type checking.
Suggested change:
```diff
-if (metric.counter.value != 0) val = metric.counter.value;
-else if (metric.gauge.value != 0) val = metric.gauge.value;
-else if (metric.untyped.value != 0) val = metric.untyped.value;
-if (val != 0 || metric.gauge.value == 0) { // Keep 0 for gauges
+bool is_gauge = false;
+// Determine metric type and value. Keep 0 only for gauges.
+if (metric.has_counter()) {
+	val = metric.counter.value;
+} else if (metric.has_gauge()) {
+	is_gauge = true;
+	val = metric.gauge.value; // may legitimately be 0
+} else if (metric.has_untyped()) {
+	val = metric.untyped.value;
+}
+if (val != 0 || (val == 0 && is_gauge)) { // Keep 0 for gauges only
```
```cpp
if (ofs) {
	ofs.write(reinterpret_cast<const char*>(&req.timestamp), sizeof(req.timestamp));
	ofs.write(reinterpret_cast<const char*>(&req.value), sizeof(req.value));
}
```
The persist_point function also lacks error handling for file operations. Add checks to verify the ofstream opened successfully and that writes complete without errors.
Suggested change:
```diff
-if (ofs) {
-	ofs.write(reinterpret_cast<const char*>(&req.timestamp), sizeof(req.timestamp));
-	ofs.write(reinterpret_cast<const char*>(&req.value), sizeof(req.value));
-}
+if (!ofs.is_open()) {
+	std::cerr << "ProxySQL_TSDB: failed to open file for persisting point: " << file_path << std::endl;
+	return;
+}
+ofs.write(reinterpret_cast<const char*>(&req.timestamp), sizeof(req.timestamp));
+ofs.write(reinterpret_cast<const char*>(&req.value), sizeof(req.value));
+if (!ofs) {
+	std::cerr << "ProxySQL_TSDB: write error while persisting point to file: " << file_path << std::endl;
+}
```
```javascript
fetch(`/api/tsdb/query?metric=${metric}&from=${from}&to=${to}`)
	.then(r => r.json())
	.then(data => {
		if (!data.series || data.series.length === 0) return;
		const ctx = document.getElementById(elementId).getContext('2d');
		const points = data.series[0].points;
		new Chart(ctx, {
			type: 'line',
			data: {
				labels: points.map(p => new Date(p[0]).toLocaleTimeString()),
				datasets: [{
					label: label,
					data: points.map(p => p[1]),
					borderColor: '#2969a5',
					fill: false
				}]
			},
			options: { scales: { yAxes: [{ ticks: { beginAtZero: true } }] } }
		});
	});
```
No error handling in the JavaScript fetch calls. If the API request fails or returns an error, the user will see no data with no indication of what went wrong. Add .catch() handlers to display error messages to the user.


This PR introduces the embedded Time Series Database (TSDB) subsystem to ProxySQL.
Key changes:
- New subsystem (`ProxySQL_TSDB`).

This work is based on the `v4.0-tsdb` branch with additional fixes and refinements.