
Conversation

@renecannao
Contributor

This PR introduces the embedded Time Series Database (TSDB) subsystem to ProxySQL.

Key changes:

  • Implementation of the core TSDB classes and logic (ProxySQL_TSDB).
  • Integration with ProxySQL statistics and global variables.
  • Added Doxygen documentation for the new subsystem.
  • Includes necessary build system updates and dependency handling.

This work is based on the v4.0-tsdb branch with additional fixes and refinements.

@coderabbitai

coderabbitai bot commented Jan 23, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist

Summary of Changes

Hello @renecannao, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances ProxySQL's observability capabilities by embedding a Time Series Database directly into the application. This new subsystem allows ProxySQL to autonomously collect, store, and visualize critical runtime metrics and backend health data. The goal is to provide a 'batteries-included' monitoring solution, particularly beneficial for smaller deployments, by reducing the reliance on external monitoring infrastructure while offering a native UI and API for data access.

Highlights

  • Embedded Time Series Database (TSDB): Introduces a new, integrated Time Series Database subsystem within ProxySQL, designed for local metric persistence and observability without external dependencies.
  • Comprehensive Metric Collection: The TSDB collects various ProxySQL metrics, including traffic, latency, connections, backend health (via probes), and ProxySQL's own health, with support for Prometheus registry and Query Digest sampling.
  • Built-in Observability UI and API: Provides a minimal web-based dashboard and a JSON HTTP API for querying time series data, allowing for historical visualization directly from ProxySQL.
  • Admin Interface Integration: New admin commands (LOAD TSDB VARIABLES, SAVE TSDB VARIABLES, TSDB STATUS) and global variables (tsdb-*, monitor-*, ui-*) are added for configuring and managing the TSDB subsystem.
  • Modular Architecture: The TSDB is structured with distinct components: Sampler, Backend Monitor Probes, Storage Engine, Query Engine, and a Compactor, all running as background threads.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review — /gemini review: Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary: Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist: Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help — /gemini help: Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@renecannao renecannao changed the title Feature: Embedded TSDB Subsystem [WIP/DONOTMERGE] Feature: Embedded TSDB Subsystem Jan 23, 2026
@renecannao renecannao requested a review from Copilot January 23, 2026 19:44
@sonarqubecloud

Quality Gate failed

Failed conditions
14 Security Hotspots

See analysis details on SonarQube Cloud


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces an embedded Time Series Database (TSDB) subsystem to ProxySQL, including documentation, API endpoints, and a basic UI. The TSDB collects metrics from Prometheus and query digests, storing them in raw files. It also includes backend monitoring via TCP probes. However, the write_queue and writer_loop for asynchronous writing are not utilized, leading to a redundant writer thread. The compactor_loop for data retention and compaction is a placeholder, and get_status() returns hardcoded zero values. The include for curl/curl.h is commented out, potentially disabling version checking.

Comment on lines +82 to +84
std::queue<tsdb_write_request_t> write_queue;
std::mutex queue_mutex;
std::condition_variable queue_cv;


critical

The write_queue is declared here, and writer_loop is designed to consume from it. However, in lib/ProxySQL_TSDB.cpp, the ProxySQL_TSDB::write method (which is called by sampler_loop and monitor_loop) directly writes to disk and does not push any requests to this queue. This makes the writer_thread effectively idle and the queue-based asynchronous writing mechanism non-functional for actual metric ingestion.
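To make the reviewer's point concrete, here is a minimal sketch of what queue-based ingestion could look like. `WriteRequest`, `AsyncWriter`, and `pending()` are hypothetical stand-ins for the PR's `tsdb_write_request_t`, `ProxySQL_TSDB`, and its queue members; the actual fix would push inside `ProxySQL_TSDB::write` and let the existing `writer_loop` drain the queue via `persist_point`.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Hypothetical stand-in for the PR's tsdb_write_request_t.
struct WriteRequest {
    long long timestamp;
    double value;
    std::string name;
};

class AsyncWriter {
    std::queue<WriteRequest> write_queue;
    std::mutex queue_mutex;
    std::condition_variable queue_cv;
public:
    // Enqueue and wake the writer thread instead of writing to disk inline.
    void write(const WriteRequest& req) {
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            write_queue.push(req);
        }
        queue_cv.notify_one();
    }
    // Visible backlog; useful for a status endpoint or for tests.
    size_t pending() {
        std::lock_guard<std::mutex> lock(queue_mutex);
        return write_queue.size();
    }
};
```

With this shape, the consumer side is exactly the wait/pop loop already present in `writer_loop`, so only the producer side needs to change.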

Comment on lines +174 to +201
void ProxySQL_TSDB::write(const std::string& name, const std::map<std::string, std::string>& labels, long long timestamp, double value) {
	if (!config.enabled) return;
	std::lock_guard<std::mutex> lock(write_mutex);

	// Ensure data directory exists
	struct stat st;
	if (stat(config.data_dir.c_str(), &st) == -1) {
		mkdir(config.data_dir.c_str(), 0755);
	}

	// Basic append-only storage
	std::string filename = config.data_dir + "/raw_" + std::to_string(timestamp / (config.raw_window_minutes * 60 * 1000)) + ".tsdb";
	std::ofstream ofs(filename, std::ios::app | std::ios::binary);
	if (ofs.is_open()) {
		// Simple binary format: [timestamp:8][value:8][name_len:2][name:N][labels_json_len:2][labels_json:M]
		ofs.write((char*)&timestamp, 8);
		ofs.write((char*)&value, 8);
		uint16_t nlen = name.length();
		ofs.write((char*)&nlen, 2);
		ofs.write(name.c_str(), nlen);

		json j_labels = labels;
		std::string s_labels = j_labels.dump();
		uint16_t llen = s_labels.length();
		ofs.write((char*)&llen, 2);
		ofs.write(s_labels.c_str(), llen);
	}
}


critical

The write method directly writes data to a raw_XYZ.tsdb file. This bypasses the write_queue and writer_loop mechanism that is set up in ProxySQL_TSDB.hpp and ProxySQL_TSDB.cpp (lines 207-218). This design choice makes the writer_thread redundant and prevents the intended asynchronous processing of write requests, potentially impacting performance and data consistency if not properly managed.

Comment on lines +207 to +218
void ProxySQL_TSDB::writer_loop() {
	while (!stop_threads) {
		tsdb_write_request_t req;
		{
			std::unique_lock<std::mutex> lock(queue_mutex);
			queue_cv.wait(lock, [this] { return !write_queue.empty() || stop_threads; });
			if (stop_threads && write_queue.empty()) break;
			req = write_queue.front();
			write_queue.pop();
		}
		persist_point(req);
	}


critical

The writer_loop is designed to process tsdb_write_request_t from write_queue. However, the ProxySQL_TSDB::write method (which is the primary ingestion point for metrics from sampler_loop and monitor_loop) does not push to this queue. As a result, this thread will perpetually wait on queue_cv and never process any data, rendering it ineffective.

Comment on lines +401 to +404
void ProxySQL_TSDB::compactor_loop() {
	while (!stop_threads) {
		std::this_thread::sleep_for(std::chrono::minutes(10));
	}


critical

The compactor_loop is currently implemented as a placeholder that simply sleeps. This indicates that the critical functionality for data retention enforcement and file compaction is not yet implemented. Without this, the TSDB could consume excessive disk space and fail to manage historical data effectively.
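As a hedged illustration of what the missing retention logic might look like (not the PR's actual design), the sketch below deletes raw window files older than the configured retention. The `.tsdb` suffix and one-file-per-window layout mirror what `ProxySQL_TSDB::write` produces, but `enforce_retention` is an invented name and compaction of rollups is left out entirely.

```cpp
#include <cassert>
#include <chrono>
#include <filesystem>
#include <fstream>

namespace fs = std::filesystem;

// Hypothetical retention pass: delete .tsdb window files whose last write
// time is older than retention_hours. Returns the number of files removed.
int enforce_retention(const fs::path& data_dir, int retention_hours) {
    int removed = 0;
    if (!fs::exists(data_dir)) return removed;
    const auto cutoff =
        fs::file_time_type::clock::now() - std::chrono::hours(retention_hours);
    for (const auto& entry : fs::directory_iterator(data_dir)) {
        if (entry.is_regular_file() && entry.path().extension() == ".tsdb" &&
            fs::last_write_time(entry.path()) < cutoff) {
            fs::remove(entry.path());
            ++removed;
        }
    }
    return removed;
}
```

A real `compactor_loop` would call such a pass on each wake-up (instead of only sleeping) and also enforce the `max_disk_mb` budget.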

Comment on lines +444 to +446
ProxySQL_TSDB::status_t ProxySQL_TSDB::get_status() {
	return {0, 0, 0};
}


high

The get_status() method returns hardcoded zero values for series_count, disk_usage_bytes, and last_compaction_ts. This means the TSDB STATUS command in the admin interface will not provide any meaningful operational statistics, hindering troubleshooting and monitoring efforts.
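A sketch of how `get_status()` could derive real numbers by scanning the data directory. `TsdbStatus` and `compute_status` are invented names, and the one-file-per-series assumption follows the `.data` layout that `query()` reads in this PR; `last_compaction_ts` is omitted since the compactor is still a placeholder.

```cpp
#include <cassert>
#include <cstdint>
#include <filesystem>
#include <fstream>

namespace fs = std::filesystem;

// Hypothetical status values derived from the on-disk layout.
struct TsdbStatus {
    uint64_t series_count;
    uint64_t disk_usage_bytes;
};

TsdbStatus compute_status(const fs::path& data_dir) {
    TsdbStatus st{0, 0};
    if (!fs::exists(data_dir)) return st;
    for (const auto& entry : fs::directory_iterator(data_dir)) {
        if (entry.is_regular_file()) {
            st.series_count += 1;              // one file per series in the ".data" layout
            st.disk_usage_bytes += entry.file_size();
        }
    }
    return st;
}
```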

 #include "platform.h"
 #include "microhttpd.h"
-#include "curl/curl.h"
+//#include "curl/curl.h"


medium

The include for curl/curl.h is commented out. If check_latest_version_http is intentionally disabled, then this is fine. However, if curl is used elsewhere or intended to be used, this could lead to compilation issues or missing functionality.


Copilot AI left a comment


Pull request overview

This pull request introduces an embedded Time Series Database (TSDB) subsystem to ProxySQL, providing a self-contained observability solution for storing and visualizing metrics without requiring external infrastructure. The PR is marked as WIP/DONOTMERGE, indicating it is not yet ready for production use.

Changes:

  • Implementation of core TSDB functionality including metric collection, storage, and querying capabilities
  • Integration with ProxySQL's admin interface for configuration management and status reporting
  • Addition of HTTP API endpoints and a web-based dashboard UI for data visualization
  • Build system updates to compile new TSDB components and supporting documentation

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 28 comments.

Show a summary per file

  • lib/ProxySQL_TSDB.cpp — Core TSDB implementation with writer, sampler, monitor, and compactor threads
  • include/ProxySQL_TSDB.hpp — Header file defining TSDB classes, structs, and configuration
  • lib/TSDB_UI_html.cpp — Embedded HTML/JavaScript dashboard UI for metric visualization
  • lib/ProxySQL_HTTP_Server.cpp — HTTP API endpoints for TSDB status and query operations
  • lib/ProxySQL_Admin.cpp — Admin interface functions for TSDB variable management
  • lib/Admin_Handler.cpp — Command handlers for LOAD/SAVE TSDB variables
  • include/proxysql_admin.h — Header additions for TSDB admin methods
  • lib/Makefile — Build system updates to include TSDB object files
  • doc/tsdb/*.md — Documentation covering overview, quickstart, metrics catalog, and API endpoints


Comment on lines +419 to +437
std::vector<ProxySQL_TSDB::query_result_t> ProxySQL_TSDB::query(const std::string& metric, const std::map<std::string, std::string>& labels, long long from, long long to, int step, const std::string& agg) {
	std::vector<ProxySQL_TSDB::query_result_t> results;
	std::string key = get_series_key(metric, labels);
	std::string file_path = config.data_dir + "/" + key + ".data";
	std::ifstream ifs(file_path, std::ios::binary);
	if (ifs) {
		query_result_t res;
		res.labels = labels;
		tsdb_point_t pt;
		while (ifs.read(reinterpret_cast<char*>(&pt.timestamp), sizeof(pt.timestamp))) {
			ifs.read(reinterpret_cast<char*>(&pt.value), sizeof(pt.value));
			if (pt.timestamp >= from && pt.timestamp <= to) {
				res.points.push_back(pt);
			}
		}
		results.push_back(res);
	}
	return results;
}

Copilot AI Jan 23, 2026


The query function doesn't implement the step parameter for downsampling, and the agg parameter for aggregation is ignored. The function signature accepts these parameters but they're unused. Either implement the functionality or remove the unused parameters to avoid misleading the API consumers.
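One way the missing behavior could be sketched, as a hedged illustration rather than a proposed patch: bucket the scanned points into `step`-sized windows and aggregate each bucket. `Point` mirrors the PR's `tsdb_point_t`; `downsample` and its "avg"/"max" vocabulary are invented here, and the PR's actual `agg` semantics may differ.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical point type mirroring tsdb_point_t from the PR.
struct Point {
    long long timestamp; // milliseconds
    double value;
};

// Bucket points into step_ms-wide windows aligned to `from` and aggregate
// each bucket ("avg" or "max"). step_ms <= 0 keeps raw resolution, matching
// how the HTTP handler currently passes step=0.
std::vector<Point> downsample(const std::vector<Point>& pts, long long from,
                              int step_ms, const std::string& agg) {
    if (step_ms <= 0) return pts;
    std::map<long long, std::vector<double>> buckets;
    for (const auto& p : pts) {
        long long bucket_ts = from + ((p.timestamp - from) / step_ms) * step_ms;
        buckets[bucket_ts].push_back(p.value);
    }
    std::vector<Point> out;
    for (const auto& [ts, vals] : buckets) {
        double v;
        if (agg == "max") {
            v = vals[0];
            for (double x : vals) if (x > v) v = x;
        } else { // default to "avg"
            double sum = 0.0;
            for (double x : vals) sum += x;
            v = sum / vals.size();
        }
        out.push_back({ts, v});
    }
    return out;
}
```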

Comment on lines +43 to +50
"monitor_enabled",
"monitor_interval_seconds",
"monitor_connect_timeout_ms",
"monitor_ping_enabled",
"monitor_max_concurrent_probes",
// UI
"ui_enabled",
"ui_read_only",

Copilot AI Jan 23, 2026


Inconsistent variable naming between has_variable() and get/set_variable(). The tsdb_variable_names array uses underscores (e.g., "monitor_enabled", "ui_enabled") but the get_variable and set_variable methods check for hyphenated versions (e.g., "monitor-enabled", "ui-enabled"). This means has_variable("monitor_enabled") returns true but get_variable("monitor_enabled") returns NULL. The variable names in tsdb_variable_names should use hyphens to match the get/set implementations.

Suggested change
-	"monitor_enabled",
-	"monitor_interval_seconds",
-	"monitor_connect_timeout_ms",
-	"monitor_ping_enabled",
-	"monitor_max_concurrent_probes",
-	// UI
-	"ui_enabled",
-	"ui_read_only",
+	"monitor-enabled",
+	"monitor-interval-seconds",
+	"monitor-connect-timeout-ms",
+	"monitor-ping-enabled",
+	"monitor-max-concurrent-probes",
+	// UI
+	"ui-enabled",
+	"ui-read-only",

Comment on lines +49 to +61
new Chart(ctx, {
	type: 'line',
	data: {
		labels: points.map(p => new Date(p[0]).toLocaleTimeString()),
		datasets: [{
			label: label,
			data: points.map(p => p[1]),
			borderColor: '#2969a5',
			fill: false
		}]
	},
	options: { scales: { yAxes: [{ ticks: { beginAtZero: true } }] } }
});

Copilot AI Jan 23, 2026


The HTML creates new Chart instances every time fetchData is called, but never clears or destroys previous chart instances. If fetchData is called multiple times (e.g., for refresh/polling), this will create memory leaks as old Chart.js instances accumulate. Store chart instances globally and update existing charts rather than creating new ones, or destroy old instances before creating new ones.

long long from = atoll(from_s);
long long to = atoll(to_s);
std::map<std::string, std::string> labels; // TODO: parse labels from query params
auto query_results = GloTSDB->query(metric, labels, from, to, 0, "");

Copilot AI Jan 23, 2026


Missing null pointer check before calling GloTSDB->query(). If GloTSDB is NULL, this will cause a segmentation fault. Add a null check similar to the one used in the /api/tsdb/status endpoint (line 452).

Comment on lines +8753 to +8759
char *query = (char *)malloc(1024);
for (int i=0; tsdb_vars[i] != NULL; i++) {
	char *val = GloTSDB->get_variable(tsdb_vars[i]);
	if (val) {
		const char *prefix = (i < 11) ? "tsdb-" : "";
		sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], val);
		db->execute(query);

Copilot AI Jan 23, 2026


Potential SQL injection vulnerability. The value from get_variable is directly interpolated into the SQL query using sprintf without escaping. If the value contains single quotes or other SQL special characters, it could break the query or allow injection. Use parameterized queries or properly escape the value before insertion.

Suggested change
-	char *query = (char *)malloc(1024);
-	for (int i=0; tsdb_vars[i] != NULL; i++) {
-		char *val = GloTSDB->get_variable(tsdb_vars[i]);
-		if (val) {
-			const char *prefix = (i < 11) ? "tsdb-" : "";
-			sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], val);
-			db->execute(query);
+	/* Helper to escape single quotes in SQL string literals by doubling them. */
+	auto escape_sql_string = [](const char *input) -> char * {
+		if (input == NULL) {
+			return NULL;
+		}
+		size_t len = strlen(input);
+		size_t extra = 0;
+		for (size_t i = 0; i < len; i++) {
+			if (input[i] == '\'') {
+				extra++;
+			}
+		}
+		char *out = (char *)malloc(len + extra + 1);
+		if (out == NULL) {
+			return NULL;
+		}
+		size_t j = 0;
+		for (size_t i = 0; i < len; i++) {
+			out[j++] = input[i];
+			if (input[i] == '\'') {
+				out[j++] = '\'';
+			}
+		}
+		out[j] = '\0';
+		return out;
+	};
+	char *query = (char *)malloc(1024);
+	for (int i=0; tsdb_vars[i] != NULL; i++) {
+		char *val = GloTSDB->get_variable(tsdb_vars[i]);
+		if (val) {
+			const char *prefix = (i < 11) ? "tsdb-" : "";
+			char *escaped_val = escape_sql_string(val);
+			if (escaped_val) {
+				sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], escaped_val);
+				db->execute(query);
+				free(escaped_val);
+			}
Comment on lines +205 to +218
 * Currently waits on a condition variable for requests.
 */
void ProxySQL_TSDB::writer_loop() {
	while (!stop_threads) {
		tsdb_write_request_t req;
		{
			std::unique_lock<std::mutex> lock(queue_mutex);
			queue_cv.wait(lock, [this] { return !write_queue.empty() || stop_threads; });
			if (stop_threads && write_queue.empty()) break;
			req = write_queue.front();
			write_queue.pop();
		}
		persist_point(req);
	}

Copilot AI Jan 23, 2026


The writer_loop thread waits on write_queue but nothing in the codebase pushes to this queue. The write() method writes directly to disk instead of enqueuing requests. This makes the writer_thread effectively a no-op that will block indefinitely. Either write() should enqueue to write_queue instead of writing directly, or writer_loop should be removed if direct writes are intended.

Suggested change
- * Currently waits on a condition variable for requests.
- */
-void ProxySQL_TSDB::writer_loop() {
-	while (!stop_threads) {
-		tsdb_write_request_t req;
-		{
-			std::unique_lock<std::mutex> lock(queue_mutex);
-			queue_cv.wait(lock, [this] { return !write_queue.empty() || stop_threads; });
-			if (stop_threads && write_queue.empty()) break;
-			req = write_queue.front();
-			write_queue.pop();
-		}
-		persist_point(req);
-	}
+ *
+ * Currently, all writes are performed synchronously by ProxySQL_TSDB::write()
+ * and nothing enqueues to write_queue. To avoid a thread blocking indefinitely
+ * on an empty queue, this loop is a no-op and exits immediately.
+ */
+void ProxySQL_TSDB::writer_loop() {
+	// No queued writes are used in the current implementation; exit thread.
+	return;
Comment on lines +8744 to +8758
const char* tsdb_vars[] = {
	"enabled", "data_dir", "retention_hours", "sample_interval_seconds",
	"raw_window_minutes", "rollup_interval_seconds", "max_series", "max_disk_mb",
	"fsync_mode", "digest_mode", "digest_topk",
	"monitor-enabled", "monitor-interval_seconds", "monitor-connect_timeout_ms",
	"monitor-ping_enabled", "monitor-max_concurrent_probes",
	"ui-enabled", "ui-read_only",
	NULL
};
char *query = (char *)malloc(1024);
for (int i=0; tsdb_vars[i] != NULL; i++) {
	char *val = GloTSDB->get_variable(tsdb_vars[i]);
	if (val) {
		const char *prefix = (i < 11) ? "tsdb-" : "";
		sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], val);

Copilot AI Jan 23, 2026


The logic to determine which variables get the "tsdb-" prefix is fragile and error-prone. It assumes the first 11 variables in the array need the prefix, but this creates a maintenance issue if the array is reordered. Consider using a more explicit approach, such as a struct with name and prefix, or checking the actual variable name to determine if it should have a prefix.

Suggested change
-	const char* tsdb_vars[] = {
-		"enabled", "data_dir", "retention_hours", "sample_interval_seconds",
-		"raw_window_minutes", "rollup_interval_seconds", "max_series", "max_disk_mb",
-		"fsync_mode", "digest_mode", "digest_topk",
-		"monitor-enabled", "monitor-interval_seconds", "monitor-connect_timeout_ms",
-		"monitor-ping_enabled", "monitor-max_concurrent_probes",
-		"ui-enabled", "ui-read_only",
-		NULL
-	};
-	char *query = (char *)malloc(1024);
-	for (int i=0; tsdb_vars[i] != NULL; i++) {
-		char *val = GloTSDB->get_variable(tsdb_vars[i]);
-		if (val) {
-			const char *prefix = (i < 11) ? "tsdb-" : "";
-			sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i], val);
+	struct TsdbVarSpec {
+		const char* name;      // internal TSDB variable name
+		const char* db_prefix; // prefix to use for the DB variable_name
+	};
+	const TsdbVarSpec tsdb_vars[] = {
+		// TSDB variables stored in DB as "tsdb-<name>"
+		{"enabled", "tsdb-"},
+		{"data_dir", "tsdb-"},
+		{"retention_hours", "tsdb-"},
+		{"sample_interval_seconds", "tsdb-"},
+		{"raw_window_minutes", "tsdb-"},
+		{"rollup_interval_seconds", "tsdb-"},
+		{"max_series", "tsdb-"},
+		{"max_disk_mb", "tsdb-"},
+		{"fsync_mode", "tsdb-"},
+		{"digest_mode", "tsdb-"},
+		{"digest_topk", "tsdb-"},
+		// Monitor variables stored in DB without extra prefix
+		{"monitor-enabled", ""},
+		{"monitor-interval_seconds", ""},
+		{"monitor-connect_timeout_ms", ""},
+		{"monitor-ping_enabled", ""},
+		{"monitor-max_concurrent_probes", ""},
+		// UI variables stored in DB without extra prefix
+		{"ui-enabled", ""},
+		{"ui-read_only", ""},
+		{nullptr, nullptr}
+	};
+	char *query = (char *)malloc(1024);
+	for (int i = 0; tsdb_vars[i].name != nullptr; i++) {
+		char *val = GloTSDB->get_variable(tsdb_vars[i].name);
+		if (val) {
+			const char *prefix = tsdb_vars[i].db_prefix ? tsdb_vars[i].db_prefix : "";
+			sprintf(query, "INSERT OR REPLACE INTO global_variables (variable_name, variable_value) VALUES ('%s%s', '%s')", prefix, tsdb_vars[i].name, val);
Comment on lines +273 to +277
if (metric.counter.value != 0) val = metric.counter.value;
else if (metric.gauge.value != 0) val = metric.gauge.value;
else if (metric.untyped.value != 0) val = metric.untyped.value;

if (val != 0 || metric.gauge.value == 0) { // Keep 0 for gauges

Copilot AI Jan 23, 2026


Logic error in value collection. The condition val != 0 || metric.gauge.value == 0 will always be true when val is 0 and it's a gauge (because the second part evaluates to true). This means all zero-valued counters and untypeds will be skipped. The intent appears to be to keep zeros for gauges only, so the condition should be val != 0 || (val == 0 && metric has gauge type) with proper type checking.

Suggested change
-	if (metric.counter.value != 0) val = metric.counter.value;
-	else if (metric.gauge.value != 0) val = metric.gauge.value;
-	else if (metric.untyped.value != 0) val = metric.untyped.value;
-
-	if (val != 0 || metric.gauge.value == 0) { // Keep 0 for gauges
+	bool is_gauge = false;
+	// Determine metric type and value. Keep 0 only for gauges.
+	if (metric.has_counter()) {
+		val = metric.counter.value;
+	} else if (metric.has_gauge()) {
+		is_gauge = true;
+		val = metric.gauge.value; // may legitimately be 0
+	} else if (metric.has_untyped()) {
+		val = metric.untyped.value;
+	}
+	if (val != 0 || (val == 0 && is_gauge)) { // Keep 0 for gauges only
Comment on lines +231 to +234
if (ofs) {
	ofs.write(reinterpret_cast<const char*>(&req.timestamp), sizeof(req.timestamp));
	ofs.write(reinterpret_cast<const char*>(&req.value), sizeof(req.value));
}

Copilot AI Jan 23, 2026


The persist_point function also lacks error handling for file operations. Add checks to verify the ofstream opened successfully and that writes complete without errors.

Suggested change
-	if (ofs) {
-		ofs.write(reinterpret_cast<const char*>(&req.timestamp), sizeof(req.timestamp));
-		ofs.write(reinterpret_cast<const char*>(&req.value), sizeof(req.value));
-	}
+	if (!ofs.is_open()) {
+		std::cerr << "ProxySQL_TSDB: failed to open file for persisting point: " << file_path << std::endl;
+		return;
+	}
+	ofs.write(reinterpret_cast<const char*>(&req.timestamp), sizeof(req.timestamp));
+	ofs.write(reinterpret_cast<const char*>(&req.value), sizeof(req.value));
+	if (!ofs) {
+		std::cerr << "ProxySQL_TSDB: write error while persisting point to file: " << file_path << std::endl;
+	}
Comment on lines +43 to +62
fetch(`/api/tsdb/query?metric=${metric}&from=${from}&to=${to}`)
	.then(r => r.json())
	.then(data => {
		if (!data.series || data.series.length === 0) return;
		const ctx = document.getElementById(elementId).getContext('2d');
		const points = data.series[0].points;
		new Chart(ctx, {
			type: 'line',
			data: {
				labels: points.map(p => new Date(p[0]).toLocaleTimeString()),
				datasets: [{
					label: label,
					data: points.map(p => p[1]),
					borderColor: '#2969a5',
					fill: false
				}]
			},
			options: { scales: { yAxes: [{ ticks: { beginAtZero: true } }] } }
		});
	});

Copilot AI Jan 23, 2026


No error handling in the JavaScript fetch calls. If the API request fails or returns an error, the user will see no data with no indication of what went wrong. Add .catch() handlers to display error messages to the user.
