diff --git a/src/current/_includes/releases/v20.2/v20.2.13.md b/src/current/_includes/releases/v20.2/v20.2.13.md
index 1ad131fa057..6f8cbb6b93b 100644
--- a/src/current/_includes/releases/v20.2/v20.2.13.md
+++ b/src/current/_includes/releases/v20.2/v20.2.13.md
@@ -22,8 +22,8 @@ Release Date: July 12, 2021

Bug fixes

-- Fixed a bug which prevented the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) from producing plans with [partial indexes](https://www.cockroachlabs.com/docs/v21.1/partial-indexes) when executing some prepared statements that contained placeholders, stable functions, or casts. This bug was present since partial indexes were added in v20.2.0. [#66641][#66641]
-- Fixed a panic that could occur in the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) when executing a prepared plan with placeholders. This could happen when one of the tables used by the query had [computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns) or a [partial index](https://www.cockroachlabs.com/docs/v21.1/partial-indexes). [#66832][#66832]
+- Fixed a bug which prevented the optimizer from producing plans with partial indexes when executing some prepared statements that contained placeholders, stable functions, or casts. This bug was present since partial indexes were added in v20.2.0. [#66641][#66641]
+- Fixed a panic that could occur in the optimizer when executing a prepared plan with placeholders. This could happen when one of the tables used by the query had computed columns or a partial index. [#66832][#66832]
 - Fixed a bug that caused graceful drain to call `time.sleep` multiple times, which cut into the time needed for range [lease transfers](https://www.cockroachlabs.com/docs/v20.2/architecture/replication-layer#leases). [#66852][#66852]
 - CockroachDB now allows a node with [lease](https://www.cockroachlabs.com/docs/v20.2/architecture/replication-layer#leases) preferences to drain gracefully. [#66714][#66714]
 - CockroachDB now avoids interacting with [decommissioned nodes](https://www.cockroachlabs.com/docs/v20.2/remove-nodes) during [DistSQL](https://www.cockroachlabs.com/docs/v20.2/architecture/sql-layer#distsql) planning and consistency checking. [#66951][#66951]
diff --git a/src/current/_includes/releases/v21.1/v21.1.0-alpha.1.md b/src/current/_includes/releases/v21.1/v21.1.0-alpha.1.md
deleted file mode 100644
index 7031324f21b..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.0-alpha.1.md
+++ /dev/null
@@ -1,519 +0,0 @@
-## v21.1.0-alpha.1
-
-Release Date: December 8, 2020
-
-
-

Backward-incompatible changes

-
-- RocksDB can no longer be used as the storage engine. Passing in `--storage-engine=rocksdb` now returns an error. [#55509][#55509] {% comment %}doc{% endcomment %}
-- Rows containing empty arrays in [`ARRAY`](https://www.cockroachlabs.com/docs/v21.1/array) columns are now contained in [GIN indexes](https://www.cockroachlabs.com/docs/v21.1/inverted-indexes). This change is backward-incompatible because prior versions of CockroachDB will not be able to recognize and decode keys for empty arrays. Note that rows containing `NULL` values in an indexed column will still not be included in GIN indexes. [#55970][#55970] {% comment %}doc{% endcomment %}
-- Concatenation between a non-null argument and a null argument is now typed as string concatenation, whereas it was previously typed as array concatenation. This means that the result of `NULL || 1` will now be `NULL` instead of `{1}`. To preserve the old behavior, the null argument can be cast to an explicit type. [#55611][#55611] {% comment %}doc{% endcomment %}
-
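The concatenation typing change can be sketched in a short session (behavior as described in the note above; results are illustrative):

```sql
-- Now typed as string concatenation, so the NULL propagates:
SELECT NULL || 1;         -- NULL (previously {1})

-- Cast the null argument explicitly to keep the old array-concatenation behavior:
SELECT NULL::INT[] || 1;  -- {1}
```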

General changes

-
-- Added increased logging and metrics around slow disk operations. [#54215][#54215] {% comment %}doc{% endcomment %}
-- CockroachDB now detects stalled disk operations better and crashes the process if a disk operation takes longer than a minute. Added cluster settings to allow for tuning of this behavior. [#55186][#55186] {% comment %}doc{% endcomment %}
-- Added some metrics surrounding schema changes. [#54855][#54855] {% comment %}doc{% endcomment %}
-- Upgraded CockroachDB's version of Go to v1.15.4. [#56363][#56363] {% comment %}doc{% endcomment %}
-- The timezone data is now built into the CockroachDB binary, which serves as the fallback source of time zone data if `tzdata` is not located by the default Go standard library. [#56634][#56634] {% comment %}doc{% endcomment %}
-- Renamed instances of "Admin UI" to "DB Console" in the documentation of OIDC cluster settings. [#56869][#56869]
-- Included `tar` in Docker images. This allows users to use `kubectl cp` on 20.2.x containers. [#57241][#57241]
-
-

Enterprise edition changes

-
-- Widening an incremental-backup chain by including new, complete, empty databases is no longer allowed. [#54329][#54329] {% comment %}doc{% endcomment %}
-- Added cluster settings to enable/disable the `BACKUP` and `RESTORE` commands. Attempts to use these features while they are disabled return an error indicating that the database administrator has disabled the feature. Example usage: `SET CLUSTER SETTING feature.backup.enabled = FALSE; SET CLUSTER SETTING feature.backup.enabled = TRUE; SET CLUSTER SETTING feature.restore.enabled = FALSE; SET CLUSTER SETTING feature.restore.enabled = TRUE;`. [#56533][#56533] {% comment %}doc{% endcomment %}
-- Added cluster settings to enable/disable the `IMPORT`, `EXPORT`, and changefeed commands. Attempts to use these features while they are disabled return an error indicating that the database administrator has disabled the feature. Example usage: `SET CLUSTER SETTING feature.import.enabled = FALSE; SET CLUSTER SETTING feature.import.enabled = TRUE; SET CLUSTER SETTING feature.export.enabled = FALSE; SET CLUSTER SETTING feature.export.enabled = TRUE; SET CLUSTER SETTING feature.changefeed.enabled = FALSE; SET CLUSTER SETTING feature.changefeed.enabled = TRUE;`. [#56872][#56872] {% comment %}doc{% endcomment %}
-
-
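A minimal sketch of the feature-flag workflow described above (the exact error text is paraphrased, not quoted from the source):

```sql
-- Disable BACKUP cluster-wide:
SET CLUSTER SETTING feature.backup.enabled = FALSE;

-- Any subsequent BACKUP now fails with an error indicating that the
-- database administrator has disabled the feature:
BACKUP DATABASE defaultdb INTO 'nodelocal://1/backups';

-- Re-enable when done:
SET CLUSTER SETTING feature.backup.enabled = TRUE;
```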

SQL language changes

-
-- Interleaved joins have been removed; merge joins are now planned in all cases when interleaved joins would have been planned previously. [#54163][#54163] {% comment %}doc{% endcomment %}
-- It is now possible to create partial GIN indexes. The optimizer will choose to scan partial GIN indexes when the partial index predicate is implied and scanning the GIN index has the lowest estimated cost. [#54376][#54376] {% comment %}doc{% endcomment %}
-- `EXPLAIN ANALYZE` diagrams now contain "bytes sent" information on streams. [#54518][#54518] {% comment %}doc{% endcomment %}
-- Implemented the geometry built-in functions `ST_Rotate({geometry, float8, geometry})`. [#54610][#54610]
-- Implemented the geometry built-in function `ST_ClosestPoint()`. [#54843][#54843]
-- Implemented the string built-in function `unaccent()`. [#54628][#54628]
-- When enterprise features are not enabled, the `follower_read_timestamp()` function now returns `(statement_time - 4.8s)` instead of an error. [#54951][#54951] {% comment %}doc{% endcomment %}
-- Added a new virtual table `crdb_internal.invalid_descriptors`, which runs validations on descriptors in the database context and reports any errors. [#54017][#54017]
-- Implemented the built-in operator `jsonb_exists_any(jsonb, text[])`. [#55172][#55172]
-- Added the ability to optionally specify the `PRIVILEGES` keyword when issuing the `GRANT ALL` or `REVOKE ALL` statements, for Postgres compatibility. Previously, a statement like the following would fail with a syntax error: `GRANT ALL PRIVILEGES ON DATABASE a TO user1;`. [#55304][#55304] {% comment %}doc{% endcomment %}
-- Implemented the built-in function `pg_column_size()`, which counts the number of bytes stored by a column. [#55312][#55312]
-- Implemented the geometry built-in functions `ST_Rotate({geometry, float8, float8, float8})`. [#55428][#55428]
-- Implemented the built-in function `ST_MemSize()`, which returns the memory space a geometry takes. [#55416][#55416]
-- `SHOW ENUMS` now returns an array aggregation of enum values instead of having them separated by the `|` character. [#55386][#55386]
-- Implemented the geometry built-in function `ST_PointInsideCircle()`. [#55464][#55464]
-- Implemented the geometry built-in function `ST_LineFromEncodedPolyline()`. [#55429][#55429]
-- Implemented the geometry built-in function `ST_AsEncodedPolyline()`. [#55515][#55515]
-- Implemented the geometry built-in function `ST_LineLocatePoint()`, which computes the fraction of the line segment that represents the location of the given point to the closest point on the original line. [#55470][#55470]
-- Implemented the `REASSIGN OWNED BY ... TO ...` statement, which changes ownership of all database objects in the current database, owned by any roles in the first argument, to the new role in the second argument. [#54594][#54594]
-- Implemented the geometry built-in function `ST_MinimumBoundingRadius()`, which returns a record containing the center point and radius of the smallest circle that can fully contain a geometry. [#55532][#55532]
-- The vectorized engine now supports the JSONFetchValPath (`#>`) operator. [#55570][#55570]
-- Added the ability to cast a string containing all integers into a given regtype, e.g., `'1234'::regproc`. [#55607][#55607]
-- Potentially hazardous `DROP COLUMN` operations when `sql_safe_updates` is enabled now return a note and link to https://github.com/cockroachdb/cockroach/issues/46541. [#55248][#55248] {% comment %}doc{% endcomment %}
-- Changed the `SPLIT AT` output to be consistent with the output from `SHOW RANGES`. [#55543][#55543] {% comment %}doc{% endcomment %}
-- Implemented the geometry built-in function `ST_MinimumBoundingCircle()`, which returns a polygon shape that approximates the minimum bounding circle containing the geometry. [#55567][#55567]
-- Added the constraint name to constraint errors, for increased wire-level Postgres compatibility. [#55660][#55660] {% comment %}doc{% endcomment %}
-- Added support for using the syntax `... UNIQUE WITHOUT INDEX ...` in `CREATE TABLE` and `ALTER TABLE` statements, both when defining columns and unique constraints. Although this syntax can now be parsed successfully, using this syntax currently returns an error: `unique constraints without an index are not yet supported`. [#55700][#55700] {% comment %}doc{% endcomment %}
-- Added the built-in functions `sha224()` and `sha384()`. [#55720][#55720]
-- Implemented `SHOW REGIONS`, which returns all of the regions available in a cluster. [#55831][#55831] {% comment %}doc{% endcomment %}
-- Implemented the geography built-in function `ST_UnaryUnion()`. [#55894][#55894]
-- Implemented `ALTER TABLE ... SET LOCALITY/REGIONAL AFFINITY` statements, which configure multi-region properties of given tables. These are subject to change. [#55827][#55827] {% comment %}doc{% endcomment %}
-- All expressions in `EXPLAIN` output that operate on indexes now show the table name the index is declared on, rather than just an alias. If the query aliases the table, the alias is also shown. For example, a scan on table `foo` that is aliased as `f` was previously displayed as `scan f`. It is now displayed as `scan foo [as=f]`. [#55641][#55641] {% comment %}doc{% endcomment %}
-- Changed the underlying type for the version cluster setting. Previously, it was of an internal type representing "state machine", but now it's simply "version". This has no operational implications, but the `Type` column in `cockroach gen settings-list` now shows "version" instead of "custom validation". [#55994][#55994],[#56546][#56546] {% comment %}doc{% endcomment %}
-- Removed the `201auto` value for the `vectorize` session variable and the corresponding cluster setting. [#55907][#55907] {% comment %}doc{% endcomment %}
-- Expanded the `CREATE SCHEMA`, `DROP SCHEMA`, `ALTER SCHEMA`, `GRANT ... ON SCHEMA`, `REVOKE ... ON SCHEMA`, and `SHOW GRANTS ON SCHEMA` statements to allow schema names prefixed with database names. [#55647][#55647] {% comment %}doc{% endcomment %}
-- Added support for dollar-quoted strings with digits. [#55958][#55958] {% comment %}doc{% endcomment %}
-- Added a new single-column output format for `EXPLAIN` and `EXPLAIN (VERBOSE)`. [#55866][#55866] {% comment %}doc{% endcomment %}
-- Creating multi-column GIN indexes is now allowed by setting the `experimental_enable_mutlti_column_inverted_indexes` session setting to `true`. At this time, these indexes are not fully supported and their behavior is undefined. Using this feature will likely result in errors. Do not enable this setting in a production database. [#55993][#55993] {% comment %}doc{% endcomment %}
-- Constraints that have not been validated are now marked "NOT VALID" in the output of `SHOW CREATE` and `SHOW CONSTRAINTS`. [#53485][#53485] {% comment %}doc{% endcomment %}
-- The `NOT VALID` option can now be provided for `CHECK` and `FOREIGN KEY` constraints listed as table constraints in `CREATE TABLE` statements. This option has no effect on the constraint created. It will not skip validation. [#53485][#53485] {% comment %}doc{% endcomment %}
-- Added a pgcode (`42704`, `undefined_object`) to the error returned when attempting to drop an index by a table and index name that doesn't exist. [#55417][#55417]
-- Added the `WITH row_limit="{$num}"` option for importing CSVs to allow users to do a quick test run on an import of `$num` rows. Example: `IMPORT TABLE test ... CSV DATA ... WITH row_limit="3";` [#56080][#56080] {% comment %}doc{% endcomment %}
-- Added the `WITH row_limit="{$num}"` option for importing `DELIMITED/AVRO` data to allow users to do a quick test run on an import of `$num` rows. Example: `IMPORT TABLE test ... DELIMITED/AVRO DATA ... WITH row_limit="3";` [#56135][#56135] {% comment %}doc{% endcomment %}
-- Added the `WITH row_limit="{$num}"` option for importing bundle formats to allow users to do a quick test run on an import of `$num` rows. Example: `IMPORT ... WITH row_limit="3";`. [#56587][#56587] {% comment %}doc{% endcomment %}
-- `EXPLAIN ANALYZE` diagrams now contain "network latency" information on streams. [#55705][#55705] {% comment %}doc{% endcomment %}
-- Implemented the `covar_pop()` and `covar_samp()` aggregation functions. [#55707][#55707]
-- Prevented column type modification of columns that are depended on by views. [#56213][#56213]
-- Implemented the geometry built-in functions `ST_TransScale({geometry,float8,float8,float8,float8})`. [#56198][#56198]
-- Implemented the geometry built-in function `ST_Node()`. [#56183][#56183]
-- The concatenation operator `||` can now be used between strings and any other non-array types. [#55611][#55611]
-- CockroachDB now returns a float instead of a decimal when at least one argument of an aggregate function is decimal. [#56296][#56296]
-- Implemented the `regr_intercept()`, `regr_r2()`, and `regr_slope()` aggregation functions. [#56296][#56296]
-- `EXPLAIN ANALYZE` diagrams now show "deserialization time" on streams instead of "io time". [#56144][#56144]
-- Added a pgcode (`42804`, `DatatypeMismatch`) when adding a default value of the wrong type to a column. [#56455][#56455]
-- Attempting to rename an undefined index now returns a `pgcode.UndefinedObject` (`42704`) error instead of an uncategorized error. [#56455][#56455]
-- Implemented the `regr_sxx()`, `regr_sxy()`, and `regr_syy()` aggregation functions. [#56585][#56585]
-- `SHOW REGIONS` has changed the column name for availability zones to "zones" from "availability_zones". [#56344][#56344] {% comment %}doc{% endcomment %}
-- Introduced a `pg_collation` of "default". Strings now return the "default" collation OID in the `pg_attribute` table (this was previously `en_US`). The "default" collation is also visible on the `pg_collation` virtual table. [#56598][#56598]
-- A table can now successfully be dropped in a transaction following other schema changes to the table in the same transaction. [#56589][#56589] {% comment %}doc{% endcomment %}
-- Added a new variant of explain: `EXPLAIN ANALYZE (PLAN)`. [#56524][#56524] {% comment %}doc{% endcomment %}
-- `SHOW REGIONS` functionality is now deferred to `SHOW REGIONS FROM CLUSTER`. [#56627][#56627] {% comment %}doc{% endcomment %}
-- It is now possible to hint to the optimizer that it should plan an inverted join by using the syntax `... INNER INVERTED JOIN ...` or `... LEFT INVERTED JOIN ...`. If the hint is provided and it is possible to plan an inverted join, the optimizer will now plan an inverted join, even if it estimates that a different plan would have a lower cost. If the hint is provided but it is not possible to plan an inverted join because there is no GIN index on the right side table or the join condition is not a valid inverted join condition, the database will return an error. [#55679][#55679] {% comment %}doc{% endcomment %}
-- Added the empty `pg_catalog.pg_opclass` table to improve compatibility with Postgres. [#56653][#56653]
-- `SURVIVE AVAILABILITY ZONE FAILURE` is now `SURVIVE ZONE FAILURE`. [#56837][#56837] {% comment %}doc{% endcomment %}
-- Added admin-only, `crdb_internal` functions to enable descriptor repair in dire circumstances. [#55699][#55699]
-- Added support for an optional `=` character for `SURVIVE`, e.g., `ALTER DATABASE d SURVIVE = ZONE FAILURE`. [#56881][#56881] {% comment %}doc{% endcomment %}
-- Introduced stubs for `ALTER DATABASE ... PRIMARY REGION` and `CREATE TABLE ... PRIMARY REGION`. [#56883][#56883] {% comment %}doc{% endcomment %}
-- Dropping the primary index using `DROP INDEX` now returns a `FeatureNotSupported` error along with hints showing supported ways to drop primary indexes. [#56858][#56858]
-- Renaming an index to a name that is already being used for another index will now return a `pgcode.DuplicateRelation` (`42P07`) error instead of an uncategorized error. [#56681][#56681]
-- The `relpersistence` column in `pg_catalog.pg_class` now correctly displays `t` as the persistence status for temporary tables. [#56827][#56827]
-- Added a deprecation notice to statements containing the `INTERLEAVE IN PARENT` clause. [#56874][#56874]
-- `SHOW DATABASES` and `crdb_internal.databases` now display all regions as well as survival goals for a given database. [#56880][#56880] {% comment %}doc{% endcomment %}
-- Added a feature flag via cluster settings for the `CREATE STATISTICS` and `ANALYZE` feature. If a user attempts to use the command while disabled, an error indicating that the database administrator has disabled the feature is surfaced. Example usage: `SET CLUSTER SETTING feature.stats.enabled = FALSE; SET CLUSTER SETTING feature.stats.enabled = TRUE;`. [#57076][#57076] {% comment %}doc{% endcomment %}
-- `CREATE DATABASE ... PRIMARY REGION` is now stored on the database descriptor. [#57038][#57038]
-- `SHOW DATABASES` and `crdb_internal.databases` now display the `PRIMARY REGION` set on the database descriptor as the `primary_region` column. [#57038][#57038] {% comment %}doc{% endcomment %}
-- Added a feature flag via cluster settings for all schema change-related features. If a user attempts to use these features while they are disabled, an error indicating that the database administrator has disabled the feature is surfaced. Example usage: `SET CLUSTER SETTING feature.schema_change.enabled = FALSE; SET CLUSTER SETTING feature.schema_change.enabled = TRUE;`. [#57040][#57040] {% comment %}doc{% endcomment %}
-- Changed `pg_constraint` column types for `confkey` and `conkey` to `smallint[]` to improve compatibility with Postgres. [#56975][#56975]
-- The `ALTER TABLE...SPLIT/UNSPLIT` and `ALTER INDEX...SPLIT/UNSPLIT` commands are now gated by a schema change feature flag. If a user attempts to use these features while they are disabled, an error indicating that the system administrator has disabled the feature is surfaced. Example usage: `SET CLUSTER SETTING feature.schema_change.enabled = FALSE; SET CLUSTER SETTING feature.schema_change.enabled = TRUE;`. [#57142][#57142] {% comment %}doc{% endcomment %}
-- When creating a database with the regions clause specified, CockroachDB now creates a `regions` enum type automatically. [#56628][#56628] {% comment %}doc{% endcomment %}
-- Implemented `SHOW REGION FROM DATABASE` and `SHOW REGION FROM DATABASE db`, which show all regions for the given database, as well as whether each region is the primary region. [#57106][#57106] {% comment %}doc{% endcomment %}
-- `CREATE TABLE AS SELECT ... FROM ... AS OF SYSTEM TIME x` is now supported. [#55916][#55916]
-- Implemented the function `regr_count()`. [#56822][#56822]
-- Added the `character_sets` table to the `information_schema`. [#56953][#56953] {% comment %}doc{% endcomment %}
-- `SHOW ENUMS` is now extended to take an optional `FROM` clause. The user can specify either the schema name or both the database name and schema name separated by `.`. If a hierarchy is specified, the statement returns enums falling in that hierarchy rather than all of the enums in the current database. [#57197][#57197] {% comment %}doc{% endcomment %}
-- The multi-region enum, created implicitly for all multi-region databases, can be introspected using the `pg_catalog.pg_enum` table. It is also displayed in `SHOW ENUMS`. [#57197][#57197] {% comment %}doc{% endcomment %}
-- Implemented the geography built-in function `ST_Subdivide()`. [#56898][#56898]
-- A `pgcode.UndefinedColumn` error is now returned when adding a unique constraint to one or more undefined columns. [#57316][#57316]
-- The database name is now displayed in `SHOW REGIONS FROM DATABASE`. [#57278][#57278] {% comment %}doc{% endcomment %}
-- Added `SHOW SURVIVAL GOAL FROM DATABASE [database]`, which shows the survival goal for a multi-region database. [#57278][#57278] {% comment %}doc{% endcomment %}
-- Added the `uuid_generate_v4()` built-in function. It works exactly like `gen_random_uuid()` but was added for compatibility with Postgres versions older than PG13. [#57212][#57212]
-
-
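The inverted-join hint described above can be sketched as follows (table and column names are hypothetical; the point is the `INNER INVERTED JOIN` syntax and the GIN index it requires on the right-hand table):

```sql
-- Hypothetical schema: docs carries a GIN (inverted) index on its JSONB column.
CREATE TABLE filters (id INT PRIMARY KEY, filter JSONB);
CREATE TABLE docs (id INT PRIMARY KEY, payload JSONB, INVERTED INDEX (payload));

-- Hint the optimizer to plan an inverted join, even if it estimates that a
-- different plan would have a lower cost. Without a GIN index on docs.payload,
-- this statement would return an error instead.
SELECT f.id, d.id
FROM filters AS f
INNER INVERTED JOIN docs AS d ON d.payload @> f.filter;
```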

Command-line changes

-
-- A `debug.zip` file now includes a script, `hot-ranges.sh`, which will summarize the hottest ranges in the cluster. [#53547][#53547] {% comment %}doc{% endcomment %}
-- `cockroach sql` and `cockroach demo` now support the command-line parameter `--file` (shorthand `-f`) to read commands from a named file. The behavior is the same as if the file was redirected on the standard input; in particular, the processing stops at the first error encountered (which is different from interactive usage with a prompt). Note that it is not yet possible to combine `-f` with `-e`. [#54741][#54741] {% comment %}doc{% endcomment %}
-- The large banner message "Replication has been disabled for this cluster ..." that was unconditionally emitted on the standard error stream for `cockroach start-single-node` has now become a simple log message at severity `INFO`. [#54749][#54749]
-- `cockroach demo` now pre-creates a `demo` user account with a random password to discourage the use of `root`. The `demo` account is currently granted the `admin` role. [#54749][#54749] {% comment %}doc{% endcomment %}
-- The CLI help text for `--max-disk-temp-storage` now properly reports the default value. [#54853][#54853]
-- The help text displayed by `\?` in `cockroach sql` and `cockroach demo` now groups the recognized client-side commands into sections for easier reading. [#54796][#54796]
-- The client-side command `\show` for the SQL shell is deprecated in favor of the new command `\p`. This prints the contents of the query buffer entered so far. [#54796][#54796] {% comment %}doc{% endcomment %}
-- The new client-side command `\r` for the SQL shell erases the contents of the query buffer entered so far. This provides a convenient way to reset the input, for example, when the user gets themselves confused with string delimiters. [#54796][#54796] {% comment %}doc{% endcomment %}
-- The SQL shell (`cockroach sql`, `cockroach demo`) now supports the client-side command `\echo`, like `psql`. This can be used, for example, to generate informational output when executing SQL scripts non-interactively. [#54796][#54796] {% comment %}doc{% endcomment %}
-- The SQL shell (`cockroach sql`, `cockroach demo`) now supports the `\i` and `\ir` client-side commands, which read a SQL file and evaluate its content in place. `\ir` differs from `\i` in that the file name is resolved relative to the location of the script containing the `\ir` command. This makes `\ir` likely more desirable in the general case. Instances of `\q` inside a file included via `\i`/`\ir` stop evaluation of the file and resume evaluation of the file that included it. This feature is compatible with the identically named `psql` commands. It is meant to help compose complex initialization scripts from a library of standard components. For example, one could be defining each table and its initial contents in separate SQL files, and then use different super-files to include different tables depending on the desired final schema. [#54796][#54796] {% comment %}doc{% endcomment %}
-- Removed the `debug sstables` command, superseded by the `debug pebble lsm` command. [#54890][#54890]
-- Added the `cockroach debug pebble db checkpoint` debug command to easily create a checkpoint without using RocksDB. [#55751][#55751] {% comment %}doc{% endcomment %}
-- Updated the `--storage-engine` help text to reflect RocksDB deletion. [#55509][#55509]
-- Added support for `\connect DATABASE` and `\c DATABASE`. [#55934][#55934] {% comment %}doc{% endcomment %}
-- Added an import CLI command that allows users to upload and import local dump files into a running CockroachDB cluster. `PGDUMP` and `MYSQLDUMP` formats are currently supported. [#54896][#54896] {% comment %}doc{% endcomment %}
-- `cockroach demo` now allows for nodes to be added using the `\demo` client-side command. This works in both single node and multi-node configurations, for example, when started with `--nodes int` or `--geo-partitioned-replicas`. [#56344][#56344] {% comment %}doc{% endcomment %}
-- Some specific situations now have dedicated exit status codes. The following codes are defined:
-
-  Code | Description
-  -----|------------
-  0 | Process terminated without error.
-  1 | An unspecified error was encountered. Explanation should be present in the stderr or logging output.
-  2 | Go runtime error, or uncaught panic. Likely a bug in CockroachDB. Explanation may be present in logging output.
-  3 | Server process interrupted gracefully with Ctrl+C / SIGINT.
-  4 | Command-line flag error.
-  5 | A logging operation to the process' stderr stream failed (e.g., stderr has been closed). Some details may be present in the file output, if enabled.
-  6 | A logging operation to file has failed (e.g., log disk full, no inodes, permission issue, etc.). Some details may be present in the stderr stream.
-  7 | Server detected an internal error and triggered an emergency shutdown.
-  8 | Logging failed while processing an emergency shutdown.
-
-  [#56574][#56574]
-- `cockroach demo` now tries to use the same TCP port numbers for the SQL and HTTP servers on every invocation. This is meant to simplify documentation. These defaults can be overridden with the new (`demo`-specific) command line flags `--sql-port` and/or `--http-port`. [#56737][#56737] {% comment %}doc{% endcomment %}
-- The SQL shell now accepts `yes`/`no` as boolean options for slash commands, following `psql` behavior. [#56829][#56829] {% comment %}doc{% endcomment %}
-- A `\x [on|off]` command has been added to toggle the `records` display format, following `psql` behavior. [#56829][#56829] {% comment %}doc{% endcomment %}
-- CockroachDB now prints a warning if the `--locality` flag does not contain a "region" tier. [#57179][#57179] {% comment %}doc{% endcomment %}
-
-
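The exit-status table above can be consumed from a supervising script. A minimal sketch (the `describe_exit` helper and the `sh -c 'exit 3'` stand-in for a real `cockroach` invocation are hypothetical, not part of the CLI):

```shell
#!/bin/sh
# Hypothetical helper: map a process exit status to the documented meaning,
# using the codes from the table above.
describe_exit() {
  case "$1" in
    0) echo "terminated without error" ;;
    1) echo "unspecified error" ;;
    2) echo "Go runtime error or uncaught panic" ;;
    3) echo "interrupted gracefully (SIGINT)" ;;
    4) echo "command-line flag error" ;;
    5|6) echo "logging failure" ;;
    7) echo "internal error, emergency shutdown" ;;
    8) echo "logging failed during emergency shutdown" ;;
    *) echo "unknown exit status $1" ;;
  esac
}

# Stand-in for a real `cockroach start ...` invocation:
sh -c 'exit 3'
describe_exit "$?"   # prints: interrupted gracefully (SIGINT)
```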

API endpoint changes

-
-- Added a new Prometheus metric called `seconds_until_license_expiry` that reports the number of seconds until the enterprise license on the cluster expires, a negative number if the expiration is in the past, or `0` if there is no license. [#55565][#55565] {% comment %}doc{% endcomment %}
-
-

DB Console changes

-
-- Changed the view for tables without data on main pages. [#54943][#54943]
-- Updated the design of the custom date range selector on the Cluster > Maps view and Metrics pages. [#54851][#54851]
-- The DB Console now shows messages provided by the server instead of custom generic messages defined on the client side, for example, messages about permission restrictions on pages for non-admin roles. [#50869][#50869]
-- Added a new metric called `raft.scheduler.latency`, which monitors the latency for operations to be picked up and processed by the Raft scheduler. [#56943][#56943]
-- Redesigned the inline error alerts shown when a user has insufficient rights to see some resources. [#50869][#50869]
-- Implemented a permission-denied error for non-admin users on the Node Map and Events views. [#50869][#50869]
-- Tables within user-defined schemas are now included in the Data Distribution page. [#56388][#56388] {% comment %}doc{% endcomment %}
-- Creating, dropping, and altering roles or users now causes events to be logged and displayed. [#55945][#55945]
-- If statement diagnostics are enabled for a statement, the bytes sent over the network are now shown. [#55969][#55969]
-- `ALTER DATABASE OWNER` and `CONVERT TO SCHEMA` now cause events to be logged and displayed. [#55891][#55891]
-- Changing schema objects now causes events to be logged and displayed. [#55785][#55785]
-- Changing privileges (i.e., with `GRANT` or `REVOKE`) now causes events to be logged and displayed. [#55612][#55612]
-- Renaming databases or tables now causes events to be logged and displayed. [#55269][#55269]
-- Added descriptions for failed jobs on the Job Details page. [#54268][#54268] {% comment %}doc{% endcomment %}
-
-

Bug fixes

- -- Fixed the `rpath` and `so` names of `libgeos.so` and `libgeos_c.so` such that a `dlopen` to `libgeos.so` is not needed. [#55129][#55129] -- Made lease transfers during rebalancing adhere to the rate limit utilized in other lease transfer cases which eliminates unexpected lease oscillations when adding a new node. [#54322][#54322] -- CockroachDB now handles PostgreSQL "cancel" messages on TLS connections in the same way as when they are sent without TLS: the connection is closed, but no action takes place. No error is logged. As a reminder, PostgreSQL "cancel" messages are still unsupported in CockroachDB and client should still use `CANCEL QUERY` instead. [#54618][#54618] -- Cleared table statistics from job description in failed and canceled backup jobs. [#54446][#54446] -- Fixed an error message that referred to `experimental_enable_hash_sharded_indexes` as a cluster setting when it is in fact a session variable. [#54960][#54960] -- Fixed a nil pointer error that could occur at planning time for some spatial queries when a GIN index existed on a geometry or geography column. [#55076][#55076] -- Fixed `SHOW ENUMS` column names to have `values` instead of `string_agg` for column names, and `owner` for the owner itself. [#55139][#55139] -- Fixed `SHOW TYPES` to show the `owner` column instead of the `value` column. [#55139][#55139] -- Fixed a bug where empty enums did not show up for `SHOW ENUMS` or `SHOW TYPES`. [#55143][#55143] -- Fixed a bug where, on failure of `CREATE TABLE AS` or `CREATE MATERIALIZED VIEW`, tables would be left in an invalid non-public state until GC instead of being marked as dropped, possibly causing spurious validation failures. The bug was introduced in earlier 20.2 pre-releases. [#55272][#55272] -- Fixed a rare scenario in which a node would refuse to start after updating the binary. The log message would indicate: `store [...], last used with cockroach version [...], is too old for running version [...] 
(which requires data from [...] or later)`. [#55240][#55240] -- Changefeeds defined on multiple tables will now only backfill affected tables after a schema change. [#55135][#55135] -- Fixed a bug where adding child tables or types to a schema being restored would cause those new child objects to become corrupted with no parent schema if the restore job had to be rolled back. [#55157][#55157] -- Fixed a bug where the seconds component of a zone offset of a `TIMESTAMPTZ` value was not displayed. [#55071][#55071] -- Fixed a bug where casts to regclass were not escaped, e.g., when the type or table name had `"` characters. [#55607][#55607] -- Fixed a bug where casts from string to regproc, regtype or regprocedure would not work if they contained `"` characters at the beginning or at the end. [#55607][#55607] -- Fixed a bug which could cause `IMPORT`, `BACKUP`, or `RESTORE` to experience an error when they occur concurrently to when the cluster sets its version to upgraded. [#55524][#55524] -- Fixed a rare crash during tracing when reading un-decodable data. [#55783][#55783] -- Prevented a crash, introduced in the 20.2 series, caused by range scans over virtual tables with virtual indexes. [#56459][#56459] -- In some cases CockroachDB, would attempt to transfer ranges to nodes in the process of being decommissioned or being shut down; this could cause disruption the moment the node did actually terminate. This bug has been fixed. It had been introduced some time before v2.0. [#55808][#55808] -- Fixed a bug causing queries sent to a freshly-restarted node to sometimes hang for a long time while the node catches up with replication. [#55148][#55148] -- Fixed typing of collated strings so that collation names are case-insensitive and hyphens/underscores are interchangeable. [#56352][#56352] -- Fixed internal errors related to very large `LIMIT` and/or `OFFSET` values. [#56672][#56672] -- Improved the accuracy of reported CPU usage when running in containers. 
[#56461][#56461] -- Fixed a bug which could lead to canceled schema change jobs ending in the failed rather than canceled state. [#55513][#55513] -- Prevented an opportunity for livelock in the jobs subsystem due to frequent updates to already finished jobs. [#56855][#56855] -- The `LogFile` reserved API, which was used by `cockroach debug zip`, could corrupt log entries. This has been fixed. [#56901][#56901] -- `DELETE` statements no longer have a chance of returning an incorrect number of deleted rows in transactions that will eventually need to restart due to contention. [#56458][#56458] -- Fixed a race condition in the `tpcc` workload with the `--scatter` flag where tables could be scattered multiple times or not at all. [#56942][#56942] -- Fixed a bug where reg* types were not parsed properly over pgwire, in `COPY`, or in prepared statements. [#56298][#56298] -- Previously, casts and parsing of strings to types could allow an out-of-bounds value to be used successfully (e.g., `SELECT -232321321312::int2`) but fail with an out-of-bounds message when the value was inserted into a table. The bounds are now checked when the value is parsed or cast. [#55095][#55095] -- `cockroach debug merge-logs --redact=true --redactable-output=false` now properly removes redaction markers. [#57121][#57121] -- Fixed a bug related to validation of un-upgraded pre-19.2 inbound foreign keys. [#57132][#57132] -- Creating a materialized view that references a column with a NULL value no longer results in an error. [#57139][#57139] -- `ST_GeomFromGeoJSON` now sets the SRID to 4326, matching PostGIS 3.0 / RFC7946 behavior. [#57152][#57152] -- Fixed a bug that caused an "ambiguous column reference" error during foreign key cascading updates. 
This error was incorrectly produced when the child table's reference column name was equal to the concatenation of the parent's reference column name and "_new", and when the child table had a `CHECK` constraint, computed column, or partial index predicate expression that referenced the column. This bug was introduced in version 20.2. [#57153][#57153] -- Fixed a bug that caused errors or corrupted partial indexes of child tables in foreign key relationships with cascading `UPDATE`s and `DELETE`s. The corrupt partial indexes could result in incorrect query results. Any partial indexes on child tables of foreign key relationships with `ON DELETE CASCADE` or `ON UPDATE CASCADE` actions may be corrupt and should be dropped and re-created. This bug was introduced in version 20.2. [#57100][#57100] -- The seconds component of timezone offsets for `TIMETZ` values is now correctly displayed over the PostgreSQL wire protocol; it was previously omitted. [#57265][#57265] -- `SELECT FOR UPDATE` now requires both `SELECT` and `UPDATE` privileges, instead of just `UPDATE` privileges. [#57309][#57309] -- Fixed a bug where users of an OSS build of CockroachDB would see "Page Not Found" when loading the DB Console. [#56591][#56591] - -
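To illustrate the out-of-range cast fix above ([#55095][#55095]), a value outside the target type's bounds now fails at parse/cast time instead of only at insert time. A minimal sketch; the exact error text may differ:

```sql
-- INT2 holds values in [-32768, 32767]; this cast is out of range,
-- so it now errors immediately rather than succeeding and failing
-- later on insert into an INT2 column.
SELECT 123456::INT2;
-- ERROR: integer out of range for type int2
```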

Performance improvements

- -- The optimizer can now deduce that certain variable arguments to functions must be non-null. This improves cardinality estimation for those variables and unlocks other types of optimizations. As a result, the optimizer may choose better query plans when a function is used as a filter predicate. [#54558][#54558] -- Improved the selectivity estimate for array contains predicates (e.g., `arr @> ARRAY[1]`) in the optimizer. This improves the optimizer's cardinality estimation for queries containing these predicates, and may result in better query plans in some cases. [#54768][#54768] -- Updated the cost model in the optimizer to make index joins more expensive and better reflect the reality of their cost. As a result, the optimizer will choose index joins less frequently, generally resulting in more efficient query plans. [#54768][#54768] -- The optimizer simplifies join expressions to scan only a single table when the join filter is a contradiction. A limitation that prevented this simplification in some cases has been removed, resulting in more efficient query plans. [#54813][#54813] -- Improved the efficiency of query plans for left outer spatial joins. [#55216][#55216] -- The optimizer now considers partial indexes when exploring zigzag joins. This may lead to more efficient query plans for queries that (1) operate on tables with partial indexes and (2) have a filter that holds two columns, indexed by two indexes, constant. [#55401][#55401] -- The optimizer now attempts to split a query with a disjunctive filter (OR expression) into a UNION of index scans, where one or both of the scans is an unconstrained partial index scan. As a result, more efficient query plans may be generated for queries with disjunctive filters that operate on tables with partial indexes. [#55915][#55915] -- CSV imports should now be slightly faster. 
[#55845][#55845] -- Previously, all `CHECK` constraints defined on a table would be tested for every `UPDATE` to the table. Now, a check constraint will not be tested for validity when the values of columns it references are not being updated. The referenced columns are no longer fetched in cases where they were only fetched to test `CHECK` constraints. [#56007][#56007] -- Indexes on computed columns can now be utilized when filters reference the computed expression and not the computed column directly. [#55867][#55867] -- The query optimizer can now generate inverted zigzag joins over partial GIN indexes. This may lead to more efficient query plans when filtering by a column that is indexed by a partial GIN index. [#56101][#56101] -- The query optimizer can now plan zigzag joins on two partial indexes with the same predicate, leading to more efficient query plans in some cases. [#56103][#56103] -- The optimizer now converts inner joins with single-row values expressions into projections. This allows decorrelation of subqueries that only reference variables from the outer query, such as `SELECT (SELECT value + 10) FROM table`. [#55961][#55961] -- The optimizer may now plan an inverted join if two tables are joined on `JSONB` or `ARRAY` columns using a contains predicate (e.g., `WHERE a @> b`), and the first column has a GIN index. The inverted join will be chosen if the optimizer expects it to be more efficient than any alternative plan. For queries in which the only alternative is a Cartesian product followed by a filter, the inverted join will likely result in a performance improvement. [#55679][#55679] -- The hybrid logical clock used to coordinate distributed operations now performs significantly better under high contention with many concurrent updates from remote nodes. [#56708][#56708] -- The Raft processing goroutine pool's size is now capped at 96. 
This was observed to prevent instability on large machines (32+ vCPU) in clusters with many ranges (50k+ per node). [#56860][#56860] -- Interactions between Raft heartbeats and the Raft goroutine pool scheduler are now more efficient and avoid excessive mutex contention. This was observed to prevent instability on large machines (32+ vCPU) in clusters with many ranges (50k+ per node). [#56860][#56860] -- The Raft scheduler now prioritizes the node liveness Range. This was observed to prevent instability on large machines (32+ vCPU) in clusters with many ranges (50k+ per node). [#56860][#56860] -- The optimizer now supports using a GIN index on `JSONB` or `ARRAY` columns for a wider variety of filter predicates. Previously, GIN index usage was only supported for simple predicates (e.g., `WHERE a @> '{"foo": "bar"}'`), but now more complicated predicates are supported by combining simple contains (`@>`) expressions with `AND` and `OR` (e.g., `WHERE a @> '{"foo": "bar"}' OR a @> '{"foo": "baz"}'`). A GIN index will be used if it is available on the filtered column and the optimizer expects it to be more efficient than any alternative plan. This may result in performance improvements for queries involving `JSONB` and `ARRAY` columns. [#56732][#56732] - -
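The expanded GIN index support described above can be sketched as follows; the table and index names are illustrative:

```sql
CREATE TABLE docs (id INT PRIMARY KEY, payload JSONB);
CREATE INVERTED INDEX docs_payload_idx ON docs (payload);

-- Previously only a single containment predicate could use the GIN
-- index; containment expressions combined with AND/OR can now use it.
SELECT id
FROM docs
WHERE payload @> '{"foo": "bar"}' OR payload @> '{"foo": "baz"}';
```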

Doc updates

- -- Updated several Hello World tutorials to use `cockroach demo` as the backend and an external repository for code samples, including Go with [pgx](https://www.cockroachlabs.com/docs/v21.1/build-a-go-app-with-cockroachdb) and [GORM](https://www.cockroachlabs.com/docs/v21.1/build-a-go-app-with-cockroachdb-gorm), Java with [JDBC](https://www.cockroachlabs.com/docs/v21.1/build-a-java-app-with-cockroachdb) and [Hibernate](https://www.cockroachlabs.com/docs/v21.1/build-a-java-app-with-cockroachdb-hibernate), and Python with [psycopg2](https://www.cockroachlabs.com/docs/v21.1/build-a-python-app-with-cockroachdb), [SQLAlchemy](https://www.cockroachlabs.com/docs/v21.1/build-a-python-app-with-cockroachdb-sqlalchemy), and [Django](https://www.cockroachlabs.com/docs/v21.1/build-a-python-app-with-cockroachdb-django). [#9025](https://github.com/cockroachdb/docs/pull/9025), [#8991](https://github.com/cockroachdb/docs/pull/8991) -- Updated the [Production Checklist](https://www.cockroachlabs.com/docs/v21.1/recommended-production-settings) to recommend disabling Linux memory swapping to avoid over-allocating memory. [#8979](https://github.com/cockroachdb/docs/pull/8979) - -
- -

Contributors

- -This release includes 1040 merged PRs by 88 authors. We would like to thank the following contributors from the CockroachDB community: - -- Adrian Popescu (first-time contributor) -- Alan Acosta (first-time contributor) -- ArjunM98 (first-time contributor) -- Artem Barger -- Azdim Zul Fahmi (first-time contributor) -- David Pacheco (first-time contributor) -- Erik Grinaker -- Gabriel Jaldon (first-time contributor) -- Jake Rote (first-time contributor) -- Joshua M. Clulow (first-time contributor) -- Marcin Knychała (first-time contributor) -- Max Neverov (first-time contributor) -- Miguel Novelo (first-time contributor) -- Ruixin Bao (first-time contributor) -- TAKAHASHI Yuto (first-time contributor) -- Tayo (first-time contributor) -- Tim Graham (first-time contributor) -- Tom Milligan (first-time contributor) -- Vaibhav -- alex-berger@gmx.ch (first-time contributor) -- alex.berger@nexiot.ch (first-time contributor) -- haseth (first-time contributor) -- hewei03 (first-time contributor) -- neeral -- xinyue (first-time contributor) - -
- -[#30624]: https://github.com/cockroachdb/cockroach/pull/30624 -[#50869]: https://github.com/cockroachdb/cockroach/pull/50869 -[#53390]: https://github.com/cockroachdb/cockroach/pull/53390 -[#53485]: https://github.com/cockroachdb/cockroach/pull/53485 -[#53547]: https://github.com/cockroachdb/cockroach/pull/53547 -[#53926]: https://github.com/cockroachdb/cockroach/pull/53926 -[#54017]: https://github.com/cockroachdb/cockroach/pull/54017 -[#54026]: https://github.com/cockroachdb/cockroach/pull/54026 -[#54044]: https://github.com/cockroachdb/cockroach/pull/54044 -[#54064]: https://github.com/cockroachdb/cockroach/pull/54064 -[#54140]: https://github.com/cockroachdb/cockroach/pull/54140 -[#54143]: https://github.com/cockroachdb/cockroach/pull/54143 -[#54163]: https://github.com/cockroachdb/cockroach/pull/54163 -[#54215]: https://github.com/cockroachdb/cockroach/pull/54215 -[#54230]: https://github.com/cockroachdb/cockroach/pull/54230 -[#54268]: https://github.com/cockroachdb/cockroach/pull/54268 -[#54317]: https://github.com/cockroachdb/cockroach/pull/54317 -[#54322]: https://github.com/cockroachdb/cockroach/pull/54322 -[#54329]: https://github.com/cockroachdb/cockroach/pull/54329 -[#54349]: https://github.com/cockroachdb/cockroach/pull/54349 -[#54350]: https://github.com/cockroachdb/cockroach/pull/54350 -[#54352]: https://github.com/cockroachdb/cockroach/pull/54352 -[#54355]: https://github.com/cockroachdb/cockroach/pull/54355 -[#54376]: https://github.com/cockroachdb/cockroach/pull/54376 -[#54446]: https://github.com/cockroachdb/cockroach/pull/54446 -[#54518]: https://github.com/cockroachdb/cockroach/pull/54518 -[#54558]: https://github.com/cockroachdb/cockroach/pull/54558 -[#54594]: https://github.com/cockroachdb/cockroach/pull/54594 -[#54610]: https://github.com/cockroachdb/cockroach/pull/54610 -[#54618]: https://github.com/cockroachdb/cockroach/pull/54618 -[#54628]: https://github.com/cockroachdb/cockroach/pull/54628 -[#54668]: 
https://github.com/cockroachdb/cockroach/pull/54668 -[#54673]: https://github.com/cockroachdb/cockroach/pull/54673 -[#54741]: https://github.com/cockroachdb/cockroach/pull/54741 -[#54749]: https://github.com/cockroachdb/cockroach/pull/54749 -[#54768]: https://github.com/cockroachdb/cockroach/pull/54768 -[#54796]: https://github.com/cockroachdb/cockroach/pull/54796 -[#54811]: https://github.com/cockroachdb/cockroach/pull/54811 -[#54813]: https://github.com/cockroachdb/cockroach/pull/54813 -[#54843]: https://github.com/cockroachdb/cockroach/pull/54843 -[#54851]: https://github.com/cockroachdb/cockroach/pull/54851 -[#54853]: https://github.com/cockroachdb/cockroach/pull/54853 -[#54855]: https://github.com/cockroachdb/cockroach/pull/54855 -[#54870]: https://github.com/cockroachdb/cockroach/pull/54870 -[#54880]: https://github.com/cockroachdb/cockroach/pull/54880 -[#54890]: https://github.com/cockroachdb/cockroach/pull/54890 -[#54896]: https://github.com/cockroachdb/cockroach/pull/54896 -[#54943]: https://github.com/cockroachdb/cockroach/pull/54943 -[#54951]: https://github.com/cockroachdb/cockroach/pull/54951 -[#54960]: https://github.com/cockroachdb/cockroach/pull/54960 -[#55050]: https://github.com/cockroachdb/cockroach/pull/55050 -[#55063]: https://github.com/cockroachdb/cockroach/pull/55063 -[#55071]: https://github.com/cockroachdb/cockroach/pull/55071 -[#55076]: https://github.com/cockroachdb/cockroach/pull/55076 -[#55095]: https://github.com/cockroachdb/cockroach/pull/55095 -[#55120]: https://github.com/cockroachdb/cockroach/pull/55120 -[#55129]: https://github.com/cockroachdb/cockroach/pull/55129 -[#55135]: https://github.com/cockroachdb/cockroach/pull/55135 -[#55139]: https://github.com/cockroachdb/cockroach/pull/55139 -[#55143]: https://github.com/cockroachdb/cockroach/pull/55143 -[#55147]: https://github.com/cockroachdb/cockroach/pull/55147 -[#55148]: https://github.com/cockroachdb/cockroach/pull/55148 -[#55157]: 
https://github.com/cockroachdb/cockroach/pull/55157 -[#55172]: https://github.com/cockroachdb/cockroach/pull/55172 -[#55186]: https://github.com/cockroachdb/cockroach/pull/55186 -[#55216]: https://github.com/cockroachdb/cockroach/pull/55216 -[#55222]: https://github.com/cockroachdb/cockroach/pull/55222 -[#55240]: https://github.com/cockroachdb/cockroach/pull/55240 -[#55248]: https://github.com/cockroachdb/cockroach/pull/55248 -[#55263]: https://github.com/cockroachdb/cockroach/pull/55263 -[#55269]: https://github.com/cockroachdb/cockroach/pull/55269 -[#55272]: https://github.com/cockroachdb/cockroach/pull/55272 -[#55300]: https://github.com/cockroachdb/cockroach/pull/55300 -[#55304]: https://github.com/cockroachdb/cockroach/pull/55304 -[#55312]: https://github.com/cockroachdb/cockroach/pull/55312 -[#55373]: https://github.com/cockroachdb/cockroach/pull/55373 -[#55386]: https://github.com/cockroachdb/cockroach/pull/55386 -[#55401]: https://github.com/cockroachdb/cockroach/pull/55401 -[#55416]: https://github.com/cockroachdb/cockroach/pull/55416 -[#55417]: https://github.com/cockroachdb/cockroach/pull/55417 -[#55428]: https://github.com/cockroachdb/cockroach/pull/55428 -[#55429]: https://github.com/cockroachdb/cockroach/pull/55429 -[#55459]: https://github.com/cockroachdb/cockroach/pull/55459 -[#55464]: https://github.com/cockroachdb/cockroach/pull/55464 -[#55470]: https://github.com/cockroachdb/cockroach/pull/55470 -[#55509]: https://github.com/cockroachdb/cockroach/pull/55509 -[#55513]: https://github.com/cockroachdb/cockroach/pull/55513 -[#55515]: https://github.com/cockroachdb/cockroach/pull/55515 -[#55517]: https://github.com/cockroachdb/cockroach/pull/55517 -[#55524]: https://github.com/cockroachdb/cockroach/pull/55524 -[#55532]: https://github.com/cockroachdb/cockroach/pull/55532 -[#55543]: https://github.com/cockroachdb/cockroach/pull/55543 -[#55565]: https://github.com/cockroachdb/cockroach/pull/55565 -[#55567]: 
https://github.com/cockroachdb/cockroach/pull/55567 -[#55570]: https://github.com/cockroachdb/cockroach/pull/55570 -[#55607]: https://github.com/cockroachdb/cockroach/pull/55607 -[#55611]: https://github.com/cockroachdb/cockroach/pull/55611 -[#55612]: https://github.com/cockroachdb/cockroach/pull/55612 -[#55626]: https://github.com/cockroachdb/cockroach/pull/55626 -[#55638]: https://github.com/cockroachdb/cockroach/pull/55638 -[#55641]: https://github.com/cockroachdb/cockroach/pull/55641 -[#55644]: https://github.com/cockroachdb/cockroach/pull/55644 -[#55647]: https://github.com/cockroachdb/cockroach/pull/55647 -[#55660]: https://github.com/cockroachdb/cockroach/pull/55660 -[#55679]: https://github.com/cockroachdb/cockroach/pull/55679 -[#55699]: https://github.com/cockroachdb/cockroach/pull/55699 -[#55700]: https://github.com/cockroachdb/cockroach/pull/55700 -[#55705]: https://github.com/cockroachdb/cockroach/pull/55705 -[#55707]: https://github.com/cockroachdb/cockroach/pull/55707 -[#55720]: https://github.com/cockroachdb/cockroach/pull/55720 -[#55721]: https://github.com/cockroachdb/cockroach/pull/55721 -[#55751]: https://github.com/cockroachdb/cockroach/pull/55751 -[#55759]: https://github.com/cockroachdb/cockroach/pull/55759 -[#55760]: https://github.com/cockroachdb/cockroach/pull/55760 -[#55783]: https://github.com/cockroachdb/cockroach/pull/55783 -[#55785]: https://github.com/cockroachdb/cockroach/pull/55785 -[#55808]: https://github.com/cockroachdb/cockroach/pull/55808 -[#55812]: https://github.com/cockroachdb/cockroach/pull/55812 -[#55827]: https://github.com/cockroachdb/cockroach/pull/55827 -[#55831]: https://github.com/cockroachdb/cockroach/pull/55831 -[#55845]: https://github.com/cockroachdb/cockroach/pull/55845 -[#55866]: https://github.com/cockroachdb/cockroach/pull/55866 -[#55867]: https://github.com/cockroachdb/cockroach/pull/55867 -[#55891]: https://github.com/cockroachdb/cockroach/pull/55891 -[#55894]: 
https://github.com/cockroachdb/cockroach/pull/55894 -[#55907]: https://github.com/cockroachdb/cockroach/pull/55907 -[#55915]: https://github.com/cockroachdb/cockroach/pull/55915 -[#55916]: https://github.com/cockroachdb/cockroach/pull/55916 -[#55934]: https://github.com/cockroachdb/cockroach/pull/55934 -[#55945]: https://github.com/cockroachdb/cockroach/pull/55945 -[#55958]: https://github.com/cockroachdb/cockroach/pull/55958 -[#55961]: https://github.com/cockroachdb/cockroach/pull/55961 -[#55969]: https://github.com/cockroachdb/cockroach/pull/55969 -[#55970]: https://github.com/cockroachdb/cockroach/pull/55970 -[#55982]: https://github.com/cockroachdb/cockroach/pull/55982 -[#55983]: https://github.com/cockroachdb/cockroach/pull/55983 -[#55993]: https://github.com/cockroachdb/cockroach/pull/55993 -[#55994]: https://github.com/cockroachdb/cockroach/pull/55994 -[#56007]: https://github.com/cockroachdb/cockroach/pull/56007 -[#56025]: https://github.com/cockroachdb/cockroach/pull/56025 -[#56080]: https://github.com/cockroachdb/cockroach/pull/56080 -[#56101]: https://github.com/cockroachdb/cockroach/pull/56101 -[#56103]: https://github.com/cockroachdb/cockroach/pull/56103 -[#56135]: https://github.com/cockroachdb/cockroach/pull/56135 -[#56144]: https://github.com/cockroachdb/cockroach/pull/56144 -[#56183]: https://github.com/cockroachdb/cockroach/pull/56183 -[#56198]: https://github.com/cockroachdb/cockroach/pull/56198 -[#56213]: https://github.com/cockroachdb/cockroach/pull/56213 -[#56283]: https://github.com/cockroachdb/cockroach/pull/56283 -[#56296]: https://github.com/cockroachdb/cockroach/pull/56296 -[#56298]: https://github.com/cockroachdb/cockroach/pull/56298 -[#56344]: https://github.com/cockroachdb/cockroach/pull/56344 -[#56352]: https://github.com/cockroachdb/cockroach/pull/56352 -[#56363]: https://github.com/cockroachdb/cockroach/pull/56363 -[#56373]: https://github.com/cockroachdb/cockroach/pull/56373 -[#56388]: 
https://github.com/cockroachdb/cockroach/pull/56388 -[#56392]: https://github.com/cockroachdb/cockroach/pull/56392 -[#56455]: https://github.com/cockroachdb/cockroach/pull/56455 -[#56458]: https://github.com/cockroachdb/cockroach/pull/56458 -[#56459]: https://github.com/cockroachdb/cockroach/pull/56459 -[#56461]: https://github.com/cockroachdb/cockroach/pull/56461 -[#56470]: https://github.com/cockroachdb/cockroach/pull/56470 -[#56477]: https://github.com/cockroachdb/cockroach/pull/56477 -[#56524]: https://github.com/cockroachdb/cockroach/pull/56524 -[#56533]: https://github.com/cockroachdb/cockroach/pull/56533 -[#56546]: https://github.com/cockroachdb/cockroach/pull/56546 -[#56574]: https://github.com/cockroachdb/cockroach/pull/56574 -[#56585]: https://github.com/cockroachdb/cockroach/pull/56585 -[#56587]: https://github.com/cockroachdb/cockroach/pull/56587 -[#56589]: https://github.com/cockroachdb/cockroach/pull/56589 -[#56591]: https://github.com/cockroachdb/cockroach/pull/56591 -[#56598]: https://github.com/cockroachdb/cockroach/pull/56598 -[#56627]: https://github.com/cockroachdb/cockroach/pull/56627 -[#56628]: https://github.com/cockroachdb/cockroach/pull/56628 -[#56631]: https://github.com/cockroachdb/cockroach/pull/56631 -[#56634]: https://github.com/cockroachdb/cockroach/pull/56634 -[#56653]: https://github.com/cockroachdb/cockroach/pull/56653 -[#56672]: https://github.com/cockroachdb/cockroach/pull/56672 -[#56679]: https://github.com/cockroachdb/cockroach/pull/56679 -[#56681]: https://github.com/cockroachdb/cockroach/pull/56681 -[#56708]: https://github.com/cockroachdb/cockroach/pull/56708 -[#56732]: https://github.com/cockroachdb/cockroach/pull/56732 -[#56737]: https://github.com/cockroachdb/cockroach/pull/56737 -[#56762]: https://github.com/cockroachdb/cockroach/pull/56762 -[#56773]: https://github.com/cockroachdb/cockroach/pull/56773 -[#56822]: https://github.com/cockroachdb/cockroach/pull/56822 -[#56827]: 
https://github.com/cockroachdb/cockroach/pull/56827 -[#56829]: https://github.com/cockroachdb/cockroach/pull/56829 -[#56837]: https://github.com/cockroachdb/cockroach/pull/56837 -[#56855]: https://github.com/cockroachdb/cockroach/pull/56855 -[#56858]: https://github.com/cockroachdb/cockroach/pull/56858 -[#56860]: https://github.com/cockroachdb/cockroach/pull/56860 -[#56869]: https://github.com/cockroachdb/cockroach/pull/56869 -[#56872]: https://github.com/cockroachdb/cockroach/pull/56872 -[#56874]: https://github.com/cockroachdb/cockroach/pull/56874 -[#56880]: https://github.com/cockroachdb/cockroach/pull/56880 -[#56881]: https://github.com/cockroachdb/cockroach/pull/56881 -[#56883]: https://github.com/cockroachdb/cockroach/pull/56883 -[#56898]: https://github.com/cockroachdb/cockroach/pull/56898 -[#56901]: https://github.com/cockroachdb/cockroach/pull/56901 -[#56942]: https://github.com/cockroachdb/cockroach/pull/56942 -[#56943]: https://github.com/cockroachdb/cockroach/pull/56943 -[#56953]: https://github.com/cockroachdb/cockroach/pull/56953 -[#56964]: https://github.com/cockroachdb/cockroach/pull/56964 -[#56974]: https://github.com/cockroachdb/cockroach/pull/56974 -[#56975]: https://github.com/cockroachdb/cockroach/pull/56975 -[#57003]: https://github.com/cockroachdb/cockroach/pull/57003 -[#57004]: https://github.com/cockroachdb/cockroach/pull/57004 -[#57006]: https://github.com/cockroachdb/cockroach/pull/57006 -[#57007]: https://github.com/cockroachdb/cockroach/pull/57007 -[#57038]: https://github.com/cockroachdb/cockroach/pull/57038 -[#57040]: https://github.com/cockroachdb/cockroach/pull/57040 -[#57050]: https://github.com/cockroachdb/cockroach/pull/57050 -[#57076]: https://github.com/cockroachdb/cockroach/pull/57076 -[#57100]: https://github.com/cockroachdb/cockroach/pull/57100 -[#57106]: https://github.com/cockroachdb/cockroach/pull/57106 -[#57121]: https://github.com/cockroachdb/cockroach/pull/57121 -[#57132]: 
https://github.com/cockroachdb/cockroach/pull/57132 -[#57139]: https://github.com/cockroachdb/cockroach/pull/57139 -[#57142]: https://github.com/cockroachdb/cockroach/pull/57142 -[#57143]: https://github.com/cockroachdb/cockroach/pull/57143 -[#57152]: https://github.com/cockroachdb/cockroach/pull/57152 -[#57153]: https://github.com/cockroachdb/cockroach/pull/57153 -[#57179]: https://github.com/cockroachdb/cockroach/pull/57179 -[#57180]: https://github.com/cockroachdb/cockroach/pull/57180 -[#57197]: https://github.com/cockroachdb/cockroach/pull/57197 -[#57212]: https://github.com/cockroachdb/cockroach/pull/57212 -[#57241]: https://github.com/cockroachdb/cockroach/pull/57241 -[#57256]: https://github.com/cockroachdb/cockroach/pull/57256 -[#57265]: https://github.com/cockroachdb/cockroach/pull/57265 -[#57278]: https://github.com/cockroachdb/cockroach/pull/57278 -[#57284]: https://github.com/cockroachdb/cockroach/pull/57284 -[#57309]: https://github.com/cockroachdb/cockroach/pull/57309 -[#57316]: https://github.com/cockroachdb/cockroach/pull/57316 -[#57320]: https://github.com/cockroachdb/cockroach/pull/57320 -[#57355]: https://github.com/cockroachdb/cockroach/pull/57355 diff --git a/src/current/_includes/releases/v21.1/v21.1.0-alpha.2.md b/src/current/_includes/releases/v21.1/v21.1.0-alpha.2.md deleted file mode 100644 index 1a489dc2cd0..00000000000 --- a/src/current/_includes/releases/v21.1/v21.1.0-alpha.2.md +++ /dev/null @@ -1,528 +0,0 @@ -## v21.1.0-alpha.2 - -Release Date: February 1, 2021 - - - -

Backward-incompatible changes

- -- The payload fields for certain event types in `system.eventlog` have been changed and/or renamed. Note that the payloads in `system.eventlog` were an undocumented, reserved feature so no guarantee was made about cross-version compatibility to this point. The list of changes includes (but is not limited to): - - `TargetID` has been renamed to `NodeID` for `node_join`. - - `TargetID` has been renamed to `TargetNodeID` for `node_decommissioning` / `node_decommissioned` / `node_recommissioned`. - - `NewDatabaseName` has been renamed to `NewDatabaseParent` for `convert_to_schema`. - - `grant_privilege` and `revoke_privilege` have been removed; they are replaced by `change_database_privilege`, `change_schema_privilege`, `change_type_privilege`, and `change_table_privilege`. Each event only reports a change for one user/role, so the `Grantees` field was renamed to `Grantee`. - - Each `drop_role` event now pertains to a single [user/role](https://www.cockroachlabs.com/docs/v21.1/authorization#sql-users). [#57737][#57737] -- The connection and authentication logging enabled by the [cluster settings](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `server.auth_log.sql_connections.enabled` and `server.auth_log.sql_sessions.enabled` was previously using a text format which was hard to parse and integrate with external monitoring tools. This has been changed to use the standard notable event mechanism, with standardized payloads. The output format is now structured; see the generated [reference documentation](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/eventlog.md) for details about the supported event types and payloads. 
[#57839][#57839] {% comment %}doc{% endcomment %} -- The logging format for [SQL audit](https://www.cockroachlabs.com/docs/v21.1/sql-audit-logging), [execution](https://www.cockroachlabs.com/docs/v21.1/logging-use-cases#sql_exec), and [query](https://www.cockroachlabs.com/docs/v21.1/logging-use-cases#sql_perf) logs has changed from a crude space-delimited format to JSON. To opt out of this new behavior and restore the pre-v21.1 logging format, you can set the [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `sql.log.unstructured_entries.enabled` to `true`. [#59110][#59110] {% comment %}doc{% endcomment %} - -

Security updates

- -- The entropy of the auto-generated user account password for [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) shell has been lowered, to ensure that the passwords are short and easy to copy over. This makes the password easy to brute-force and thus removes any remaining protection there may have been to the confidentiality of `demo` sessions. (The reason why a password remains, as opposed to no password whatsoever, is to prevent accidental misuse of the HTTP service by other applications running on the same machine.) Since the required shortness erases any pretense of security, the algorithm has been simplified to remove usage of a cryptographic PRNG altogether. [#58305][#58305] {% comment %}doc{% endcomment %} -- When using a SQL proxy, by default CockroachDB only knows about the network address of the proxy. That *peer* address is then used for logging, [authentication](https://www.cockroachlabs.com/docs/v21.1/authentication) rules, etc. This is undesirable, as security logging and authentication rules need to operate on the actual (final) client address instead. CockroachDB can now be configured to solve this problem (conf mechanism detailed below). When so configured, a SQL proxy can inform the CockroachDB server of the real address of the client via a server status parameter called `crdb:remote_addr`. The value must be the IP address of the client, followed by a colon, followed by the port number, using the standard Go syntax (e.g., `11.22.33.44:5566` for IPv4, `[11:22::33]:4455` for IPv6). When provided, this value overrides the SQL proxy's address for logging and authentication purposes. In any case, the original peer address is also logged alongside the client address (overridden or not), via the new logging tag `peer`. 
Security considerations: - - Enabling this feature allows the peer to spoof its address with regard to authentication and thus bypass authentication rules that would otherwise apply to its address, which can introduce a serious security vulnerability if the peer is not trusted. This is why this feature is not enabled by default, and must only be enabled when using a trusted SQL proxy. - - This feature should only be used with SQL proxies which actively scrub a `crdb:remote_addr` parameter received by a remote client, and replace it with their own. If the proxy mistakenly forwards the parameter as provided by the client, it opens the door to the aforementioned security vulnerability. - - Care must be taken in host-based authentication (HBA) rules: - - TLS client cert validation, if requested by a rule, is still performed using the certificate presented by the proxy, not that presented by the client. This means that this new feature is not sufficient to forward TLS client cert authentication through a proxy. (If TLS client cert authentication is required, it must be performed by the proxy directly.) - - The `protocol` field (first column) continues to apply to the connection type between CockroachDB and the proxy, not between the proxy and the client. Only the 4th column (the CIDR pattern) is matched against the proxy-provided remote address override. Therefore, it is not possible to apply different rules to different client addresses when proxying TCP connections via a unix socket, because HBA rules for unix connections do not use the address column. Also, when proxying client SSL connections via a non-SSL proxy connection, or proxying client non-SSL connections via an SSL proxy connection, care must be taken to configure address-based rule matching using the proper connection type. A reliable way to bypass this complexity is to only use the `host` connection type, which applies equally to SSL and non-SSL connections. 
As of this implementation, the feature is enabled using the non-documented environment variable `COCKROACH_TRUST_CLIENT_PROVIDED_SQL_REMOTE_ADDR`. The use of an environment variable is a stop-gap so that this feature can be used in CockroachCloud SQL pods, which do not have access to [cluster settings](https://www.cockroachlabs.com/docs/v21.1/cluster-settings). The environment variable will be eventually removed and replaced by another mechanism. [#58381][#58381] -- Added ability to set region-specific callback URLs in the OIDC config. The `server.oidc_authentication.redirect_url` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) can now accept JSON as an alternative to the basic URL string setting. If a JSON value is set, it *must* contain a `redirect_url` key that maps to an object with key-value pairs where the key is a `region` matching an existing locality setting and the value is a callback URL. [#57519][#57519] {% comment %}doc{% endcomment %} - -
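A sketch of the region-aware OIDC setting described above. The region names and callback URLs are placeholders, and the exact JSON shape should be checked against the OIDC documentation:

```sql
SET CLUSTER SETTING server.oidc_authentication.redirect_url = '{
  "redirect_url": {
    "us-east1": "https://crdb-us-east1.example.com:8080/oidc/v1/callback",
    "eu-west1": "https://crdb-eu-west1.example.com:8080/oidc/v1/callback"
  }
}';
```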

General changes

- CockroachDB now runs fewer threads in parallel if running inside a container with a CPU limit. [#57390][#57390]
- Upgraded the CockroachDB binary to Go 1.15.6. [#57713][#57713] {% comment %}doc{% endcomment %}
- The new [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `server.eventlog.enabled` controls whether notable events are also written to the table `system.eventlog`. Its default value is `true`. Changing this cluster setting can help recover partial cluster availability when the `system.eventlog` table becomes unavailable. Note that even when set to `false`, notable events are still propagated to the logging system, where they can be redirected to files or other log sinks. [#57879][#57879] {% comment %}doc{% endcomment %}
- Moved RBAC resources in Kubernetes manifests to `rbac.authorization.k8s.io/v1`. [#55065][#55065] {% comment %}doc{% endcomment %}
- Cluster version upgrades, as initiated by [`SET CLUSTER SETTING version = ...`](https://www.cockroachlabs.com/docs/v21.1/set-cluster-setting), now perform internal maintenance duties that increase the time it takes for the command to complete. This delay is proportional to the amount of data currently stored in the cluster. The cluster will also experience a small amount of additional load during this period while the upgrade is being finalized. These changes expand upon our original prototype in [#57445](https://github.com/cockroachdb/cockroach/pull/57445). [#58088][#58088] {% comment %}doc{% endcomment %}
- Upgraded to v3.9.2 of `protobuf` to consume new changes for Bazel. [#58891][#58891]
- Increased the default value of `sql.tablecache.lease.refresh_limit` to 500 in order to accommodate larger schemas without encountering `RETRY_COMMIT_DEADLINE_EXCEEDED` errors and generally higher latency. [#59319][#59319] {% comment %}doc{% endcomment %}
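For instance, the `server.eventlog.enabled` setting described above can be toggled as follows (a minimal sketch that simply flips the documented setting):

```sql
-- Stop writing notable events to system.eventlog (events still reach
-- the logging system and any configured log sinks).
SET CLUSTER SETTING server.eventlog.enabled = false;

-- Restore the default behavior once the table is available again.
SET CLUSTER SETTING server.eventlog.enabled = true;
```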

Enterprise edition changes

- CockroachDB now permits partitioning by user-defined [types](https://www.cockroachlabs.com/docs/v21.1/data-types) such as [enums](https://www.cockroachlabs.com/docs/v21.1/enum). [#57327][#57327] {% comment %}doc{% endcomment %}
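A minimal sketch of what this enables (the type, table, and partition names are illustrative assumptions, not from the release note):

```sql
-- Partition a table by a user-defined enum. Note that the partitioning
-- column is part of the primary key.
CREATE TYPE region AS ENUM ('us-east', 'us-west');

CREATE TABLE users (
    region region,
    id INT,
    name STRING,
    PRIMARY KEY (region, id)
) PARTITION BY LIST (region) (
    PARTITION us_east VALUES IN ('us-east'),
    PARTITION us_west VALUES IN ('us-west')
);
```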

SQL language changes

- Added a new [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) method to calculate Voronoi polygons. [#56496][#56496] {% comment %}doc{% endcomment %}
- Added a new [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) method to calculate Voronoi lines. [#56496][#56496] {% comment %}doc{% endcomment %}
- Previously, timezones had to be entered in the same case as they were stored on the system. Now, timezone names are matched case-insensitively, provided they match well-known zone names according to [Go's `time/tzdata` package](https://golang.org/src/time/tzdata/tzdata.go). [#57250][#57250] {% comment %}doc{% endcomment %}
- Added `LOCALITY REGIONAL BY TABLE IN PRIMARY REGION` syntax for configuring table locality. [#57275][#57275] {% comment %}doc{% endcomment %}
- The default value for the `vectorize_row_count_threshold` setting has been decreased from 1000 to 0, meaning that, by default, CockroachDB will always use the [vectorized engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) for all supported queries regardless of the row estimate (unless `vectorize=off` is set). [#55713][#55713] {% comment %}doc{% endcomment %}
- Added the ability to [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table) with `LOCALITY` set. [#57419][#57419] {% comment %}doc{% endcomment %}
- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.1/explain) and [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) now show counts and durations in a more user-friendly way. [#57420][#57420] {% comment %}doc{% endcomment %}
- [`EXPORT`](https://www.cockroachlabs.com/docs/v21.1/export) can now be used by non-`admin` users with the `SELECT` privilege on the table being exported, unless the destination URI requires `admin` privileges. [#57380][#57380] {% comment %}doc{% endcomment %}
- [`CREATE DATABASE ... WITH [PRIMARY] REGIONS`](https://www.cockroachlabs.com/docs/v21.1/create-database) now modifies the [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) with the settings as set by the multi-region configuration. [#57244][#57244] {% comment %}doc{% endcomment %}
- [`SHOW [BACKUP]`](https://www.cockroachlabs.com/docs/v21.1/show-backup) is no longer `admin`-only. Whether `admin` is required depends on the URI construct and the credentials specified in the [`SHOW [BACKUP]`](https://www.cockroachlabs.com/docs/v21.1/show-backup) query. [#57412][#57412] {% comment %}doc{% endcomment %}
- The [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) aggregate function `regr_avgx` is now supported. [#57295][#57295] {% comment %}doc{% endcomment %}
- Added support for running [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import) in a mixed-version cluster. [#57382][#57382] {% comment %}doc{% endcomment %}
- Introduced `SHOW REGIONS FROM ALL DATABASES`, which shows region metadata for each database. [#57500][#57500] {% comment %}doc{% endcomment %}
- Added the `collations` table to `information_schema`. The `collations` table contains the collations available in the current database. [#57609][#57609] {% comment %}doc{% endcomment %}
- Added the `collation_character_set_applicability` table to `information_schema`. [#57609][#57609] {% comment %}doc{% endcomment %}
- `LOCALITY` directives on [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table) are now persisted onto the database. [#57513][#57513] {% comment %}doc{% endcomment %}
- Multi-region table localities now display in the output of [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/show-create). [#57513][#57513] {% comment %}doc{% endcomment %}
- Arrays in `pg_catalog` tables are now displayed in a format that matches PostgreSQL. [#56980][#56980] {% comment %}doc{% endcomment %}
- The [`SET TRACING`](https://www.cockroachlabs.com/docs/v21.1/set-vars#set-tracing) `local` option (to only trace messages issued by the local node) was removed. [#57567][#57567] {% comment %}doc{% endcomment %}
- Several `SHOW` commands have been adjusted to enforce a particular ordering of the output rows. [#57644][#57644] {% comment %}doc{% endcomment %}
- Implemented the `regr_avgy` [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) aggregate function. [#57583][#57583] {% comment %}doc{% endcomment %}
- `crdb_internal.tables` and [`SHOW TABLES`](https://www.cockroachlabs.com/docs/v21.1/show-tables) now show locality data for tables. [#57653][#57653] {% comment %}doc{% endcomment %}
- [`CREATE TABLE ... LOCALITY ...`](https://www.cockroachlabs.com/docs/v21.1/create-table) statements now modify the [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) to be in line with the desired locality. [#57654][#57654] {% comment %}doc{% endcomment %}
- Added `sql.trace.stmt.enable_threshold`. Similar to `sql.trace.txn.enable_threshold`, this logs a trace of any statement that takes longer than the specified duration. This new setting allows for more granularity than the transaction threshold, since it applies to individual statements and does not include overhead such as communication with a SQL client. [#57733][#57733] {% comment %}doc{% endcomment %}
- Added a session setting `experimental_enable_unique_without_index_constraints` and cluster default `sql.defaults.experimental_enable_unique_without_index_constraints.enabled` to enable the use of `UNIQUE WITHOUT INDEX` syntax. The default value of both settings is `false`, since this feature is not yet fully supported. Use of `UNIQUE WITHOUT INDEX` also depends on all nodes being upgraded to the cluster version `UniqueWithoutIndexConstraints`. [#57666][#57666] {% comment %}doc{% endcomment %}
- Implemented the [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) function `levenshtein`. [#56843][#56843] {% comment %}doc{% endcomment %}
- Added table-view dependency information in `pg_depend` to improve compatibility with PostgreSQL. [#57240][#57240] {% comment %}doc{% endcomment %}
- The cluster event logging system has been modernized. In particular, the schema of the entries for the `info` column in `system.eventlog` has been stabilized. [#57737][#57737] {% comment %}doc{% endcomment %}
- The `targetID` and `reportingID` columns in `system.eventlog` are now deprecated. Their values, for relevant event types, can be found as fields inside the `info` column instead. [#57737][#57737] {% comment %}doc{% endcomment %}
- Added the `soundex()` and `difference()` [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) functions. [#57615][#57615] {% comment %}doc{% endcomment %}
- [`EXPLAIN ANALYZE (DISTSQL)`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) has a new output format, and now only works as a top-level statement (it can no longer be part of a bigger query). [#57804][#57804] {% comment %}doc{% endcomment %}
- Added the [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) function `ST_OrientedEnvelope` to calculate the minimum-area rotated rectangle. [#57697][#57697] {% comment %}doc{% endcomment %}
- A new `default_transaction_use_follower_reads` session variable is now supported, which configures SQL transactions to perform stale reads from follower replicas. [#57851][#57851] {% comment %}doc{% endcomment %}
- Both [`ALTER TABLE OWNER`](https://www.cockroachlabs.com/docs/v21.1/alter-table) and `REASSIGN OWNED BY` now report structured notable events about the ownership changes. Note that `REASSIGN OWNED BY` currently also reports an `alter_table_owner` event for views and sequences that were implicitly reassigned, even though CockroachDB does not yet support the `ALTER VIEW OWNER` / `ALTER SEQUENCE OWNER` statements. [#57969][#57969] {% comment %}doc{% endcomment %}
- [`EXPLAIN (DISTSQL)`](https://www.cockroachlabs.com/docs/v21.1/explain) has a new output schema and format. [#57954][#57954]
- Added an overload to `crdb_internal.pb_to_json` to suppress populating default values in fields. [#58087][#58087] {% comment %}doc{% endcomment %}
- [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.1/import-into) for CSV now supports `nextval` as a default expression of a non-targeted column. [#56473][#56473] {% comment %}doc{% endcomment %}
- The [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.1/explain) output for foreign key checks is now labeled `constraint-check` rather than `fk-check`. This change is in preparation for adding support for unique constraint checks, which will use the same label. [#58053][#58053] {% comment %}doc{% endcomment %}
- The error message for unique constraint violations now matches the error used by PostgreSQL. For example, the new error message looks like this: `ERROR: duplicate key value violates unique constraint "primary" DETAIL: Key (k)=(1) already exists.` [#58053][#58053] {% comment %}doc{% endcomment %}
- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) diagrams now contain "network hops" information on streams. [#58078][#58078] {% comment %}doc{% endcomment %}
- Added support for the `IF NOT EXISTS` prefix in [`CREATE TYPE`](https://www.cockroachlabs.com/docs/v21.1/create-type) statements. [#58173][#58173] {% comment %}doc{% endcomment %}
- Added a `session_variables` table to `information_schema`. The `session_variables` table exposes the session variables. [#57837][#57837] {% comment %}doc{% endcomment %}
- Indexing into scalar JSON values using an integer index is now properly supported. [#58359][#58359] {% comment %}doc{% endcomment %}
- The `crdb_internal.cluster_id` method now returns the ID of the underlying KV cluster in multi-tenant scenarios, rather than the Nil UUID. The `ClusterID` is needed for logging and metrics for SQL tenants. [#58317][#58317] {% comment %}doc{% endcomment %}
- `SHOW STATEMENTS` is now an alias of [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v21.1/show-statements). The new syntax is preferred by the SQL parser. [#58072][#58072] {% comment %}doc{% endcomment %}
- Implemented the `gateway_region` [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions), which returns the region of the connection's current node. [#58423][#58423] {% comment %}doc{% endcomment %}
- [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup) and [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) now use the block-level checksums embedded in their data files instead of collecting and verifying more expensive file-level SHA512 checksums. [#58487][#58487] {% comment %}doc{% endcomment %}
- Multi-tenant clusters now send anonymous usage information to the central CockroachDB registration server. [#58399][#58399] {% comment %}doc{% endcomment %}
- Added support for [`ALTER DATABASE ... ADD REGION`](https://www.cockroachlabs.com/docs/v21.1/alter-database). [#58233][#58233] {% comment %}doc{% endcomment %}
- Multiple [`SHOW ZONE CONFIGURATION`](https://www.cockroachlabs.com/docs/v21.1/show-zone-configurations) statements that previously used `FOR` (e.g., `SHOW ZONE CONFIGURATION FOR RANGE`) can now also use `FROM`. This change standardizes the use of `FROM` for `SHOW ZONE CONFIGURATION`. [#58740][#58740] {% comment %}doc{% endcomment %}
- Implemented `SHOW REGIONS`, which shows a list of regions along with the databases associated with them. [#57618][#57618] {% comment %}doc{% endcomment %}
- Added [`ALTER DATABASE ... PRIMARY REGION`](https://www.cockroachlabs.com/docs/v21.1/alter-database). [#58725][#58725] {% comment %}doc{% endcomment %}
- Columns implicitly added from [hash-sharded indexes](https://www.cockroachlabs.com/docs/v21.1/experimental-features#hash-sharded-indexes) no longer display in `pg_index` and `pg_indexes`. [#58749][#58749] {% comment %}doc{% endcomment %}
- Added a new `implicit` column to `crdb_internal.index_columns`, which signifies whether the column was implicitly added to the index through [`PARTITION BY`](https://www.cockroachlabs.com/docs/v21.1/partition-by) with an implicit column or a [hash-sharded index](https://www.cockroachlabs.com/docs/v21.1/experimental-features#hash-sharded-indexes). [#58749][#58749] {% comment %}doc{% endcomment %}
- CockroachDB now omits implicit columns and [hash-sharded index](https://www.cockroachlabs.com/docs/v21.1/experimental-features#hash-sharded-indexes) columns from automatically generated index names. [#58898][#58898] {% comment %}doc{% endcomment %}
- Implemented `PARTITION ALL BY` syntax for [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table), which automatically partitions the table, and all of its indexes, with the same partitioning scheme. [#58748][#58748] {% comment %}doc{% endcomment %}
- `PARTITION ALL BY` tables now display correctly in [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/show-create). [#58928][#58928] {% comment %}doc{% endcomment %}
- Creating a table and changing its schema within a transaction no longer schedules a schema change job. [#58888][#58888] {% comment %}doc{% endcomment %}
- Implemented the geo [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) function `ST_GeneratePoints`. [#58288][#58288] {% comment %}doc{% endcomment %}
- Hash aggregation can now spill to disk when it exhausts its memory limit when executed via the [vectorized engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution). [#57817][#57817] {% comment %}doc{% endcomment %}
- Implemented [`ALTER DATABASE ... SURVIVE REGION/ZONE FAILURE`](https://www.cockroachlabs.com/docs/v21.1/alter-database). [#58937][#58937] {% comment %}doc{% endcomment %}
- [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v21.1/create-index) now inherits `PARTITION BY` clauses from `PARTITION ALL BY` tables. [#58988][#58988] {% comment %}doc{% endcomment %}
- Implemented the geometry-based [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) function `ST_LineSubstring({geometry,float8,float8})`. [#58125][#58125] {% comment %}doc{% endcomment %}
- Implemented `REGIONAL BY ROW` logic for [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table). This is gated behind the experimental session variable `experimental_enable_implicit_column_partitioning`. [#58987][#58987] {% comment %}doc{% endcomment %}
- Added support for the `ALTER VIEW ... OWNER TO` and `ALTER SEQUENCE ... OWNER TO` commands. [#59049][#59049] {% comment %}doc{% endcomment %}
- Casts to type `unknown[]` are no longer accepted in CockroachDB. Any such cast will fail to parse and return the error `ERROR: type "unknown[]" does not exist`. This is consistent with PostgreSQL's behavior. [#59136][#59136] {% comment %}doc{% endcomment %}
- Implemented `REGIONAL BY ROW AS ...`, which allows a column of type `crdb_internal_region` to be used as the column for `REGIONAL BY ROW` multi-region properties. [#59121][#59121] {% comment %}doc{% endcomment %}
- Enabled altering the locality of a `REGIONAL BY TABLE` table to `REGIONAL BY TABLE` in a different region. [#59144][#59144] {% comment %}doc{% endcomment %}
- [`PARTITION ALL BY`](https://www.cockroachlabs.com/docs/v21.1/partition-by) statements now apply to tables when using [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.1/alter-primary-key). [#59178][#59178] {% comment %}doc{% endcomment %}
- Implemented the geometry [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) function `ST_ShiftLongitude`, which is useful for plotting data on a Pacific-centered map. [#59234][#59234] {% comment %}doc{% endcomment %}
- Implemented [`ALTER TABLE ... SET LOCALITY GLOBAL`](https://www.cockroachlabs.com/docs/v21.1/alter-table) for tables starting as `REGIONAL BY ROW`. [#59256][#59256] {% comment %}doc{% endcomment %}
- Improved the error message shown when `search_path` is set incorrectly, or set with a schema that legitimately has a comma in its name. [#53974][#53974] {% comment %}doc{% endcomment %}
- Creation of interleaved tables and indexes is now disabled by default. It can be re-enabled temporarily by running [`SET CLUSTER SETTING sql.defaults.interleaved_tables.enabled = true`](https://www.cockroachlabs.com/docs/v21.1/set-cluster-setting). [#59248][#59248] {% comment %}doc{% endcomment %}
- CockroachDB now uses a structured logging format for the [SQL audit](https://www.cockroachlabs.com/docs/v21.1/sql-audit-logging), [execution](https://www.cockroachlabs.com/docs/v21.1/logging-use-cases#sql_exec), and [query](https://www.cockroachlabs.com/docs/v21.1/logging-use-cases#sql_perf) logs. See the generated [reference documentation](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/eventlog.md) for details. Of note, audit and execution logs now also include information about whether a query plan contains full index scans. Previously, this information was only included in the slow query log. [#59110][#59110] {% comment %}doc{% endcomment %}
- [`CREATE INDEX`](https://www.cockroachlabs.com/docs/v21.1/create-index) on `REGIONAL BY ROW` tables now correctly includes the implicit partitioning and inherits the correct [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones). [#59223][#59223] {% comment %}doc{% endcomment %}
- Made a minor improvement to [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.1/explain) output for the "render" node. [#59313][#59313] {% comment %}doc{% endcomment %}
- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) now defaults to the new `EXPLAIN ANALYZE (PLAN)`, which shows a text representation of the logical plan, annotated with execution statistics. [#57337][#57337] {% comment %}doc{% endcomment %}
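As one example from the list above, the opt-in `UNIQUE WITHOUT INDEX` syntax could be exercised as follows (a sketch; the table and column names are illustrative assumptions):

```sql
-- Opt in to the experimental feature for this session.
SET experimental_enable_unique_without_index_constraints = true;

-- Enforce uniqueness on email without maintaining a dedicated index.
CREATE TABLE accounts (
    id INT PRIMARY KEY,
    email STRING,
    UNIQUE WITHOUT INDEX (email)
);
```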

Command-line changes

- The logging configuration can now be specified via the `--log` parameter. See the documentation for details. The flags `--log-dir`, `--log-file-max-size`, `--log-file-verbosity`, and `--log-group-max-size` are now deprecated. [#57134][#57134] {% comment %}doc{% endcomment %}
- A new command, `cockroach debug check-log-config`, prints out the logging configuration that results from the provided combination of `--store`, `--log`, and other logging-related flags on the command line. [#57134][#57134] {% comment %}doc{% endcomment %}
- The events that were previously only stored in `system.eventlog` are now also directed unconditionally to an external logging channel using a JSON format. Refer to the [configuration](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logsinks.md) to see how to customize how events are directed to external sinks. Note that the exact external output format (and thus how to detect/parse the events from e.g., log files) is not yet stabilized and remains subject to change. [#57737][#57737] {% comment %}doc{% endcomment %}
- Notable events that pertain to SQL schema, user, and privilege changes are now sent on the new `SQL_SCHEMA`, `USER_ADMIN`, and `PRIVILEGES` [logging channels](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logging.md). These can now be redirected to different [sinks](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logsinks.md) from the other log entries. [#51987][#51987]
  - The `SQL_SCHEMA` channel is used to report changes to the SQL logical schema, excluding privilege and ownership changes (which are reported on the separate channel `PRIVILEGES`) and zone config changes (which go to `OPS`). This includes:
    - Database, schema, table, sequence, view, and type creation.
    - Adding, removing, and changing table columns.
    - Changing sequence parameters.
    - More generally, changes to the schema that affect the functional behavior of client apps using stored objects.
  - The `USER_ADMIN` channel is typically configured in "audit" mode, with event numbering and synchronous writes. It is used to report changes in users and roles, including:
    - Users added and dropped.
    - Changes to authentication credentials, including passwords, validity, etc.
    - Role grants and revocations.
    - Role option grants and revocations.
  - The `PRIVILEGES` channel is typically configured in "audit" mode, with event numbering and synchronous writes. It is used to report data authorization changes, including:
    - Privilege grants and revocations on databases, objects, etc.
    - Object ownership changes.
- Logging events that are relevant to cluster operators are now categorized under the new `OPS` and `HEALTH` [logging channels](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logging.md). These can now be redirected separately from other logging events. [#57171][#57171]
  - The `OPS` channel is used to report "point" operational events, initiated by user operators or automation:
    - Operator or system actions on server processes: process starts, stops, shutdowns, and crashes (if they can be logged). Each event includes any command-line parameters and the CockroachDB version.
    - Actions that impact the topology of a cluster: node additions, removals, decommissions, etc.
    - [Cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) changes.
    - [Zone configuration](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) changes.
  - The `HEALTH` channel is used to report "background" operational events, initiated by CockroachDB for reporting on automatic processes:
    - Current resource usage, including critical resource usage.
    - Node-node connection events, including connection errors and gossip details.
    - Range and table leasing events.
    - Up- and down-replication.
    - Range unavailability.
- Server terminations that are triggered when a node encounters an internal fatal error are now reported on the `OPS` logging channel. The exact text of the error is not reported on the `OPS` channel, however, as it may be complex (e.g., when there is a replica inconsistency), and the `OPS` channel is typically monitored by tools that just detect irregularities. The text of the message instead refers to the channel where the additional details can be found. [#57171][#57171] {% comment %}doc{% endcomment %}
- The notable events `set_zone_config` and `remove_zone_config` are now sent to the `OPS` logging channel. [#57171][#57171] {% comment %}doc{% endcomment %}
- Added a flag to `cockroach debug decode-proto` to suppress populating default values in fields. [#58087][#58087] {% comment %}doc{% endcomment %}
- When a SQL notable (structured) event is logged, the payload now attempts to include the session's `application_name` as the field `ApplicationName`. This is intended for use when setting up event routing and filtering in external tools. [#58130][#58130] {% comment %}doc{% endcomment %}
- It is now possible to set the `format` parameter of any log sink, including file sinks, to `json`, `json-compact`, `json-fluent`, or `json-fluent-compact` to write entries as structured JSON. Refer to the generated [reference documentation](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logformats.md) for details. [#58126][#58126] {% comment %}doc{% endcomment %}
- The DB Console URL printed by [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) now automatically logs in the user when pasted into a web browser. [#56740][#56740] {% comment %}doc{% endcomment %}
- The URLs generated when [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) starts have been made shorter and easier to copy/paste by shortening the generated password. [#58305][#58305] {% comment %}doc{% endcomment %}
- When using the JSON output formats for log entries, the server identifiers are now reported as part of each payload once known (either cluster ID + node ID on single-tenant or KV servers, or tenant ID + SQL instance ID on multi-tenant SQL servers). [#58128][#58128] {% comment %}doc{% endcomment %}
- Fixed a bug where [`cockroach demo --global`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) crashed with `didn't get expected magic bytes header`. [#58466][#58466] {% comment %}doc{% endcomment %}
- Previously, for certain log files, CockroachDB would both flush individual writes (i.e., propagate them from within the `cockroach` process to the OS) and also synchronize writes (i.e., ask the OS to confirm the log data was written to disk). Per-write synchronization was found to be unnecessary and possibly detrimental to performance and operating cost, so it was removed. The log data continues to be flushed as before, and CockroachDB now only requests synchronization periodically (every 30s). [#58995][#58995] {% comment %}doc{% endcomment %}
- The parameter `sync-writes` for file sink configurations has been removed. (This is not a backward-incompatible change because the configuration feature is new in v21.1.) [#58995][#58995] {% comment %}doc{% endcomment %}
- The parameter `buffered-writes` for file sink configurations has been added. It is set to `true` (writes are buffered) by default, and set to `false` (i.e., avoid buffering and flush every log entry) when the `auditable` flag is requested. [#58995][#58995] {% comment %}doc{% endcomment %}
- The default output format for `file-group` and `stderr` sinks has been changed to `crdb-v2`. This new format is non-ambiguous and makes it possible to reliably parse log files. Refer to the format's [documentation](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logformats.md) for details. Additionally, it prevents single log lines from exceeding a large size; this problem is inherent to the `crdb-v1` format and can prevent [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v21.1/cockroach-debug-zip) from retrieving `v1` log files. The new format has also been designed so that existing log file analyzers for the `crdb-v1` format can read entries written in the new format; however, this conversion may be imperfect. Refer to the new format's [documentation](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logformats.md) for details. In case of incompatibility, users can force the previous format by using `format: crdb-v1` in their logging configuration. [#59087][#59087] {% comment %}doc{% endcomment %}

API endpoint changes

- The notable event `create_statistics` is only reported when the [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `sql.stats.post_events.enabled` is enabled. This fact is now also reported in the [event log reference documentation](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/eventlog.md). [#57877][#57877] {% comment %}doc{% endcomment %}
- The `Timestamp` field of structured notable events is now numeric, and encodes the number of nanoseconds since the Unix epoch. [#58070][#58070] {% comment %}doc{% endcomment %}
- The Health API now checks that the SQL server is ready to accept clients when a readiness check is requested. [#59350][#59350] {% comment %}doc{% endcomment %}

DB Console changes

- Made minor style changes to reflect the new branding palette. [#57130][#57130]
- Changed the default per-page value on the [Transactions page](https://www.cockroachlabs.com/docs/v21.1/ui-transactions-page) to 20, and made minor style updates. [#57824][#57824]
- Added a timeseries graph indicating statement denials due to feature flags on the [SQL Dashboard](https://www.cockroachlabs.com/docs/v21.1/ui-sql-dashboard) of the DB Console. [#57533][#57533] {% comment %}doc{% endcomment %}
- Updated labels on the [Hardware Dashboard](https://www.cockroachlabs.com/docs/v21.1/ui-hardware-dashboard) to be more accurate. [#57224][#57224]
- Updated the link back to [Statements](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page) from [Statement Details](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page#statement-details-page) so that it always links to the Statements list instead of invoking the "back" action in the browser. [#57975][#57975]
- On the [Sessions page](https://www.cockroachlabs.com/docs/v21.1/ui-sessions-page), every `age` label has been replaced with `duration`, and every `txn` label has been replaced with `transaction`. The actual metrics remain unchanged. [#58616][#58616] {% comment %}doc{% endcomment %}
- Renamed the `CPUs` metric column to `vCPUs` in the Node List on the [Cluster Overview page](https://www.cockroachlabs.com/docs/v21.1/ui-cluster-overview-page). [#58495][#58495] {% comment %}doc{% endcomment %}
- Added Open SQL Transactions to the [SQL Dashboard](https://www.cockroachlabs.com/docs/v21.1/ui-sql-dashboard) and renamed `SQL Queries` to `SQL Statements`. Removed references to "Distributed SQL Queries" where the metrics really describe all SQL queries. [#57477][#57477] {% comment %}doc{% endcomment %}

Bug fixes

- Fixed a bug in [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) where some unusual range boundaries in interleaved tables caused an error. [#58219][#58219]
- Previously, CockroachDB would encounter an internal error when performing a `JSONB - String` operation via the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution). This has been fixed. The bug was introduced in v20.2.0. [#57349][#57349]
- Previously, in rare situations, an automated replication change could result in a loss of quorum. This was possible when a cluster had down nodes and a simultaneous change in replication factor. Note that a change in the replication factor can occur automatically if the cluster comprises fewer than five available nodes. Experimentally, the likelihood of encountering this issue, even under contrived conditions, was small. [#56735][#56735]
- Fixed an internal error when using aggregates and window functions in an `ORDER BY` for a `UNION` or `VALUES` clause. [#57498][#57498]
- `DROP TYPE` and certain other statements that work over SQL scalar types now properly support type names containing special characters. [#57354][#57354]
- Fixed a performance regression, introduced in v20.2, in reading virtual tables that introspect the schema. [#57542][#57542]
- Removed a `system.jobs` full table scan that is expensive in the face of many completed jobs. [#57587][#57587]
- In v20.2.0, users with the `admin` role were mistakenly permitted to drop tables in the system database. That privilege has been revoked. [#57568][#57568]
- Previously, users could not perform a cluster restore from old backup chains (incrementals on top of fulls) when using the [`BACKUP INTO`](https://www.cockroachlabs.com/docs/v21.1/backup) syntax. This has been fixed. [#57656][#57656]
- Fixed a bug that could cause [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import) to incorrectly read files stored on Google Cloud if uploaded using its compression option (`gsutil -Z`). [#57745][#57745]
- Fixed a bug where `ST_MakeLine` and `ST_Collect` did not respect ordering when used over a window clause. [#57724][#57724]
- Fixed a bug where schema change jobs to add foreign keys to existing tables, via [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v21.1/alter-table), could sometimes not be successfully reverted (either due to being canceled or having failed). [#57598][#57598]
- Fixed a bug where concurrent addition of a foreign key constraint and drop of a unique index could cause the [foreign key constraint](https://www.cockroachlabs.com/docs/v21.1/foreign-key) to be added with no unique constraint on the referenced columns. [#57598][#57598]
- Adding a primary key constraint to a table without a primary key no longer ignores the name specified for the primary key. [#57146][#57146]
- Previously, when displaying replica counts during the [decommissioning process](https://www.cockroachlabs.com/docs/v21.1/remove-nodes), the replicas displayed for r1 were overcounted. This is no longer the case. [#57812][#57812]
- Fixed a bug whereby tables in schemas other than `public` would not be displayed when running [`SHOW TABLES FROM`](https://www.cockroachlabs.com/docs/v21.1/show-tables). [#57749][#57749]
- Fixed a bug where canceled queries reading from virtual tables could cause a crashing panic. [#57828][#57828]
- Fixed an assertion error caused by some DDL statements used in conjunction with common table expressions (`WITH`). [#57927][#57927]
- Previously, [`SHOW GRANTS ON DATABASE`](https://www.cockroachlabs.com/docs/v21.1/show-grants) did not include privileges that were granted on a database. Now it does. The `SHOW GRANTS ON DATABASE` output no longer includes a column for `schema_name`, as these grants are not specific to any schema. [#56866][#56866]
- The `information_schema.schema_privileges` table now includes the correct schema-level privileges for non-user-defined schemas. Previously, all of these schemas were omitted from the table. [#56866][#56866]
- The `has_schema_privilege` [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) function now works on user-defined schemas when checking for the `USAGE` privilege. [#56866][#56866]
- The `ST_FrechetDistance` [built-in](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) function no longer causes a `SIGFPE` panic for very small values of `densifyFrac` (e.g., 1e-20), and returns an error instead. [#57966][#57966]
- CockroachDB now removes a node's status entry when the node is [decommissioned](https://www.cockroachlabs.com/docs/v21.1/remove-nodes), to prevent it from appearing in API calls and UIs, and to prevent it from affecting node constraints such as localities and attributes for various operations. [#56529][#56529]
- Fixed a bug which caused type information to be omitted when decoding descriptors using either `crdb_internal.pb_to_json` or `cockroach debug decode-proto`. [#58087][#58087]
- Fixed a bug from v21.1.0-alpha.1 where the binary could crash if a running node lost its claim to a job while updating. [#58161][#58161]
- Fixed a bug where multiple invocations of `AddGeometryColumn` affecting the same table would result in only the last invocation applying. [#56663][#56663]
- Previously, CockroachDB could return non-deterministic output when querying the `information_schema.statistics` virtual table (internally used by [`SHOW INDEXES`](https://www.cockroachlabs.com/docs/v21.1/show-index)); namely, the implicit columns of the secondary indexes could be in arbitrary order. This is now fixed, and the columns will be in the same order as they are in the primary index. [#58191][#58191]
- Fixed a `column family 0 not found` crash caused by explaining or gathering [statement diagnostics](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page#diagnostics) on certain queries involving virtual tables. [#58208][#58208]
- Added a safeguard against crashes while running `SHOW STATISTICS USING JSON`, which is used internally for statement diagnostics and [`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze). [#58221][#58221]
- Fixed a bug where prior schema changes on a table that failed and could not be fully reverted could prevent the table from being dropped. [#57836][#57836]
- Previously, CockroachDB could crash when performing a `DELETE` operation after an alteration of the primary key in some cases. This is now fixed. The bug was introduced in v20.1. [#58153][#58153]
- Fixed a bug that could cause incremental backups to a backup in a collection (i.e., [`BACKUP INTO ... IN ...`](https://www.cockroachlabs.com/docs/v21.1/backup)) on some cloud storage providers to ignore existing incremental backups previously appended to that destination and instead back up incrementally from the base backup in that destination. [#58292][#58292]
- Fixed an internal panic when using the `SHOW STATISTICS USING JSON` statement on a table containing `ENUM` types. [#58251][#58251]
- Fixed a memory leak in the optimizer. The leak could have caused unbounded growth of memory usage for a session when planning queries on tables with partial indexes. [#58306][#58306]
- Fixed a bug where primary key changes on tables being watched by [changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) would be silently ignored. [#58140][#58140]
- The `pg_catalog` metadata tables were using the `CHAR` data type for single-byte character columns. Now they use the `char` data type, to match PostgreSQL. This resolves errors that would occur when using some drivers (like `tokio-postgres` for Rust) to access `pg_catalog` tables in the binary query format. [#58084][#58084]
- Prepared statements that include [enums](https://www.cockroachlabs.com/docs/v21.1/enum) and use the binary format will no longer result in an error. [#58043][#58043]
- Fixed an internal error that could occur when executing prepared statements with placeholders that delete data from columns referenced by a foreign key with `ON DELETE CASCADE`. [#58431][#58431]
- Fixed a bug which caused errors when querying a table with a disjunctive filter (an `OR` expression) that is the same as or similar to the predicate of one of the table's partial indexes. [#58434][#58434]
- Previously, in event log entries for `ALTER TYPE` and `DROP TYPE` statements, the `TypeName` field did not contain fully qualified names. This has been fixed. [#58257][#58257]
- A [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table) statement with indexes with duplicate names will no longer result in an assertion failure. This bug was present since v20.2. [#58433][#58433]
- Previously, event logs were not capturing the qualified table names for [`COMMENT ON INDEX`](https://www.cockroachlabs.com/docs/v21.1/comment-on) and [`COMMENT ON COLUMN`](https://www.cockroachlabs.com/docs/v21.1/comment-on) commands. The event logs now use the qualified table name. [#58472][#58472]
- The `has_${OBJECT}_privilege` built-in methods such as `has_schema_privilege` now additionally check whether the roles of which a user is a direct or indirect member also have privileges on the object. Previously, only the user itself was checked, which was incorrect. This bug has been present since v2.0 but became more prominent in v20.2 when [role-based access control](https://www.cockroachlabs.com/docs/v21.1/authorization#roles) was made available in the core version of CockroachDB. [#58254][#58254]
- CockroachDB previously would return an internal error when attempting to execute a hash join on a JSON column via the [vectorized engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution). Now a more user-friendly error is returned. [#57994][#57994]
- Fixed a panic in protobuf decoding. [#58716][#58716]
- The user authentication flow no longer performs extraneous name lookups. This performance regression was present since v20.2. [#58671][#58671]
- CockroachDB could previously return an internal error when evaluating a binary expression between a Decimal and an Interval that required a cast to a Float when the value is out of range. A more user-friendly error is now returned instead. [#58743][#58743]
- The `alter_table_owner` event log now uses the qualified table name. [#58504][#58504]
- Fixed a bug that caused errors when accessing a tuple column (`tuple.column` syntax) of a tuple that could be statically determined to be null. [#58747][#58747]
- The `indoption` column in `pg_catalog.pg_index` is now populated correctly. [#58947][#58947]
- Previously, CockroachDB could encounter an internal error when executing queries with tuples containing null values and [enums](https://www.cockroachlabs.com/docs/v21.1/enum) in a distributed setting. This is now fixed. [#58894][#58894]
- Fixed a nil pointer panic edge case in query setup code. [#59002][#59002]
- Non-ASCII characters in `NAME` results in [`cockroach sql`](https://www.cockroachlabs.com/docs/v21.1/cockroach-sql) / [`demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) (e.g., in the results of [`SHOW TABLES`](https://www.cockroachlabs.com/docs/v21.1/show-tables) and [`SHOW CONSTRAINTS`](https://www.cockroachlabs.com/docs/v21.1/show-constraints)) are now displayed without being escaped to octal codes. [#56630][#56630]
- Garbage collection (GC) jobs now populate the `running_status` column for [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v21.1/show-jobs). This bug has been present since v20.1. [#58612][#58612]
- Previously, CockroachDB could encounter an internal error when executing queries with `BYTES` or `STRING` types via the [vectorized engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) in rare circumstances. This is now fixed. [#59028][#59028]
- Previously, the `substring` function on byte arrays would treat its input as Unicode code points, which would cause the wrong bytes to be returned. Now it only operates on the raw bytes. [#58265][#58265]
- Previously, the `substring(byte[])` functions were not able to interpret bytes containing the `\` character, since it was treated as the beginning of an escape sequence. This is now fixed. [#58265][#58265]
- Fixed a bug in which some non-conflicting rows provided as input to an [`INSERT ... ON CONFLICT DO NOTHING`](https://www.cockroachlabs.com/docs/v21.1/insert) statement could be discarded and not inserted. This could happen in cases where the table had one or more unique indexes in addition to the primary index, and some of the rows in the input conflicted with existing values in one or more unique indexes; the rows that did not conflict could be erroneously discarded. This has now been fixed. [#59147][#59147]
- Fixed an internal error that could occur when `ARRAY[NULL]` was used in a query due to incorrect typing. `ARRAY[NULL]` is now typed as `string[]` if the type cannot be otherwise inferred from the context. This is the same logic that PostgreSQL uses, improving compatibility in addition to fixing the internal error. [#59136][#59136]
- Fixed a slow or hanging query that could be caused by using a large `max_decimal_digits` value with `ST_AsGeoJSON`. [#59165][#59165]
- Queries that attempt to retrieve just the key columns of a single system table row will no longer return erroneous values. [#58659][#58659]
- Fixed a bug in URL handling of HTTP external storage paths on Windows. [#59216][#59216]
- Previously, CockroachDB could crash when executing `ALTER INDEX ... SPLIT/UNSPLIT AT` when more values were provided than are explicitly specified in the index. This has been fixed. [#59213][#59213]
- Fixed a bug where multi-tenancy SQL pods would not successfully initialize the GEOS library. [#59259][#59259]
- Placeholder values are now included alongside statements in structured events. [#59110][#59110]
- Added a qualification prefix for user-defined schema names in event logs. [#58617][#58617]
- Added a qualification prefix for dropped views in event logs. [#59058][#59058]
- Parsing errors are no longer thrown when importing a `pgdump` file with array data. [#58244][#58244]
- Fixed a crash when creating backup schedules writing to GCS buckets. [#57617][#57617]
- CockroachDB now correctly exports `schedules_BACKUP_*` metrics as well as backup RPO metrics. [#57488][#57488]
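The `INSERT ... ON CONFLICT DO NOTHING` fix above is easiest to see with a table that has a secondary unique index. The following is a minimal sketch; the `users` table and its values are hypothetical, not taken from the linked PR:

```sql
-- A table with a unique index in addition to the primary index.
CREATE TABLE users (
    id INT PRIMARY KEY,
    email STRING UNIQUE
);

INSERT INTO users VALUES (1, 'a@example.com');

-- Row (2, 'a@example.com') conflicts on the email unique index;
-- row (3, 'c@example.com') does not conflict. Before the fix, the
-- non-conflicting row could be erroneously discarded instead of inserted.
INSERT INTO users
    VALUES (2, 'a@example.com'), (3, 'c@example.com')
    ON CONFLICT DO NOTHING;
```

With the fix, only the conflicting row is skipped, and the statement inserts row 3.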

Performance improvements

- Previously, when performing an unordered `DISTINCT` operation via the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution), CockroachDB would buffer up all tuples from the input, which is suboptimal when the query has a `LIMIT` clause. This has now been fixed. This behavior was introduced in v20.1. Note that the row-by-row engine doesn't have this issue. [#57579][#57579]
- The query optimizer can use filters that constrain columns to multiple constant values to generate lookup joins. For example, a join filter `x.a = y.a AND y.b IN (1, 2)` can be used to generate a lookup join on table `y`, assuming that it has an index on `(a, b)` or `(b, a)`. [#57690][#57690]
- The query optimizer now explores plans with lookup joins on partitioned indexes, resulting in more efficient query plans in some cases. [#57690][#57690]
- Potentially improved performance for [`UPDATE`](https://www.cockroachlabs.com/docs/v21.1/update) statements where the table has computed columns that do not depend on updated columns. [#58188][#58188]
- CockroachDB now allows the storage engine to compact SSTables based on reads, benefiting read-heavy workloads. [#58247][#58247]
- Partial indexes with `IS NOT NULL` predicates can be used in cases where `JOIN` filters implicitly imply the predicate. This results in more efficient query plans for `JOIN`s and foreign key checks. [#58204][#58204]
- Queries that use a geospatial GIN index can now take advantage of [vectorized execution](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) for some parts of the query plan, resulting in improved performance. [#58241][#58241]
- Previously, indexed columns of partial indexes were always fetched for [`UPDATE`s](https://www.cockroachlabs.com/docs/v21.1/update) and [`UPSERT`s](https://www.cockroachlabs.com/docs/v21.1/upsert). Now they are only fetched if they are required for maintaining the state of the index. If an `UPDATE` or `UPSERT` mutates columns that are neither indexed by a partial index nor referenced in a partial index predicate, those columns will no longer be fetched (assuming that they are not needed to maintain the state of other indexes, including the primary index). [#58358][#58358]
- [`UPDATE`](https://www.cockroachlabs.com/docs/v21.1/update) operations on tables with partial indexes no longer evaluate partial index predicate expressions when it is guaranteed that the operation will not alter the state of the partial index. In some cases, this can eliminate fetching the existing value of columns that are referenced in partial index predicates. [#58358][#58358]
- The `sum_int` aggregate function is now evaluated more efficiently in a distributed setting. [#58345][#58345]
- [`INSERT ... ON CONFLICT ... DO NOTHING`](https://www.cockroachlabs.com/docs/v21.1/insert) statements now use anti-joins for detecting conflicts. This simplifies the query plan for these statements, which may result in more efficient execution. [#58679][#58679]
- Improved the accuracy of histogram calculations for types in the `string`, `uuid`, and `inet` families. Support for `time`/`timetz` histogram calculations was also added. This improves the optimizer's estimates and results in better query plans in certain instances. [#55797][#55797]
- The optimizer now uses collected histogram statistics to better estimate the cost of JSON and ARRAY GIN index scans, which may lead to more efficient query plans. [#59326][#59326]
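The multi-constant lookup join improvement described above can be sketched with the `x.a = y.a AND y.b IN (1, 2)` filter from the release note. The table definitions are hypothetical, chosen only to satisfy the stated index requirement:

```sql
-- Hypothetical schema: y has an index whose columns (b, a) are all
-- constrained by the join filter, either to constants or to x.a.
CREATE TABLE x (a INT PRIMARY KEY);
CREATE TABLE y (a INT, b INT, INDEX (b, a));

-- y.b is constrained to the constants 1 and 2, so the optimizer can
-- plan a lookup join into y's index on (b, a) rather than a full scan.
SELECT * FROM x JOIN y ON x.a = y.a AND y.b IN (1, 2);
```

Running `EXPLAIN` on such a query is one way to confirm whether a lookup join was chosen on a given version.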

Build changes

- CockroachDB now builds on Ubuntu 20.10 and other distros using `gcc-10`. [#58895][#58895] {% comment %}doc{% endcomment %}

Doc updates

- The types of logging sinks available through configuration are now [automatically documented](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logsinks.md). [#59083][#59083]
- The various output formats available for logging configurations are now documented. See the generated [reference documentation](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logformats.md) for details. [#58075][#58075]
- The cluster event logging system has been standardized. [Reference documentation](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/eventlog.md) is now available (auto-generated from source code); changes to non-reserved payloads will now be announced at least one release version in advance. The event types are organized into broad categories: SQL Logical Schema Changes, SQL Privilege Changes, SQL User Management, Cluster-level events, and SQL Miscellaneous operations. [#57737][#57737]
- A report of the possible logging severities and channels is now [automatically generated](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logging.md). [#57134][#57134]

Contributors

This release includes 615 merged PRs by 85 authors. We would like to thank the following contributors from the CockroachDB community:

- Alan Acosta (first-time contributor)
- ArjunM98
- Azdim Zul Fahmi
- Cheng Jing (first-time contributor)
- Cyrus Javan
- Erik Grinaker
- Javier Fernandez-Ivern (first-time contributor)
- Kumar Akshay (first-time contributor)
- Maciej Rzasa (first-time contributor)
- Marcin Knychała
- Max Neverov
- Miguel Novelo (first-time contributor)
- Omar Bahareth (first-time contributor)
- Petr Jediný
- Ruixin Bao
- Saif Al-Harthi (first-time contributor)
- Vaibhav
- Yanxin (first-time contributor)
- b41sh (first-time contributor)
- mosquito2333 (first-time contributor)
- neeral
[#51987]: https://github.com/cockroachdb/cockroach/pull/51987
[#52745]: https://github.com/cockroachdb/cockroach/pull/52745
[#53974]: https://github.com/cockroachdb/cockroach/pull/53974
[#55065]: https://github.com/cockroachdb/cockroach/pull/55065
[#55713]: https://github.com/cockroachdb/cockroach/pull/55713
[#55797]: https://github.com/cockroachdb/cockroach/pull/55797
[#56329]: https://github.com/cockroachdb/cockroach/pull/56329
[#56473]: https://github.com/cockroachdb/cockroach/pull/56473
[#56496]: https://github.com/cockroachdb/cockroach/pull/56496
[#56529]: https://github.com/cockroachdb/cockroach/pull/56529
[#56630]: https://github.com/cockroachdb/cockroach/pull/56630
[#56663]: https://github.com/cockroachdb/cockroach/pull/56663
[#56735]: https://github.com/cockroachdb/cockroach/pull/56735
[#56740]: https://github.com/cockroachdb/cockroach/pull/56740
[#56843]: https://github.com/cockroachdb/cockroach/pull/56843
[#56866]: https://github.com/cockroachdb/cockroach/pull/56866
[#56980]: https://github.com/cockroachdb/cockroach/pull/56980
[#57130]: https://github.com/cockroachdb/cockroach/pull/57130
[#57134]: https://github.com/cockroachdb/cockroach/pull/57134
[#57136]: https://github.com/cockroachdb/cockroach/pull/57136
[#57146]: https://github.com/cockroachdb/cockroach/pull/57146
[#57171]: https://github.com/cockroachdb/cockroach/pull/57171
[#57195]: https://github.com/cockroachdb/cockroach/pull/57195
[#57224]: https://github.com/cockroachdb/cockroach/pull/57224
[#57240]: https://github.com/cockroachdb/cockroach/pull/57240
[#57244]: https://github.com/cockroachdb/cockroach/pull/57244
[#57250]: https://github.com/cockroachdb/cockroach/pull/57250
[#57272]: https://github.com/cockroachdb/cockroach/pull/57272
[#57274]: https://github.com/cockroachdb/cockroach/pull/57274
[#57275]: https://github.com/cockroachdb/cockroach/pull/57275
[#57295]: https://github.com/cockroachdb/cockroach/pull/57295
[#57327]: https://github.com/cockroachdb/cockroach/pull/57327
[#57337]: https://github.com/cockroachdb/cockroach/pull/57337
[#57349]: https://github.com/cockroachdb/cockroach/pull/57349
[#57354]: https://github.com/cockroachdb/cockroach/pull/57354
[#57380]: https://github.com/cockroachdb/cockroach/pull/57380
[#57382]: https://github.com/cockroachdb/cockroach/pull/57382
[#57387]: https://github.com/cockroachdb/cockroach/pull/57387
[#57390]: https://github.com/cockroachdb/cockroach/pull/57390
[#57412]: https://github.com/cockroachdb/cockroach/pull/57412
[#57419]: https://github.com/cockroachdb/cockroach/pull/57419
[#57420]: https://github.com/cockroachdb/cockroach/pull/57420
[#57423]: https://github.com/cockroachdb/cockroach/pull/57423
[#57477]: https://github.com/cockroachdb/cockroach/pull/57477
[#57488]: https://github.com/cockroachdb/cockroach/pull/57488
[#57498]: https://github.com/cockroachdb/cockroach/pull/57498
[#57500]: https://github.com/cockroachdb/cockroach/pull/57500
[#57513]: https://github.com/cockroachdb/cockroach/pull/57513
[#57519]: https://github.com/cockroachdb/cockroach/pull/57519
[#57533]: https://github.com/cockroachdb/cockroach/pull/57533
[#57542]: https://github.com/cockroachdb/cockroach/pull/57542
[#57567]: https://github.com/cockroachdb/cockroach/pull/57567
[#57568]: https://github.com/cockroachdb/cockroach/pull/57568
[#57579]: https://github.com/cockroachdb/cockroach/pull/57579
[#57583]: https://github.com/cockroachdb/cockroach/pull/57583
[#57587]: https://github.com/cockroachdb/cockroach/pull/57587
[#57598]: https://github.com/cockroachdb/cockroach/pull/57598
[#57609]: https://github.com/cockroachdb/cockroach/pull/57609
[#57615]: https://github.com/cockroachdb/cockroach/pull/57615
[#57617]: https://github.com/cockroachdb/cockroach/pull/57617
[#57618]: https://github.com/cockroachdb/cockroach/pull/57618
[#57644]: https://github.com/cockroachdb/cockroach/pull/57644
[#57653]: https://github.com/cockroachdb/cockroach/pull/57653
[#57654]: https://github.com/cockroachdb/cockroach/pull/57654
[#57656]: https://github.com/cockroachdb/cockroach/pull/57656
[#57666]: https://github.com/cockroachdb/cockroach/pull/57666
[#57690]: https://github.com/cockroachdb/cockroach/pull/57690
[#57697]: https://github.com/cockroachdb/cockroach/pull/57697
[#57713]: https://github.com/cockroachdb/cockroach/pull/57713
[#57724]: https://github.com/cockroachdb/cockroach/pull/57724
[#57730]: https://github.com/cockroachdb/cockroach/pull/57730
[#57733]: https://github.com/cockroachdb/cockroach/pull/57733
[#57737]: https://github.com/cockroachdb/cockroach/pull/57737
[#57745]: https://github.com/cockroachdb/cockroach/pull/57745
[#57749]: https://github.com/cockroachdb/cockroach/pull/57749
[#57776]: https://github.com/cockroachdb/cockroach/pull/57776
[#57804]: https://github.com/cockroachdb/cockroach/pull/57804
[#57811]: https://github.com/cockroachdb/cockroach/pull/57811
[#57812]: https://github.com/cockroachdb/cockroach/pull/57812
[#57817]: https://github.com/cockroachdb/cockroach/pull/57817
[#57824]: https://github.com/cockroachdb/cockroach/pull/57824
[#57828]: https://github.com/cockroachdb/cockroach/pull/57828
[#57836]: https://github.com/cockroachdb/cockroach/pull/57836
[#57837]: https://github.com/cockroachdb/cockroach/pull/57837
[#57839]: https://github.com/cockroachdb/cockroach/pull/57839
[#57851]: https://github.com/cockroachdb/cockroach/pull/57851
[#57877]: https://github.com/cockroachdb/cockroach/pull/57877
[#57879]: https://github.com/cockroachdb/cockroach/pull/57879
[#57927]: https://github.com/cockroachdb/cockroach/pull/57927
[#57954]: https://github.com/cockroachdb/cockroach/pull/57954
[#57966]: https://github.com/cockroachdb/cockroach/pull/57966
[#57969]: https://github.com/cockroachdb/cockroach/pull/57969
[#57975]: https://github.com/cockroachdb/cockroach/pull/57975
[#57991]: https://github.com/cockroachdb/cockroach/pull/57991
[#57994]: https://github.com/cockroachdb/cockroach/pull/57994
[#58014]: https://github.com/cockroachdb/cockroach/pull/58014
[#58034]: https://github.com/cockroachdb/cockroach/pull/58034
[#58043]: https://github.com/cockroachdb/cockroach/pull/58043
[#58045]: https://github.com/cockroachdb/cockroach/pull/58045
[#58053]: https://github.com/cockroachdb/cockroach/pull/58053
[#58070]: https://github.com/cockroachdb/cockroach/pull/58070
[#58072]: https://github.com/cockroachdb/cockroach/pull/58072
[#58075]: https://github.com/cockroachdb/cockroach/pull/58075
[#58078]: https://github.com/cockroachdb/cockroach/pull/58078
[#58084]: https://github.com/cockroachdb/cockroach/pull/58084
[#58087]: https://github.com/cockroachdb/cockroach/pull/58087
[#58088]: https://github.com/cockroachdb/cockroach/pull/58088
[#58117]: https://github.com/cockroachdb/cockroach/pull/58117
[#58118]: https://github.com/cockroachdb/cockroach/pull/58118
[#58120]: https://github.com/cockroachdb/cockroach/pull/58120
[#58121]: https://github.com/cockroachdb/cockroach/pull/58121
[#58125]: https://github.com/cockroachdb/cockroach/pull/58125
[#58126]: https://github.com/cockroachdb/cockroach/pull/58126
[#58128]: https://github.com/cockroachdb/cockroach/pull/58128
[#58130]: https://github.com/cockroachdb/cockroach/pull/58130
[#58140]: https://github.com/cockroachdb/cockroach/pull/58140
[#58146]: https://github.com/cockroachdb/cockroach/pull/58146
[#58153]: https://github.com/cockroachdb/cockroach/pull/58153
[#58161]: https://github.com/cockroachdb/cockroach/pull/58161
[#58173]: https://github.com/cockroachdb/cockroach/pull/58173
[#58188]: https://github.com/cockroachdb/cockroach/pull/58188
[#58191]: https://github.com/cockroachdb/cockroach/pull/58191
[#58192]: https://github.com/cockroachdb/cockroach/pull/58192
[#58204]: https://github.com/cockroachdb/cockroach/pull/58204
[#58208]: https://github.com/cockroachdb/cockroach/pull/58208
[#58212]: https://github.com/cockroachdb/cockroach/pull/58212
[#58219]: https://github.com/cockroachdb/cockroach/pull/58219
[#58221]: https://github.com/cockroachdb/cockroach/pull/58221
[#58233]: https://github.com/cockroachdb/cockroach/pull/58233
[#58241]: https://github.com/cockroachdb/cockroach/pull/58241
[#58244]: https://github.com/cockroachdb/cockroach/pull/58244
[#58247]: https://github.com/cockroachdb/cockroach/pull/58247
[#58251]: https://github.com/cockroachdb/cockroach/pull/58251
[#58254]: https://github.com/cockroachdb/cockroach/pull/58254
[#58257]: https://github.com/cockroachdb/cockroach/pull/58257
[#58265]: https://github.com/cockroachdb/cockroach/pull/58265
[#58288]: https://github.com/cockroachdb/cockroach/pull/58288
[#58292]: https://github.com/cockroachdb/cockroach/pull/58292
[#58305]: https://github.com/cockroachdb/cockroach/pull/58305
[#58306]: https://github.com/cockroachdb/cockroach/pull/58306
[#58317]: https://github.com/cockroachdb/cockroach/pull/58317
[#58345]: https://github.com/cockroachdb/cockroach/pull/58345
[#58349]: https://github.com/cockroachdb/cockroach/pull/58349
[#58354]: https://github.com/cockroachdb/cockroach/pull/58354
[#58358]: https://github.com/cockroachdb/cockroach/pull/58358
[#58359]: https://github.com/cockroachdb/cockroach/pull/58359
[#58381]: https://github.com/cockroachdb/cockroach/pull/58381
[#58399]: https://github.com/cockroachdb/cockroach/pull/58399
[#58423]: https://github.com/cockroachdb/cockroach/pull/58423
[#58431]: https://github.com/cockroachdb/cockroach/pull/58431
[#58433]: https://github.com/cockroachdb/cockroach/pull/58433
[#58434]: https://github.com/cockroachdb/cockroach/pull/58434
[#58452]: https://github.com/cockroachdb/cockroach/pull/58452
[#58454]: https://github.com/cockroachdb/cockroach/pull/58454
[#58466]: https://github.com/cockroachdb/cockroach/pull/58466
[#58470]: https://github.com/cockroachdb/cockroach/pull/58470
[#58472]: https://github.com/cockroachdb/cockroach/pull/58472
[#58487]: https://github.com/cockroachdb/cockroach/pull/58487
[#58495]: https://github.com/cockroachdb/cockroach/pull/58495
[#58504]: https://github.com/cockroachdb/cockroach/pull/58504
[#58612]: https://github.com/cockroachdb/cockroach/pull/58612
[#58616]: https://github.com/cockroachdb/cockroach/pull/58616
[#58617]: https://github.com/cockroachdb/cockroach/pull/58617
[#58659]: https://github.com/cockroachdb/cockroach/pull/58659
[#58671]: https://github.com/cockroachdb/cockroach/pull/58671
[#58679]: https://github.com/cockroachdb/cockroach/pull/58679
[#58716]: https://github.com/cockroachdb/cockroach/pull/58716
[#58725]: https://github.com/cockroachdb/cockroach/pull/58725
[#58740]: https://github.com/cockroachdb/cockroach/pull/58740
[#58743]: https://github.com/cockroachdb/cockroach/pull/58743
[#58747]: https://github.com/cockroachdb/cockroach/pull/58747
[#58748]: https://github.com/cockroachdb/cockroach/pull/58748
[#58749]: https://github.com/cockroachdb/cockroach/pull/58749
[#58888]: https://github.com/cockroachdb/cockroach/pull/58888
[#58891]: https://github.com/cockroachdb/cockroach/pull/58891
[#58894]: https://github.com/cockroachdb/cockroach/pull/58894
[#58895]: https://github.com/cockroachdb/cockroach/pull/58895
[#58898]: https://github.com/cockroachdb/cockroach/pull/58898
[#58903]: https://github.com/cockroachdb/cockroach/pull/58903
[#58928]: https://github.com/cockroachdb/cockroach/pull/58928
[#58937]: https://github.com/cockroachdb/cockroach/pull/58937
[#58947]: https://github.com/cockroachdb/cockroach/pull/58947
[#58987]: https://github.com/cockroachdb/cockroach/pull/58987
[#58988]: https://github.com/cockroachdb/cockroach/pull/58988
[#58995]: https://github.com/cockroachdb/cockroach/pull/58995
[#59002]: https://github.com/cockroachdb/cockroach/pull/59002
[#59008]: https://github.com/cockroachdb/cockroach/pull/59008
[#59009]: https://github.com/cockroachdb/cockroach/pull/59009
[#59028]: https://github.com/cockroachdb/cockroach/pull/59028
[#59049]: https://github.com/cockroachdb/cockroach/pull/59049
[#59058]: https://github.com/cockroachdb/cockroach/pull/59058
[#59083]: https://github.com/cockroachdb/cockroach/pull/59083
[#59087]: https://github.com/cockroachdb/cockroach/pull/59087
[#59110]: https://github.com/cockroachdb/cockroach/pull/59110
[#59121]: https://github.com/cockroachdb/cockroach/pull/59121
[#59136]: https://github.com/cockroachdb/cockroach/pull/59136
[#59144]: https://github.com/cockroachdb/cockroach/pull/59144
[#59147]: https://github.com/cockroachdb/cockroach/pull/59147
[#59165]: https://github.com/cockroachdb/cockroach/pull/59165
[#59178]: https://github.com/cockroachdb/cockroach/pull/59178
[#59206]: https://github.com/cockroachdb/cockroach/pull/59206
[#59212]: https://github.com/cockroachdb/cockroach/pull/59212
[#59213]: https://github.com/cockroachdb/cockroach/pull/59213
[#59216]: https://github.com/cockroachdb/cockroach/pull/59216
[#59223]: https://github.com/cockroachdb/cockroach/pull/59223
[#59234]: https://github.com/cockroachdb/cockroach/pull/59234
[#59248]: https://github.com/cockroachdb/cockroach/pull/59248
[#59256]: https://github.com/cockroachdb/cockroach/pull/59256
[#59259]: https://github.com/cockroachdb/cockroach/pull/59259
[#59313]: https://github.com/cockroachdb/cockroach/pull/59313
[#59319]: https://github.com/cockroachdb/cockroach/pull/59319
[#59326]: https://github.com/cockroachdb/cockroach/pull/59326
[#59350]: https://github.com/cockroachdb/cockroach/pull/59350

diff --git a/src/current/_includes/releases/v21.1/v21.1.0-alpha.3.md b/src/current/_includes/releases/v21.1/v21.1.0-alpha.3.md
deleted file mode 100644
index c70df2657bd..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.0-alpha.3.md
+++ /dev/null
@@ -1,158 +0,0 @@

## v21.1.0-alpha.3

Release Date: February 8, 2021

General changes

- -- Added ability to further debug connections shut down automatically by the server. [#59460][#59460] - -

SQL language changes

- -- Fixed up [`ALTER TABLE ... ADD CONSTRAINT ... UNIQUE`](https://www.cockroachlabs.com/docs/v21.1/add-constraint) to partition correctly under a [`PARTITION ALL BY`](https://www.cockroachlabs.com/docs/v21.1/partition-by) table. [#59364][#59364] -- CockroachDB now applies zone configs to new [unique constraints](https://www.cockroachlabs.com/docs/v21.1/unique) in `REGIONAL BY ROW` tables. [#59364][#59364] -- A new, unused field called `global_reads` was added to [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones). The field does not yet have any effect. [#59304][#59304] -- A new [private cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `sql.distsql.temp_storage.hash_agg.enabled` was added that allows users to disable the disk spilling capability of the hash aggregation in the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution). It is `true` by default which incurs some performance hit. Setting it to `false` will improve the performance, but the queries might hit an out-of-memory limit error. [#59414][#59414] -- The optimizer now enforces a unique constraint on the explicit index columns for implicitly partitioned unique indexes and unique indexes in [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v21.1/create-table). An attempt to insert or update a row such that the unique constraint is violated will result in an error. [#59501][#59501] -- [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v21.1/create-table) now preserve [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) when using [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.1/alter-primary-key). [#59365][#59365] -- Implemented [`ALTER TABLE ... LOCALITY REGIONAL BY TABLE`](https://www.cockroachlabs.com/docs/v21.1/alter-table) from `LOCALITY GLOBAL`. 
[#59407][#59407] -- [Zone configs](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) now support new attributes `num_voters` and `voter_constraints`. `num_voters` specifies the number of [voting replicas](https://www.cockroachlabs.com/docs/v21.1/architecture/life-of-a-distributed-transaction#consensus). When `num_voters` is explicitly specified, `num_replicas` will be the sum of voting and non-voting replicas. `voter_constraints` specifies the constraints that govern the placement of just the voting replicas, whereas the existing `constraints` attribute governs the placement of all replicas (voting as well as non-voting). [#57184][#57184] -- Added SQL syntax for [`RESTORE tenant x FROM REPLICATION STREAM FROM 'replication_stream'`](https://www.cockroachlabs.com/docs/v21.1/restore). This allows the user to start an ingestion job to ingest KVs from the replication stream into the destination tenant's keyspace. [#59112][#59112] -- The [`SHOW CREATE`](https://www.cockroachlabs.com/docs/v21.1/show-create) statement now lists [`ALTER PARTITION`](https://www.cockroachlabs.com/docs/v21.1/alter-partition) statements sorted by the partition name and `index_name`. [#59580][#59580] -- Error messages for cross-database links now include a hint directing the user to the [deprecation docs](https://www.cockroachlabs.com/docs/v21.1/cluster-settings). An example message looks like: ``` ERROR: the view cannot refer to other databases; (see the 'sql.cross_db_views.enabled' cluster setting) SQLSTATE: 0A000 HINT: Note that cross database references will be removed in future releases. See: https://www.cockroachlabs.com/docs/releases/v21.1.html#v21-1-0-deprecations ``` [#59551][#59551] -- The `escape_string_warning` session variable from PostgreSQL was added with [compatibility-only](https://www.cockroachlabs.com/docs/v21.1/postgresql-compatibility) support. It defaults to `on` and cannot be changed.
[#59479][#59479] -- Multi-column [GIN indexes](https://www.cockroachlabs.com/docs/v21.1/create-index#create-gin-indexes) can now be created. The last indexed column must be inverted types such as [`JSON`](https://www.cockroachlabs.com/docs/v21.1/jsonb), [`ARRAY`](https://www.cockroachlabs.com/docs/v21.1/array), [`GEOMETRY`, and `GEOGRAPHY`](https://www.cockroachlabs.com/docs/v21.1/spatial-data). All preceding columns must have types that are indexable. These indexes may be used for queries that constrain all index columns. [#59565][#59565] -- Added `WITH full_table_name` option to create a changefeed on `movr.public.drivers` instead of `drivers`. [#59258][#59258] -- [`UPSERT`s](https://www.cockroachlabs.com/docs/v21.1/upsert) on tables with an implicitly partitioned primary index now use only the explicit primary key columns as the conflict columns, excluding all implicit partitioning columns. This also applies to `REGIONAL BY ROW` tables, ensuring that the `crdb_region` column is not included in the `UPSERT` key. [#59654][#59654] - -
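For illustration, the new `num_voters` and `voter_constraints` zone-config attributes might be used like this (the table name and region are hypothetical):

```sql
-- Hypothetical sketch: 5 replicas total, 3 of them voting,
-- with voting replicas pinned to a single region.
ALTER TABLE users CONFIGURE ZONE USING
    num_replicas = 5,
    num_voters = 3,
    voter_constraints = '[+region=us-east1]';
```

Here the `constraints` attribute (if set) would continue to govern all replicas, voting and non-voting alike.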

Command-line changes

- -- The [`cockroach`](https://www.cockroachlabs.com/docs/v21.1/cockroach-commands) command now supports the command-line parameter `--version` which reports its version parameters. This makes `cockroach --version` equivalent to `cockroach version`. [#58665][#58665] -- The [`cockroach version`](https://www.cockroachlabs.com/docs/v21.1/cockroach-version) command now supports a new parameter `--build-tag`; when specified, it displays the technical build tag, which makes it possible to integrate with automated deployment tools. [#58665][#58665] -- The `channels` parameter for the log sink [configurations](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logsinks.md) now supports a more flexible input configuration format: - Syntax to include specific channels: - `channels: dev,sessions` (yaml string) - `channels: 'dev,sessions'` (yaml string, equivalent to previous) - `channels: [dev,sessions]` (yaml array, can use multi-line syntax with hyphens too) - `channels: ['dev','sessions']` (same as previous) - `channels: '[dev,sessions]'` (bracket-enclosed list inside yaml string) - - Syntax to include all channels: - `channels: all` (yaml string) - `channels: 'all'` (same as previous) - `channels: [all]` (yaml array) - `channels: ['all']` (same as previous) - `channels: '[all]'` (bracket-enclosed "all" inside yaml string) - - Syntax to include all channels except some: - `channels: all except dev,sessions` (yaml string) - `channels: 'all except dev,sessions'` (same as previous, quoted string) - `channels: 'all except [dev,sessions]'` (same as previous, list is bracket enclosed) - - For example: ``` sinks: stderr: channels: - DEV - SQL_SESSIONS ``` uses the "native" YAML syntax for lists. [#59352][#59352] - -- The notification that `SIGHUP` was received, and that log files are flushed to disk as a result, is now sent to the [OPS logging channel](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logging.md#ops). 
[#59345][#59345] -- The notification that `SIGHUP` was received, and that TLS certificates are reloaded from disk as a result, is now sent to the OPS logging channel as a structured event. Refer to the [reference docs](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/logformats.md) for details about the event payload. [#59345][#59345] - -
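As an illustrative sketch, the channel-selection forms described above could be combined in a single log configuration (the `ops` file group name is hypothetical):

```yaml
sinks:
  stderr:
    # YAML-array form:
    channels: [DEV, SQL_SESSIONS]
  file-groups:
    ops:
      # "all except" form, as a quoted string:
      channels: 'all except dev,sessions'
```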

API endpoint changes

- -- Added a new API tree, in `/api/v2/*`, currently undocumented, that avoids the use of cookie-based authentication in favor of sessions in headers, and adds support for pagination. [#58436][#58436] - -

DB Console changes

- -- Updated the [table details page](https://www.cockroachlabs.com/docs/v21.1/ui-databases-page#table-details) to show table-specific zone configuration values when set, show constraints and lease preferences, and display a valid statement to re-configure zone configuration for that table. [#59196][#59196] -- Users can now see the time series of full table or index scans in the [Advanced Debug page](https://www.cockroachlabs.com/docs/v21.1/ui-debug-pages). [#59261][#59261] -- The `sql.leases.active` gauge is now available to track the outstanding descriptor leases in the [cluster](https://www.cockroachlabs.com/docs/v21.1/ui-cluster-overview-page). [#57561][#57561] -- Long queries are truncated in the DB Console. [#59603][#59603] - -

Bug fixes

- -- Added event logs for [`SET SCHEMA`](https://www.cockroachlabs.com/docs/v21.1/set-schema) statements. [#58737][#58737] -- Fixed an issue where a [left inverted join](https://www.cockroachlabs.com/docs/v21.1/joins) could have incorrect results. In particular, some output rows could have non-`NULL` values for right-side columns when the right-side columns should have been `NULL`. This issue has only existed in alpha releases of 21.1 so far, and it is now fixed. [#59279][#59279] -- Fixed a panic where type hints that mismatched placeholder names caused a crash. [#59450][#59450] -- Unexpected internal errors containing stack traces that reference a `countingWriter` null pointer have now been fixed. [#59477][#59477] -- Fixed a bug introduced in v20.1 in the DB Console where incorrect zone configuration values were shown on the table details page and constraints and lease preferences were not displayed. [#59196][#59196] -- [Changefeeds](https://www.cockroachlabs.com/docs/v21.1/create-changefeed) no longer error with "1e2 is not roundtrippable at scale 2" when 100 is stored in a column with width 2. [#59075][#59075] -- Fixed a bug which prevented [renaming a column](https://www.cockroachlabs.com/docs/v21.1/rename-column) that was referenced earlier in a transaction as part of a computed expression, index predicate, check expression, or `NOT NULL` constraint. [#59384][#59384] -- Added event logs for privilege changes in `crdb_internal.unsafe_xxx`. [#59282][#59282] -- Fixed a bug in which incorrect results could be returned for [left and anti joins](https://www.cockroachlabs.com/docs/v21.1/joins). This could happen when one of the columns on one side of the join was constrained to multiple constant values, either due to a check constraint or an `IN` clause. The bug resulted in non-matching input rows getting returned multiple times in the output, which is incorrect. This bug only affected previous alpha releases of 21.1, and has now been fixed.
[#59646][#59646] -- Fixed [`EXPLAIN ANALYZE (DEBUG)`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) interaction with the SQL shell. [#59557][#59557] -- Fixed a bug preventing [foreign key constraints](https://www.cockroachlabs.com/docs/v21.1/foreign-key) referencing hidden columns (e.g., `rowid`) from being added. [#59659][#59659] -- Fixed a bug where [`DROP SCHEMA ... CASCADE`](https://www.cockroachlabs.com/docs/v21.1/drop-schema) could drop types that were still referenced. [#59281][#59281] -- Fixed a bug whereby [dropping a schema](https://www.cockroachlabs.com/docs/v21.1/drop-schema) with a table that used a [user-defined type](https://www.cockroachlabs.com/docs/v21.1/create-type) which was not being dropped (because it is in a different schema) would corrupt the type descriptor by leaving a dangling back-reference to the dropped table. [#59281][#59281] - -

Performance improvements

- -- The [query optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) now plans scans over GIN indexes on [`JSON`](https://www.cockroachlabs.com/docs/v21.1/jsonb) columns for query filters that constrain the `JSON` column with equality and fetch value operators (`->`) inside conjunctions and disjunctions, like `j->'a' = '1' AND j->'b' = '2'`. [#59266][#59266] -- Fixed a bug included in 20.2.1 for the [`JSON`](https://www.cockroachlabs.com/docs/v21.1/jsonb) fetch value operator, `->` which resulted in chained `->` operators in query filters not being index accelerated (e.g., `j->'a'->'b' = '1'`). Chained `->` operators are now index accelerated. [#59494][#59494] -- Improved the allocation performance of workloads that use the [`EXTRACT`](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators) built-in. [#59598][#59598] - -
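A minimal sketch of the JSON index acceleration described above (table and column names are hypothetical):

```sql
-- Hypothetical table with an inverted (GIN) index on a JSONB column.
CREATE TABLE docs (k INT PRIMARY KEY, j JSONB, INVERTED INDEX (j));

-- Filters like these can now be served by the inverted index,
-- including the chained -> form fixed by #59494:
SELECT k FROM docs WHERE j->'a' = '1' AND j->'b' = '2';
SELECT k FROM docs WHERE j->'a'->'b' = '1';
```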
- -

Contributors

- -This release includes 116 merged PRs by 42 authors. -We would like to thank the following contributors from the CockroachDB community: - -- John Seekins (first-time contributor) -- Ulf Adams (first-time contributor) - -
- -[#57184]: https://github.com/cockroachdb/cockroach/pull/57184 -[#57561]: https://github.com/cockroachdb/cockroach/pull/57561 -[#58436]: https://github.com/cockroachdb/cockroach/pull/58436 -[#58665]: https://github.com/cockroachdb/cockroach/pull/58665 -[#58737]: https://github.com/cockroachdb/cockroach/pull/58737 -[#58863]: https://github.com/cockroachdb/cockroach/pull/58863 -[#58904]: https://github.com/cockroachdb/cockroach/pull/58904 -[#59023]: https://github.com/cockroachdb/cockroach/pull/59023 -[#59026]: https://github.com/cockroachdb/cockroach/pull/59026 -[#59075]: https://github.com/cockroachdb/cockroach/pull/59075 -[#59112]: https://github.com/cockroachdb/cockroach/pull/59112 -[#59196]: https://github.com/cockroachdb/cockroach/pull/59196 -[#59258]: https://github.com/cockroachdb/cockroach/pull/59258 -[#59261]: https://github.com/cockroachdb/cockroach/pull/59261 -[#59266]: https://github.com/cockroachdb/cockroach/pull/59266 -[#59279]: https://github.com/cockroachdb/cockroach/pull/59279 -[#59281]: https://github.com/cockroachdb/cockroach/pull/59281 -[#59282]: https://github.com/cockroachdb/cockroach/pull/59282 -[#59304]: https://github.com/cockroachdb/cockroach/pull/59304 -[#59345]: https://github.com/cockroachdb/cockroach/pull/59345 -[#59352]: https://github.com/cockroachdb/cockroach/pull/59352 -[#59364]: https://github.com/cockroachdb/cockroach/pull/59364 -[#59365]: https://github.com/cockroachdb/cockroach/pull/59365 -[#59384]: https://github.com/cockroachdb/cockroach/pull/59384 -[#59395]: https://github.com/cockroachdb/cockroach/pull/59395 -[#59407]: https://github.com/cockroachdb/cockroach/pull/59407 -[#59414]: https://github.com/cockroachdb/cockroach/pull/59414 -[#59450]: https://github.com/cockroachdb/cockroach/pull/59450 -[#59460]: https://github.com/cockroachdb/cockroach/pull/59460 -[#59474]: https://github.com/cockroachdb/cockroach/pull/59474 -[#59477]: https://github.com/cockroachdb/cockroach/pull/59477 -[#59479]: 
https://github.com/cockroachdb/cockroach/pull/59479 -[#59494]: https://github.com/cockroachdb/cockroach/pull/59494 -[#59501]: https://github.com/cockroachdb/cockroach/pull/59501 -[#59551]: https://github.com/cockroachdb/cockroach/pull/59551 -[#59557]: https://github.com/cockroachdb/cockroach/pull/59557 -[#59565]: https://github.com/cockroachdb/cockroach/pull/59565 -[#59580]: https://github.com/cockroachdb/cockroach/pull/59580 -[#59598]: https://github.com/cockroachdb/cockroach/pull/59598 -[#59603]: https://github.com/cockroachdb/cockroach/pull/59603 -[#59646]: https://github.com/cockroachdb/cockroach/pull/59646 -[#59654]: https://github.com/cockroachdb/cockroach/pull/59654 -[#59659]: https://github.com/cockroachdb/cockroach/pull/59659 -[088057a8f]: https://github.com/cockroachdb/cockroach/commit/088057a8f -[71de4f752]: https://github.com/cockroachdb/cockroach/commit/71de4f752 -[73b15ad5b]: https://github.com/cockroachdb/cockroach/commit/73b15ad5b -[893e3f68c]: https://github.com/cockroachdb/cockroach/commit/893e3f68c -[b94aad66d]: https://github.com/cockroachdb/cockroach/commit/b94aad66d -[c3f328eb5]: https://github.com/cockroachdb/cockroach/commit/c3f328eb5 -[c955e882e]: https://github.com/cockroachdb/cockroach/commit/c955e882e -[c9eafd522]: https://github.com/cockroachdb/cockroach/commit/c9eafd522 -[daf42d6b8]: https://github.com/cockroachdb/cockroach/commit/daf42d6b8 -[e2c147721]: https://github.com/cockroachdb/cockroach/commit/e2c147721 -[ea9074ba7]: https://github.com/cockroachdb/cockroach/commit/ea9074ba7 -[f901ad7aa]: https://github.com/cockroachdb/cockroach/commit/f901ad7aa -[fa324020c]: https://github.com/cockroachdb/cockroach/commit/fa324020c diff --git a/src/current/_includes/releases/v21.1/v21.1.0-beta.1.md b/src/current/_includes/releases/v21.1/v21.1.0-beta.1.md deleted file mode 100644 index 33a7041207e..00000000000 --- a/src/current/_includes/releases/v21.1/v21.1.0-beta.1.md +++ /dev/null @@ -1,509 +0,0 @@ -## v21.1.0-beta.1 - -Release Date: March 
22, 2021 - - - -

Backward-incompatible changes

- -- The [`cockroach debug ballast`](https://www.cockroachlabs.com/docs/v21.1/cockroach-debug-ballast) command now refuses to overwrite the target ballast file if it already exists. This change is intended to prevent mistaken uses of the `ballast` command by operators. Scripts that integrate `cockroach debug ballast` can consider adding a `rm` command. [#59995][#59995] {% comment %}doc{% endcomment %} -- Removed the `kv.atomic_replication_changes.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings). All replication changes on a range now use joint-consensus. [#61170][#61170] {% comment %}doc{% endcomment %} - -
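Since `cockroach debug ballast` now refuses to overwrite an existing file, a deployment script might remove the old ballast before recreating it; a sketch (the path and size are hypothetical):

```shell
# Hypothetical path and size; remove any existing ballast file first,
# since the command no longer overwrites one that already exists.
rm -f /mnt/data1/cockroach/ballast
cockroach debug ballast /mnt/data1/cockroach/ballast --size=1GiB
```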

Security updates

- -- It is now possible to log SQL statements executed by admin users. This logging is enabled via the `sql.log.admin_audit.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings). When set, events of type `admin_query` are logged to the `SENSITIVE_ACCESS` channel. [#60708][#60708] {% comment %}doc{% endcomment %} - -
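Enabling the admin audit log described above is a single cluster setting; a sketch:

```sql
-- Statements executed by admin users are then logged as admin_query
-- events on the SENSITIVE_ACCESS channel.
SET CLUSTER SETTING sql.log.admin_audit.enabled = true;
```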

General changes

- -- Updated the AWS SDK used to interface with AWS services such as S3 (for backup, restore, and import) and KMS (for backup/restore). [#59709][#59709] {% comment %}doc{% endcomment %} -- [`SHOW TRACE FOR SESSION`](https://www.cockroachlabs.com/docs/v21.1/show-trace) previously included CockroachDB internal traces for async threads kicked off as part of user operations. This trace data is no longer captured. [#59815][#59815] {% comment %}doc{% endcomment %} -- Crash reports that are sent to Cockroach Labs no longer redact the names of built-in virtual tables from the `crdb_internal`, `information_schema`, and `pg_catalog` schemas. [#60799][#60799] -- Raised the default limit on the maximum number of spans that can be protected by the `protectedts` subsystem. This limit can be configured using the `kv.protectedts.max_spans` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings). [#61018][#61018] {% comment %}doc{% endcomment %} - -

Enterprise edition changes

- -- Added the `bulkio.stream_ingestion.minimum_flush_interval` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), which allows the user to control how often the stream ingestion job will flush. Note that the job may still flush more often if the in-memory buffers are filled. [#60160][#60160] {% comment %}doc{% endcomment %} -- CockroachDB now supports [primary key](https://www.cockroachlabs.com/docs/v21.1/primary-key) changes in [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/v21.1/create-changefeed). [#58422][#58422] {% comment %}doc{% endcomment %} -- [Multi-region database](https://www.cockroachlabs.com/docs/v21.1/movr-flask-database) creations are now permitted as long as the cluster has a CockroachDB subscription. [#61041][#61041] {% comment %}doc{% endcomment %} -- `ALTER DATABASE ... ADD REGION` now requires an enterprise license. [#61169][#61169] {% comment %}doc{% endcomment %} - -
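The new flush-interval setting might be tuned like this (the `5s` value is hypothetical):

```sql
-- The stream ingestion job flushes at least this often, though it may
-- flush more frequently if in-memory buffers fill up.
SET CLUSTER SETTING bulkio.stream_ingestion.minimum_flush_interval = '5s';
```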

SQL language changes

- -- Added a virtual table, `crdb_internal.node_inflight_trace_spans`, which exposes span ID, parent span ID, trace ID, start time, duration, and operation of node-local inflight spans. [#59492][#59492] {% comment %}doc{% endcomment %} -- Added `WITH avro_schema_prefix` option to Avro [changefeeds](https://www.cockroachlabs.com/docs/v21.1/create-changefeed) to prevent collisions in a shared schema registry. [#59710][#59710] {% comment %}doc{% endcomment %} -- CockroachDB now allows implicitly partitioned indexes to be referenced by foreign keys using the non-implicit columns. [#59692][#59692] {% comment %}doc{% endcomment %} -- The [`SHOW ZONE CONFIGURATION`](https://www.cockroachlabs.com/docs/v21.1/show-zone-configurations) statement has been changed to use `FROM` instead of `FOR`. [#59410][#59410] {% comment %}doc{% endcomment %} -- It is now possible to use the `NOT VISIBLE` qualifier for new column definitions in [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table). This causes the column to become "hidden". Hidden columns are not considered when using `*` in `SELECT` clauses. Note that CockroachDB already supported hidden columns (e.g., `rowid`, which is added automatically when a table definition has no `PRIMARY KEY` constraint). This change adds the opportunity for end-users to define their own hidden columns. It is intended for use in combination with other features related to geo-partitioning introduced in v21.1, which offer more control over how geo-partitioning keys get exposed to client ORMs and other automated tools that inspect the SQL schema. [#58923][#58923] {% comment %}doc{% endcomment %} -- Added the `goroutine_id` column to the `crdb_internal.node_inflight_trace_spans` virtual table that represents the goroutine ID associated with a particular span. [#59717][#59717] {% comment %}doc{% endcomment %} -- Updated [`SHOW BACKUP ...
WITH PRIVILEGES`](https://www.cockroachlabs.com/docs/v21.1/show-backup) to display ownership information of objects in the backup. [#59732][#59732] {% comment %}doc{% endcomment %} -- The [`ALTER TYPE ... DROP VALUE ...`](https://www.cockroachlabs.com/docs/v21.1/alter-type) statement has been added to drop values from an [`ENUM`](https://www.cockroachlabs.com/docs/v21.1/enum) type. The statement drops the provided value if it isn't used as a value in any table's row. The use of this syntax is gated behind the `sql_safe_updates` [session variable](https://www.cockroachlabs.com/docs/v21.1/set-vars), as it is susceptible to the value being used in a table expression (such as a computed column). [#58688][#58688] {% comment %}doc{% endcomment %} -- Added support for `COPY CSV`. [#59790][#59790] {% comment %}doc{% endcomment %} -- CockroachDB now supports specifying `NULL` and `DELIMITER` in `COPY`. [#59790][#59790] {% comment %}doc{% endcomment %} -- [`ALTER TABLE ... SET LOCALITY`](https://www.cockroachlabs.com/docs/v21.1/alter-table) adds the ability to change the locality of a `GLOBAL` or `REGIONAL BY TABLE` table to `REGIONAL BY ROW`, provided an existing column of the appropriate type already exists. [#59624][#59624] {% comment %}doc{% endcomment %} -- Overload sequence operators now accept a regclass. [#59396][#59396] -- `crdb_internal.invalid_objects` now includes invalid type descriptors. [#59978][#59978] {% comment %}doc{% endcomment %} -- Added the `finished` column to the virtual table `crdb_internal.node_inflight_trace_spans`, which represents whether each span has finished or not. [#59856][#59856] {% comment %}doc{% endcomment %} -- Partitioned [GIN indexes](https://www.cockroachlabs.com/docs/v21.1/inverted-indexes) are now supported. 
[#59858][#59858] {% comment %}doc{% endcomment %} -- Hidden columns (created by using `NOT VISIBLE` or implicitly created via hash sharded indexes, a lack of a primary key definition, or by using `REGIONAL BY ROW`) will now display with `NOT VISIBLE` annotations on [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/show-create). [#59828][#59828] {% comment %}doc{% endcomment %} -- `REGIONAL BY ROW` tables that have an implicitly created `crdb_region` column will now mark that column as hidden so it does not display in `SELECT *` statements. [#59831][#59831] {% comment %}doc{% endcomment %} -- CockroachDB now recognizes the `options` URL parameter. The `options` parameter specifies session variables to set at connection start. This is treated the same as defined in the [PostgreSQL docs](https://www.postgresql.org/docs/13/libpq-connect.html#LIBPQ-PARAMKEYWORDS). [#59621][#59621] {% comment %}doc{% endcomment %} -- Implemented the ability to `ALTER TABLE SET LOCALITY` to `REGIONAL BY ROW` without specifying `AS` for `GLOBAL` and `REGIONAL BY TABLE` tables. [#59824][#59824] {% comment %}doc{% endcomment %} -- Added the `sql.show_tables.estimated_row_count.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), which defaults to `true`. If `false`, `estimated_row_count` will not display on `SHOW TABLES`, which improves performance. [#60282][#60282] {% comment %}doc{% endcomment %} -- [`ALTER DATABASE ... DROP REGION`](https://www.cockroachlabs.com/docs/v21.1/alter-database) is now implemented. [#59989][#59989] {% comment %}doc{% endcomment %} -- When a connection is established, CockroachDB will now return a placeholder `BackendKeyData` message in the response. This is for compatibility with some tools, but using `BackendKeyData` to cancel a query will still have no effect (which is the same as before).
[#60281][#60281] {% comment %}doc{% endcomment %} -- To match the behavior of `PRIMARY KEY` creation, which sets all relevant fields to `NOT NULL`, all `PARTITION BY` / `REGIONAL BY ROW` columns will now have their fields set to `NOT NULL` at [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table) time. [#60379][#60379] {% comment %}doc{% endcomment %} -- [`ALTER DATABASE ... DROP REGION`](https://www.cockroachlabs.com/docs/v21.1/alter-database) now allows for dropping the primary region. [#60437][#60437] {% comment %}doc{% endcomment %} -- Added support for [restore](https://www.cockroachlabs.com/docs/v21.1/restore) of multi-region databases. [#60257][#60257] {% comment %}doc{% endcomment %} -- Implemented functionality to [`ALTER TABLE SET LOCALITY`](https://www.cockroachlabs.com/docs/v21.1/alter-table) from `REGIONAL BY ROW` to `REGIONAL BY TABLE` or `GLOBAL`. [#60311][#60311] {% comment %}doc{% endcomment %} -- Materialized views now require ownership or admin privilege to refresh. [#60448][#60448] {% comment %}doc{% endcomment %} -- `CONNECT` can be granted to / revoked from users at the database level (e.g., `GRANT CONNECT ON DATABASE db TO user;`). `CONNECT` allows users to view all objects in a database in `information_schema` and `pg_catalog`. Previously, users could only see an object in `information_schema` / `pg_catalog` if they had any privilege (e.g., `SELECT`) on the object. Added a warning when using `USE DATABASE` that notifies the user that they do not have `CONNECT` privilege on the database and will need `CONNECT` privilege to use databases in a future release. [#59676][#59676] {% comment %}doc{% endcomment %} -- Added the built-in `crdb_internal.show_create_all_tables`, which takes in a database name (`STRING`) and returns a flat log of all the [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table) statements in the database followed by `ALTER` statements to add constraints.
The output can be used to recreate a database. This built-in was added to replace old dump logic. [#60154][#60154] {% comment %}doc{% endcomment %} -- Implemented `default_to_database_primary_region`, which will return the region passed in if the region is defined on the database, falling back to the primary region if not. [#59836][#59836] {% comment %}doc{% endcomment %} -- The default expression for `REGIONAL BY ROW` tables is now `default_to_database_primary_region(gateway_region())`, allowing users to add to `REGIONAL BY ROW` tables from any region. This would previously error if the gateway's region was not defined on the database. [#59836][#59836] {% comment %}doc{% endcomment %} -- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.1/explain) now shows the estimated percentage of the table that is scanned. [#60467][#60467] {% comment %}doc{% endcomment %} -- Multi-region tables will have their zone configs regenerated during database [restore](https://www.cockroachlabs.com/docs/v21.1/restore). [#60519][#60519] {% comment %}doc{% endcomment %} -- CockroachDB now supports the `DETACHED` option when running [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import). The detached import will return the job ID, and the user can later use [`SHOW JOB`](https://www.cockroachlabs.com/docs/v21.1/show-jobs) to check the result of the detached import. This allows users to run `IMPORT` under explicit transactions with the `DETACHED` option specified. [#60442][#60442] {% comment %}doc{% endcomment %} -- Casting JSON numeric scalars to numeric types now works as expected. [#41367][#41367] {% comment %}doc{% endcomment %} -- Most batches of data flowing through the vectorized execution engine will now be limited in size by `sql.distsql.temp_storage.workmem` (64MiB by default), which should improve the stability of CockroachDB clusters.
[#59851][#59851] {% comment %}doc{% endcomment %} -- Implemented the ability to [`ALTER TABLE LOCALITY REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/alter-table) from other `REGIONAL BY ROW` variants. [#60497][#60497] {% comment %}doc{% endcomment %} -- Setting of [zone configs](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) for non-physical tables is now forbidden. [#60592][#60592] {% comment %}doc{% endcomment %} -- Added telemetry to track usage of `pg_catalog` and `information_schema` tables. [#60511][#60511] -- [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v21.1/show-jobs) now displays a meaningful value in the `running_status` column for GC jobs which are actually performing garbage collection, as opposed to waiting on a timer. [#59220][#59220] {% comment %}doc{% endcomment %} -- Added the [`SHOW CREATE ALL TABLES`](https://www.cockroachlabs.com/docs/v21.1/show-create) statement, which allows the user to get the statements (`CREATE`/`ALTER`) to recreate the current database. The command returns a flat log of the create statements followed by the alter statements for adding the constraints. The commands are ordered topologically such that dependencies appear before their references. `ALTER` statements follow create statements to guarantee that the objects are all added before adding constraints. This command was added to replace old dump logic. [#60539][#60539] {% comment %}doc{% endcomment %} -- Added the `schema_name` and `table_id` columns to the `crdb_internal.ranges` and `crdb_internal.ranges_no_leases` virtual tables. [#59865][#59865] {% comment %}doc{% endcomment %} -- Using the `CACHE` sequence option no longer results in an "unimplemented" error. The `CACHE` option is now fully implemented and will allow nodes to cache sequence numbers. A cache size of 1 means that there is no cache, and cache sizes of less than 1 are not valid.
[#56954][#56954] {% comment %}doc{% endcomment %} -- The `serial_normalization` [session variable](https://www.cockroachlabs.com/docs/v21.1/set-vars) can now be set to the value `sql_sequence_cached`. If this value is set, the `sql.defaults.serial_sequences_cache_size` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) can be used to control the number of values to cache in a user's session with a default of 256. When the cache is empty, the underlying sequence will only be incremented once to populate it. Using `sql_sequence_cached` will result in better performance than `sql_sequence` because the former will perform fewer distributed calls to increment sequences. However, cached sequences may result in large gaps between serial sequence numbers if a session terminates before using all the values in its cache. [#56954][#56954] {% comment %}doc{% endcomment %} -- CockroachDB can now encode and decode [sequence](https://www.cockroachlabs.com/docs/v21.1/create-sequence) regclasses. [#59864][#59864] -- CockroachDB now allows renaming of [sequences](https://www.cockroachlabs.com/docs/v21.1/create-sequence) referenced by ID and conversion of sequences referenced by name to ID. [#59864][#59864] {% comment %}doc{% endcomment %} -- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) now shows more top-level query statistics. [#60641][#60641] {% comment %}doc{% endcomment %} -- Added `payloads_for_span` built-in that takes in a span ID and returns its payloads in `JSONB` format. If the span is not found, or if the span does not have any payloads, the built-in returns an empty JSON object. [#60616][#60616] {% comment %}doc{% endcomment %} -- Added a new [`IMPORT PGDUMP`](https://www.cockroachlabs.com/docs/v21.1/import) option, `ignore_unsupported`, to skip over all the unsupported `PGDUMP` statements. The collection of these statements will be appropriately documented. 
[#57827][#57827] {% comment %}doc{% endcomment %}
- Users now need to specify `ignore_unsupported` to ignore all unsupported statements during an [`IMPORT PGDUMP`](https://www.cockroachlabs.com/docs/v21.1/import). [#57827][#57827] {% comment %}doc{% endcomment %}
- Added a new [`IMPORT PGDUMP`](https://www.cockroachlabs.com/docs/v21.1/import) option, `ignored_stmt_log`, which allows users to specify where to log statements that were skipped during an import because they are unsupported. [#57827][#57827] {% comment %}doc{% endcomment %}
- CockroachDB now supports `VIRTUAL` [computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns) (as opposed to `STORED`). These are computed columns that are not stored in the primary index and are recomputed as necessary. [#60748][#60748] {% comment %}doc{% endcomment %}
- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) now includes the nodes which were involved in the execution of each operator in the tree. [#60550][#60550] {% comment %}doc{% endcomment %}
- Added missing tables and columns to `pg_catalog`. [#60758][#60758] {% comment %}doc{% endcomment %}
- Added a new [session setting](https://www.cockroachlabs.com/docs/v21.1/set-vars), `locality_optimized_partitioned_index_scan`, and a corresponding [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), `sql.defaults.locality_optimized_partitioned_index_scan.enabled`. Both are currently disabled by default and currently unused. In the future, these settings will be used to enable or disable locality optimized search. If enabled, the optimizer will try to search locally for rows in `REGIONAL BY ROW` tables before searching remote nodes. [#60771][#60771] {% comment %}doc{% endcomment %}
- Added the new `parse_timestamp` function, which can be used to parse absolute timestamp strings in computed column expressions or partial index predicates.
[#60772][#60772] {% comment %}doc{% endcomment %}
- CockroachDB now supports storing spatial objects with `Z` and `M` dimensions (e.g., `POINTZ`, `LINESTRINGM`). [#60832][#60832] {% comment %}doc{% endcomment %}
- [`ALTER DATABASE ... ADD REGION`](https://www.cockroachlabs.com/docs/v21.1/alter-database) now re-partitions `REGIONAL BY ROW` tables and updates the [zone configs](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) on the newly created partitions as well. [#60596][#60596] {% comment %}doc{% endcomment %}
- Updated the `crdb_internal.payloads_for_span` built-in to return a table instead of a `JSONB` array. Each row of the table represents one payload for the given span, with columns `payload_type` and `payload_jsonb`. [#60784][#60784] {% comment %}doc{% endcomment %}
- [`ALTER DATABASE ... DROP REGION ...`](https://www.cockroachlabs.com/docs/v21.1/alter-database) now re-partitions `REGIONAL BY ROW` tables to remove the partition for the removed region, and removes the [zone configuration](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) for the partition as well. [#60938][#60938] {% comment %}doc{% endcomment %}
- Added the experimental `experimental_enable_stream_replication` [session setting](https://www.cockroachlabs.com/docs/v21.1/set-vars) and `sql.defaults.experimental_stream_replication.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) to enable cluster streaming. [#60826][#60826] {% comment %}doc{% endcomment %}
- Introduced a new `is_multi_region` column to `crdb_internal.create_statements`, which indicates whether the object's database is a multi-region database. [#60761][#60761] {% comment %}doc{% endcomment %}
- Introduced the new `crdb_internal.filter_multiregion_fields_from_zone_config_sql` built-in, which removes multi-region fields from a [`CONFIGURE ZONE`](https://www.cockroachlabs.com/docs/v21.1/configure-zone) statement.
[#60761][#60761] {% comment %}doc{% endcomment %}
- Added new virtual tables `crdb_internal.cluster_contention_events` and `crdb_internal.node_contention_events`, which expose cluster-wide and gateway-only contention information, respectively. To access them, the user must either be an admin or have the `VIEWACTIVITY` option granted. [#60713][#60713] {% comment %}doc{% endcomment %}
- The structured payloads used for SQL audit and execution logs now include a transaction counter, counted from the beginning of the session. Statements issued inside the same SQL [transaction](https://www.cockroachlabs.com/docs/v21.1/transactions) are thus logged with the same counter value, enabling per-transaction grouping during log analysis. Separate sessions use independent counters. [#41929][#41929] {% comment %}doc{% endcomment %}
- The geometry built-ins `ST_MakePoint` and `ST_MakePointM` have been implemented and provide a mechanism for easily creating new points. [#61105][#61105] {% comment %}doc{% endcomment %}
- Added the `payloads_for_trace()` built-in, which displays all payloads attached to all spans for a given trace ID, using the `crdb_internal.payloads_for_span()` built-in under the hood. All payloads for long-running spans are also added to `debug.zip` in the `crdb_internal.node_inflight_trace_spans` table dump. [#60922][#60922] {% comment %}doc{% endcomment %}
- [`IMPORT PGDUMP`](https://www.cockroachlabs.com/docs/v21.1/import) can now import dump files with non-public schemas. [#57183][#57183] {% comment %}doc{% endcomment %}
- Added the `sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), which controls creation of uniqueness checks for [`UUID`](https://www.cockroachlabs.com/docs/v21.1/uuid) columns set to `gen_random_uuid()`.
When enabled, uniqueness checks are added for the `UUID` column if it has a unique constraint that cannot be enforced by an index. When disabled, no uniqueness checks are planned for these columns, and uniqueness is assumed due to the near-zero collision probability of `gen_random_uuid()`. [#61132][#61132] {% comment %}doc{% endcomment %}
- The `pg_catalog.pg_class` table now has a `relpartbound` column. This is only for compatibility, and the column value is always `NULL`. [#61162][#61162] {% comment %}doc{% endcomment %}
- Cluster [backup](https://www.cockroachlabs.com/docs/v21.1/backup) now [restores](https://www.cockroachlabs.com/docs/v21.1/restore) the zone configurations first. This means that there should be less range relocation during and after cluster restores. [#60461][#60461] {% comment %}doc{% endcomment %}
- Fixed `information_schema.columns.udt_schema` for `ENUM` or user-defined types. [#61139][#61139] {% comment %}doc{% endcomment %}
- The geography built-in `ST_Affine` now supports 3D transformations. [#61286][#61286] {% comment %}doc{% endcomment %}
- The `ST_Z`, `ST_M`, and `ST_Zmflag` built-ins are now available for use. [#61032][#61032] {% comment %}doc{% endcomment %}
- Geometry built-ins for forcing geometries into non-2D layouts are now available. [#61297][#61297] {% comment %}doc{% endcomment %}
- `ST_Buffer` now requires at least 1 quadrant segment. [#61315][#61315] {% comment %}doc{% endcomment %}
- The `ST_RotateX` function is now available. [#61326][#61326] {% comment %}doc{% endcomment %}
- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.1/explain) now shows estimated row counts for all operators even without `VERBOSE` (except when statistics are not available for the tables).
[#61344][#61344] {% comment %}doc{% endcomment %}
- Updated the error message returned for a [unique constraint](https://www.cockroachlabs.com/docs/v21.1/unique) violation to hide the names and values of implicit partitioning columns and hash-sharded index columns. [#61362][#61362]
- Fixed `information_schema.columns.is_identity` to display the correct value. [#61348][#61348]
- `ST_RotateY` and `ST_RotateZ` are now available. [#61387][#61387] {% comment %}doc{% endcomment %}
- CockroachDB now prevents `densifyFracs < 1e-6` for `ST_FrechetDistance` and `ST_HausdorffDistance`, to protect against panics and out-of-memory errors. [#61427][#61427]
- CockroachDB now disallows `GRANT`/`REVOKE` operations on system tables. [#61410][#61410] {% comment %}doc{% endcomment %}
- `ST_Snap` is now available. [#61523][#61523] {% comment %}doc{% endcomment %}
- Updates to certain fields in the zone configurations are blocked for multi-region enabled databases. This block can be overridden with the `FORCE` keyword on the blocked statement. [#61499][#61499] {% comment %}doc{% endcomment %}
- The `ST_AddMeasure` function is now available for use. [#61514][#61514] {% comment %}doc{% endcomment %}
- Added a new built-in that sets the verbosity of all spans in a given trace. Syntax: `crdb_internal.set_trace_verbose($traceID, $verbosityAsBool)`. [#61353][#61353] {% comment %}doc{% endcomment %}
- `pg_index.indkey` now includes attributes. [#61494][#61494] {% comment %}doc{% endcomment %}
- [`SHOW CREATE ALL TABLES`](https://www.cockroachlabs.com/docs/v21.1/show-create) has been updated to be more memory-efficient; however, it should still not be run on a database with an excessive number of tables. Users should not run it on a database with more than 10,000 tables (an arbitrary but tested number).
[#61127][#61127] {% comment %}doc{% endcomment %}
- Set the default for the `sql.defaults.locality_optimized_partitioned_index_scan.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) to `true`; this is the default for the [session setting](https://www.cockroachlabs.com/docs/v21.1/set-vars) `locality_optimized_partitioned_index_scan`. When this session setting is enabled, the optimizer plans queries to use "locality optimized search" when possible, which instructs the execution engine to search for keys on local nodes before searching remote nodes. If a key is found on a local node, the execution engine may be able to avoid visiting the remote nodes altogether. [#61601][#61601] {% comment %}doc{% endcomment %}
- [`ALTER TYPE ... DROP VALUE`](https://www.cockroachlabs.com/docs/v21.1/alter-type) is gated behind a feature flag. The [session setting](https://www.cockroachlabs.com/docs/v21.1/set-vars) is called `enable_drop_enum_value`, and the corresponding [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) is called `sql.defaults.drop_enum_value.enabled`. [#61723][#61723] {% comment %}doc{% endcomment %}
- `TYPE SCHEMA CHANGE` jobs that include a `DROP VALUE` are now cancellable. All other such jobs remain non-cancellable. [#61733][#61733] {% comment %}doc{% endcomment %}
- A statement contention timeseries is now displayed in the SQL metrics overview dashboard. [#61844][#61844] {% comment %}doc{% endcomment %}
- The `ST_SnapToGrid` function can now be used to snap `Z` and `M` dimensions. [#61826][#61826] {% comment %}doc{% endcomment %}
- Added the `json_extract_path_text` and `jsonb_extract_path_text` built-ins. [#61813][#61813] {% comment %}doc{% endcomment %}
- Removed `pg_catalog` tables that were mistakenly added, notably all tables ending in `_index` other than `pg_catalog.pg_classes`, and all statistics collector tables other than `pg_stat_activity`.
[#61876][#61876]
- The `replicas` column of `crdb_internal.ranges{_no_leases}` now includes both voting and non-voting replicas, and `crdb_internal.ranges{_no_leases}` includes two new columns, `voting_replicas` and `non_voting_replicas`, which work as labeled. [#61962][#61962] {% comment %}doc{% endcomment %}
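
Several of the SQL-surface features above lend themselves to a short illustration. The following is a minimal sketch, not output from this release: the `order_ids` sequence, the `events` table, and the trace ID are all hypothetical, and option spellings follow the notes above.

```sql
-- Sequence caching: each node may cache 10 values from the sequence.
CREATE SEQUENCE order_ids CACHE 10;

-- Cached SERIAL normalization; the per-session cache size defaults to 256.
SET serial_normalization = 'sql_sequence_cached';

-- A VIRTUAL computed column using the new parse_timestamp built-in;
-- the value is recomputed as needed rather than stored in the primary index.
CREATE TABLE events (
  raw_ts STRING,
  ts TIMESTAMP AS (parse_timestamp(raw_ts)) VIRTUAL
);

-- Tracing built-ins: make all spans of a (hypothetical) trace verbose,
-- then list the payloads attached to its spans.
SELECT crdb_internal.set_trace_verbose(123456789, true);
SELECT * FROM crdb_internal.payloads_for_trace(123456789);
```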

Operational changes

- Added the `storage.transaction.separated_intents.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), which enables separated intents by default. [#59829][#59829] {% comment %}doc{% endcomment %}

Command-line changes

- It is now possible to redirect logging to [Fluentd](https://www.fluentd.org)-compatible network collectors. See the reference sink documentation for details. This is an alpha-quality feature. Note that Fluent-enabled configurations only provide minimal event buffering, and log entries are dropped if the logging server becomes unavailable or network errors are encountered. This is a known limitation that will likely be improved in a later version. [#57170][#57170] {% comment %}doc{% endcomment %}
- The SQL shell ([`cockroach sql`](https://www.cockroachlabs.com/docs/v21.1/cockroach-sql) / [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo)) now supports an `--embedded` flag, for use with playground-style web applications. [#59750][#59750] {% comment %}doc{% endcomment %}
- [`cockroach userfile get`](https://www.cockroachlabs.com/docs/v21.1/cockroach-userfile-get) can now be used to fetch files from a cluster. [#60490][#60490] {% comment %}doc{% endcomment %}
- Added the `ignore-unsupported-statements`, `log-ignored-statements`, and `row-limit` flags to the [`cockroach import`](https://www.cockroachlabs.com/docs/v21.1/cockroach-import) command. [#60923][#60923] {% comment %}doc{% endcomment %}
- The doctor tool can now report multiple descriptor validation failures per descriptor. [#60775][#60775]
- The new `cockroach connect` command now recognizes `--single-node` to prepare a TLS configuration suitable for a subsequent `start-single-node` command. Additionally, the command checks that either `--single-node` is specified, or both `--init-token` and `--num-expected-peers` are. [#60854][#60854] {% comment %}doc{% endcomment %}
- Optimized handling of multi-line SQL strings to avoid unwanted extra server roundtrips. [#61207][#61207]
- Added back support for [`cockroach dump --dump-mode=schema`](https://www.cockroachlabs.com/docs/v21.1/cockroach-dump).
This command calls [`SHOW CREATE ALL TABLES`](https://www.cockroachlabs.com/docs/v21.1/show-create) and returns the output, giving users a migration path to use in v21.1 while switching over to `SHOW CREATE ALL TABLES` for v21.2. When executing this command, the user is warned that it is being removed in v21.2 and that they should use `SHOW CREATE ALL TABLES` instead. [#61871][#61871] {% comment %}doc{% endcomment %}
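
As a sketch of the migration path described above, the statement-level replacement for the deprecated dump mode:

```sql
-- Replacement for `cockroach dump --dump-mode=schema`: returns the
-- CREATE statements followed by the ALTER statements that add
-- constraints, in topological order.
SHOW CREATE ALL TABLES;
```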

API endpoint changes

- Added the following new HTTP API endpoints:
  - `/api/v2/nodes/`: Lists all nodes in the cluster.
  - `/api/v2/nodes//ranges`: Lists all ranges on the specified node.
  - `/api/v2/ranges/hot/`: Lists hot ranges in the cluster.
  - `/api/v2/ranges//`: Describes a range in more detail.
  - `/api/v2/health/`: Returns an HTTP 200 response if the node is healthy. [#60952][#60952]

DB Console changes

- Manually enqueueing a range in the replica GC queue now properly respects the `SkipShouldQueue` option. This can be useful to force a GC of a specific range. [#60619][#60619] {% comment %}doc{% endcomment %}
- Added a filter for full scans to the [**Statements** page](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page). [#60670][#60670] {% comment %}doc{% endcomment %}
- The **Range Report** page in the DB Console now also shows each replica's type. [#61113][#61113] {% comment %}doc{% endcomment %}
- The **Transaction Type** column was removed from the [**Statements** page](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page). The sort order in the **Statement** / **Transaction Table** views is now by execution count, and the sort order in the **Transaction Details** view is by latency. Column titles in the statements and transactions tables now say "{Statement,Transaction} Time" instead of "Latency". The **Contention** column was added to the **Statements** / **Transactions** pages, and **Max Scratch Disk Usage** was added to the **Statements** and **Transaction Details** pages. [#61177][#61177] {% comment %}doc{% endcomment %}
- Added a full-table scan checkbox. [#61676][#61676] {% comment %}doc{% endcomment %}

Bug fixes

- Fixed a bug in the DB Console affecting status badges in the Nodes list on the **Cluster Overview** page. [#59636][#59636]
- Fixed a bug where backups would fail with an error when trying to read a backup that was written. [#59730][#59730]
- Fixed a bug where some import failures would cause tables to stay `OFFLINE` when they should have been brought back to `PUBLIC`. [#59723][#59723]
- Fixed a bug that caused errors when joining two tables when one of the tables had a computed column. This bug was present since v21.1.0-alpha.2 and not present in any production releases. [#59741][#59741]
- Statistics are now correctly generated for columns in multi-column GIN indexes and columns referenced in partial GIN index predicates. [#59687][#59687]
- Fixed a timestamp overflow bug affecting timestamps after the year 2038, by updating the Avro SDK used by changefeeds. [#59745][#59745]
- Fixed a very rare chance of inconsistent follower reads. [#59502][#59502]
- Previously, if `RELEASE SAVEPOINT cockroach_restart` was followed by `ROLLBACK`, the `sql.txn.rollback.count` metric would be incremented. This was incorrect, since the transaction had already committed. That metric is no longer incremented in this case. [#59781][#59781]
- CockroachDB now explicitly verifies that descriptors are sequences. [#60159][#60159]
- The result of the `cockroach debug doctor` invocation during `cockroach debug zip` is now properly included inside the generated zip file. [#60529][#60529]
- Fixed a bug in the optimizer statistics code that could cause an unconstrained partial index scan to be preferred over a constrained scan of the same index. [#60516][#60516]
- Fixed a deficiency in the replication layer that could result in ranges becoming unavailable for prolonged periods of time (hours) when a write burst occurred under system overload.
While unavailable, the range status page for the affected range would show a last index much larger than the committed index, and no movement in these indexes on a quorum of the replicas. Note that this should be distinguished from the case in which enough replicas are offline to constitute a loss of quorum, where the replication layer cannot make progress due to the loss of quorum itself. [#60581][#60581]
- Previously, retryable errors in the cleanup phase of the type schema changer wouldn't be retried automatically in the background. This is now fixed. [#60495][#60495]
- v20.2 introduced the ability to rebalance replicas between multiple stores on the same node. This change fixes a problem with that feature where, occasionally, an intra-node rebalance would fail and a range would get stuck permanently under-replicated. [#60546][#60546]
- Fixed a bug that would cause the value of the optional `into_db` parameter to `RESTORE` to be included in anonymized crash reports. [#60624][#60624]
- Fixed a bug that caused errors for some queries on tables with `GEOMETRY` or `GEOGRAPHY` GIN indexes with filters containing shapes with zero area. [#60598][#60598]
- CockroachDB previously didn't account for some RAM used when disk-spilling operations (like sorts and hash joins) were using temporary storage in the vectorized execution engine. This could result in OOM crashes, especially when rows are large. This has been fixed. [#60593][#60593]
- Fixed a bug where `ST_Node` would panic if passed `MULTILINESTRING EMPTY`. [#60691][#60691]
- Fixed a bug that could result in crashes during tracing when using the `trace.debug.enable` cluster setting. [#60725][#60725]
- Fixed execution errors for some queries that use set operations (`UNION` / `EXCEPT` / `INTERSECT`) where a column has types of different widths on the two sides (e.g., `INT4` vs. `INT8`).
[#60560][#60560]
- CockroachDB now avoids creating batches that exceed the Raft command limit (64 MB) when reverting ranges that contain very large keys. [#59716][#59716]
- Fixed a bug whereby high-latency global clusters could sometimes fall behind in checkpointing resolved timestamps. [#60807][#60807]
- Privileges are now checked before changing a table or database owner. [#60800][#60800]
- Fixed an internal error caused in some cases involving JSON objects and arrays in a `VALUES` clause. [#60767][#60767]
- Previously, `DROP TYPE IF EXISTS` with one existent and one non-existent type would cause an unhandled error. This is now fixed. [#60822][#60822]
- Fixed a bug that could report that a protected timestamp limit was exceeded when the limit was disabled, if an error occurred while protecting a record. [#60913][#60913]
- CockroachDB previously could encounter an internal error when performing a `UNION` operation whose first input produced only NULL values and whose subsequent inputs produced tuples. This is now fixed. Only v21.1 alpha versions are affected. [#60827][#60827]
- Fixed a bug in `crdb_internal.unsafe_upsert_namespace_entry` related to tables and types in user-defined schemas. [#60510][#60510]
- Integers inside tuples were not being encoded properly when using the binary format for retrieving data. This is now fixed, and the proper integer width is reported. [#61013][#61013]
- Blank-padded chars (e.g., `CHAR(3)`) were not being encoded correctly when returning results to the client. Now they correctly include blank padding when appropriate. [#61013][#61013]
- Collated strings were not encoded with the proper type OID when sending results to the client if the OID was for the `char` type. This is now fixed.
[#61013][#61013]
- Fixed a very rare, possibly impossible in practice, bug where a range merge that applied through a Raft snapshot on the left-hand side range's leaseholder could allow that leaseholder to serve writes that invalidated reads from before the merge on the right-hand side. [#60521][#60521]
- The `SHOW CREATE` output of a partitioned partial index now lists the `PARTITION BY` and `WHERE` clauses in the order accepted by the parser. [#60590][#60590]
- The `SHOW CREATE` output of a partial interleaved index now lists the `INTERLEAVED` and `WHERE` clauses in the order accepted by the parser. [#60590][#60590]
- Fixed a bug where a cluster restore would sometimes (very rarely) fail after retrying. [#60458][#60458]
- Previously, comparing a negative integer to an OID would compare incorrectly because the integer was not converted to an unsigned representation first. This is now fixed for both comparisons and casts. [#61148][#61148]
- Previously, CockroachDB could not decode arrays of user-defined types when sent to the server in the binary format. Now it can. [#61165][#61165]
- The `crdb_internal.jobs` virtual table is now populated in a paginated fashion, alleviating the memory concerns that previously could lead to an OOM crash. [#60693][#60693]
- The `SHOW TABLES FROM database` command would always show a NULL `estimated_row_count` when inspecting a database that was not the current database. This is now fixed. [#61191][#61191]
- Fixed a bug where schema changes on databases and schemas could return a `relation [] does not exist` error if they failed or were canceled and entered the reverting state. These jobs are not actually possible to revert. With this change, the correct error causing the job to fail is returned, and the job enters the failed state with an error indicating that the job could not be reverted.
[#61159][#61159]
- CockroachDB now drops the default value when its dependent sequence is dropped. [#60744][#60744]
- Dropping and recreating a view/table/sequence in a transaction now correctly errors out if a conflicting object exists or if the drop is incomplete. [#61135][#61135]
- The non-server `cockroach` commands now recognize the new `--log` flag properly. This had been broken in one of the earlier v21.1 alpha releases. [#61232][#61232]
- Prepared statements would sometimes get the wrong type OID for placeholder arguments used in function parameters. This is now fixed. [#60949][#60949]
- Fixed a case where empty zone configurations were created for certain indexes during `ALTER PRIMARY KEY`. [#61235][#61235]
- Schema change jobs associated with databases and schemas can no longer be canceled. Such jobs cannot actually be reverted successfully, so cancelation had no benefit and could have caused namespace corruption. [#61210][#61210]
- Fixed the handling of the node name argument index in `cockroach demo` error messages. [#61246][#61246]
- The DB Console no longer displays incorrect node statuses on the **Metrics** page when the page has just loaded. [#61000][#61000]
- Fixed an NPE observed with a `SpanFromContext` call in the stack trace. [#61124][#61124]
- Limit scans are no longer counted as full scans. [#61178][#61178]
- Selecting from `crdb_internal.create_statements` now correctly populates `database_name` when the virtual index is not used. [#61201][#61201]
- Fixed an internal error when `EXPLAIN`ing an `INSERT` whose input was determined by the optimizer to produce no rows. [#61278][#61278]
- Fixed a rare deadlock where a series of lease transfers concurrent with a range merge could block each other from ever completing. [#60905][#60905]
- `ALTER TYPE ... ADD VALUE` changes are now picked up correctly by the array type alias. [#61288][#61288]
- Previously, the traces of cascades and checks could be incomplete. This is now fixed.
[#61321][#61321]
- Creating interleaved partitioned indexes is now disallowed. Previously, the database would crash when trying to create one. [#61106][#61106]
- Dropping and re-creating a database in a transaction now correctly errors out if an object in a dropped state is detected. [#61361][#61361] [#61358][#61358]
- CockroachDB now uses the correct `FuncExpr` when encoding sequences. [#61428][#61428]
- `ALTER PRIMARY KEY` was not idempotent, so logically equivalent changes to primary keys would unnecessarily create new indexes. This is now fixed. [#61345][#61345]
- Fixed a bug where `GRANT`/`REVOKE` on the `system.lease` table would result in a deadlock. [#61410][#61410]
- Operations no longer hang when a node loses access to cluster RPC (e.g., after it has been decommissioned); they now immediately return an error instead. [#61356][#61356]
- Fixed a bug from v21.1-alpha where a node decommissioning process could sometimes hang or fail when the decommission request was submitted via the node being decommissioned. [#61356][#61356]
- Fixed a bug where zone configurations on indexes were not copied before the backfill of an `ALTER PRIMARY KEY`; they used to be copied afterwards instead. [#61300][#61300]
- Fixed a bug where random numbers generated as default expressions during `IMPORT` would collide a few hundred rows apart from each other. [#61214][#61214]
- Fixed a bug that caused `UPSERT` and `INSERT .. ON CONFLICT .. DO UPDATE` statements to fail on tables with both partial indexes and foreign key references. This bug has been present since v20.2.0. [#61416][#61416]
- An `UPDATE .. FROM` statement where the `FROM` clause contained column names matching table column names would error if the table had a partial index predicate referencing those columns. This bug, present since partial indexes were released in v20.2.0, has been fixed. [#61522][#61522]
- The names of custom types are no longer sent to Cockroach Labs in telemetry and crash reports.
[#60806][#60806]
- Fixed a case where an invalid tuple comparison using `ANY` caused an internal error; CockroachDB now returns "unsupported comparison operator". [#61647][#61647]
- Added better privilege checks when creating a changefeed. [#61709][#61709]
- Fixed privileges for `system.protected_ts_meta`. [#61842][#61842]
- The `indexdef` column in the `pg_indexes` table would always report that the index belonged to the public schema. Now it correctly reports user-defined schemas when necessary. [#61754][#61754]
- Fixed "command is too large" errors in some cases when using `EXPLAIN ANALYZE (DEBUG)` or statement diagnostics on complex queries. [#61909][#61909]
- Using `EXPLAIN (OPT, ENV)` on a query referencing a table in a user-defined schema would previously fail. This is now fixed. [#61888][#61888]
- Fixed a bug that caused "column does not exist" errors in specific cases of `UPDATE .. FROM` statements. The error only occurred when updating a `DECIMAL` column to a column in the `FROM` clause, and the column had a `CHECK` constraint or was referenced by a partial index predicate. [#61949][#61949]
- Previously, the `idle_in_session_timeout` and `idle_in_transaction_session_timeout` settings would show the wrong value when using `SHOW`; they would instead show the value of the `statement_timeout` setting. This is now fixed. The functionality was already working correctly; this just fixes a display bug. [#61959][#61959]

Performance improvements

- Improved the performance of the `pg_table_is_visible` built-in function. [#59880][#59880]
- Added support for left and anti lookup joins in cases where the equi-join condition is not on the first column in the index, but the first column(s) are constrained to a small number of constant values using a `CHECK` constraint or an `ENUM` type. Planning a lookup join for these cases can significantly improve performance if the left input to the join is much smaller than the right-hand side. [#60302][#60302]
- The optimizer now knows that the unique columns in an implicitly partitioned unique index form a key. This can enable certain optimizations and may result in better plans. [#60591][#60591]
- Improved the optimizer's ability to identify constant columns in scans. This can enable different types of optimizations and result in improved plans. [#60927][#60927]
- Follower reads no longer wait before redirecting to the leaseholder if they could not be served by a local follower due to an insufficient closed timestamp. [#60839][#60839]
- Updated the query used to build the virtual table `crdb_internal.table_row_statistics` so that it always runs at `AS OF SYSTEM TIME '-10s'`. This should reduce contention on the table and improve performance for transactions that rely on `crdb_internal.table_row_statistics`, such as `SHOW TABLES`. [#60953][#60953]
- Improved the optimizer's cost estimation of index scans that must visit multiple partitions. When an index has multiple partitions, the optimizer is now more likely to choose a constrained scan rather than a full index scan. This can lead to better plans and improved performance. It also improves the ability of the database to serve queries if one of the partitions is unavailable. [#61063][#61063]
- Optimized the external sort to merge partition data faster in the vectorized execution engine.
[#61056][#61056]
- If the session setting `locality_optimized_partitioned_index_scan` is enabled, the optimizer tries to plan scans known to produce at most one row using "locality optimized search". This optimization applies to `REGIONAL BY ROW` tables; if enabled, the execution engine first searches locally for the row before searching remote nodes. If the row is found on a local node, remote nodes are not searched. [#60831][#60831]
- The optimizer now infers additional functional dependencies based on computed columns in tables. This may enable additional optimizations and lead to better query plans. [#61097][#61097]
- Removed uniqueness checks on the primary key for `REGIONAL BY ROW` tables with a computed region column that is a function of the primary key columns. Uniqueness checks are not necessary in this case, since uniqueness is suitably guaranteed by the primary index. Removing these checks improves the performance of `INSERT`, `UPDATE`, and `UPSERT` statements. [#61097][#61097]
- The optimizer no longer plans uniqueness checks for columns in implicitly partitioned unique indexes when those columns are used as the arbiters to detect conflicts in `INSERT ... ON CONFLICT` statements. Uniqueness checks are not needed in this case to enforce uniqueness. Removing these checks results in improved performance for `INSERT ... ON CONFLICT` statements. [#61184][#61184]
- The columns fetched for uniqueness checks of implicitly partitioned unique indexes are now pruned to include only columns necessary for determining uniqueness. [#61376][#61376]
- Introspection queries that cast into the `REGPROC` pseudo-type are now much more efficient: they are implemented as a constant-time lookup instead of an internal query. [#61211][#61211]
- Fixed cases where the optimizer performed unnecessary full table scans when the table was very small (according to the last collected statistics).
[#61805][#61805] -- The optimizer now estimates the cost of evaluating query filters more accurately for queries with a `LIMIT`. Previously, it was assumed that the filter would be evaluated on each input row. Now, the optimizer assumes that the filter will only be evaluated on the number of rows required to produce the LIMIT's number of rows after filtering. This may lead to more efficient query plans in some cases. [#61947][#61947] - -
- -
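As a hedged sketch of how a reader might exercise two of the items above (the table name `users` is an assumption, not from the release notes):

```sql
-- Read at a historical timestamp, the same technique the
-- crdb_internal.table_row_statistics query now uses internally:
SELECT count(*) FROM users AS OF SYSTEM TIME '-10s';

-- Opt in to locality optimized search for single-row scans on
-- REGIONAL BY ROW tables (the session setting named above):
SET locality_optimized_partitioned_index_scan = on;
```

Both statements assume a cluster version in which these features exist.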

Contributors

-
-This release includes 667 merged PRs by 70 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Abdullah Islam (first-time contributor)
-- Alan Acosta (first-time contributor)
-- Kumar Akshay
-- Max Neverov
-- Miguel Novelo (first-time contributor)
-- Ratnesh Mishra (first-time contributor)
-- Tharun (first-time contributor)
-- Ulf Adams (first-time contributor)
-- alex-berger@gmx.ch (first-time contributor)
-- leoric (first-time contributor)
-- shikamaru (first-time contributor)
-
-
-
-[#41367]: https://github.com/cockroachdb/cockroach/pull/41367
-[#41929]: https://github.com/cockroachdb/cockroach/pull/41929
-[#56954]: https://github.com/cockroachdb/cockroach/pull/56954
-[#57077]: https://github.com/cockroachdb/cockroach/pull/57077
-[#57170]: https://github.com/cockroachdb/cockroach/pull/57170
-[#57183]: https://github.com/cockroachdb/cockroach/pull/57183
-[#57827]: https://github.com/cockroachdb/cockroach/pull/57827
-[#58422]: https://github.com/cockroachdb/cockroach/pull/58422
-[#58688]: https://github.com/cockroachdb/cockroach/pull/58688
-[#58923]: https://github.com/cockroachdb/cockroach/pull/58923
-[#59086]: https://github.com/cockroachdb/cockroach/pull/59086
-[#59220]: https://github.com/cockroachdb/cockroach/pull/59220
-[#59396]: https://github.com/cockroachdb/cockroach/pull/59396
-[#59410]: https://github.com/cockroachdb/cockroach/pull/59410
-[#59490]: https://github.com/cockroachdb/cockroach/pull/59490
-[#59492]: https://github.com/cockroachdb/cockroach/pull/59492
-[#59502]: https://github.com/cockroachdb/cockroach/pull/59502
-[#59539]: https://github.com/cockroachdb/cockroach/pull/59539
-[#59571]: https://github.com/cockroachdb/cockroach/pull/59571
-[#59621]: https://github.com/cockroachdb/cockroach/pull/59621
-[#59624]: https://github.com/cockroachdb/cockroach/pull/59624
-[#59636]: https://github.com/cockroachdb/cockroach/pull/59636
-[#59676]: https://github.com/cockroachdb/cockroach/pull/59676
-[#59687]: https://github.com/cockroachdb/cockroach/pull/59687
-[#59692]: https://github.com/cockroachdb/cockroach/pull/59692
-[#59693]: https://github.com/cockroachdb/cockroach/pull/59693
-[#59709]: https://github.com/cockroachdb/cockroach/pull/59709
-[#59710]: https://github.com/cockroachdb/cockroach/pull/59710
-[#59716]: https://github.com/cockroachdb/cockroach/pull/59716
-[#59717]: https://github.com/cockroachdb/cockroach/pull/59717
-[#59723]: https://github.com/cockroachdb/cockroach/pull/59723
-[#59730]: https://github.com/cockroachdb/cockroach/pull/59730
-[#59732]: https://github.com/cockroachdb/cockroach/pull/59732
-[#59735]: https://github.com/cockroachdb/cockroach/pull/59735
-[#59741]: https://github.com/cockroachdb/cockroach/pull/59741
-[#59745]: https://github.com/cockroachdb/cockroach/pull/59745
-[#59750]: https://github.com/cockroachdb/cockroach/pull/59750
-[#59781]: https://github.com/cockroachdb/cockroach/pull/59781
-[#59790]: https://github.com/cockroachdb/cockroach/pull/59790
-[#59810]: https://github.com/cockroachdb/cockroach/pull/59810
-[#59815]: https://github.com/cockroachdb/cockroach/pull/59815
-[#59824]: https://github.com/cockroachdb/cockroach/pull/59824
-[#59828]: https://github.com/cockroachdb/cockroach/pull/59828
-[#59829]: https://github.com/cockroachdb/cockroach/pull/59829
-[#59831]: https://github.com/cockroachdb/cockroach/pull/59831
-[#59836]: https://github.com/cockroachdb/cockroach/pull/59836
-[#59851]: https://github.com/cockroachdb/cockroach/pull/59851
-[#59856]: https://github.com/cockroachdb/cockroach/pull/59856
-[#59858]: https://github.com/cockroachdb/cockroach/pull/59858
-[#59864]: https://github.com/cockroachdb/cockroach/pull/59864
-[#59865]: https://github.com/cockroachdb/cockroach/pull/59865
-[#59877]: https://github.com/cockroachdb/cockroach/pull/59877
-[#59880]: https://github.com/cockroachdb/cockroach/pull/59880
-[#59978]: https://github.com/cockroachdb/cockroach/pull/59978
-[#59989]: https://github.com/cockroachdb/cockroach/pull/59989
-[#59995]: https://github.com/cockroachdb/cockroach/pull/59995
-[#60154]: https://github.com/cockroachdb/cockroach/pull/60154
-[#60159]: https://github.com/cockroachdb/cockroach/pull/60159
-[#60160]: https://github.com/cockroachdb/cockroach/pull/60160
-[#60257]: https://github.com/cockroachdb/cockroach/pull/60257
-[#60281]: https://github.com/cockroachdb/cockroach/pull/60281
-[#60282]: https://github.com/cockroachdb/cockroach/pull/60282
-[#60302]: https://github.com/cockroachdb/cockroach/pull/60302
-[#60311]: https://github.com/cockroachdb/cockroach/pull/60311
-[#60379]: https://github.com/cockroachdb/cockroach/pull/60379
-[#60437]: https://github.com/cockroachdb/cockroach/pull/60437
-[#60442]: https://github.com/cockroachdb/cockroach/pull/60442
-[#60448]: https://github.com/cockroachdb/cockroach/pull/60448
-[#60458]: https://github.com/cockroachdb/cockroach/pull/60458
-[#60461]: https://github.com/cockroachdb/cockroach/pull/60461
-[#60467]: https://github.com/cockroachdb/cockroach/pull/60467
-[#60469]: https://github.com/cockroachdb/cockroach/pull/60469
-[#60490]: https://github.com/cockroachdb/cockroach/pull/60490
-[#60495]: https://github.com/cockroachdb/cockroach/pull/60495
-[#60497]: https://github.com/cockroachdb/cockroach/pull/60497
-[#60510]: https://github.com/cockroachdb/cockroach/pull/60510
-[#60511]: https://github.com/cockroachdb/cockroach/pull/60511
-[#60516]: https://github.com/cockroachdb/cockroach/pull/60516
-[#60519]: https://github.com/cockroachdb/cockroach/pull/60519
-[#60521]: https://github.com/cockroachdb/cockroach/pull/60521
-[#60529]: https://github.com/cockroachdb/cockroach/pull/60529
-[#60539]: https://github.com/cockroachdb/cockroach/pull/60539
-[#60546]: https://github.com/cockroachdb/cockroach/pull/60546
-[#60550]: https://github.com/cockroachdb/cockroach/pull/60550
-[#60560]: https://github.com/cockroachdb/cockroach/pull/60560
-[#60581]: https://github.com/cockroachdb/cockroach/pull/60581
-[#60590]: https://github.com/cockroachdb/cockroach/pull/60590
-[#60591]: https://github.com/cockroachdb/cockroach/pull/60591
-[#60592]: https://github.com/cockroachdb/cockroach/pull/60592
-[#60593]: https://github.com/cockroachdb/cockroach/pull/60593
-[#60594]: https://github.com/cockroachdb/cockroach/pull/60594
-[#60596]: https://github.com/cockroachdb/cockroach/pull/60596
-[#60598]: https://github.com/cockroachdb/cockroach/pull/60598
-[#60616]: https://github.com/cockroachdb/cockroach/pull/60616
-[#60619]: https://github.com/cockroachdb/cockroach/pull/60619
-[#60624]: https://github.com/cockroachdb/cockroach/pull/60624
-[#60641]: https://github.com/cockroachdb/cockroach/pull/60641
-[#60670]: https://github.com/cockroachdb/cockroach/pull/60670
-[#60691]: https://github.com/cockroachdb/cockroach/pull/60691
-[#60693]: https://github.com/cockroachdb/cockroach/pull/60693
-[#60708]: https://github.com/cockroachdb/cockroach/pull/60708
-[#60713]: https://github.com/cockroachdb/cockroach/pull/60713
-[#60725]: https://github.com/cockroachdb/cockroach/pull/60725
-[#60744]: https://github.com/cockroachdb/cockroach/pull/60744
-[#60748]: https://github.com/cockroachdb/cockroach/pull/60748
-[#60753]: https://github.com/cockroachdb/cockroach/pull/60753
-[#60758]: https://github.com/cockroachdb/cockroach/pull/60758
-[#60761]: https://github.com/cockroachdb/cockroach/pull/60761
-[#60767]: https://github.com/cockroachdb/cockroach/pull/60767
-[#60771]: https://github.com/cockroachdb/cockroach/pull/60771
-[#60772]: https://github.com/cockroachdb/cockroach/pull/60772
-[#60775]: https://github.com/cockroachdb/cockroach/pull/60775
-[#60784]: https://github.com/cockroachdb/cockroach/pull/60784
-[#60799]: https://github.com/cockroachdb/cockroach/pull/60799
-[#60800]: https://github.com/cockroachdb/cockroach/pull/60800
-[#60806]: https://github.com/cockroachdb/cockroach/pull/60806
-[#60807]: https://github.com/cockroachdb/cockroach/pull/60807
-[#60822]: https://github.com/cockroachdb/cockroach/pull/60822
-[#60825]: https://github.com/cockroachdb/cockroach/pull/60825
-[#60826]: https://github.com/cockroachdb/cockroach/pull/60826
-[#60827]: https://github.com/cockroachdb/cockroach/pull/60827
-[#60831]: https://github.com/cockroachdb/cockroach/pull/60831
-[#60832]: https://github.com/cockroachdb/cockroach/pull/60832
-[#60839]: https://github.com/cockroachdb/cockroach/pull/60839
-[#60847]: https://github.com/cockroachdb/cockroach/pull/60847
-[#60854]: https://github.com/cockroachdb/cockroach/pull/60854
-[#60901]: https://github.com/cockroachdb/cockroach/pull/60901
-[#60903]: https://github.com/cockroachdb/cockroach/pull/60903
-[#60905]: https://github.com/cockroachdb/cockroach/pull/60905
-[#60908]: https://github.com/cockroachdb/cockroach/pull/60908
-[#60913]: https://github.com/cockroachdb/cockroach/pull/60913
-[#60922]: https://github.com/cockroachdb/cockroach/pull/60922
-[#60923]: https://github.com/cockroachdb/cockroach/pull/60923
-[#60927]: https://github.com/cockroachdb/cockroach/pull/60927
-[#60938]: https://github.com/cockroachdb/cockroach/pull/60938
-[#60941]: https://github.com/cockroachdb/cockroach/pull/60941
-[#60949]: https://github.com/cockroachdb/cockroach/pull/60949
-[#60952]: https://github.com/cockroachdb/cockroach/pull/60952
-[#60953]: https://github.com/cockroachdb/cockroach/pull/60953
-[#60992]: https://github.com/cockroachdb/cockroach/pull/60992
-[#61000]: https://github.com/cockroachdb/cockroach/pull/61000
-[#61013]: https://github.com/cockroachdb/cockroach/pull/61013
-[#61018]: https://github.com/cockroachdb/cockroach/pull/61018
-[#61032]: https://github.com/cockroachdb/cockroach/pull/61032
-[#61041]: https://github.com/cockroachdb/cockroach/pull/61041
-[#61043]: https://github.com/cockroachdb/cockroach/pull/61043
-[#61047]: https://github.com/cockroachdb/cockroach/pull/61047
-[#61056]: https://github.com/cockroachdb/cockroach/pull/61056
-[#61063]: https://github.com/cockroachdb/cockroach/pull/61063
-[#61097]: https://github.com/cockroachdb/cockroach/pull/61097
-[#61105]: https://github.com/cockroachdb/cockroach/pull/61105
-[#61106]: https://github.com/cockroachdb/cockroach/pull/61106
-[#61113]: https://github.com/cockroachdb/cockroach/pull/61113
-[#61124]: https://github.com/cockroachdb/cockroach/pull/61124
-[#61127]: https://github.com/cockroachdb/cockroach/pull/61127
-[#61130]: https://github.com/cockroachdb/cockroach/pull/61130
-[#61132]: https://github.com/cockroachdb/cockroach/pull/61132
-[#61135]: https://github.com/cockroachdb/cockroach/pull/61135
-[#61139]: https://github.com/cockroachdb/cockroach/pull/61139
-[#61148]: https://github.com/cockroachdb/cockroach/pull/61148
-[#61159]: https://github.com/cockroachdb/cockroach/pull/61159
-[#61162]: https://github.com/cockroachdb/cockroach/pull/61162
-[#61165]: https://github.com/cockroachdb/cockroach/pull/61165
-[#61169]: https://github.com/cockroachdb/cockroach/pull/61169
-[#61170]: https://github.com/cockroachdb/cockroach/pull/61170
-[#61177]: https://github.com/cockroachdb/cockroach/pull/61177
-[#61178]: https://github.com/cockroachdb/cockroach/pull/61178
-[#61184]: https://github.com/cockroachdb/cockroach/pull/61184
-[#61191]: https://github.com/cockroachdb/cockroach/pull/61191
-[#61201]: https://github.com/cockroachdb/cockroach/pull/61201
-[#61207]: https://github.com/cockroachdb/cockroach/pull/61207
-[#61210]: https://github.com/cockroachdb/cockroach/pull/61210
-[#61211]: https://github.com/cockroachdb/cockroach/pull/61211
-[#61214]: https://github.com/cockroachdb/cockroach/pull/61214
-[#61221]: https://github.com/cockroachdb/cockroach/pull/61221
-[#61232]: https://github.com/cockroachdb/cockroach/pull/61232
-[#61235]: https://github.com/cockroachdb/cockroach/pull/61235
-[#61244]: https://github.com/cockroachdb/cockroach/pull/61244
-[#61246]: https://github.com/cockroachdb/cockroach/pull/61246
-[#61251]: https://github.com/cockroachdb/cockroach/pull/61251
-[#61278]: https://github.com/cockroachdb/cockroach/pull/61278
-[#61286]: https://github.com/cockroachdb/cockroach/pull/61286
-[#61288]: https://github.com/cockroachdb/cockroach/pull/61288
-[#61297]: https://github.com/cockroachdb/cockroach/pull/61297
-[#61300]: https://github.com/cockroachdb/cockroach/pull/61300
-[#61315]: https://github.com/cockroachdb/cockroach/pull/61315
-[#61321]: https://github.com/cockroachdb/cockroach/pull/61321
-[#61324]: https://github.com/cockroachdb/cockroach/pull/61324
-[#61326]: https://github.com/cockroachdb/cockroach/pull/61326
-[#61344]: https://github.com/cockroachdb/cockroach/pull/61344
-[#61345]: https://github.com/cockroachdb/cockroach/pull/61345
-[#61348]: https://github.com/cockroachdb/cockroach/pull/61348
-[#61353]: https://github.com/cockroachdb/cockroach/pull/61353
-[#61356]: https://github.com/cockroachdb/cockroach/pull/61356
-[#61358]: https://github.com/cockroachdb/cockroach/pull/61358
-[#61361]: https://github.com/cockroachdb/cockroach/pull/61361
-[#61362]: https://github.com/cockroachdb/cockroach/pull/61362
-[#61376]: https://github.com/cockroachdb/cockroach/pull/61376
-[#61386]: https://github.com/cockroachdb/cockroach/pull/61386
-[#61387]: https://github.com/cockroachdb/cockroach/pull/61387
-[#61410]: https://github.com/cockroachdb/cockroach/pull/61410
-[#61416]: https://github.com/cockroachdb/cockroach/pull/61416
-[#61427]: https://github.com/cockroachdb/cockroach/pull/61427
-[#61428]: https://github.com/cockroachdb/cockroach/pull/61428
-[#61494]: https://github.com/cockroachdb/cockroach/pull/61494
-[#61499]: https://github.com/cockroachdb/cockroach/pull/61499
-[#61514]: https://github.com/cockroachdb/cockroach/pull/61514
-[#61522]: https://github.com/cockroachdb/cockroach/pull/61522
-[#61523]: https://github.com/cockroachdb/cockroach/pull/61523
-[#61601]: https://github.com/cockroachdb/cockroach/pull/61601
-[#61647]: https://github.com/cockroachdb/cockroach/pull/61647
-[#61676]: https://github.com/cockroachdb/cockroach/pull/61676
-[#61709]: https://github.com/cockroachdb/cockroach/pull/61709
-[#61723]: https://github.com/cockroachdb/cockroach/pull/61723
-[#61733]: https://github.com/cockroachdb/cockroach/pull/61733
-[#61754]: https://github.com/cockroachdb/cockroach/pull/61754
-[#61805]: https://github.com/cockroachdb/cockroach/pull/61805
-[#61813]: https://github.com/cockroachdb/cockroach/pull/61813
-[#61826]: https://github.com/cockroachdb/cockroach/pull/61826
-[#61842]: https://github.com/cockroachdb/cockroach/pull/61842
-[#61844]: https://github.com/cockroachdb/cockroach/pull/61844
-[#61871]: https://github.com/cockroachdb/cockroach/pull/61871
-[#61876]: https://github.com/cockroachdb/cockroach/pull/61876
-[#61888]: https://github.com/cockroachdb/cockroach/pull/61888
-[#61909]: https://github.com/cockroachdb/cockroach/pull/61909
-[#61947]: https://github.com/cockroachdb/cockroach/pull/61947
-[#61949]: https://github.com/cockroachdb/cockroach/pull/61949
-[#61959]: https://github.com/cockroachdb/cockroach/pull/61959
-[#61962]: https://github.com/cockroachdb/cockroach/pull/61962
diff --git a/src/current/_includes/releases/v21.1/v21.1.0-beta.2.md b/src/current/_includes/releases/v21.1/v21.1.0-beta.2.md
deleted file mode 100644
index 994aafe4e33..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.0-beta.2.md
+++ /dev/null
@@ -1,95 +0,0 @@
-## v21.1.0-beta.2
-
-Release Date: March 30, 2021
-
-
-

SQL language changes

-
-

Multi-region changes

-
-- Added validation that prevents users from updating the [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) of [multi-region tables](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) without first setting the `override_multi_region_zone_config` [session variable](https://www.cockroachlabs.com/docs/v21.1/set-vars). [#62119][#62119]
-- Discarding a zone configuration from a multi-region enabled entity is blocked behind the `override_multi_region_zone_config` [session variable](https://www.cockroachlabs.com/docs/v21.1/set-vars). [#62159][#62159]
-- Reverted the change that added the `FORCE` keyword in [#61499][#61499] in favor of the `override_multi_region_zone_config` session variable. [#62119][#62119]
-- Setting non-[multi-region](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) controlled fields on [zone configs](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) before `ALTER DATABASE ... SET PRIMARY REGION` will now be preserved and have the same value after the `SET PRIMARY REGION` command is issued. [#62162][#62162]
-- [Materialized views](https://www.cockroachlabs.com/docs/v21.1/views#materialized-views) in multi-region databases will now have a [`GLOBAL` table locality](https://www.cockroachlabs.com/docs/v21.1/set-locality#set-the-table-locality-to-global). [#62194][#62194]
-- [Materialized views](https://www.cockroachlabs.com/docs/v21.1/views#materialized-views) which are in a database before the first [`ADD REGION`](https://www.cockroachlabs.com/docs/v21.1/add-region) will become [`GLOBAL`](https://www.cockroachlabs.com/docs/v21.1/set-locality#set-the-table-locality-to-global) on `ADD REGION`, in line with the behavior of `CREATE MATERIALIZED VIEW` on a [multi-region](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) database. [#62194][#62194]
-- `ALTER DATABASE ... SET PRIMARY REGION` now requires both `CREATE` and `ZONECONFIG` privilege on all objects inside the database when [adding the first region to the database](https://www.cockroachlabs.com/docs/v21.1/add-region#examples). The same behavior applies for dropping the last region using `ALTER DATABASE ... DROP REGION`. [#62450][#62450]
-- Removed the experimental multi-region locality syntaxes. [#62114][#62114]
-
-
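A minimal sketch of the new guardrail in practice (the database name `mr_db` and the zone-config field are assumptions, not from the release notes):

```sql
-- Without this session variable set, manual zone-config changes
-- on multi-region entities are now rejected:
SET override_multi_region_zone_config = true;
ALTER DATABASE mr_db CONFIGURE ZONE USING gc.ttlseconds = 600;
```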

General changes

-
-- CockroachDB now stores information about contention on non-SQL keys. [#62041][#62041]
-- Statement [diagnostics bundles](https://www.cockroachlabs.com/docs/v21.1/explain-analyze#debug-option) now contain output of the `EXPLAIN (VEC)` and `EXPLAIN (VEC, VERBOSE)` commands for the statements. [#62049][#62049]
-- Sampled execution stats are now available through [`crdb_internal.node_{statement,transaction}_statistics`](https://www.cockroachlabs.com/docs/v21.1/crdb-internal). [#62089][#62089]
-- Increased the default value for the `sql.txn_stats.sample_rate` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) from 0 to 0.1. This means that from now on every statement has a 10% probability of being sampled for the purposes of execution statistics. Note that no other criteria for sampling (such as query latency) are currently used to decide whether to sample a statement. [#61815][#61815]
-- Added the following [cluster settings](https://www.cockroachlabs.com/docs/v21.1/cluster-settings): `sql.defaults.statement_timeout`, which controls the default value for the `statement_timeout` [session setting](https://www.cockroachlabs.com/docs/v21.1/set-vars); `sql.defaults.idle_in_transaction_session_timeout`, which controls the default value for the `idle_in_transaction_session_timeout` timeout setting; and `sql.defaults.idle_in_session_timeout`, which already existed, but is now a public cluster setting. [#62182][#62182]
-- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.1/explain) and [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) now show how long ago [table statistics](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer#table-statistics) were collected. [#61945][#61945]
-
-
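The new defaults above can be adjusted per cluster; a hedged example (the values shown are illustrative, not recommendations):

```sql
-- Sample 10% of statements for execution statistics (the new default):
SET CLUSTER SETTING sql.txn_stats.sample_rate = 0.1;

-- Give every new session a default statement timeout:
SET CLUSTER SETTING sql.defaults.statement_timeout = '60s';
```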

Command-line changes

-
-- Changed the formatting of namespace validation failures in `cockroach debug doctor` output. [#62245][#62245]
-
-

Bug fixes

-
-- Fixed a bug where the `target` column of `crdb_internal.zones` would show names without properly accounting for user-defined schemas. [#62022][#62022]
-- Added validation that prevents regions being dropped on [multi-region databases](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) when there are <= 3 regions left on the database. [#62162][#62162]
-- Fixed a bug where [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) were not being correctly dropped on the final `DROP REGION` of a multi-region database. [#62162][#62162]
-- Fixed a bug where [`VIEW`s](https://www.cockroachlabs.com/docs/v21.1/views) and [`SEQUENCE`s](https://www.cockroachlabs.com/docs/v21.1/create-sequence) were not being allowed in multi-region databases. They will now default to the [`REGIONAL BY TABLE`](https://www.cockroachlabs.com/docs/v21.1/set-locality#set-the-table-locality-to-regional-by-table) locality. [#62176][#62176]
-- Fixed a bug where the `pg_type_is_visible` built-in function did not correctly handle user-defined types. [#62225][#62225]
-- Fixed a bug where casting an `OID` to a `regtype` did not work for user-defined types. [#62225][#62225]
-- A Raft leader that loses quorum will now relinquish its range lease and remove the replica if the range is recreated elsewhere, e.g., via `Node.ResetQuorum()`. [#62103][#62103]
-- Fixed a bug where `ClearRange` could leave behind stray write intents when separated intents were enabled, which could cause subsequent storage errors. [#62104][#62104]
-- [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v21.1/alter-table), [`ALTER VIEW`](https://www.cockroachlabs.com/docs/v21.1/alter-view), and [`ALTER SEQUENCE`](https://www.cockroachlabs.com/docs/v21.1/alter-sequence) can no longer be used to incorrectly create cross-database references. [#62341][#62341]
-- Disallowed adding columns of type `OIDVECTOR` or `INT2VECTOR` to a table in [`ALTER TABLE ... ADD COLUMN`](https://www.cockroachlabs.com/docs/v21.1/add-column) statements. These types are not allowed in user-created tables via [`CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/create-table) and were previously erroneously allowed in `ALTER TABLE ... ADD COLUMN`. [#62180][#62180]
-- CockroachDB now logs all unsupported `pgdump` statements across smaller log files that can be found in the subdirectory `import/(unsupported_schema_stmts|unsupported_data_stmts)/.log`. [#62263][#62263]
-- Fixed a bug where a [constraint](https://www.cockroachlabs.com/docs/v21.1/constraints) like `NOT NULL` or `CHECK` on a column made irrelevant by a [`DROP CONSTRAINT`](https://www.cockroachlabs.com/docs/v21.1/drop-constraint) statement in a later concurrent transaction would lead to errors or incorrect behavior. [#62249][#62249]
-- Fixed an internal error that could occur when [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/set-locality) tables were joined with other tables using a lookup or inverted join. The internal error was `"we expect that limited UNION ALL queries are only planned locally"`. [#62383][#62383]
-- Fixed a bug where using `DROP REGION` on the last region of a multi-region database would not delete the global [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) for [`GLOBAL` tables](https://www.cockroachlabs.com/docs/v21.1/set-locality#set-the-table-locality-to-global). [#62220][#62220]
-- Fixed a bug where duplicate [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import) job records may have been created, or `IMPORT` statements may have failed, when the actual job succeeded. [#62396][#62396]
-- Fixed a bug where CockroachDB could collect execution statistics prematurely, which would result in incorrect stats (e.g., when running [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze)). [#62384][#62384]
-- Fixed a bug where setting the `kv.closed_timestamp.target_duration` setting to 0 did not disable routing requests to [follower replicas](https://www.cockroachlabs.com/docs/v21.1/follower-reads). [#62439][#62439]
-- Fixed a bug where a failed [restore from a backup](https://www.cockroachlabs.com/docs/v21.1/restore) including [user-defined types](https://www.cockroachlabs.com/docs/v21.1/create-type) would require manual cleanup. [#62454][#62454]
-
-
-
-

Contributors

-
-This release includes 61 merged PRs by 21 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Tharun
-
-
-
-[#61499]: https://github.com/cockroachdb/cockroach/pull/61499
-[#61815]: https://github.com/cockroachdb/cockroach/pull/61815
-[#61945]: https://github.com/cockroachdb/cockroach/pull/61945
-[#62022]: https://github.com/cockroachdb/cockroach/pull/62022
-[#62041]: https://github.com/cockroachdb/cockroach/pull/62041
-[#62049]: https://github.com/cockroachdb/cockroach/pull/62049
-[#62089]: https://github.com/cockroachdb/cockroach/pull/62089
-[#62103]: https://github.com/cockroachdb/cockroach/pull/62103
-[#62104]: https://github.com/cockroachdb/cockroach/pull/62104
-[#62114]: https://github.com/cockroachdb/cockroach/pull/62114
-[#62119]: https://github.com/cockroachdb/cockroach/pull/62119
-[#62159]: https://github.com/cockroachdb/cockroach/pull/62159
-[#62162]: https://github.com/cockroachdb/cockroach/pull/62162
-[#62176]: https://github.com/cockroachdb/cockroach/pull/62176
-[#62180]: https://github.com/cockroachdb/cockroach/pull/62180
-[#62182]: https://github.com/cockroachdb/cockroach/pull/62182
-[#62194]: https://github.com/cockroachdb/cockroach/pull/62194
-[#62220]: https://github.com/cockroachdb/cockroach/pull/62220
-[#62225]: https://github.com/cockroachdb/cockroach/pull/62225
-[#62245]: https://github.com/cockroachdb/cockroach/pull/62245
-[#62249]: https://github.com/cockroachdb/cockroach/pull/62249
-[#62263]: https://github.com/cockroachdb/cockroach/pull/62263
-[#62341]: https://github.com/cockroachdb/cockroach/pull/62341
-[#62383]: https://github.com/cockroachdb/cockroach/pull/62383
-[#62384]: https://github.com/cockroachdb/cockroach/pull/62384
-[#62396]: https://github.com/cockroachdb/cockroach/pull/62396
-[#62409]: https://github.com/cockroachdb/cockroach/pull/62409
-[#62412]: https://github.com/cockroachdb/cockroach/pull/62412
-[#62439]: https://github.com/cockroachdb/cockroach/pull/62439
-[#62450]: https://github.com/cockroachdb/cockroach/pull/62450
-[#62454]: https://github.com/cockroachdb/cockroach/pull/62454
diff --git a/src/current/_includes/releases/v21.1/v21.1.0-beta.3.md b/src/current/_includes/releases/v21.1/v21.1.0-beta.3.md
deleted file mode 100644
index 4f89bde62f0..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.0-beta.3.md
+++ /dev/null
@@ -1,71 +0,0 @@
-## v21.1.0-beta.3
-
-Release Date: April 12, 2021
-
-
-

Enterprise edition changes

-
-- The `WITH avro_schema_prefix` option for Avro [changefeeds](https://www.cockroachlabs.com/docs/v21.1/create-changefeed) now sets `schema.namespace`. [#61734][#61734] {% comment %}doc{% endcomment %}
-- CockroachDB now fails fast when [Change Data Capture](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) writes are blocked. [#62756][#62756] {% comment %}doc{% endcomment %}
-
-
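A sketch of the `avro_schema_prefix` option in context (the table name, broker address, and schema-registry URL are assumptions; Avro changefeeds also require a schema registry):

```sql
CREATE CHANGEFEED FOR TABLE events
  INTO 'kafka://broker:9092'
  WITH format = avro,
       confluent_schema_registry = 'http://registry:8081',
       avro_schema_prefix = 'crdb_';  -- now also sets schema.namespace
```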

SQL language changes

-
-

Multi-region SQL changes

-
-- Users can now use a multi-region [`ALTER DATABASE`](https://www.cockroachlabs.com/docs/v21.1/alter-database) command if:
-
-  - The user is an [`admin`](https://www.cockroachlabs.com/docs/v21.1/authorization#admin-role) user.
-  - The user is the owner of the database.
-  - The user has [`CREATE`](https://www.cockroachlabs.com/docs/v21.1/authorization#privileges) privileges on the database. [#62528][#62528] {% comment %}doc{% endcomment %}
-- Availability zones are now ordered when using the `SHOW REGIONS` set of commands. [#62619][#62619] {% comment %}doc{% endcomment %}
-
-

General SQL changes

-
-- Added the `stub_catalog_tables` [session variable](https://www.cockroachlabs.com/docs/v21.1/set-vars), which is enabled by default. If disabled, querying an unimplemented [`pg_catalog`](https://www.cockroachlabs.com/docs/v21.1/pg-catalog) table will result in an error, as is the case in v20.2 and earlier. Otherwise, the query will simply return no rows. [#62621][#62621] {% comment %}doc{% endcomment %}
-
-
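To restore the stricter v20.2-era behavior described above, a session can disable the stubs (a minimal sketch):

```sql
SET stub_catalog_tables = off;
-- Queries against unimplemented pg_catalog tables now return an
-- error instead of zero rows.
```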

DB Console changes

-
-- The [**Statements** page](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page) now shows internal statements when the *all* filter option is selected. [#62677][#62677] {% comment %}doc{% endcomment %}
-
-

Bug fixes

-
-- Fixed a bug that in rare circumstances could cause an implicitly committed (`STAGING`) transaction to be uncommitted if any unresolved intents were removed by a range clear (e.g., when cleaning up a dropped table). This bug fix is only effective with separated intents, which are disabled by default. [#62376][#62376]
-- Added a `DuplicateObject` error code for when a user attempts to `ADD REGION` to a database where the region already exists. [#62491][#62491]
-- Fixed an internal error that could occur during planning for queries involving tables with many columns and at least one [GIN index](https://www.cockroachlabs.com/docs/v21.1/inverted-indexes). The error, "estimated distinct count must be non-zero", was caused by an invalid pointer access in the cardinality estimation code. [#62545][#62545]
-- Writing files to `userfile` would sometimes result in an error claiming that the `userfile` table already exists. This is now fixed. [#62544][#62544]
-- When adding or dropping regions from a multi-region database, the user must now have privileges on all regional-by-row tables, as these are implicitly re-partitioned under the hood. [#62612][#62612]
-- Fixed an internal error caused by comparing collation names that had different upper/lower case characters. [#62637][#62637]
-- Fixed a bug whereby [`ENUM`](https://www.cockroachlabs.com/docs/v21.1/enum) types which have large numbers of values would cause unexpected errors when attempting to read from tables with columns using that `ENUM` type. [#62210][#62210]
-- Fixed a bug introduced in earlier v21.1 alpha releases which could cause panics when [dropping indexes](https://www.cockroachlabs.com/docs/v21.1/drop-index) on tables partitioned by [user-defined types](https://www.cockroachlabs.com/docs/v21.1/enum). [#62725][#62725]
-- Fixed a bug from earlier v21.1 alpha releases whereby dropping an index on a table partitioned by a user-defined type, then [dropping the table](https://www.cockroachlabs.com/docs/v21.1/drop-table), and then [dropping the type](https://www.cockroachlabs.com/docs/v21.1/drop-type) before the GC TTL for the index has expired could result in a crash. [#62725][#62725]
-
-

Performance improvements

-
-- Improved the performance of the [vectorized engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) when scanning fewer than 1024 rows at a time. [#62365][#62365]
-- Improved logic in determining the configuration for data to avoid expensive work when there are a large number of [user-defined schemas](https://www.cockroachlabs.com/docs/v21.1/create-schema). [#62577][#62577]
-- Addressed a performance regression from a past change regarding read-triggered compactions. [#62676][#62676]
-
-

Contributors

-
-This release includes 37 merged PRs by 23 authors.
-
-[#61734]: https://github.com/cockroachdb/cockroach/pull/61734
-[#62210]: https://github.com/cockroachdb/cockroach/pull/62210
-[#62365]: https://github.com/cockroachdb/cockroach/pull/62365
-[#62376]: https://github.com/cockroachdb/cockroach/pull/62376
-[#62491]: https://github.com/cockroachdb/cockroach/pull/62491
-[#62528]: https://github.com/cockroachdb/cockroach/pull/62528
-[#62544]: https://github.com/cockroachdb/cockroach/pull/62544
-[#62545]: https://github.com/cockroachdb/cockroach/pull/62545
-[#62577]: https://github.com/cockroachdb/cockroach/pull/62577
-[#62606]: https://github.com/cockroachdb/cockroach/pull/62606
-[#62612]: https://github.com/cockroachdb/cockroach/pull/62612
-[#62619]: https://github.com/cockroachdb/cockroach/pull/62619
-[#62621]: https://github.com/cockroachdb/cockroach/pull/62621
-[#62637]: https://github.com/cockroachdb/cockroach/pull/62637
-[#62676]: https://github.com/cockroachdb/cockroach/pull/62676
-[#62677]: https://github.com/cockroachdb/cockroach/pull/62677
-[#62725]: https://github.com/cockroachdb/cockroach/pull/62725
-[#62733]: https://github.com/cockroachdb/cockroach/pull/62733
-[#62756]: https://github.com/cockroachdb/cockroach/pull/62756
diff --git a/src/current/_includes/releases/v21.1/v21.1.0-beta.4.md b/src/current/_includes/releases/v21.1/v21.1.0-beta.4.md
deleted file mode 100644
index d1ef69146da..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.0-beta.4.md
+++ /dev/null
@@ -1,105 +0,0 @@
-## v21.1.0-beta.4
-
-Release Date: April 19, 2021
-
-
-

### General changes

- Removed experimental feature `UNIQUE WITHOUT INDEX` from the documentation. [#63499][#63499] {% comment %}doc{% endcomment %}

### SQL language changes

- The [`pg_get_partkeydef` built-in function](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators) is now implemented; it always returns `NULL`. [#63149][#63149] {% comment %}doc{% endcomment %}
- CockroachDB now collects execution stats for all statements when they are seen for the first time. To disable this behavior, set the [`sql.txn_stats.sample_rate` cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) to `0`, which disables all execution stats collection. [#63325][#63325] {% comment %}doc{% endcomment %}
- CockroachDB now blocks setting the initial `PRIMARY REGION` of a database if any [multi-region](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) fields on any zone configs in the database have already been set. [#63354][#63354] {% comment %}doc{% endcomment %}
- CockroachDB now returns a `pgcode` when a [`DROP REGION`](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) statement targets the `PRIMARY REGION`. [#63354][#63354] {% comment %}doc{% endcomment %}
- Replaced the word "tuple" with its more user-friendly synonym "row" in [vectorized stats outputs](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution). [#62956][#62956] {% comment %}doc{% endcomment %}
- [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups) of [interleaved tables](https://www.cockroachlabs.com/docs/v21.1/interleave-in-parent) now requires the `include_deprecated_interleaves` option, as interleaved table backups cannot be restored by future versions of CockroachDB. [#63501][#63501] {% comment %}doc{% endcomment %}
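
As a sketch of the sampling change above, execution stats collection can be turned off cluster-wide with the setting named in that note (assuming a role with the privilege to change cluster settings):

{% include copy-clipboard.html %}
~~~sql
SET CLUSTER SETTING sql.txn_stats.sample_rate = 0;
~~~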

### Operational changes

- `RESTORE` can no longer [restore](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups) `BACKUP`s created by newer versions of CockroachDB. [#62398][#62398] {% comment %}doc{% endcomment %}

### DB Console changes

The following statements now render correctly as events in the [DB Console](https://www.cockroachlabs.com/docs/v21.1/ui-overview) [#63141][#63141]:

- [`ALTER DATABASE ADD REGION`](https://www.cockroachlabs.com/docs/v21.1/add-region)
- [`ALTER DATABASE SET PRIMARY REGION`](https://www.cockroachlabs.com/docs/v21.1/set-locality)
- [`ALTER DATABASE ... SURVIVE ... FAILURE`](https://www.cockroachlabs.com/docs/v21.1/survive-failure)
- [`ALTER DATABASE DROP REGION`](https://www.cockroachlabs.com/docs/v21.1/drop-region)
- [`CREATE TYPE`](https://www.cockroachlabs.com/docs/v21.1/create-type)
- [`ALTER TYPE`](https://www.cockroachlabs.com/docs/v21.1/alter-type)
- [`DROP TYPE`](https://www.cockroachlabs.com/docs/v21.1/drop-type)
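
For reference, the multi-region statements listed above take forms like the following (the database and region names are illustrative):

{% include copy-clipboard.html %}
~~~sql
ALTER DATABASE movr PRIMARY REGION "us-east1";
ALTER DATABASE movr ADD REGION "us-west1";
ALTER DATABASE movr SURVIVE REGION FAILURE;
~~~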

### Bug fixes

- Fixed a bug present in earlier 21.1 versions where [`BACKUP`s](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups) would produce an error when they should have been able to back up the underlying data. [#63095][#63095]
- [Dropping a foreign key](https://www.cockroachlabs.com/docs/v21.1/drop-constraint) that was added in the same transaction no longer triggers an internal error. This bug had been present since at least version 20.1. [#62879][#62879]
- Fixed a bug where an `ALTER TABLE ... ADD COLUMN ... UNIQUE` statement would cause an error if the table had a [`PARTITION ALL BY` or `REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) definition. [#63189][#63189]
- Fixed a bug in earlier 21.1 versions where `CREATE TABLE LIKE` would copy a [`VIRTUAL` column](https://www.cockroachlabs.com/docs/v21.1/computed-columns) from the source table as a `STORED` column in the destination table. [#63172][#63172]
- CockroachDB now returns an error when trying to restore a [backup](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups) of a cluster that was taken on another tenant. [#63223][#63223]
- Fixed a bug where index backfill data may have been missed by [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups) in incremental backups. [#63221][#63221]
- Fixed a bug where [`REGIONAL BY ROW` zone configs](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) were dropped before `REGIONAL BY ROW` changes were finalized, which caused a bug when the `REGIONAL BY ROW` transformation failed. [#63274][#63274]
- Fixed a case where implicitly partitioned columns (e.g., from [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) and hash-sharded indexes) previously showed as `implicit = false` when using `SHOW INDEXES` or querying `information_schema.pg_indexes`. [#63275][#63275]
- Fixed the error `missing "crdb_region" primary key column` that could occur when performing an [`UPSERT`](https://www.cockroachlabs.com/docs/v21.1/upsert) on a [`REGIONAL BY ROW` table](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) with no secondary indexes or foreign keys. [#63257][#63257]
- Fixed a bug where tables that were created by CockroachDB 19.x or older, included foreign key constraints, and were [backed up](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups) with the `revision_history` option would be malformed when restored by a CockroachDB 20.x cluster if the `RESTORE` used the `AS OF SYSTEM TIME` option. [#63267][#63267]
- Fixed a bug in [user-defined schemas](https://www.cockroachlabs.com/docs/v21.1/schema-design-schema) where dropping any schema would prevent the creation of schemas whose name matched the database name and would corrupt existing schemas of that name. [#63395][#63395]
- Fixed a bug in previous CockroachDB 21.1 releases where CockroachDB could return output in an incorrect order if a query containing hash aggregation was executed via the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) and spilled to temporary storage. [#63408][#63408]
- Fixed a bug where [incremental cluster backups](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups) may have missed data written to tables while they were `OFFLINE`. In practice, this could occur if a [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) or [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import) was running across incremental backups. [#63304][#63304]
- CockroachDB now includes more anonymized data from SQL statements in telemetry updates and crash reports. [#63482][#63482]
- Fixed a rare issue that caused replica divergence. When it occurred, the divergence was reported by the [replica consistency checker](https://www.cockroachlabs.com/docs/v21.1/architecture/replication-layer), typically within 24 hours, and caused the affected nodes to terminate. [#63473][#63473]

### Performance improvements

- Improved the performance of reverting [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.1/import-into) jobs that imported into empty tables. [#63220][#63220]

### Miscellaneous improvements

- Made the Kafka library used in [changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) configurable via the `kafka_sink_config` option, allowing tuning for latency versus throughput. [#63361][#63361]
- Connected the [changefeed](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) memory monitor to the parent SQL monitor to ensure that changefeeds do not try to use more memory than is available to the SQL server. [#63409][#63409]
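
A sketch of the `kafka_sink_config` option in use; the table name, broker address, and flush values below are illustrative, and the exact set of supported fields is described in the changefeed documentation:

{% include copy-clipboard.html %}
~~~sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker.example.com:9092'
  WITH kafka_sink_config = '{"Flush": {"MaxMessages": 1000, "Frequency": "1s"}}';
~~~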

### Contributors

This release includes 50 merged PRs by 20 authors.

[#62398]: https://github.com/cockroachdb/cockroach/pull/62398
[#62879]: https://github.com/cockroachdb/cockroach/pull/62879
[#62956]: https://github.com/cockroachdb/cockroach/pull/62956
[#62968]: https://github.com/cockroachdb/cockroach/pull/62968
[#62971]: https://github.com/cockroachdb/cockroach/pull/62971
[#63095]: https://github.com/cockroachdb/cockroach/pull/63095
[#63141]: https://github.com/cockroachdb/cockroach/pull/63141
[#63149]: https://github.com/cockroachdb/cockroach/pull/63149
[#63172]: https://github.com/cockroachdb/cockroach/pull/63172
[#63189]: https://github.com/cockroachdb/cockroach/pull/63189
[#63220]: https://github.com/cockroachdb/cockroach/pull/63220
[#63221]: https://github.com/cockroachdb/cockroach/pull/63221
[#63223]: https://github.com/cockroachdb/cockroach/pull/63223
[#63257]: https://github.com/cockroachdb/cockroach/pull/63257
[#63267]: https://github.com/cockroachdb/cockroach/pull/63267
[#63274]: https://github.com/cockroachdb/cockroach/pull/63274
[#63275]: https://github.com/cockroachdb/cockroach/pull/63275
[#63304]: https://github.com/cockroachdb/cockroach/pull/63304
[#63325]: https://github.com/cockroachdb/cockroach/pull/63325
[#63354]: https://github.com/cockroachdb/cockroach/pull/63354
[#63361]: https://github.com/cockroachdb/cockroach/pull/63361
[#63395]: https://github.com/cockroachdb/cockroach/pull/63395
[#63402]: https://github.com/cockroachdb/cockroach/pull/63402
[#63403]: https://github.com/cockroachdb/cockroach/pull/63403
[#63408]: https://github.com/cockroachdb/cockroach/pull/63408
[#63409]: https://github.com/cockroachdb/cockroach/pull/63409
[#63473]: https://github.com/cockroachdb/cockroach/pull/63473
[#63482]: https://github.com/cockroachdb/cockroach/pull/63482
[#63499]: https://github.com/cockroachdb/cockroach/pull/63499
[#63501]: https://github.com/cockroachdb/cockroach/pull/63501
[1c89925eb]: https://github.com/cockroachdb/cockroach/commit/1c89925eb
[32b5b8587]: https://github.com/cockroachdb/cockroach/commit/32b5b8587
[33816b3fd]: https://github.com/cockroachdb/cockroach/commit/33816b3fd
[56088535f]: https://github.com/cockroachdb/cockroach/commit/56088535f
[57b9589e9]: https://github.com/cockroachdb/cockroach/commit/57b9589e9
[6394ff543]: https://github.com/cockroachdb/cockroach/commit/6394ff543
[6ebecfd38]: https://github.com/cockroachdb/cockroach/commit/6ebecfd38
[71cacc783]: https://github.com/cockroachdb/cockroach/commit/71cacc783
[abc4eb5ac]: https://github.com/cockroachdb/cockroach/commit/abc4eb5ac
[fc7249f82]: https://github.com/cockroachdb/cockroach/commit/fc7249f82

## v21.1.0-beta.5

Release Date: April 29, 2021

### Docker image

{% include copy-clipboard.html %}
~~~shell
$ docker pull cockroachdb/cockroach-unstable:v21.1.0-beta.5
~~~

### Backward-incompatible changes

- The internal representation of the `voter_constraints` [zone configuration](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) attribute (new in v21.1) has been altered in a way that is partially incompatible with the representation used by previous v21.1 betas (and the alphas that include this attribute). This means that users who directly set the `voter_constraints` attribute to an empty list will lose those constraints and will have to reset them. [#63674][#63674] {% comment %}doc{% endcomment %}

### General changes

- Upgraded the CockroachDB binary to Go 1.15.10. [#63865][#63865] {% comment %}doc{% endcomment %}

### Enterprise edition changes

- [Changefeeds](https://www.cockroachlabs.com/docs/v21.1/create-changefeed) now fail on any [regional by row table](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview#regional-by-row-tables) with the error `CHANGEFEED cannot target REGIONAL BY ROW tables: <table_name>`. This prevents unexpected behavior in changefeeds until they offer full support for this type of table. [#63542][#63542] {% comment %}doc{% endcomment %}

### SQL language changes

- [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) now re-validates restored indexes if they were restored from an [incremental backup](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups#incremental-backups) that was taken while the index was being created. [#63320][#63320] {% comment %}doc{% endcomment %}
- The `sql.distsql.temp_storage.workmem` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) is now marked as public and is included in the documentation. It determines how much RAM a single operation of a single query can use before it must spill to temporary storage. Note that operations that do not support disk spilling ignore this setting and are subject only to the [`--max-sql-memory`](https://www.cockroachlabs.com/docs/v21.1/cockroach-start#flags) startup argument. [#63997][#63997] {% comment %}doc{% endcomment %}
- SQL executor data validation queries spawned by a schema change or a [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) now use [vectorized](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) query execution and [DistSQL](https://www.cockroachlabs.com/docs/v21.1/architecture/sql-layer#distsql) optimization if these are enabled in the `sql.defaults.vectorize` and `sql.defaults.distsql` [cluster settings](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), respectively. This may improve the speed of these queries. [#64004][#64004] {% comment %}doc{% endcomment %}
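
As an illustration of the `sql.distsql.temp_storage.workmem` setting described above (the value shown is an arbitrary example, not a recommendation):

{% include copy-clipboard.html %}
~~~sql
SET CLUSTER SETTING sql.distsql.temp_storage.workmem = '128MiB';
~~~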

### Bug fixes

- The leases of offline descriptors are now cached, preventing issues with lease acquisition during bulk operations such as backup and restore. [#63558][#63558]
- Fixed bugs where [`TRUNCATE`](https://www.cockroachlabs.com/docs/v21.1/truncate) concurrent with index construction and other schema changes could result in corruption. [#63143][#63143]
- Fixed a panic that could occur after a [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) of a table with [user-defined types](https://www.cockroachlabs.com/docs/v21.1/enum). [#63549][#63549]
- CockroachDB now returns a graceful error instead of panicking when the [spatial function](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#spatial-functions) `ST_Segmentize` attempts to generate an extremely large number of points on a [`GEOGRAPHY`](https://www.cockroachlabs.com/docs/v21.1/spatial-glossary#geography). [#63758][#63758]
- Fixed a bug where running the `ST_Simplify` [spatial function](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#spatial-functions) on a non-numeric value would cause the node to crash. [#63797][#63797]
- CockroachDB now uses the existing primary key to validate indexes built for [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.1/alter-primary-key) changes. [#63609][#63609]
- Fixed occasional stalls and excessive CPU usage under macOS Big Sur when building CockroachDB with Go 1.14 or newer. [#63789][#63789]
- Fixed a bug where [`crdb_internal.validate_multi_region_zone_configs()`](https://github.com/cockroachdb/cockroach/blob/master/docs/generated/sql/functions.md#multi-region-functions) would fail during a [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/set-locality) locality transition. [#63834][#63834]
- Fixed an `index out of range` internal error that could occur when executing queries using a [GIN index](https://www.cockroachlabs.com/docs/v21.1/inverted-indexes). It occurred in rare cases when a filter or join predicate contained at least two JSON, array, geometry, or geography expressions combined with `AND`. [#63811][#63811]
- Fixed a bug leading to crashes with the error message `writing below closed ts`. [#63861][#63861]
- Fixed a bug where, if a user altered a table to [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/set-locality) while a region was being dropped, and the drop failed and had to be rolled back, the [regional by row table](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview#regional-by-row-tables) could be left missing a partition for that region. [#63793][#63793]
- Prevented the internal error `use of enum metadata before hydration as an enum` when querying or showing ranges from tables with user-defined types as their `PRIMARY KEY`. [#63878][#63878]
- Fixed a theoretical issue in index backfills that could result in stale entries that would likely fail validation. [#64044][#64044]
- CockroachDB now correctly accounts for used memory when closing compressed files. [#63917][#63917]

### Performance improvements

- CockroachDB now avoids a series of heap allocations when serving read-only queries. [#63972][#63972]
- CockroachDB now limits the amount of memory that can be used in internal buffers for Kafka and cloud sinks. [#63611][#63611]

### Contributors

This release includes 48 merged PRs by 23 authors.
We would like to thank the following contributors from the CockroachDB community:

- Miguel Novelo (first-time contributor)
- Rupesh Harode (first-time contributor)

[#63143]: https://github.com/cockroachdb/cockroach/pull/63143
[#63320]: https://github.com/cockroachdb/cockroach/pull/63320
[#63542]: https://github.com/cockroachdb/cockroach/pull/63542
[#63549]: https://github.com/cockroachdb/cockroach/pull/63549
[#63558]: https://github.com/cockroachdb/cockroach/pull/63558
[#63609]: https://github.com/cockroachdb/cockroach/pull/63609
[#63611]: https://github.com/cockroachdb/cockroach/pull/63611
[#63674]: https://github.com/cockroachdb/cockroach/pull/63674
[#63758]: https://github.com/cockroachdb/cockroach/pull/63758
[#63768]: https://github.com/cockroachdb/cockroach/pull/63768
[#63789]: https://github.com/cockroachdb/cockroach/pull/63789
[#63793]: https://github.com/cockroachdb/cockroach/pull/63793
[#63797]: https://github.com/cockroachdb/cockroach/pull/63797
[#63811]: https://github.com/cockroachdb/cockroach/pull/63811
[#63834]: https://github.com/cockroachdb/cockroach/pull/63834
[#63861]: https://github.com/cockroachdb/cockroach/pull/63861
[#63865]: https://github.com/cockroachdb/cockroach/pull/63865
[#63878]: https://github.com/cockroachdb/cockroach/pull/63878
[#63917]: https://github.com/cockroachdb/cockroach/pull/63917
[#63949]: https://github.com/cockroachdb/cockroach/pull/63949
[#63972]: https://github.com/cockroachdb/cockroach/pull/63972
[#63997]: https://github.com/cockroachdb/cockroach/pull/63997
[#64004]: https://github.com/cockroachdb/cockroach/pull/64004

## v21.1.0-rc.1

Release Date: May 5, 2021

### SQL language changes

- CockroachDB no longer allows [`ADD REGION`](https://www.cockroachlabs.com/docs/v21.1/add-region) or `DROP REGION` statements if a [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) table has index changes underway, or if a table is transitioning to or from `REGIONAL BY ROW`. [#64255][#64255] {% comment %}doc{% endcomment %}
- CockroachDB now prevents index modifications on `REGIONAL BY ROW` tables, and locality changes to or from `REGIONAL BY ROW`, while an `ADD REGION` or `DROP REGION` statement is executing. [#64255][#64255] {% comment %}doc{% endcomment %}

### Bug fixes

- Fixed a scenario in which a rapid sequence of [range splits](https://www.cockroachlabs.com/docs/v21.1/architecture/distribution-layer#range-splits) could trigger a storm of [Raft snapshots](https://www.cockroachlabs.com/docs/v21.1/architecture/replication-layer#snapshots). This was accompanied by log messages of the form "would have dropped incoming MsgApp, but allowing due to ..." and tended to occur as part of [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore)/[`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import) operations. [#64202][#64202]
- Read-write contention on [`GLOBAL` tables](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) no longer has the potential to thrash without making progress. [#64215][#64215]
- Fixed a bug where, if a [`DROP INDEX`](https://www.cockroachlabs.com/docs/v21.1/drop-index) failed during a `REGIONAL BY ROW` transition, the index could be re-inserted into the `REGIONAL BY ROW` table but would be invalid if it was [hash-sharded](https://www.cockroachlabs.com/docs/v21.1/hash-sharded-indexes) or [partitioned](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview). [#64255][#64255]
- Fixed a rare bug present in [v21.1 beta versions]({% link releases/index.md %}#testing-releases) that could cause rapid range splits and merges on a `GLOBAL` table to lead to a stuck leaseholder replica. [#64304][#64304]
- Fixed a bug in previous v21.1 beta versions that allowed the store rebalancer to spuriously down-replicate a range during normal operation. [#64303][#64303]
- CockroachDB now prevents some out-of-memory conditions caused by [schema change](https://www.cockroachlabs.com/docs/v21.1/online-schema-changes) validations running concurrently with other high-memory-use queries. [#64307][#64307]
- Fixed a bug present since [v21.1.0-alpha.1](#v21-1-0-alpha-1) that could cause cascading [`DELETE`](https://www.cockroachlabs.com/docs/v21.1/delete)s with [subqueries](https://www.cockroachlabs.com/docs/v21.1/subqueries) to error. [#64278][#64278]
- Fixed a bug that caused store information to be incorrectly redacted from the [CockroachDB logs](https://www.cockroachlabs.com/docs/v21.1/logging-overview) when logging was configured with redaction. [#64338][#64338]
- Fixed a bug where the remote flows of execution in the [vectorized engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) could take a long time to shut down if a node participating in the plan died. [#64219][#64219]

### Contributors

This release includes 14 merged PRs by 14 authors.

[#64202]: https://github.com/cockroachdb/cockroach/pull/64202
[#64215]: https://github.com/cockroachdb/cockroach/pull/64215
[#64219]: https://github.com/cockroachdb/cockroach/pull/64219
[#64255]: https://github.com/cockroachdb/cockroach/pull/64255
[#64278]: https://github.com/cockroachdb/cockroach/pull/64278
[#64303]: https://github.com/cockroachdb/cockroach/pull/64303
[#64304]: https://github.com/cockroachdb/cockroach/pull/64304
[#64307]: https://github.com/cockroachdb/cockroach/pull/64307
[#64338]: https://github.com/cockroachdb/cockroach/pull/64338

## v21.1.0-rc.2

Release Date: May 5, 2021

### Enterprise edition changes

- [Changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) now reliably fail when [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.1/import-into) is run against a targeted table, as change data capture is [not supported](https://www.cockroachlabs.com/docs/v21.1/known-limitations#change-data-capture) for this action. [#64372][#64372]

### Bug fixes

- Fixed a correctness bug that caused partitioned index scans to omit rows where the value of the first index column was `NULL`. This bug was present since v19.2.0. [#64046][#64046]
- [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import) and [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) jobs that were in progress during a cluster backup are now canceled when that cluster backup is restored. This fixes a bug where restored jobs could assume progress that was not captured in the backup. [#64352][#64352]
- Fixed a race condition where read-only requests during replica removal (for example, during range merges or rebalancing) could be evaluated on the removed replica, returning an empty result. [#64370][#64370]
- Fixed a bug where encryption-at-rest metadata was not synced and could become corrupted during a hard reset. [#64473][#64473]

### Contributors

This release includes 5 merged PRs by 6 authors.

[#64046]: https://github.com/cockroachdb/cockroach/pull/64046
[#64352]: https://github.com/cockroachdb/cockroach/pull/64352
[#64370]: https://github.com/cockroachdb/cockroach/pull/64370
[#64372]: https://github.com/cockroachdb/cockroach/pull/64372
[#64473]: https://github.com/cockroachdb/cockroach/pull/64473

## v21.1.0

Release Date: May 18, 2021

With the release of CockroachDB v21.1, we've made a variety of flexibility, performance, and compatibility improvements. Check out a [summary of the most significant user-facing changes](#v21-1-0-feature-summary) and then [upgrade to CockroachDB v21.1](https://www.cockroachlabs.com/docs/v21.1/upgrade-cockroach-version).

To learn more:

- Read the [v21.1 blog post](https://www.cockroachlabs.com/blog/cockroachdb-21-1-release/).
- Join us on May 19 for a [livestream](https://www.cockroachlabs.com/webinars/the-cockroach-hour-multi-region/) on why multi-region applications matter and how our Product and Engineering teams partnered to make them simple in v21.1.

### CockroachCloud

- Get a free v21.1 cluster on CockroachCloud.
- Learn about recent updates to CockroachCloud in the [CockroachCloud Release Notes]({% link releases/cloud.md %}).

### Feature summary

This section summarizes the most significant user-facing changes in v21.1.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases. You can also search for [what's new in v21.1 in our docs](https://www.cockroachlabs.com/docs/search?query=new+in+v21.1).

{{site.data.alerts.callout_info}}
"Core" features are freely available in the core version and do not require an enterprise license. "Enterprise" features require an [enterprise license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). [CockroachCloud clusters](https://cockroachlabs.cloud/) include all enterprise features. You can also use [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) to test enterprise features in a local, temporary cluster.
{{site.data.alerts.end}}

- [SQL](#v21-1-0-sql)
- [Recovery and I/O](#v21-1-0-recovery-and-i-o)
- [Database Operations](#v21-1-0-database-operations)
- [Backward-incompatible changes](#v21-1-0-backward-incompatible-changes)
- [Deprecations](#v21-1-0-deprecations)
- [Known limitations](#v21-1-0-known-limitations)
- [Education](#v21-1-0-education)

SQL

- -Version | Feature | Description ---------|---------|------------ -Enterprise | **Multi-Region Improvements** | It is now much easier to leverage CockroachDB's low-latency and resilient multi-region capabilities. For an introduction to the high-level concepts, see the [Multi-Region Overview](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview). For further details and links to related SQL statements, see [Choosing a Multi-Region Configuration](https://www.cockroachlabs.com/docs/v21.1/choosing-a-multi-region-configuration), [When to Use ZONE vs. REGION Survival Goals](https://www.cockroachlabs.com/docs/v21.1/when-to-use-zone-vs-region-survival-goals), and [When to Use REGIONAL vs. GLOBAL Tables](https://www.cockroachlabs.com/docs/v21.1/when-to-use-regional-vs-global-tables). For a demonstration of these capabilities using a local cluster, see the [Multi-Region Tutorial](https://www.cockroachlabs.com/docs/v21.1/demo-low-latency-multi-region-deployment). Finally, for details about related architectural enhancements, see [Non-Voting Replicas](https://www.cockroachlabs.com/docs/v21.1/architecture/replication-layer#non-voting-replicas) and [Non-Blocking Transactions](https://www.cockroachlabs.com/docs/v21.1/architecture/transaction-layer#non-blocking-transactions). -Enterprise | **Automatic Follower Reads for Read-Only Transactions** | You can now force all read-only transactions in a session to use [follower reads](https://www.cockroachlabs.com/docs/v21.1/follower-reads) by setting the new [`default_transaction_use_follow_reads` session variable](https://www.cockroachlabs.com/docs/v21.1/show-vars#supported-variables) to `on`. 
-Core | **Query Observability Improvements** | [`EXPLAIN`](https://www.cockroachlabs.com/docs/v21.1/explain) and [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) responses have been unified and extended with additional details, including automatic statistics-backed row estimates for `EXPLAIN`, and maximum memory usage, network usage, nodes used per operator, and rows used per operator for `EXPLAIN ANALYZE`. `EXPLAIN ANALYZE` now outputs a text-based statement plan tree by default, showing statistics about the statement processors at each phase of the statement.

The [Transactions Page](https://www.cockroachlabs.com/docs/v21.1/ui-transactions-page) and [Statements Page](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page) of the DB Console also include such details as well the mean average time statements were in contention with other transactions within a specified time interval. The [SQL Dashboard](https://www.cockroachlabs.com/docs/v21.1/ui-sql-dashboard) has been expanded with additional graphs for latency, contention, memory, and network traffic. The [SQL Tuning with `EXPLAIN`](https://www.cockroachlabs.com/docs/v21.1/sql-tuning-with-explain) tutorial and [Optimize Statement Performance](https://www.cockroachlabs.com/docs/v21.1/make-queries-fast) guidance have been updated to leverage these improvements. -Core | **Inverted Joins** | CockroachDB now supports [inverted joins](https://www.cockroachlabs.com/docs/v21.1/joins#inverted-joins), which force the optimizer to use a GIN index on the right side of the join. Inverted joins can only be used with `INNER` and `LEFT` joins. -Core | **Partial GIN Indexes** | You can now create a [partial GIN index](https://www.cockroachlabs.com/docs/v21.1/inverted-indexes#partial-gin-indexes) on a subset of `JSON`, `ARRAY`, or geospatial container column data. -Core | **Virtual Computed Columns** | You can now create [virtual computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns), which are not stored on disk and are recomputed as the column data in the expression changes. -Core | **Dropping Values in User-Defined Types** | It's now possible to [drop values in user-defined types](https://www.cockroachlabs.com/docs/v21.1/alter-type#drop-a-value-in-a-user-defined-type). -Core | **Sequence Caching** | You can now create a sequence with the [`CACHE`](https://www.cockroachlabs.com/docs/v21.1/create-sequence#cache-sequence-values-in-memory) keyword to have the sequence cache its values in memory. 
-Core | **Changing Sequence & View Ownership** | You can use the new `OWNER TO` parameter to change the owner of a [sequence](https://www.cockroachlabs.com/docs/v21.1/alter-sequence) or [view](https://www.cockroachlabs.com/docs/v21.1/alter-view).
-Core | **Show `CREATE` Statements for the Current Database** | You can now use [`SHOW CREATE ALL TABLES`](https://www.cockroachlabs.com/docs/v21.1/show-create#show-the-statements-needed-to-recreate-all-tables-views-and-sequences-in-the-current-database) to return the `CREATE` statements for all of the tables, views, and sequences in the current database.
-Core | **Storage of Z/M Coordinates for Spatial Objects** | You can now store a third dimension coordinate (`Z`), a measure coordinate (`M`), or both (`ZM`) with [spatial objects](https://www.cockroachlabs.com/docs/v21.1/spatial-features#spatial-objects). Note, however, that CockroachDB's [spatial indexing](https://www.cockroachlabs.com/docs/v21.1/spatial-indexes) is still based on the 2D coordinate system. This means that the Z/M dimension is not index accelerated when using spatial predicates, and some spatial functions ignore the Z/M dimension, with transformations discarding the Z/M value.
-Core | **Third-Party Tool Support** | [Spatial libraries for Hibernate, ActiveRecord, and Django](https://www.cockroachlabs.com/docs/v21.1/spatial-data#orm-compatibility) are now fully compatible with CockroachDB's spatial features. The [DataGrip IDE](https://www.cockroachlabs.com/docs/v21.1/third-party-database-tools#integrated-development-environments-ides) and [Liquibase schema migration tool](https://www.cockroachlabs.com/docs/v21.1/third-party-database-tools#schema-migration-tools) are also now supported.
-Core | **Connection Pooling Guidance** | Creating an appropriately sized pool of connections is critical to gaining maximum performance in an application. For guidance on sizing, validating, and using connection pools with CockroachDB, as well as examples for Java and Go applications, see [Use Connection Pools](https://www.cockroachlabs.com/docs/v21.1/connection-pooling).
-Core | **PostgreSQL 13 Compatibility** | CockroachDB is now wire-compatible with PostgreSQL 13. For more information, see [PostgreSQL Compatibility](https://www.cockroachlabs.com/docs/v21.1/postgresql-compatibility).
-
-
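As a rough sketch, several of the SQL features above can be exercised together. The `users` table, `order_seq` sequence, and `max` role below are illustrative, not part of the release itself:

```sql
-- Table with a virtual computed column: not stored on disk,
-- recomputed as the referenced column data changes.
CREATE TABLE users (
    id INT PRIMARY KEY,
    profile JSONB,
    email STRING AS (profile->>'email') VIRTUAL
);

-- Partial GIN (inverted) index over a subset of the JSON column data.
CREATE INVERTED INDEX active_profiles ON users (profile) WHERE id > 0;

-- Sequence that caches 10 values in memory per session.
CREATE SEQUENCE order_seq CACHE 10;

-- Reassign ownership of the sequence with the new OWNER TO parameter.
ALTER SEQUENCE order_seq OWNER TO max;

-- CREATE statements for all tables, views, and sequences
-- in the current database.
SHOW CREATE ALL TABLES;
```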

Recovery and I/O

-
-Version | Feature | Description
---------|---------|------------
-Enterprise | **Changefeed Topic Naming Improvements** | New [`CHANGEFEED` options](https://www.cockroachlabs.com/docs/v21.1/create-changefeed#options) give you more control over topic naming: The `full_table_name` option lets you use a fully-qualified table name in topics, subjects, schemas, and record output instead of the default table name, and can prevent unintended behavior when the same table name is present in multiple databases. The `avro_schema_prefix` option lets you use a fully-qualified schema name for a table instead of the default table name, and makes it possible for multiple databases or clusters to share the same schema registry when the same table name is present in multiple databases.
-Core | **Running Jobs Asynchronously** | You can use the new `DETACHED` option to run [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup#options), [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore#options), and [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import#import-options) jobs asynchronously and receive a job ID immediately rather than waiting for the job to finish. This option enables you to run such jobs within transactions.
-Core | **Import from Local Dump File** | The new [`cockroach import`](https://www.cockroachlabs.com/docs/v21.1/cockroach-import) command imports a database or table from a local `PGDUMP` or `MYSQLDUMP` file into a running cluster. This is useful for quick imports of files 15 MB or smaller. For larger imports, use the [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import) statement.
-Core | **Additional Import Options** | New [`IMPORT` options](https://www.cockroachlabs.com/docs/v21.1/import#import-options) give you more control over the import process's behavior: The `row_limit` option limits the number of rows to import, which is useful for finding errors quickly before executing a more time- and resource-consuming import; the `ignore_unsupported_statements` option ignores SQL statements in `PGDUMP` files that are unsupported by CockroachDB; and the `log_ignored_statements` option logs unsupported statements to cloud storage or userfile storage when `ignore_unsupported_statements` is enabled.
-Core | **Re-validating Indexes During `RESTORE`** | Incremental backups created by v20.2.2 and prior v20.2.x releases or v20.1.4 and prior v20.1.x releases may include incomplete data for indexes that were in the process of being created. Therefore, when incremental backups taken by these versions are restored by v21.1.0, any indexes created during those incremental backups will be re-validated by [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore#restore-from-incremental-backups).
-
-
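A sketch of the new job options in use; the backup collection URI, dump-file path, and log destination below are placeholders:

```sql
-- Run a backup asynchronously: the statement returns a job ID
-- immediately, which also allows it to run inside a transaction.
BACKUP DATABASE bank INTO 'gs://acme-backups/bank?AUTH=specified'
    WITH detached;

-- Trial import: stop after 10 rows to surface errors cheaply, and
-- skip (but log) PGDUMP statements CockroachDB does not support.
IMPORT PGDUMP 'userfile:///dump.sql'
    WITH row_limit = '10',
         ignore_unsupported_statements,
         log_ignored_statements = 'userfile:///ignored';
```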

Database Operations

-
-Version | Feature | Description
---------|---------|------------
-Core | **Logging Improvements** | Log events are now organized into [logging channels](https://www.cockroachlabs.com/docs/v21.1/logging-overview) that address different [use cases](https://www.cockroachlabs.com/docs/v21.1/logging-use-cases). Logging channels can be freely mapped to log sinks and routed to destinations outside CockroachDB (including external log collectors). All logging aspects, including message format (e.g., JSON), are now [configurable](https://www.cockroachlabs.com/docs/v21.1/configure-logs) via YAML.
-Core | **Cluster API v2.0** | This [new API](https://www.cockroachlabs.com/docs/v21.1/cluster-api) for monitoring clusters and nodes builds on prior endpoints, offering a consistent REST interface that's easier to use with your choice of tooling. The API offers a streamlined authentication process and developer-friendly [reference documentation](https://www.cockroachlabs.com/docs/api/cluster/v2).
-Core | **OpenShift-certified Kubernetes Operator** | You can now [deploy CockroachDB on the Red Hat OpenShift platform](https://www.cockroachlabs.com/docs/v21.1/deploy-cockroachdb-with-kubernetes-openshift) using the latest OpenShift-certified Kubernetes Operator.
-Core | **Auto TLS Certificate Setup** | Using the new [`cockroach connect` command](https://www.cockroachlabs.com/docs/v21.1/auto-tls), you can now let CockroachDB handle the creation and distribution among nodes of a cluster's CA (certificate authority) and node certificates. Note that this feature is an alpha release with core functionality that may not meet your requirements.
-Core | **Built-in Timezone Library** | The CockroachDB binary now includes a copy of the [`tzdata` library](https://www.cockroachlabs.com/docs/v21.1/recommended-production-settings#dependencies), which is required by certain features that use time zone data. If CockroachDB cannot find the `tzdata` library externally, it will now use this built-in copy.
-
-

Backward-incompatible changes

-
-Before [upgrading to CockroachDB v21.1](https://www.cockroachlabs.com/docs/v21.1/upgrade-cockroach-version), be sure to review the following backward-incompatible changes and adjust your deployment as necessary.
-
-- Rows containing empty arrays in [`ARRAY`](https://www.cockroachlabs.com/docs/v21.1/array) columns are now contained in [GIN indexes](https://www.cockroachlabs.com/docs/v21.1/inverted-indexes). This change is backward-incompatible because prior versions of CockroachDB will not be able to recognize and decode keys for empty arrays. Note that rows containing `NULL` values in an indexed column will still not be included in GIN indexes.
-- Concatenation between a non-null argument and a null argument is now typed as string concatenation, whereas it was previously typed as array concatenation. This means that the result of `NULL || 1` will now be `NULL` instead of `{1}`. To preserve the old behavior, the null argument can be cast to an explicit type.
-- The payload fields for certain event types in `system.eventlog` have been changed and/or renamed. Note that the payloads in `system.eventlog` were undocumented, so no guarantee was made about cross-version compatibility to this point. The list of changes includes (but is not limited to):
-  - `TargetID` has been renamed to `NodeID` for `node_join`.
-  - `TargetID` has been renamed to `TargetNodeID` for `node_decommissioning` / `node_decommissioned` / `node_recommissioned`.
-  - `NewDatabaseName` has been renamed to `NewDatabaseParent` for `convert_to_schema`.
-  - `grant_privilege` and `revoke_privilege` have been removed; they are replaced by `change_database_privilege`, `change_schema_privilege`, `change_type_privilege`, and `change_table_privilege`. Each event only reports a change for one user/role, so the `Grantees` field was renamed to `Grantee`.
-  - Each `drop_role` event now pertains to a single [user/role](https://www.cockroachlabs.com/docs/v21.1/authorization#sql-users).
-- The connection and authentication logging enabled by the [cluster settings](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `server.auth_log.sql_connections.enabled` and `server.auth_log.sql_sessions.enabled` previously used a text format that was hard to parse and integrate with external monitoring tools. This logging now uses the standard notable event mechanism, with standardized payloads, and the output format is structured; see the [reference documentation](https://www.cockroachlabs.com/docs/v21.1/eventlog) for details about the supported event types and payloads.
-- The format for SQL audit, execution, and query logs has changed from a crude space-delimited format to JSON. To opt out of this new behavior and restore the pre-v21.1 logging format, you can set the [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `sql.log.unstructured_entries.enabled` to `true`.
-- The [`cockroach debug ballast`](https://www.cockroachlabs.com/docs/v21.1/cockroach-debug-ballast) command now refuses to overwrite the target ballast file if it already exists. This change is intended to prevent mistaken uses of the `ballast` command by operators. Scripts that integrate `cockroach debug ballast` should consider adding an `rm` command.
-- Removed the `kv.atomic_replication_changes.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings). All replication changes on a range now use joint consensus.
-- Currently, [changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) connected to [Kafka versions < v1.0](https://docs.confluent.io/platform/current/installation/versions-interoperability.html) are not supported in CockroachDB v21.1.
-
-
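The concatenation change above is easy to see in a SQL shell; casting the null operand restores the old array-typed result:

```sql
SELECT NULL || 1;          -- now NULL: typed as string concatenation
SELECT NULL::INT[] || 1;   -- {1}: array concatenation, as before
```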

Deprecations

-
-- The CLI flags `--log-dir`, `--log-file-max-size`, `--log-file-verbosity`, and `--log-group-max-size` are deprecated. Logging configuration can now be specified via the `--log` parameter. See the [Logging](https://www.cockroachlabs.com/docs/v21.1/logging-overview) documentation for details.
-- The client-side command `\show` for the [SQL shell](https://www.cockroachlabs.com/docs/v21.1/cockroach-sql#commands) is deprecated in favor of the new command `\p`, which prints the contents of the query buffer entered so far.
-- Currently, Google Cloud Storage (GCS) connections default to the `cloudstorage.gs.default.key` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings). This default behavior will no longer be supported in v21.2. If you are relying on this default behavior, we recommend adjusting your queries and scripts to specify the `AUTH` parameter you want to use. Similarly, if you are using the `cloudstorage.gs.default.key` cluster setting to authorize your GCS connection, we recommend switching to `AUTH=specified` or `AUTH=implicit`. `AUTH=specified` will be the default behavior in v21.2 and beyond.
-
-
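For the GCS deprecation above, a statement that passes `AUTH` explicitly looks like the following; the bucket name is a placeholder, and `AUTH=specified` additionally expects a `CREDENTIALS` parameter carrying an encoded key:

```sql
-- Explicit credentials; this will be the default in v21.2 and beyond.
BACKUP DATABASE bank INTO 'gs://acme-backups/bank?AUTH=specified&CREDENTIALS=...';

-- Or rely on the node's ambient (machine) credentials.
BACKUP DATABASE bank INTO 'gs://acme-backups/bank?AUTH=implicit';
```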

Known limitations

-
-For information about new and unresolved limitations in CockroachDB v21.1, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v21.1/known-limitations).
-
-

Education

-
-Area | Topic | Description
------|-------|------------
-Cockroach University | **New Intro Courses** | [Introduction to Distributed SQL and CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+intro-to-distributed-sql-and-cockroachdb+self-paced/about) teaches you the core concepts behind distributed SQL databases and describes how CockroachDB fits into this landscape. [Practical First Steps with CockroachDB](https://university.cockroachlabs.com/courses/course-v1:crl+practical-first-steps-with-crdb+self-paced/about) is a hands-on sequel that gives you the tools to get started with CockroachDB.
-Cockroach University | **New Java Course** | [Fundamentals of CockroachDB for Java Developers](https://university.cockroachlabs.com/courses/course-v1:crl+fundamentals-of-crdb-for-java-devs+self-paced/about) guides you through building a full-stack vehicle-sharing app in Java using the popular Spring Data JPA framework with Spring Boot and a CockroachCloud Free cluster as the backend.
-Cockroach University | **New Query Performance Course** | [CockroachDB Query Performance for Developers](https://university.cockroachlabs.com/courses/course-v1:crl+cockroachdb-query-performance-for-devs+self-paced/about) teaches you key CockroachDB features and skills to improve application performance and functionality, such as analyzing a query execution plan, using indexes to avoid expensive full table scans, improving sorting performance, and efficiently querying fields in JSON records.
-Docs | **Quickstart** | Documented the simplest way to [get started with CockroachDB](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) for testing and app development by using CockroachCloud Free.
-Docs | **Developer Guidance** | Published more comprehensive, task-oriented [guidance for developers](https://www.cockroachlabs.com/docs/v21.1/developer-guide-overview) building applications on CockroachDB, including connecting to a cluster, designing a database schema, reading and writing data, optimizing query performance, and debugging applications.
-Docs | **Connection Pooling** | Added guidance on [planning, configuring, and using connection pools](https://www.cockroachlabs.com/docs/v21.1/connection-pooling) with CockroachDB, as well as examples for Java and Go applications.
-Docs | **Sample Apps on CockroachCloud Free** | Updated various Java, Python, Node.js, Ruby, and Go [sample app tutorials](https://www.cockroachlabs.com/docs/v21.1/example-apps) to offer CockroachCloud Free as the backend.
-Docs | **Licensing FAQs** | Updated the [Licensing FAQ](https://www.cockroachlabs.com/docs/v21.1/licensing-faqs) to explain our licensing types, how features align to licenses, how to perform basic tasks around licenses (e.g., obtain, set, verify, monitor, renew), and other common questions.
-Docs | **Product Limits** | Added [object sizing and scaling considerations](https://www.cockroachlabs.com/docs/v21.1/schema-design-overview#object-size-and-scaling-considerations), including specific hard limits imposed by CockroachDB and practical limits based on our performance testing and observations.
-Docs | **System Catalogs** | Documented important [internal system catalogs](https://www.cockroachlabs.com/docs/v21.1/system-catalogs) that provide non-stored data to client applications.
diff --git a/src/current/_includes/releases/v21.1/v21.1.1.md b/src/current/_includes/releases/v21.1/v21.1.1.md
deleted file mode 100644
index 1612bc6a0d5..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.1.md
+++ /dev/null
@@ -1,163 +0,0 @@
-## v21.1.1
-
-Release Date: May 24, 2021
-
-

General changes

-
-- Disabled read-triggered compactions to avoid instances where the [storage engine](https://www.cockroachlabs.com/docs/v21.1/architecture/storage-layer) would compact excessively. [#65345][#65345]
-
-

SQL language changes

-
-- Fixed [Julian date](https://wikipedia.org/wiki/Julian_day) parsing logic for incorrectly formatted input. [#63540][#63540]
-- The error payload returned to the client when a [`DATE`](https://www.cockroachlabs.com/docs/v21.1/date)/[`TIME`](https://www.cockroachlabs.com/docs/v21.1/time) conversion fails now contains more details about the difference between the values provided and the values that are expected. [#63540][#63540] {% comment %}doc{% endcomment %}
-- Introduced `ALTER TABLE ... ALTER COLUMN SET [VISIBLE|NOT VISIBLE]`, which marks columns as visible or not visible. [#63881][#63881] {% comment %}doc{% endcomment %}
-- When using [`ALTER TABLE ... LOCALITY REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/set-locality), CockroachDB would previously verify uniqueness of the new table, an unnecessary operation. This verification has been removed, improving the performance of updating localities to or from `REGIONAL BY ROW`. [#63880][#63880] {% comment %}doc{% endcomment %}
-- Improved cancellation behavior for [DistSQL](https://www.cockroachlabs.com/docs/v21.1/architecture/sql-layer) flows. [#65047][#65047] {% comment %}doc{% endcomment %}
-- [`ST_EstimatedExtent`](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#spatial-functions) now always returns `NULL`. This allows [GeoServer](http://geoserver.org/) to make progress in certain cases, and is a valid default return value for the function. [#65098][#65098] {% comment %}doc{% endcomment %}
-- Implemented `ST_Envelope` for [`Box2D`](https://www.cockroachlabs.com/docs/v21.1/spatial-glossary#data-types). [#65098][#65098] {% comment %}doc{% endcomment %}
-- Implemented a subset of variants for `ST_AsTWKB`, which encodes a geometry into the [`TWKB`](https://github.com/TWKB/Specification/blob/master/twkb.md) format. This allows the use of [GeoServer](https://docs.geoserver.org/latest/en/user/) with CockroachDB if the user selects "PreserveTopology" for the "Method used to simplify geometries" option on the "Store" page. [#65098][#65098] {% comment %}doc{% endcomment %}
-- Implemented `ST_Simplify` with [`preserveCollapsed`](https://postgis.net/docs/ST_Simplify.html) support. This unblocks the use of GeoServer with the default settings. [#65098][#65098] {% comment %}doc{% endcomment %}
-- [Lookup joins](https://www.cockroachlabs.com/docs/v21.1/joins) on indexes with [computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns) that are also either constrained by [`CHECK`](https://www.cockroachlabs.com/docs/v21.1/check) constraints or use an [`ENUM`](https://www.cockroachlabs.com/docs/v21.1/enum) data type may now choose a more optimal plan. [#65361][#65361] {% comment %}doc{% endcomment %}
-- Floating-point infinity values are now formatted as `Infinity` (or `-Infinity` if negative), for compatibility with PostgreSQL. [#65334][#65334] {% comment %}doc{% endcomment %}
-- [`INSERT INTO ... ON CONFLICT ... DO UPDATE SET`](https://www.cockroachlabs.com/docs/v21.1/insert) statements without predicates now acquire locks using the `FOR UPDATE` locking mode during their initial row scan, which improves performance for contended workloads. This behavior is configurable using the `enable_implicit_select_for_update` [session variable](https://www.cockroachlabs.com/docs/v21.1/set-vars) and the `sql.defaults.implicit_select_for_update.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings). [#65363][#65363] {% comment %}doc{% endcomment %}
-- [`ST_GeomFromGeoJSON(string)`](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#spatial-functions) is now marked as the preferred overload, meaning it will resolve correctly in more contexts. [#65442][#65442] {% comment %}doc{% endcomment %}
-
-
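Two of the changes above can be sketched as follows; the `users` table and `ssn` column are illustrative:

```sql
-- Hide a column from star expansion without dropping it.
ALTER TABLE users ALTER COLUMN ssn SET NOT VISIBLE;

-- Opt out of implicit FOR UPDATE locking for one session...
SET enable_implicit_select_for_update = off;
-- ...or change the default cluster-wide.
SET CLUSTER SETTING sql.defaults.implicit_select_for_update.enabled = false;
```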

Operational changes

-
-- [Replica garbage collection](https://www.cockroachlabs.com/docs/v21.1/architecture/storage-layer#garbage-collection) now checks replicas against the range descriptor every 12 hours (down from 10 days) to see if they should be removed. Replicas that fail to notice they have been removed from a range will therefore linger for at most 12 hours rather than 10 days. [#64589][#64589]
-
-

Command-line changes

-
-- The `--help` text for [`--log`](https://www.cockroachlabs.com/docs/v21.1/configure-logs) now references the fact that the flag accepts YAML syntax and also points to the `cockroach debug check-log-config` command. [#64948][#64948]
-- The new parameter `--log-config-file` simplifies the process of loading the logging configuration from a YAML file. Instead of passing the content of the file via the `--log` flag (e.g., `--log=$(cat file.yaml)`), it is now possible to pass the path to the file using `--log-config-file=file.yaml`. **Note:** Each occurrence of `--log` and `--log-config-file` on the command line overrides the configuration set from previous occurrences. [#64948][#64948] {% comment %}doc{% endcomment %}
-- The prefixes displayed before connection URLs when [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) starts up have been updated to better align with the output of [`cockroach start`](https://www.cockroachlabs.com/docs/v21.1/cockroach-start). [#63535][#63535] {% comment %}doc{% endcomment %}
-- The flag `--empty` for `cockroach demo` has been renamed to `--no-example-database`. `--empty` is still recognized but is marked as deprecated. Additionally, the user can now set the environment variable `COCKROACH_NO_EXAMPLE_DATABASE` to obtain this behavior automatically in every new demo session. [#63535][#63535] {% comment %}doc{% endcomment %}
-- CockroachDB no longer supports the `\demo add` and `\demo shutdown` commands for `cockroach demo` in `--global` configurations. [#63535][#63535] {% comment %}doc{% endcomment %}
-- Added a note when starting up a `--global` demo cluster that the `--global` configuration is experimental. [#63535][#63535] {% comment %}doc{% endcomment %}
-- The SQL shell (`cockroach demo`, [`cockroach sql`](https://www.cockroachlabs.com/docs/v21.1/cockroach-sql)) now attempts to better format values that are akin to time/date values, as well as floating-point numbers. [#63541][#63541] {% comment %}doc{% endcomment %}
-- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v21.1/cockroach-debug-zip) now attempts to pull data from multiple nodes concurrently, up to 15 nodes at a time. This change is meant to accelerate data collection when a cluster contains multiple nodes. This behavior can be changed with the new command-line flag `--concurrency`. [#64705][#64705] {% comment %}doc{% endcomment %}
-- The format of the informational messages printed by `cockroach debug zip` has changed when concurrent execution is enabled. [#64705][#64705] {% comment %}doc{% endcomment %}
-- The new command `cockroach debug list-files` shows the list of files that can be retrieved via the `cockroach debug zip` command. It supports the `--nodes` and `--exclude-nodes` parameters in the same way as `cockroach debug zip`. [#64705][#64705] {% comment %}doc{% endcomment %}
-- It is now possible to fine-tune which files get retrieved from the server nodes by the `cockroach debug zip` command, using the new flags `--include-files` and `--exclude-files`. These flags take glob patterns that are applied to the file names server-side. For example, to include only log files, use `--include-files='*.log'`. The command `cockroach debug list-files` also accepts these flags and can thus be used to experiment with them before running the `cockroach debug zip` command. [#64705][#64705] {% comment %}doc{% endcomment %}
-- The `cockroach debug zip` command now retrieves only the log files, goroutine dumps, and heap profiles pertaining to the 48 hours prior to the command invocation. This behavior is supported entirely client-side, which means that it is not necessary to upgrade the server nodes to put these newly configurable limits in place. The other data items retrieved by `cockroach debug zip` are not affected by this time limit. This behavior can be customized by the two new optional flags `--files-from` and `--files-until`; see the command-line help text for details. The two new flags are also supported by `cockroach debug list-files`. It is advised to experiment with `list-files` prior to issuing a `debug zip` command that may retrieve a large amount of data. [#64705][#64705] {% comment %}doc{% endcomment %}
-
-

DB Console changes

-
-- A [new metric](https://www.cockroachlabs.com/docs/v21.1/ui-overview) for the average number of runnable goroutines per CPU is now present in the runtime graphs. [#64750][#64750] {% comment %}doc{% endcomment %}
-- The Console now uses a new library for line graphs that renders metrics more efficiently. Customers with large clusters can now load and interact with metrics much faster than before. [#64479][#64479]
-- Placed a legend under charts on the [metrics pages](https://www.cockroachlabs.com/docs/v21.1/ui-overview-dashboard) if more than 10 series are being displayed. [#64479][#64479] {% comment %}doc{% endcomment %}
-
-

Bug fixes

-
-- Fixed a bug in the artificial latencies introduced by the `--global` flag to [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo). [#63535][#63535]
-- Fixed a bug where multiple concurrent invocations of [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v21.1/cockroach-debug-zip) could yield cluster instability. This bug had been present since CockroachDB v20.1. [#64083][#64083]
-- When a [`STRING`](https://www.cockroachlabs.com/docs/v21.1/string) value is converted to [`TIME`](https://www.cockroachlabs.com/docs/v21.1/time)/[`DATE`](https://www.cockroachlabs.com/docs/v21.1/date)/[`TIMESTAMP`](https://www.cockroachlabs.com/docs/v21.1/timestamp), and the `STRING` value contains invalid entries, the error messages now more clearly report which fields are missing or undesired. [#63540][#63540]
-- Fixed a bug where view expressions created using an [`ARRAY`](https://www.cockroachlabs.com/docs/v21.1/array) [`ENUM`](https://www.cockroachlabs.com/docs/v21.1/enum) without a name for the column could cause failures when dropping unrelated `ENUM` values. [#64272][#64272]
-- Fixed a bug causing an internal error in rare circumstances when executing queries via the [vectorized engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) that operate on columns of [`BOOL`](https://www.cockroachlabs.com/docs/v21.1/bool), [`BYTES`](https://www.cockroachlabs.com/docs/v21.1/bytes), [`INT`](https://www.cockroachlabs.com/docs/v21.1/int), and [`FLOAT`](https://www.cockroachlabs.com/docs/v21.1/float) types that have a mix of [`NULL`](https://www.cockroachlabs.com/docs/v21.1/null-handling) and non-`NULL` values. [#62915][#62915]
-- Fixed a bug causing CockroachDB to either return an error or crash when comparing an infinite `DATE` coming from a subquery against a `TIMESTAMP`. [#64074][#64074]
-- CockroachDB should now crash less often due to out-of-memory conditions caused by [subqueries](https://www.cockroachlabs.com/docs/v21.1/subqueries) returning multiple rows of large size. [#64727][#64727]
-- Previously, the [session trace](https://www.cockroachlabs.com/docs/v21.1/show-trace) could contain entries from the previous trace (i.e., `SET TRACING=ON` did not properly reset the trace). This is now fixed. [#64945][#64945]
-- Previously, CockroachDB could incorrectly cast [integers](https://www.cockroachlabs.com/docs/v21.1/int) of larger widths to integers of smaller widths (e.g., `INT8` to `INT2`) when the former was out of range for the latter. This is now fixed. [#65035][#65035]
-- Fixed a race condition where read-write requests during replica removal (e.g., during range merges or rebalancing) could be evaluated on the removed replica. [#64598][#64598]
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup) no longer resolves intents one-by-one. This eliminates the need to run a high-priority query to clean up intents to unblock `BACKUP` in the case of intent buildup. [#64881][#64881]
-- Fixed an internal error that could occur during planning when a query used the output of an [`UPDATE`](https://www.cockroachlabs.com/docs/v21.1/update)'s `RETURNING` clause, and one or more of the columns in the `RETURNING` clause were from a table specified in the `FROM` clause of the `UPDATE` (i.e., not from the table being updated). [#62960][#62960]
-- Fixed an index-out-of-range error that could occur when `crdb_internal_mvcc_timestamp` was selected as part of a [`view`](https://www.cockroachlabs.com/docs/v21.1/views). It is now possible to select `crdb_internal_mvcc_timestamp` as part of a view as long as it is aliased with a different name. [#63632][#63632]
-- Fixed a bug in the application of environment variables used to populate defaults for certain [command-line flags](https://www.cockroachlabs.com/docs/v21.1/cockroach-commands), for example `COCKROACH_URL` for `--url`. [#63539][#63539]
-- Fixed a stack overflow that could happen in some corner cases involving [partial indexes](https://www.cockroachlabs.com/docs/v21.1/partial-indexes) with predicates containing `(x IS NOT NULL)`. [#64738][#64738]
-- Providing a constant value as an [`ORDER BY`](https://www.cockroachlabs.com/docs/v21.1/order-by) value in an ordered-set [aggregate](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#aggregate-functions), such as `percentile_disc` or `percentile_cont`, no longer returns an error. This bug had been present since ordered-set aggregates were added in v20.2. [#64902][#64902]
-- Queries that reference tables with [`GEOMETRY` or `GEOGRAPHY`](https://www.cockroachlabs.com/docs/v21.1/spatial-glossary#data-types) [GIN indexes](https://www.cockroachlabs.com/docs/v21.1/inverted-indexes) and that call [geospatial functions](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#spatial-functions) with constant [`NULL`](https://www.cockroachlabs.com/docs/v21.1/null-handling) values cast to a type, like `NULL::GEOMETRY` or `NULL::FLOAT8`, no longer error. This bug had been present since v20.2. [#63003][#63003]
-- Fixed a bug causing CockroachDB to incorrectly calculate the latency from remote nodes when the latency info was shown on [`EXPLAIN ANALYZE (DISTSQL)`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) diagrams. [#63089][#63089]
-- Fixed a bug causing the [`ZONECONFIG` privilege](https://www.cockroachlabs.com/docs/v21.1/authorization#privileges) on tables and databases to be incorrectly interpreted as `USAGE`, which could corrupt a table and/or database because `USAGE` is an invalid privilege for tables and databases. [#65160][#65160]
-- Fixed a bug which could cause a panic when running an `EXECUTE` of a previously prepared statement with a `REGCLASS` or `REGTYPE` parameter or a [user-defined type](https://www.cockroachlabs.com/docs/v21.1/enum) argument after running [`BEGIN AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v21.1/as-of-system-time) with an invalid timestamp. [#65150][#65150]
-- Fixed a bug which could cause a panic when issuing a query referencing a [user-defined type](https://www.cockroachlabs.com/docs/v21.1/enum) as a placeholder. [#65150][#65150]
-- Fixed a bug introduced in v20.2 that caused rows to be incorrectly de-duplicated from a scan with a non-unique index. [#65284][#65284]
-- Fixed a bug where interval math on a [`TIMESTAMPTZ`](https://www.cockroachlabs.com/docs/v21.1/timestamp) value on a DST boundary would incorrectly add or subtract an extra hour. [#65095][#65095]
-- Fixed a bug where `date_trunc` on a [`TIME`](https://www.cockroachlabs.com/docs/v21.1/time) value on a DST boundary could switch time zones and produce an incorrect result. [#65095][#65095]
-- Improved memory utilization under some write-heavy workloads, added better logging to the [storage engine](https://www.cockroachlabs.com/docs/v21.1/architecture/storage-layer) to surface compaction type, and persisted previously missing Pebble options in the `OPTIONS` file. [#65308][#65308]
-- Fixed a bug causing `revision_history` cluster [backups](https://www.cockroachlabs.com/docs/v21.1/backup) to not include dropped databases. Previously, dropped databases could not be restored from backups that were taken after the database was dropped. [#65314][#65314]
-- Fixed a bug causing [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/show-create) output to not display the [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-zone) of a table or index if there were no partitions, even if there were zone configurations on the index or table. [#65176][#65176]
-- Previously, concatenating a non-[`STRING`](https://www.cockroachlabs.com/docs/v21.1/string) value with a `STRING` value would not use the normal `STRING` representation of the non-`STRING` value. Now it does, so `true || 'string'` returns `truestring` instead of `tstring`. [#65331][#65331]
-- Large [`SELECT FOR UPDATE`](https://www.cockroachlabs.com/docs/v21.1/selection-queries) scans will no longer prevent the memory associated with their entire result set from being reclaimed by the Go garbage collector for the lifetime of the locks that they acquire. [#65359][#65359]
-- Fixed a rare race that could lead to a 3-second stall before a [Raft leader](https://www.cockroachlabs.com/docs/v21.1/architecture/replication-layer) was elected on a range immediately after it was split off from its left-hand neighbor. [#65356][#65356]
-- Fixed a bug where `SHOW CREATE TABLE` would show the zone configurations of a table of the same name from a different schema. [#65368][#65368]
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup), [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore), and [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import) are now more resilient to node failures and will retry automatically. [#65391][#65391]
-- Previously, replica rebalancing could sometimes rebalance to stores on dead nodes. This bug is now fixed. [#65428][#65428]
-
-
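The string-concatenation fix above ([#65331]) can be checked directly in a SQL shell:

```sql
SELECT true || 'string';  -- 'truestring' (previously 'tstring')
```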

Performance improvements

- The optimizer now always prefers to plan a [locality-optimized](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) scan over a regular scan when possible. This may enable the execution engine to avoid communicating with remote nodes, thus reducing query latency. [#65088][#65088]
- The optimizer will now try to plan anti lookup joins using "locality-optimized search". This optimization applies for anti lookup joins into [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview#regional-by-row-tables) tables (i.e., the right side of the join is a `REGIONAL BY ROW` table), and if enabled, it means that the execution engine will first search locally for matching rows before searching remote nodes. If a matching row is found in a local node, remote nodes will not be searched. This optimization may improve the performance of [foreign key](https://www.cockroachlabs.com/docs/v21.1/foreign-key) checks when rows are inserted or updated in a table that references a foreign key in a `REGIONAL BY ROW` table. [#63118][#63118]
- Certain queries containing `IN ()` conditions now run faster. [#63866][#63866]
- Improved intent cleanup performance for aborted transactions. [#64588][#64588]
- Adjusted the estimated cost of locality-optimized anti joins in the optimizer so that they are always chosen over non-locality-optimized anti joins when possible. This makes it more likely that queries involving anti joins (such as inserts with foreign key checks) can avoid visiting remote regions. This results in lower latency. [#65131][#65131]
- The optimizer can now avoid full table scans for queries with a `LIMIT` and [`ORDER BY`](https://www.cockroachlabs.com/docs/v21.1/order-by) clause, where the `ORDER BY` columns form a prefix on an index in a `REGIONAL BY ROW` table (excluding the hidden `crdb_region` column). Instead of a full table scan, at most `LIMIT` rows are scanned per region. [#65287][#65287]

Contributors

This release includes 100 merged PRs by 33 authors.
We would like to thank the following contributors from the CockroachDB community:

- Kumar Akshay
- Mohammad Aziz (first-time contributor)
- kurokochin (first-time contributor)
[#62915]: https://github.com/cockroachdb/cockroach/pull/62915
[#62960]: https://github.com/cockroachdb/cockroach/pull/62960
[#63003]: https://github.com/cockroachdb/cockroach/pull/63003
[#63089]: https://github.com/cockroachdb/cockroach/pull/63089
[#63118]: https://github.com/cockroachdb/cockroach/pull/63118
[#63535]: https://github.com/cockroachdb/cockroach/pull/63535
[#63539]: https://github.com/cockroachdb/cockroach/pull/63539
[#63540]: https://github.com/cockroachdb/cockroach/pull/63540
[#63541]: https://github.com/cockroachdb/cockroach/pull/63541
[#63632]: https://github.com/cockroachdb/cockroach/pull/63632
[#63866]: https://github.com/cockroachdb/cockroach/pull/63866
[#63880]: https://github.com/cockroachdb/cockroach/pull/63880
[#63881]: https://github.com/cockroachdb/cockroach/pull/63881
[#64074]: https://github.com/cockroachdb/cockroach/pull/64074
[#64083]: https://github.com/cockroachdb/cockroach/pull/64083
[#64272]: https://github.com/cockroachdb/cockroach/pull/64272
[#64479]: https://github.com/cockroachdb/cockroach/pull/64479
[#64588]: https://github.com/cockroachdb/cockroach/pull/64588
[#64589]: https://github.com/cockroachdb/cockroach/pull/64589
[#64598]: https://github.com/cockroachdb/cockroach/pull/64598
[#64705]: https://github.com/cockroachdb/cockroach/pull/64705
[#64727]: https://github.com/cockroachdb/cockroach/pull/64727
[#64738]: https://github.com/cockroachdb/cockroach/pull/64738
[#64750]: https://github.com/cockroachdb/cockroach/pull/64750
[#64881]: https://github.com/cockroachdb/cockroach/pull/64881
[#64902]: https://github.com/cockroachdb/cockroach/pull/64902
[#64945]: https://github.com/cockroachdb/cockroach/pull/64945
[#64948]: https://github.com/cockroachdb/cockroach/pull/64948
[#65035]: https://github.com/cockroachdb/cockroach/pull/65035
[#65047]: https://github.com/cockroachdb/cockroach/pull/65047
[#65088]: https://github.com/cockroachdb/cockroach/pull/65088
[#65095]: https://github.com/cockroachdb/cockroach/pull/65095
[#65098]: https://github.com/cockroachdb/cockroach/pull/65098
[#65131]: https://github.com/cockroachdb/cockroach/pull/65131
[#65150]: https://github.com/cockroachdb/cockroach/pull/65150
[#65160]: https://github.com/cockroachdb/cockroach/pull/65160
[#65176]: https://github.com/cockroachdb/cockroach/pull/65176
[#65284]: https://github.com/cockroachdb/cockroach/pull/65284
[#65287]: https://github.com/cockroachdb/cockroach/pull/65287
[#65308]: https://github.com/cockroachdb/cockroach/pull/65308
[#65314]: https://github.com/cockroachdb/cockroach/pull/65314
[#65331]: https://github.com/cockroachdb/cockroach/pull/65331
[#65334]: https://github.com/cockroachdb/cockroach/pull/65334
[#65345]: https://github.com/cockroachdb/cockroach/pull/65345
[#65356]: https://github.com/cockroachdb/cockroach/pull/65356
[#65357]: https://github.com/cockroachdb/cockroach/pull/65357
[#65358]: https://github.com/cockroachdb/cockroach/pull/65358
[#65359]: https://github.com/cockroachdb/cockroach/pull/65359
[#65361]: https://github.com/cockroachdb/cockroach/pull/65361
[#65363]: https://github.com/cockroachdb/cockroach/pull/65363
[#65368]: https://github.com/cockroachdb/cockroach/pull/65368
[#65391]: https://github.com/cockroachdb/cockroach/pull/65391
[#65428]: https://github.com/cockroachdb/cockroach/pull/65428
[#65442]: https://github.com/cockroachdb/cockroach/pull/65442

diff --git a/src/current/_includes/releases/v21.1/v21.1.10.md b/src/current/_includes/releases/v21.1/v21.1.10.md
deleted file mode 100644
index 9e0a4bf0459..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.10.md
+++ /dev/null
@@ -1,15 +0,0 @@
## v21.1.10

Release Date: October 7, 2021

Bug fixes

- Fixed a bug that caused the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) to erroneously discard [`WHERE` filters](https://www.cockroachlabs.com/docs/v21.1/selection-queries) when executing prepared statements, causing incorrect results to be returned. This bug was present since [v21.1.9](v21.1.html#v21-1-9). [#71116][#71116]

Contributors

This release includes 1 merged PR by 1 author.

[#71116]: https://github.com/cockroachdb/cockroach/pull/71116

diff --git a/src/current/_includes/releases/v21.1/v21.1.11.md b/src/current/_includes/releases/v21.1/v21.1.11.md
deleted file mode 100644
index fa24ed4afbf..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.11.md
+++ /dev/null
@@ -1,90 +0,0 @@
## v21.1.11

Release Date: October 18, 2021

Enterprise edition changes

- Fixed a bug where [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v21.1/create-changefeed)s would fail to correctly handle a [primary key](https://www.cockroachlabs.com/docs/v21.1/primary-key) change. [#69926][#69926]
- [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v21.1/create-changefeed)s no longer fail when started on [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/set-locality#set-the-table-locality-to-regional-by-row) tables. Note that in `REGIONAL BY ROW` tables, the [`crdb_region`](https://www.cockroachlabs.com/docs/v21.1/set-locality#crdb_region) column becomes part of the primary [index](https://www.cockroachlabs.com/docs/v21.1/indexes). Thus, changing an existing table to `REGIONAL BY ROW` will trigger a [changefeed](https://www.cockroachlabs.com/docs/v21.1/create-changefeed) backfill with new messages emitted using the new composite [primary key](https://www.cockroachlabs.com/docs/v21.1/primary-key). [#70022][#70022]
- Fixed a bug that could have led to duplicate instances of a single [changefeed](https://www.cockroachlabs.com/docs/v21.1/create-changefeed) [job](https://www.cockroachlabs.com/docs/v21.1/show-jobs) running for prolonged periods of time. [#70926][#70926]

SQL language changes

- Added `crdb_internal.(node|cluster)_distsql_flows` virtual tables that expose information about the flows of [DistSQL](https://www.cockroachlabs.com/docs/v21.1/architecture/sql-layer#distsql) execution scheduled on remote nodes. These tables do not include information about non-distributed queries nor about local flows (from the perspective of the gateway node of the query). [#66332][#66332]
- Added new metrics to track [schema](https://www.cockroachlabs.com/docs/v21.1/online-schema-changes) [job](https://www.cockroachlabs.com/docs/v21.1/show-jobs) failures (`sql.schema_changer.errors.all`, `sql.schema_changer.errors.constraint_violation`, `sql.schema_changer.errors.uncategorized`), with errors inside the `crdb_internal.feature_usage` table. [#70621][#70621]
- Fixed a bug where [`LINESTRINGZ`](https://www.cockroachlabs.com/docs/v21.1/linestring), `LINESTRINGZM`, and `LINESTRINGM` could not be used as column types. [#70749][#70749]

Operational changes

- Added the [cluster settings](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `sql.defaults.transaction_rows_written_log`, `sql.defaults.transaction_rows_written_err`, `sql.defaults.transaction_rows_read_log`, and `sql.defaults.transaction_rows_read_err` (as well as the corresponding [session variables](https://www.cockroachlabs.com/docs/v21.1/set-vars#supported-variables)). These settings determine the "size" of [transactions](https://www.cockroachlabs.com/docs/v21.1/transactions), in written and read rows, that are logged to the `SQL_PERF` [logging channel](https://www.cockroachlabs.com/docs/v21.1/logging-overview). Note that the internal queries used by CockroachDB cannot error out, but can be logged instead to the `SQL_INTERNAL_PERF` [logging channel](https://www.cockroachlabs.com/docs/v21.1/logging-overview). The "written" limits apply to [`INSERT`](https://www.cockroachlabs.com/docs/v21.1/insert), `INSERT INTO SELECT FROM`, [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v21.1/insert), [`UPSERT`](https://www.cockroachlabs.com/docs/v21.1/upsert), [`UPDATE`](https://www.cockroachlabs.com/docs/v21.1/update), and [`DELETE`](https://www.cockroachlabs.com/docs/v21.1/delete), whereas the "read" limits apply to [`SELECT` statements](https://www.cockroachlabs.com/docs/v21.1/selection-queries) in addition to all of the others listed. These limits will not apply to [`CREATE TABLE AS SELECT`](https://www.cockroachlabs.com/docs/v21.1/create-table), [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import), [`TRUNCATE`](https://www.cockroachlabs.com/docs/v21.1/truncate), [`DROP TABLE`](https://www.cockroachlabs.com/docs/v21.1/drop-table), [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v21.1/alter-table), [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup), [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore), or [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v21.1/create-statistics) statements. Note that enabling the `transaction_rows_read_err` setting comes at the cost of disabling the auto-commit optimization for mutation statements in [implicit transactions](https://www.cockroachlabs.com/docs/v21.1/transactions#individual-statements). [#70175][#70175]
- Adjusted the meaning of the recently introduced [session variables](https://www.cockroachlabs.com/docs/v21.1/set-vars#supported-variables) `transaction_rows_written_err` and `transaction_rows_read_err` (as well as the corresponding `_log` variables) to indicate the largest number of rows that is still allowed. Prior to this change, reaching the limit would result in an error; now an error results only if the limit is exceeded. [#70175][#70175]
- Added the [session variable](https://www.cockroachlabs.com/docs/v21.1/set-vars#supported-variables) `large_full_scan_rows`, as well as the corresponding [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `sql.defaults.large_full_scan_rows`. This setting determines which tables are considered "large" for the purposes of enabling the `disallow_full_table_scans` feature to reject full table/index scans of such "large" tables. The default value for the new setting is `0`, meaning that the previous behavior of rejecting all full table/index scans is kept. Internally-issued queries aren't affected, and the new setting has no impact when the `disallow_full_table_scans` setting is not enabled. [#70294][#70294]
- CockroachDB now records a [log event](https://www.cockroachlabs.com/docs/v21.1/eventlog) and increments a counter when removing an expired session. [#68538][#68538]
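As a sketch of how these transaction-size limits might be configured (the threshold values are illustrative, not defaults):

```sql
-- Error on transactions writing more than 1000 rows,
-- and log transactions reading more than 500 rows:
SET CLUSTER SETTING sql.defaults.transaction_rows_written_err = 1000;
SET CLUSTER SETTING sql.defaults.transaction_rows_read_log = 500;

-- The corresponding session variables for the current session:
SET transaction_rows_written_err = 1000;
SET transaction_rows_read_log = 500;
```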

Command-line changes

- Version details have been added to all JSON-formatted [log entries](https://www.cockroachlabs.com/docs/v21.1/logging-overview). [#70451][#70451]

DB Console changes

- Renamed references to the [UI console](https://www.cockroachlabs.com/docs/v21.1/ui-overview) from "Admin UI" to "DB Console". [#70870][#70870]

Bug fixes

- Fixed a bug where cluster revision history [backups](https://www.cockroachlabs.com/docs/v21.1/backup) may have included dropped descriptors in the "current" snapshot of descriptors on the cluster. [#69650][#69650]
- Fixed a regression in [statistics](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer#table-statistics) estimation in the optimizer for very large tables. The bug, which has been present since [v20.2.14]({% link releases/v20.2.md %}#v20-2-14) and [v21.1.7](v21.1.html#v21-1-7), could cause the optimizer to severely underestimate the number of rows returned by an expression. [#69953][#69953]
- Fixed a bug that could cause prolonged unavailability due to [lease transfer](https://www.cockroachlabs.com/docs/v21.1/architecture/replication-layer#epoch-based-leases-table-data) of a replica that may be in need of a [Raft](https://www.cockroachlabs.com/docs/v21.1/architecture/replication-layer#raft) snapshot. [#69964][#69964]
- Fixed a bug where, after a temporary node outage, other nodes in the cluster could fail to connect to the restarted node due to their circuit breakers not resetting. This would manifest in the [logs](https://www.cockroachlabs.com/docs/v21.1/logging-overview) via messages of the form `unable to dial nXX: breaker open`, where `XX` is the ID of the restarted node. Note that such errors are expected for nodes that are truly unreachable, and may still occur around the time of the restart, but for no longer than a few seconds. [#70311][#70311]
- [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) will now correctly ignore dropped databases that may have been included in cluster [backups](https://www.cockroachlabs.com/docs/v21.1/backup) with revision history. [#69791][#69791]
- Fixed a bug where, if tracing was enabled (using the `sql.trace.txn.enable_threshold` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings)), [statement diagnostics](https://www.cockroachlabs.com/docs/v21.1/explain-analyze#explain-analyze-debug) collection (`EXPLAIN ANALYZE (DEBUG)`) would not work. [#70035][#70035]
- Fixed a bug in full cluster [restores](https://www.cockroachlabs.com/docs/v21.1/restore) where dropped descriptor revisions would cause the restore [job](https://www.cockroachlabs.com/docs/v21.1/show-jobs) to fail. [#69654][#69654]
- Fixed a bug where [schema changes](https://www.cockroachlabs.com/docs/v21.1/online-schema-changes) that included both a column addition and [primary key](https://www.cockroachlabs.com/docs/v21.1/primary-key) change in the same [transaction](https://www.cockroachlabs.com/docs/v21.1/transactions) resulted in a failed [changefeed](https://www.cockroachlabs.com/docs/v21.1/create-changefeed). [#70022][#70022]
- Fixed a bug which prevented proper copying of [partitions](https://www.cockroachlabs.com/docs/v21.1/partitioning) and [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) when de-interleaving a table with [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.1/alter-primary-key) when the columns did not appear in exactly the same order in the parent and child tables. [#70695][#70695]
- Fixed a bug where the exit status of the [`cockroach` command](https://www.cockroachlabs.com/docs/v21.1/cockroach-commands) did not follow the previously-documented table of exit status codes when an error occurred during command startup. Only errors occurring after startup were reported using the correct code. This bug had existed since reference exit status codes were introduced. [#70675][#70675]
- DNS unavailability during range 1 leaseholder loss will no longer cause significant latency increases for queries and other operations. [#70134][#70134]
- Fixed an issue in the [Pebble](https://www.cockroachlabs.com/docs/v21.1/architecture/storage-layer#pebble) storage engine where a key could be dropped from an LSM snapshot if the key was deleted by a range tombstone after the snapshot was acquired. [#70967][#70967]
- Fixed an internal error with [joins](https://www.cockroachlabs.com/docs/v21.1/joins) that are both `LATERAL` and `NATURAL`/`USING`. [#70800][#70800]
- Fixed `Z` and `M` coordinate columns causing a panic for the [`geometry_columns`](https://www.cockroachlabs.com/docs/v21.1/spatial-glossary#geometry_columns) and [`geography_columns`](https://www.cockroachlabs.com/docs/v21.1/spatial-glossary#geography_columns) tables. [#70813][#70813]
- Fixed a bug that could cause a CockroachDB node to deadlock upon startup in extremely rare cases. If encountered, a stack trace generated by `SIGQUIT` would have shown the function `makeStartLine()` near the top. This bug had existed since [v21.1]({% link releases/v21.1.md %}#v21-1-0). [#71408][#71408]

Performance improvements

- The conversion performance of [WKT](https://www.cockroachlabs.com/docs/v21.1/well-known-text) to a spatial type is slightly improved. [#70181][#70181]

Miscellaneous

- Added a new `as_json` option which renders [backup](https://www.cockroachlabs.com/docs/v21.1/backup) manifests as JSON values. [#70298][#70298]

Contributors

This release includes 41 merged PRs by 24 authors.

[#66332]: https://github.com/cockroachdb/cockroach/pull/66332
[#68538]: https://github.com/cockroachdb/cockroach/pull/68538
[#69650]: https://github.com/cockroachdb/cockroach/pull/69650
[#69654]: https://github.com/cockroachdb/cockroach/pull/69654
[#69791]: https://github.com/cockroachdb/cockroach/pull/69791
[#69926]: https://github.com/cockroachdb/cockroach/pull/69926
[#69953]: https://github.com/cockroachdb/cockroach/pull/69953
[#69964]: https://github.com/cockroachdb/cockroach/pull/69964
[#70022]: https://github.com/cockroachdb/cockroach/pull/70022
[#70035]: https://github.com/cockroachdb/cockroach/pull/70035
[#70134]: https://github.com/cockroachdb/cockroach/pull/70134
[#70175]: https://github.com/cockroachdb/cockroach/pull/70175
[#70181]: https://github.com/cockroachdb/cockroach/pull/70181
[#70294]: https://github.com/cockroachdb/cockroach/pull/70294
[#70298]: https://github.com/cockroachdb/cockroach/pull/70298
[#70311]: https://github.com/cockroachdb/cockroach/pull/70311
[#70451]: https://github.com/cockroachdb/cockroach/pull/70451
[#70621]: https://github.com/cockroachdb/cockroach/pull/70621
[#70675]: https://github.com/cockroachdb/cockroach/pull/70675
[#70695]: https://github.com/cockroachdb/cockroach/pull/70695
[#70749]: https://github.com/cockroachdb/cockroach/pull/70749
[#70800]: https://github.com/cockroachdb/cockroach/pull/70800
[#70813]: https://github.com/cockroachdb/cockroach/pull/70813
[#70870]: https://github.com/cockroachdb/cockroach/pull/70870
[#70926]: https://github.com/cockroachdb/cockroach/pull/70926
[#70967]: https://github.com/cockroachdb/cockroach/pull/70967
[#71408]: https://github.com/cockroachdb/cockroach/pull/71408

diff --git a/src/current/_includes/releases/v21.1/v21.1.12.md b/src/current/_includes/releases/v21.1/v21.1.12.md
deleted file mode 100644
index 164e9900b00..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.12.md
+++ /dev/null
@@ -1,86 +0,0 @@
## v21.1.12

Release Date: November 15, 2021

Security updates

- Added the `--external-io-enable-non-admin-implicit-access` flag to [cluster-starting `cockroach` commands](https://www.cockroachlabs.com/docs/v21.1/cockroach-start). This flag removes the [admin-only](https://www.cockroachlabs.com/docs/v21.1/authorization#admin-role) restriction on interacting with arbitrary network endpoints and allows implicit authorization for operations such as [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup), [`IMPORT`](https://www.cockroachlabs.com/docs/v21.1/import), or [`EXPORT`](https://www.cockroachlabs.com/docs/v21.1/export). [#71794][#71794]

SQL language changes

- The [`pg_index` table](https://www.cockroachlabs.com/docs/v21.1/pg-catalog) now populates the `indpred` column for [partial indexes](https://www.cockroachlabs.com/docs/v21.1/partial-indexes). This column was previously `NULL` for partial indexes. [#70897][#70897]
- Added [`crdb_internal` tables](https://www.cockroachlabs.com/docs/v21.1/crdb-internal) `cross_db_references` and `interleaved` for detecting cross-database references and interleaved objects. [#72298][#72298]
- Statements with multiple [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v21.1/insert), [`UPSERT`](https://www.cockroachlabs.com/docs/v21.1/upsert), [`UPDATE`](https://www.cockroachlabs.com/docs/v21.1/update), or [`DELETE`](https://www.cockroachlabs.com/docs/v21.1/delete) [subqueries](https://www.cockroachlabs.com/docs/v21.1/subqueries) that modify the same table are now disallowed, as these statements can cause data corruption if they modify the same row multiple times. At the risk of data corruption, you can allow these statements by setting the `sql.multiple_modifications_of_table.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) to `true`. To check for corruption, use the `EXPERIMENTAL SCRUB` command. For example: `EXPERIMENTAL SCRUB TABLE t WITH OPTIONS INDEX ALL;`. [#71621][#71621]
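A minimal sketch of the `indpred` change (the table and index names are hypothetical):

```sql
CREATE TABLE t (a INT, b INT);
CREATE INDEX partial_idx ON t (a) WHERE b > 10;

-- indpred was previously NULL for partial indexes; it now
-- contains the index's predicate expression.
SELECT i.indpred
FROM pg_catalog.pg_index AS i
JOIN pg_catalog.pg_class AS c ON c.oid = i.indexrelid
WHERE c.relname = 'partial_idx';
```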

Operational changes

- `IMPORT` now allows non-admin access to some previously-restricted network endpoints on clusters started with the `--external-io-enable-non-admin-implicit-access` flag. [#72444][#72444]
- A new implementation of `BACKUP` file handling is now available with the [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `bulkio.backup.experimental_21_2_mode.enabled` set to `true`. [#71830][#71830]

DB Console changes

- Non-admin users of the DB Console now have the ability to view the [Cluster Overview page](https://www.cockroachlabs.com/docs/v21.1/ui-cluster-overview-page). Users without the [admin role](https://www.cockroachlabs.com/docs/v21.1/authorization#admin-role) can see data about their nodes, but information such as command line arguments, environment variables, and node IP addresses and DNS names is hidden. [#71719][#71719]
- Replicas waiting for garbage collection were preventing the [Range Report page](https://www.cockroachlabs.com/docs/v21.1/ui-debug-pages) from loading due to a JavaScript error. The page now loads and displays an empty "Replica Type" while in this state. [#71745][#71745]

Bug fixes

- Fixed a bug causing CockroachDB to encounter an internal error when executing a [zigzag join](https://www.cockroachlabs.com/docs/v21.1/experimental-features) in some cases. [#71255][#71255]
- Fixed a bug causing CockroachDB to incorrectly read the data of a [unique secondary index](https://www.cockroachlabs.com/docs/v21.1/unique) that used to be a primary index that was created via `ALTER PRIMARY KEY` in 21.1.x or prior versions. [#71587][#71587]
- CockroachDB now avoids dialing nodes in performance-critical code paths, which could cause substantial latency when encountering unresponsive nodes (e.g., when a VM or server is shut down). [#70488][#70488]
- Fixed a bug causing the [TPC-C workload](https://www.cockroachlabs.com/docs/v21.1/cockroach-workload) to improperly assign workers to the local partitions in a [multi-region setup](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview). [#71753][#71753]
- Fixed a bug that caused internal errors when collecting statistics on tables with virtual [computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns). [#71284][#71284]
- Fixed a bug causing CockroachDB to crash when network connectivity was impaired in some cases. The stack trace (in [`cockroach-stderr.log`](https://www.cockroachlabs.com/docs/v21.1/logging)) would contain `server.(*statusServer).NodesUI`. [#71719][#71719]
- Fixed a panic that could occur with invalid GeoJSON input using [`ST_GeomFromGeoJSON`/`ST_GeogFromGeoJSON`](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators). [#71308][#71308]
- Fixed a bug causing cluster [backups](https://www.cockroachlabs.com/docs/v21.1/backup) to back up opt-out system tables unexpectedly. [#71922][#71922]
- Fixed a bug that caused [`ALTER COLUMN TYPE`](https://www.cockroachlabs.com/docs/v21.1/alter-column) statements to fail unexpectedly. [#71166][#71166]
- Connection timeout for gRPC connections is now set to `20s` to match the pre-20.2 default value. [#71516][#71516]
- Fixed a bug that prevented the rollback of [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.1/alter-primary-key) when the old primary key was interleaved. [#71852][#71852]
- Fixed a bug that caused incorrect results for some queries that utilized a [zigzag join](https://www.cockroachlabs.com/docs/v21.1/experimental-features). The bug could only reproduce on tables with at least two multi-column indexes with nullable columns. The bug was present since version 19.2.0. [#71847][#71847]
- Fixed a bug causing `IMPORT` statements to incorrectly reset progress upon resumption. [#72086][#72086]
- Fixed a bug causing schema changes running during node shutdown to fail permanently. [#71558][#71558]
- Fixed an incorrect "no data source matches prefix" error for queries that use a set-returning function on the right-hand side of a [`JOIN`](https://www.cockroachlabs.com/docs/v21.1/joins) (unless `LATERAL` is explicitly specified). [#71443][#71443]
- Long running [`ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) statements no longer result in GC TTL errors. [#69599][#69599]
- `IMPORT INTO` no longer crashes when encountering unresolved write intents. [#71982][#71982]
- Fixed a bug causing tracing to external tracers to inadvertently stop after the Enqueue Range or the Allocator [debug pages](https://www.cockroachlabs.com/docs/v21.1/ui-debug-pages) were used. [#72463][#72463]
- Fixed a bug which prevented the Data Distribution debug page from working on clusters which were upgraded from 19.2 or earlier. [#72507][#72507]

Performance improvements

- Slightly reduced the memory usage of [`ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) and [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v21.1/create-statistics) statements. [#71771][#71771]
- To reduce transient memory usage, CockroachDB now performs intent cleanup during garbage collection in batches as they are found, instead of performing a single cleanup at the end of the garbage collection cycle. [#67590][#67590]

Contributors

This release includes 51 merged PRs by 23 authors.

[#67590]: https://github.com/cockroachdb/cockroach/pull/67590
[#69599]: https://github.com/cockroachdb/cockroach/pull/69599
[#70488]: https://github.com/cockroachdb/cockroach/pull/70488
[#70897]: https://github.com/cockroachdb/cockroach/pull/70897
[#71166]: https://github.com/cockroachdb/cockroach/pull/71166
[#71255]: https://github.com/cockroachdb/cockroach/pull/71255
[#71284]: https://github.com/cockroachdb/cockroach/pull/71284
[#71308]: https://github.com/cockroachdb/cockroach/pull/71308
[#71443]: https://github.com/cockroachdb/cockroach/pull/71443
[#71516]: https://github.com/cockroachdb/cockroach/pull/71516
[#71558]: https://github.com/cockroachdb/cockroach/pull/71558
[#71587]: https://github.com/cockroachdb/cockroach/pull/71587
[#71621]: https://github.com/cockroachdb/cockroach/pull/71621
[#71719]: https://github.com/cockroachdb/cockroach/pull/71719
[#71745]: https://github.com/cockroachdb/cockroach/pull/71745
[#71753]: https://github.com/cockroachdb/cockroach/pull/71753
[#71771]: https://github.com/cockroachdb/cockroach/pull/71771
[#71794]: https://github.com/cockroachdb/cockroach/pull/71794
[#71830]: https://github.com/cockroachdb/cockroach/pull/71830
[#71847]: https://github.com/cockroachdb/cockroach/pull/71847
[#71852]: https://github.com/cockroachdb/cockroach/pull/71852
[#71922]: https://github.com/cockroachdb/cockroach/pull/71922
[#71982]: https://github.com/cockroachdb/cockroach/pull/71982
[#72086]: https://github.com/cockroachdb/cockroach/pull/72086
[#72270]: https://github.com/cockroachdb/cockroach/pull/72270
[#72298]: https://github.com/cockroachdb/cockroach/pull/72298
[#72444]: https://github.com/cockroachdb/cockroach/pull/72444
[#72463]: https://github.com/cockroachdb/cockroach/pull/72463
[#72507]: https://github.com/cockroachdb/cockroach/pull/72507

diff --git a/src/current/_includes/releases/v21.1/v21.1.13.md b/src/current/_includes/releases/v21.1/v21.1.13.md
deleted file mode 100644
index d99b24a85ef..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.13.md
+++ /dev/null
@@ -1,100 +0,0 @@
## v21.1.13

Release Date: January 10, 2022

SQL language changes

- `"visible"` is now usable as a table or column name without extra quoting. [#72996][#72996]
- The `create_type_statements` table now has an index on `descriptor_id`. [#73672][#73672]
- Added a new `stmt` column to the `crdb_internal.(cluster|node)_distsql_flows` virtual table. This column is populated on a best-effort basis. [#73582][#73582]
- The [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) that controls the default value for the [session setting](https://www.cockroachlabs.com/docs/v21.1/set-vars) `reorder_joins_limit`, called `sql.defaults.reorder_joins_limit`, is now public and included in the docs. [#73891][#73891]
- Added escape character processing to constraint span generation. Previously, `LIKE` lookups with escape characters could return incorrect results. [#74258][#74258]
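The now-public setting and its session-level counterpart can be set as follows (the value `4` is illustrative):

```sql
SET CLUSTER SETTING sql.defaults.reorder_joins_limit = 4;
SET reorder_joins_limit = 4;
```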

Operational changes

- Added a new `bulkio.ingest.flush_delay` cluster setting to act as a last-resort option to manually slow bulk-writing processes if needed for cluster stability. This should only be used if there is no better-suited back-pressure mechanism available for the contended resource. [#73757][#73757]

DB Console changes

- Fixed drag-to-zoom on custom charts. [#72589][#72589]
- Node events will now display a permission error rather than an internal server error when a user does not have admin privileges to view events. [#72464][#72464]
- The absolute links on the [Advanced Debug](https://www.cockroachlabs.com/docs/v21.1/ui-debug-pages) page within the DB Console have been updated to relative links. This will enable these links to work with the superuser dashboard in the Cloud Console. [#73122][#73122]

Bug fixes

- Fixed a bug which allowed [computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns) to also have [`DEFAULT` expressions](https://www.cockroachlabs.com/docs/v21.1/default-value). [#73192][#73192]
- Fixed a bug causing [`RESTORE`](https://www.cockroachlabs.com/docs/v21.1/restore) to sometimes map OIDs to invalid types in certain circumstances involving user-defined types. [#73120][#73120]
- [`BACKUP WITH revision_history`](https://www.cockroachlabs.com/docs/v21.1/backup) would previously fail on an upgraded but un-finalized cluster. Now, it should succeed. [#73110][#73110]
- Fixed a bug causing CockroachDB to not set the `TableOID` and `TableAttributeNumber` attributes of the `RowDescription` message of the pgwire protocol in some cases (these values would be left as 0). [#72449][#72449]
- Fixed a bug causing CockroachDB to encounter an internal error or crash when some queries involving tuples with [`ENUM`](https://www.cockroachlabs.com/docs/v21.1/enum) values were executed in a distributed manner. [#72481][#72481]
- Fixed a bug causing usernames in [`ALTER TABLE ... OWNER TO`](https://www.cockroachlabs.com/docs/v21.1/alter-table) to not be normalized to lower case. [#72658][#72658]
- Previously, `atttypmod` in `pg_catalog.pg_attribute` for `DECIMAL` types with precision but no width was incorrectly `-1`. This is now populated correctly. [#72075][#72075]
- Corrected how `type` displays for ZM shapes in `geometry_columns` to match PostGIS output. This previously incorrectly included the Z/M lettering. [#71434][#71434]
- Corrected how `type` displays in `geography_columns` to better match PostGIS. This was previously in the wrong casing. [#71434][#71434]
- The `setval` built-in function previously did not invalidate cached sequence values. This is now fixed. [#71822][#71822]
- Fixed a bug preventing tuple type labels from being propagated across queries when run under DistSQL. [#70391][#70391]
- Fixed a bug whereby setting the `CACHE` for a sequence to 1 was ignored. Before this change, [`ALTER SEQUENCE ... CACHE 1`](https://www.cockroachlabs.com/docs/v21.1/alter-sequence) would succeed but would not modify the cache value. [#71448][#71448]
- When using [`COPY FROM ... BINARY`](https://www.cockroachlabs.com/docs/v21.1/copy-from), the correct format code will now be returned. [#69255][#69255]
- Fixed a bug causing `COPY FROM ... BINARY` to return an error if the input data was split across different messages. [#69255][#69255]
- Fixed a bug causing `COPY FROM ... CSV` to require each `CopyData` message to be split at the boundary of a record. This was a bug since the `COPY` protocol allows messages to be split at arbitrary points. [#69255][#69255]
- Fixed a bug causing `COPY FROM ... CSV` to not correctly handle octal byte escape sequences such as `\011` when using a [`BYTEA`](https://www.cockroachlabs.com/docs/v21.1/bytes) column. [#69255][#69255]
- Manually enqueueing ranges via the DB Console will no longer crash nodes that contain an uninitialized replica for the enqueued range. [#73038][#73038]
- Fixed a crash with the message "attempting to propose command writing below closed timestamp" that could occur, typically on overloaded systems experiencing non-cooperative lease transfers. [#73166][#73166]
- The `txnwaitqueue.pusher.waiting` metric no longer over-reports the number of pushing transactions in some cases. [#71743][#71743]
- Fixed a rare condition that could cause a range merge to get stuck waiting on itself. The symptom of this deadlock was a goroutine stuck in `handleMergeInProgressError` for tens of minutes. [#72049][#72049]
- Fixed bugs causing [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v21.1/create-table-as) and [`CREATE MATERIALIZED VIEW`](https://www.cockroachlabs.com/docs/v21.1/create-view) to panic if the `SELECT` query used an internal table requiring internal database state. [#73620][#73620]
- Fixed a rare internal error, "estimated row count must be non-zero", which could occur when planning queries using a GIN index. This error could occur if the histogram on the GIN index showed that there were no rows. [#73353][#73353]
- Fixed an internal error, "empty Datums being compared to other", that could occur during planning for some `SELECT` queries over tables that included a `DEFAULT` partition value in a `PARTITION BY LIST` clause. This bug had been present since 21.1.0. The bug does not exist on versions 20.2.x and earlier. [#73663][#73663]
- Fixed a bug in database and schema restore cleanup that resulted in a dangling descriptor entry on job failure. [#73412][#73412]
- Fixed a bug with the ungraceful shutdown of distributed queries in some rare cases. [#73959][#73959]
- Fixed a bug causing CockroachDB to return a spurious "context canceled" error for a query that actually succeeded in extremely rare cases. [#73959][#73959]
- Fixed a bug which caused corruption of [partial indexes](https://www.cockroachlabs.com/docs/v21.2/partial-indexes), which could cause incorrect query results. The bug was only present when two or more partial indexes in the same table had identical `WHERE` clauses. This bug had been present since version 21.1.0. [#74475][#74475]
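The partial-index corruption condition in [#74475][#74475] — two or more partial indexes in one table sharing an identical `WHERE` clause — can be sketched with a hypothetical schema (table and index names invented for illustration):

```sql
-- Hypothetical schema: two partial indexes on the same table with
-- identical WHERE clauses -- the shape that triggered the corruption.
CREATE TABLE orders (
    id     INT PRIMARY KEY,
    status STRING,
    total  DECIMAL
);
CREATE INDEX pending_status_idx ON orders (status) WHERE status = 'pending';
CREATE INDEX pending_total_idx  ON orders (total)  WHERE status = 'pending';
```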

Performance improvements

- Improved [`IMPORT INTO`](https://www.cockroachlabs.com/docs/v21.1/import) performance in cases where it encounters large numbers of unresolved write intents. [#72272][#72272]
- The performance of transaction deadlock detection is now more stable even with significant transaction contention. [#71743][#71743]
- Bulk ingestion of small write batches (e.g., index backfill into a large number of ranges) is now throttled, to avoid buildup of read amplification and associated performance degradation. Concurrency is controlled by the new cluster setting `kv.bulk_io_write.concurrent_addsstable_as_writes_requests`. [#74072][#74072]

Contributors

This release includes 52 merged PRs by 26 authors.

[#69255]: https://github.com/cockroachdb/cockroach/pull/69255
[#70391]: https://github.com/cockroachdb/cockroach/pull/70391
[#71434]: https://github.com/cockroachdb/cockroach/pull/71434
[#71448]: https://github.com/cockroachdb/cockroach/pull/71448
[#71743]: https://github.com/cockroachdb/cockroach/pull/71743
[#71822]: https://github.com/cockroachdb/cockroach/pull/71822
[#72049]: https://github.com/cockroachdb/cockroach/pull/72049
[#72075]: https://github.com/cockroachdb/cockroach/pull/72075
[#72272]: https://github.com/cockroachdb/cockroach/pull/72272
[#72449]: https://github.com/cockroachdb/cockroach/pull/72449
[#72464]: https://github.com/cockroachdb/cockroach/pull/72464
[#72481]: https://github.com/cockroachdb/cockroach/pull/72481
[#72589]: https://github.com/cockroachdb/cockroach/pull/72589
[#72658]: https://github.com/cockroachdb/cockroach/pull/72658
[#72756]: https://github.com/cockroachdb/cockroach/pull/72756
[#72996]: https://github.com/cockroachdb/cockroach/pull/72996
[#73038]: https://github.com/cockroachdb/cockroach/pull/73038
[#73110]: https://github.com/cockroachdb/cockroach/pull/73110
[#73120]: https://github.com/cockroachdb/cockroach/pull/73120
[#73122]: https://github.com/cockroachdb/cockroach/pull/73122
[#73166]: https://github.com/cockroachdb/cockroach/pull/73166
[#73192]: https://github.com/cockroachdb/cockroach/pull/73192
[#73353]: https://github.com/cockroachdb/cockroach/pull/73353
[#73412]: https://github.com/cockroachdb/cockroach/pull/73412
[#73582]: https://github.com/cockroachdb/cockroach/pull/73582
[#73620]: https://github.com/cockroachdb/cockroach/pull/73620
[#73663]: https://github.com/cockroachdb/cockroach/pull/73663
[#73672]: https://github.com/cockroachdb/cockroach/pull/73672
[#73757]: https://github.com/cockroachdb/cockroach/pull/73757
[#73891]: https://github.com/cockroachdb/cockroach/pull/73891
[#73959]: https://github.com/cockroachdb/cockroach/pull/73959
[#74072]: https://github.com/cockroachdb/cockroach/pull/74072
[#74124]: https://github.com/cockroachdb/cockroach/pull/74124
[#74205]: https://github.com/cockroachdb/cockroach/pull/74205
[#74258]: https://github.com/cockroachdb/cockroach/pull/74258
[#74475]: https://github.com/cockroachdb/cockroach/pull/74475

## v21.1.14

Release Date: February 9, 2022

SQL language changes

- Collated strings may now have a locale that is a language tag, followed by a `-u-` suffix, followed by anything else. For example, any locale with a prefix of `en-US-u-` is now considered valid. [#74754][#74754]
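As a sketch of the newly accepted locale form (the specific `-u-` extension below is just an example, not taken from the release):

```sql
-- Any locale consisting of a language tag plus a -u- extension is now
-- accepted as a collation, e.g. anything with the en-US-u- prefix:
SELECT 'straße' COLLATE "en-US-u-ks-level2";
```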

Command-line changes

- Fixed the CLI help text for [`ALTER DATABASE`](https://www.cockroachlabs.com/docs/v21.1/alter-database) to show correct options for [`ADD REGION`](https://www.cockroachlabs.com/docs/v21.1/add-region) and [`DROP REGION`](https://www.cockroachlabs.com/docs/v21.1/drop-region), and to include some missing options such as [`CONFIGURE ZONE`](https://www.cockroachlabs.com/docs/v21.1/configure-zone). [#75075][#75075]

Bug fixes

- Fixed a panic when attempting to access the hottest ranges (e.g., via the `/_status/hotranges` endpoint) before initial statistics had been gathered. [#74514][#74514]
- Servers no longer crash due to panics in HTTP handlers. [#74533][#74533]
- Previously, running [`IMPORT TABLE ... PGDUMP DATA`](https://www.cockroachlabs.com/docs/v21.1/import) with a [`COPY FROM`](https://www.cockroachlabs.com/docs/v21.1/copy-from) statement in the dump file that had fewer target columns than the inline table definition would result in a nil pointer exception. This is now fixed. [#74452][#74452]
- Previously, a doubly nested [`ENUM`](https://www.cockroachlabs.com/docs/v21.1/enum) in a DistSQL query would not be hydrated on remote nodes, resulting in a panic. This is now fixed. [#74527][#74527]
- Error messages produced during [import](https://www.cockroachlabs.com/docs/v21.1/import) are now truncated. Previously, import could potentially generate large error messages that could not be persisted to the jobs table, resulting in a failed import never entering the failed state and instead retrying repeatedly. [#73335][#73335]
- Fixed a bug where deleting data via schema changes (e.g., when dropping an index or table) could fail with a `command too large` error. [#74797][#74797]
- Fixed panics that were possible in some distributed queries using [`ENUM`](https://www.cockroachlabs.com/docs/v21.1/enum)s in join predicates. [#75086][#75086]
- Fixed a bug which caused errors in rare cases when trying to divide [`INTERVAL`](https://www.cockroachlabs.com/docs/v21.1/interval) values by `INT4` or `INT2` values. [#75091][#75091]
- Fixed a bug that caused internal errors when altering the primary key of a table. The bug was only present if the table had a partial index with a predicate that referenced a virtual computed column. This bug was present since virtual computed columns were added in v21.1.0. [#75193][#75193]
- Fixed a bug that could occur when a [`TIMETZ`](https://www.cockroachlabs.com/docs/v21.1/time) column was indexed, and a query predicate constrained that column using a `<` or `>` operator with a `TIMETZ` constant. If the column contained values with time zones that did not match the time zone of the `TIMETZ` constant, it was possible that not all matching values could be returned by the query. Specifically, the results may not have included values within one microsecond of the predicate's absolute time. This bug was introduced when the `TIMETZ` datatype was first added in v20.1. It exists on all releases of v20.1, v20.2, v21.1, and v21.2 prior to this patch. [#75173][#75173]
- Fixed an internal error, `estimated row count must be non-zero`, that could occur during planning for queries over a table with a [`TIMETZ`](https://www.cockroachlabs.com/docs/v21.1/time) column. This error was due to a faulty assumption in the statistics estimation code about ordering of `TIMETZ` values, which has now been fixed. The error could occur when `TIMETZ` values used in the query had a different time zone offset than the `TIMETZ` values stored in the table. [#75173][#75173]
- Previously, during [restore](https://www.cockroachlabs.com/docs/v21.1/restore), CockroachDB would not insert a `system.namespace` entry for synthetic public schemas. This is now fixed. [#74760][#74760]
- [`RESTORE ... FROM LATEST IN`](https://www.cockroachlabs.com/docs/v21.1/restore) now works to restore the latest backup from a collection without needing to first inspect the collection to supply its actual path. [#75437][#75437]
- Fixed a bug that caused internal errors in queries with set operations, like `UNION`, when corresponding columns on either side of the set operation were not the same. This error only occurred with a limited set of types. This bug is present in v20.2.6+, v21.1.0+, and v21.2.0+. [#75294][#75294]
- The `CancelSession` endpoint now correctly propagates gateway metadata when forwarding requests. [#75886][#75886]
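For instance, the `INTERVAL` division fixed in [#75091][#75091] covers expressions of the following shape (a minimal sketch; the specific values are invented):

```sql
-- Dividing an INTERVAL by an INT4 or INT2 value no longer errors
-- in the previously affected rare cases:
SELECT '1 day 2 hours'::INTERVAL / 2::INT4;
```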

Contributors

This release includes 27 merged PRs by 16 authors.

[#73335]: https://github.com/cockroachdb/cockroach/pull/73335
[#74452]: https://github.com/cockroachdb/cockroach/pull/74452
[#74514]: https://github.com/cockroachdb/cockroach/pull/74514
[#74527]: https://github.com/cockroachdb/cockroach/pull/74527
[#74533]: https://github.com/cockroachdb/cockroach/pull/74533
[#74754]: https://github.com/cockroachdb/cockroach/pull/74754
[#74760]: https://github.com/cockroachdb/cockroach/pull/74760
[#74797]: https://github.com/cockroachdb/cockroach/pull/74797
[#74893]: https://github.com/cockroachdb/cockroach/pull/74893
[#75075]: https://github.com/cockroachdb/cockroach/pull/75075
[#75086]: https://github.com/cockroachdb/cockroach/pull/75086
[#75091]: https://github.com/cockroachdb/cockroach/pull/75091
[#75173]: https://github.com/cockroachdb/cockroach/pull/75173
[#75193]: https://github.com/cockroachdb/cockroach/pull/75193
[#75294]: https://github.com/cockroachdb/cockroach/pull/75294
[#75437]: https://github.com/cockroachdb/cockroach/pull/75437
[#75886]: https://github.com/cockroachdb/cockroach/pull/75886
[#75891]: https://github.com/cockroachdb/cockroach/pull/75891
[66bc0ab38]: https://github.com/cockroachdb/cockroach/commit/66bc0ab38
[eeb15df70]: https://github.com/cockroachdb/cockroach/commit/eeb15df70

## v21.1.15

Release Date: February 14, 2022

This page lists additions and changes in v21.1.15 since v21.1.14.

Enterprise edition changes

- Kafka sinks now support larger messages, up to 2GB in size. [#76322][#76322]

SQL language changes

- Non-admin users can now use the [`SHOW RANGES`](https://www.cockroachlabs.com/docs/v21.1/show-ranges) statement if the `ZONECONFIG` privilege is granted. [#76072][#76072]
- `ST_MakePolygon` is now disallowed from making empty polygons from empty linestrings. This is not allowed in PostGIS. [#76256][#76256]
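A minimal sketch of the new `SHOW RANGES` permission (the table and user names below are hypothetical):

```sql
-- Granting ZONECONFIG now lets a non-admin user run SHOW RANGES:
GRANT ZONECONFIG ON TABLE accounts TO app_user;
-- Then, connected as app_user:
SHOW RANGES FROM TABLE accounts;
```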

Bug fixes

- Mixed-dimension linestrings are now prohibited in `ST_MakePolygon`. [#76256][#76256]
- Fixed a bug which could cause nodes to crash when truncating abnormally large Raft logs. [#75980][#75980]
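The `ST_MakePolygon` change means a call like the following now errors instead of producing an invalid geometry, matching PostGIS behavior (a minimal sketch):

```sql
-- Previously built an invalid empty polygon; now rejected:
SELECT ST_MakePolygon('LINESTRING EMPTY'::GEOMETRY);
```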

Contributors

This release includes 6 merged PRs by 6 authors.

[#75980]: https://github.com/cockroachdb/cockroach/pull/75980
[#76072]: https://github.com/cockroachdb/cockroach/pull/76072
[#76256]: https://github.com/cockroachdb/cockroach/pull/76256
[#76322]: https://github.com/cockroachdb/cockroach/pull/76322

## v21.1.16

Release Date: March 7, 2022

Bug fixes

- Fixed a bug that caused the query optimizer to omit join filters in rare cases when reordering joins, which could result in incorrect query results. This bug was present since v20.2. [#76621][#76621]
- Fixed a bug where some rows could be omitted from query results if the query referenced a column within a composite key with a null value. Previously, CockroachDB could incorrectly omit a row from query results against a table with multiple column families when that row contained a NULL value and a composite type ([FLOAT](https://www.cockroachlabs.com/docs/v21.1/float), [DECIMAL](https://www.cockroachlabs.com/docs/v21.1/decimal), [COLLATED STRING](https://www.cockroachlabs.com/docs/v21.1/collate), or any array of such a type) was included in the [PRIMARY KEY](https://www.cockroachlabs.com/docs/v21.1/primary-key). For the bug to occur, a composite column from the PRIMARY KEY must be included in any column family other than the first one. [#76637][#76637]
- Fixed a race condition that, in rare circumstances, could cause a node to panic with `unexpected Stopped processor` during shutdown. [#76828][#76828]
- There is now a 1 hour timeout when sending Raft snapshots, to avoid stalled snapshot transfers. Stalled snapshot transfers could prevent Raft log truncation, thus growing the Raft log very large. This timeout is configurable via the `COCKROACH_RAFT_SEND_SNAPSHOT_TIMEOUT` environment variable. [#76830][#76830]
- [`CASE` expressions](https://www.cockroachlabs.com/docs/v21.1/scalar-expressions#simple-case-expressions) with branches that result in types that cannot be cast to a common type now result in a user-facing error instead of an internal error. [#76618][#76618]
- Fixed a bug that could corrupt [indexes](https://www.cockroachlabs.com/docs/v21.1/indexes) containing [virtual columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns) or expressions. The bug only occurred when the index's table had a [foreign key reference](https://www.cockroachlabs.com/docs/v21.1/foreign-key) to another table with an [`ON DELETE CASCADE`](https://www.cockroachlabs.com/docs/v21.1/foreign-key#foreign-key-actions) action, and a row was deleted in the referenced table. This bug was present since virtual columns were added in v21.1.0. [#77057][#77057]

Performance improvements

- Fixed a bug in the [histogram estimation code](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer#control-histogram-collection) that could cause the optimizer to think a scan of a multi-column index would produce 0 rows, when in fact it would produce many rows. This could cause the optimizer to choose a suboptimal plan. This bug has now been fixed, making it less likely for the optimizer to choose a suboptimal plan when multiple multi-column indexes are available. [#76557][#76557]
- The accuracy of [histogram](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer#control-histogram-collection) calculations for [`BYTES`](https://www.cockroachlabs.com/docs/v21.1/bytes) types has been improved. As a result, the optimizer should generate more efficient query plans in some cases. [#76797][#76797]

Contributors

This release includes 10 merged PRs by 6 authors.

[#76557]: https://github.com/cockroachdb/cockroach/pull/76557
[#76618]: https://github.com/cockroachdb/cockroach/pull/76618
[#76621]: https://github.com/cockroachdb/cockroach/pull/76621
[#76637]: https://github.com/cockroachdb/cockroach/pull/76637
[#76797]: https://github.com/cockroachdb/cockroach/pull/76797
[#76828]: https://github.com/cockroachdb/cockroach/pull/76828
[#76830]: https://github.com/cockroachdb/cockroach/pull/76830
[#77057]: https://github.com/cockroachdb/cockroach/pull/77057

## v21.1.17

Release Date: April 4, 2022

Security updates

- Users can enable HSTS headers to be set on all HTTP requests, which force browsers to upgrade to HTTPS without a redirect. This is controlled by setting the `server.hsts.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), which is `false` by default, to `true`. [#77863][#77863]
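Enabling the opt-in HSTS behavior is a one-statement change:

```sql
-- HSTS headers are off by default; opt in cluster-wide:
SET CLUSTER SETTING server.hsts.enabled = true;
```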

Enterprise edition changes

- Currently executing [schedules](https://www.cockroachlabs.com/docs/v21.1/manage-a-backup-schedule) are now cancelled immediately when the jobs scheduler is disabled. [#77314][#77314]

SQL language changes

- Added a `sql.auth.resolve_membership_single_scan.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), which changes the query used for the internal role-membership cache. Previously, the code would recursively look up each role in the membership hierarchy, leading to multiple queries. With the setting on, it uses a single query. This setting is `false` by default. [#77624][#77624]

Operational changes

- The `cockroach debug tsdump` command now downloads histogram timeseries that it previously omitted silently. [#78054][#78054]

Command-line changes

- The `cockroach debug tsdump` command now allows viewing timeseries data even in cases of node failure by allowing users to rerun the command with the import filename set to `"-"`. [#78057][#78057]
- Fixed a bug where starting [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) with the `--global` flag would not simulate latencies correctly when combined with the `--insecure` flag. [#78173][#78173]

Bug fixes

- Fixed a bug where draining nodes in a cluster without shutting them down could stall foreground traffic in the cluster. [#77494][#77494]
- Fixed a bug that caused errors when attempting to create table statistics with [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v21.1/create-statistics) or `ANALYZE` for a table containing an index which indexed only [virtual computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns). [#77566][#77566]
- Added a limit of seven concurrent asynchronous consistency checks per store, with an upper timeout of one hour. This prevents abandoned consistency checks from building up in some circumstances, which could lead to increasing disk usage as they held onto [Pebble](https://www.cockroachlabs.com/docs/v21.1/architecture/storage-layer#pebble) snapshots. [#77612][#77612]
- Fixed a bug where the **Details** page was not loading for statements whose app name contains `/`. [#77946][#77946]
- Fixed a memory leak in the [Pebble](https://www.cockroachlabs.com/docs/v21.1/architecture/storage-layer#pebble) block cache. [#78262][#78262]
- Fixed a bug that caused internal errors when `COALESCE` and `IF` [expressions](https://www.cockroachlabs.com/docs/v21.1/scalar-expressions) had inner expressions with different types that could not be cast to a common type. [#78345][#78345]
- Fixed a bug that caused errors when trying to evaluate queries with `NULL` values annotated as a tuple type, such as `NULL:::RECORD`. [#78638][#78638]
- Fixed a bug that caused the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) to generate invalid query plans which could result in incorrect query results. The bug, which has been present since version v21.1.0, can appear if all of the following conditions are true:
    - The query contains a semi-join, such as queries in the form `SELECT * FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.a = t2.a);`.
    - The inner table has an index containing the equality column, like `t2.a` in the example query.
    - The index contains one or more columns that prefix the equality column.
    - The prefix columns are `NOT NULL` and are constrained to a set of constant values via a `CHECK` constraint or an `IN` condition in the filter. [#78976][#78976]
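A hypothetical schema satisfying all four conditions from [#78976][#78976] (table, column, and constraint names invented for illustration):

```sql
-- t2's index leads with a NOT NULL, CHECK-constrained prefix column
-- (region), followed by the semi-join equality column (a):
CREATE TABLE t1 (a INT);
CREATE TABLE t2 (
    region STRING NOT NULL CHECK (region IN ('east', 'west')),
    a      INT,
    INDEX  (region, a)
);
-- The affected semi-join shape:
SELECT * FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.a = t2.a);
```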

Contributors

This release includes 20 merged PRs by 15 authors.

[#77314]: https://github.com/cockroachdb/cockroach/pull/77314
[#77494]: https://github.com/cockroachdb/cockroach/pull/77494
[#77566]: https://github.com/cockroachdb/cockroach/pull/77566
[#77612]: https://github.com/cockroachdb/cockroach/pull/77612
[#77624]: https://github.com/cockroachdb/cockroach/pull/77624
[#77863]: https://github.com/cockroachdb/cockroach/pull/77863
[#77946]: https://github.com/cockroachdb/cockroach/pull/77946
[#78054]: https://github.com/cockroachdb/cockroach/pull/78054
[#78057]: https://github.com/cockroachdb/cockroach/pull/78057
[#78173]: https://github.com/cockroachdb/cockroach/pull/78173
[#78262]: https://github.com/cockroachdb/cockroach/pull/78262
[#78296]: https://github.com/cockroachdb/cockroach/pull/78296
[#78345]: https://github.com/cockroachdb/cockroach/pull/78345
[#78638]: https://github.com/cockroachdb/cockroach/pull/78638
[#78976]: https://github.com/cockroachdb/cockroach/pull/78976

## v21.1.18

Release Date: April 12, 2022

Bug fixes

- Fixed a bug where restores of data with multiple [column families](https://www.cockroachlabs.com/docs/v21.1/column-families) could be split illegally (within a single SQL row). This could result in temporary data unavailability until the ranges on either side of the invalid split are merged. [#79207][#79207]
- Fixed a bug where [`SHOW SCHEMAS FROM <database>`](https://www.cockroachlabs.com/docs/v21.1/show-schemas) would not include user-defined schemas. [#79306][#79306]
- Previously, [`LIMIT`](https://www.cockroachlabs.com/docs/v21.1/limit-offset) queries with an [`ORDER BY`](https://www.cockroachlabs.com/docs/v21.1/order-by) clause which scan the index of a virtual system table, such as `pg_type`, could return incorrect results. This has been corrected by teaching the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) that `LIMIT` operations cannot be pushed into ordered scans of virtual indexes. [#79466][#79466]
- Fixed a bug that caused the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) to generate invalid query plans which could result in incorrect query results. The bug, present since version v21.1.0, can appear if all of the following conditions are true:
    - The query contains a semi-join, e.g., with the format `SELECT * FROM a WHERE EXISTS (SELECT * FROM b WHERE a.a @> b.b)`.
    - The inner table has a multi-column [inverted index](https://www.cockroachlabs.com/docs/v21.1/inverted-indexes) containing the inverted column in the filter.
    - The index prefix columns are constrained to a set of values via the filter or a [`CHECK`](https://www.cockroachlabs.com/docs/v21.1/check) constraint, e.g., with an `IN` operator. In the case of a `CHECK` constraint, the column is `NOT NULL`. [#79508][#79508]
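A hypothetical schema matching the inverted-index conditions from [#79508][#79508] (names invented; a sketch, not a verified reproduction):

```sql
-- b's multi-column inverted index leads with a CHECK-constrained,
-- NOT NULL prefix column before the inverted JSONB column:
CREATE TABLE a (a JSONB);
CREATE TABLE b (
    region STRING NOT NULL CHECK (region IN ('east', 'west')),
    b      JSONB,
    INVERTED INDEX (region, b)
);
-- The affected semi-join shape:
SELECT * FROM a WHERE EXISTS (SELECT * FROM b WHERE a.a @> b.b);
```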

Contributors

This release includes 12 merged PRs by 10 authors.

[#79207]: https://github.com/cockroachdb/cockroach/pull/79207
[#79306]: https://github.com/cockroachdb/cockroach/pull/79306
[#79466]: https://github.com/cockroachdb/cockroach/pull/79466
[#79508]: https://github.com/cockroachdb/cockroach/pull/79508

## v21.1.19

Release Date: May 9, 2022

Enterprise edition changes

- Fixed a bug where backups in the base directory of a Google Cloud Storage bucket would not be discovered by [`SHOW BACKUPS`](https://www.cockroachlabs.com/docs/v21.1/show-backup). These backups will now appear correctly. [#80509][#80509]

SQL language changes

- Previously, a user could run an `AS OF SYSTEM TIME` [incremental backup](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups#incremental-backups) with an end time earlier than the previous backup's end time, which could lead to an out-of-order incremental backup chain. An incremental backup will now fail if the `AS OF SYSTEM TIME` is less than the previous backup's end time. [#80500][#80500]
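For example, with the new check, an incremental backup whose `AS OF SYSTEM TIME` resolves to a time before the previous backup's end time now fails (a sketch; the collection URI below is a placeholder):

```sql
-- Fails if '-3h' is earlier than the previous backup's end time:
BACKUP INTO LATEST IN 's3://backup-bucket/app' AS OF SYSTEM TIME '-3h';
```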

DB Console changes

- Added a dropdown filter on the Node Diagnostics page to view by active, decommissioned, or all nodes. [#80340][#80340]
- Added an alert banner on the Overview list page for staggered node versions. [#80748][#80748]

Bug fixes

- Fixed a bug that caused an internal error when the inner expression of a column access expression evaluated to `NULL`. For example, evaluation of the expression `(CASE WHEN b THEN ((ROW(1) AS a)) ELSE NULL END).a` would error when `b` is `false`. This bug had been present since v19.1 or earlier. [#79527][#79527]
- Fixed a bug that caused an error when accessing a named column of a labelled tuple. The bug only occurred when an expression could produce one of several different tuples. For example, `(CASE WHEN true THEN (ROW(1) AS a) ELSE (ROW(2) AS a) END).a` would fail to evaluate. Although present in previous versions, it was impossible to encounter due to limitations that prevented using tuples in this way. [#79527][#79527]
- Addressed an issue where automatic encryption-at-rest data key rotation would get disabled after a node restart without a store key rotation. [#80171][#80171]
- The timeout when checking for Raft application of upgrade migrations has been increased from 5 seconds to 1 minute, and is now controllable via the [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) `kv.migration.migrate_application.timeout`. This makes migrations much less likely to fail in clusters with ongoing rebalancing activity during upgrade. [#80754][#80754]
- Fixed a bug where, in rare circumstances, CockroachDB could incorrectly evaluate queries with an `ORDER BY` clause when a prefix of the ordering was already provided by the index ordering of the scanned table. [#80732][#80732]
- Fixed a goroutine leak when internal rangefeed clients received certain kinds of retryable errors. [#80795][#80795]
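The two tuple-access fixes from [#79527][#79527] can be exercised directly with the expressions quoted above (a sketch):

```sql
-- Inner expression evaluating to NULL no longer causes an internal error:
SELECT (CASE WHEN false THEN ((ROW(1) AS a)) ELSE NULL END).a;
-- Accessing a named column when either of two labelled tuples may be
-- produced now evaluates instead of erroring:
SELECT (CASE WHEN true THEN (ROW(1) AS a) ELSE (ROW(2) AS a) END).a;
```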

Contributors

This release includes 18 merged PRs by 13 authors.

[#79527]: https://github.com/cockroachdb/cockroach/pull/79527
[#80171]: https://github.com/cockroachdb/cockroach/pull/80171
[#80340]: https://github.com/cockroachdb/cockroach/pull/80340
[#80500]: https://github.com/cockroachdb/cockroach/pull/80500
[#80509]: https://github.com/cockroachdb/cockroach/pull/80509
[#80732]: https://github.com/cockroachdb/cockroach/pull/80732
[#80748]: https://github.com/cockroachdb/cockroach/pull/80748
[#80754]: https://github.com/cockroachdb/cockroach/pull/80754
[#80795]: https://github.com/cockroachdb/cockroach/pull/80795

## v21.1.2

Release Date: June 7, 2021

General changes

- Added [multi-region workloads](https://www.cockroachlabs.com/docs/v21.1/movr-flask-overview) for `cockroach demo movr --geo-partitioned-replicas`. Setting `--multi-region` enables multi-region workloads, setting `--survive` allows for surviving `AZ` or `REGION` failures, and setting `--infer-crdb-region-column` infers the `crdb_region` column for `REGIONAL BY ROW` tables. [#65642][#65642] {% comment %}doc{% endcomment %}
- [Changefeeds](https://www.cockroachlabs.com/docs/v21.1/create-changefeed) now better handle slow or unavailable sinks by treating "memory exceeded" errors as retryable. [#65387][#65387] {% comment %}doc{% endcomment %}

SQL language changes

-
-- Added the `crdb_internal.lost_descriptors_with_data` [function](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators) to show descriptors that have no entries but data left behind. [#65462][#65462] {% comment %}doc{% endcomment %}
-- Added the `crdb_internal.force_delete_table_data` [function](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators), which allows table data to be cleaned up using only a descriptor ID in cases of table descriptor corruption. [#65462][#65462] {% comment %}doc{% endcomment %}
-- The statement type ("tag") is now also included alongside the full text of the SQL query in the various structured log entries produced when query execution is being [logged](https://www.cockroachlabs.com/docs/v21.1/logging-overview). [#65554][#65554] {% comment %}doc{% endcomment %}
-- CockroachDB now returns a SQL Notice if a [`CREATE TABLE IF NOT EXISTS`](https://www.cockroachlabs.com/docs/v21.1/create-table) command is used to create a table that already exists. [#65636][#65636] {% comment %}doc{% endcomment %}
-- The `SHOW FULL TABLE SCANS` statement was added to CockroachDB. [#65671][#65671] {% comment %}doc{% endcomment %}
-- CockroachDB now returns a SQL Notice if a [`CREATE TYPE IF NOT EXISTS`](https://www.cockroachlabs.com/docs/v21.1/create-type) command is used to create a type that already exists. [#65635][#65635] {% comment %}doc{% endcomment %}
-- Added a `chunk_size` option to [`EXPORT INTO CSV`](https://www.cockroachlabs.com/docs/v21.1/export) to control the target CSV file size. [#65388][#65388] {% comment %}doc{% endcomment %}
-- SQL stats can now be cleared using the `crdb_internal.reset_sql_stats()` function. [#65674][#65674] {% comment %}doc{% endcomment %}
-- CockroachDB now supports [`ALTER DATABASE ... ADD REGION IF NOT EXISTS ...`](https://www.cockroachlabs.com/docs/v21.1/add-region), which does not cause an error when adding a region that is already in the database. [#65752][#65752] {% comment %}doc{% endcomment %}
-- CockroachDB now outputs a clearer error message when running [`ALTER DATABASE ... ADD REGION ...`](https://www.cockroachlabs.com/docs/v21.1/add-region) with an undefined region. Previously, adding an undefined region resulted in an error about enums. [#65752][#65752] {% comment %}doc{% endcomment %}
-- Added the [`ALTER DATABASE ... DROP REGION IF EXISTS ...`](https://www.cockroachlabs.com/docs/v21.1/drop-region) statement syntax, which does not error when dropping a region that is not defined on the database. [#65752][#65752] {% comment %}doc{% endcomment %}
-- Fixed a bug where transitioning from locality `REGIONAL BY ROW` to `GLOBAL` or `REGIONAL BY TABLE` could mistakenly remove a zone configuration on an index which has no multi-region fields set. [#65833][#65833] {% comment %}doc{% endcomment %}
-- CockroachDB now only blocks a [zone configuration DISCARD](https://www.cockroachlabs.com/docs/v21.1/configure-zone) on a multi-region table, index, or partition if the multi-region abstractions created the zone configuration. [#65834][#65834] {% comment %}doc{% endcomment %}
-
-
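A few of the additions above can be exercised directly from a SQL shell. The statements below are illustrative sketches only; the database name `movr` and the region name `us-west1` are hypothetical:

```sql
-- Clear collected SQL statistics.
SELECT crdb_internal.reset_sql_stats();

-- Add a region without erroring if it is already attached to the database.
ALTER DATABASE movr ADD REGION IF NOT EXISTS "us-west1";

-- Drop a region without erroring if it is not defined on the database.
ALTER DATABASE movr DROP REGION IF EXISTS "us-west1";
```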

Operational changes

-
-- [Range metrics](https://www.cockroachlabs.com/docs/v21.1/ui-replication-dashboard) are now gathered from the leaseholder (if live) rather than the first available range replica. This avoids scenarios where a stale replica may yield incorrect metrics, in particular over- or under-replication markers. [#64590][#64590] {% comment %}doc{% endcomment %}
-
-

DB Console changes

-
-- Fixed a crash on the [Jobs page](https://www.cockroachlabs.com/docs/v21.1/ui-jobs-page) when using pagination, and improved the page's performance. [#65723][#65723]
-- Fixed a typo in the Network tooltip on the [Statements page](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page). [#65605][#65605]
-- Fixed a missing node ID in the rejoin [event message](https://www.cockroachlabs.com/docs/v21.1/ui-runtime-dashboard#events-panel). [#65806][#65806]
-- Sorts on tables now pick up the correct value from the URL. [#65605][#65605]
-
-

Bug fixes

-
-- Fixed a bug where a certain percentage of cases in which a node could have served a [follower read](https://www.cockroachlabs.com/docs/v21.1/follower-reads) were not handled correctly, resulting in the node routing the request to another nearby node for no reason. [#65471][#65471]
-- The [`has_database_privilege`](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators) function now correctly checks privileges on databases other than the session's current database. [#65534][#65534]
-- Fixed a bug where CockroachDB would crash when attempting to create a table using [`CREATE TABLE ... AS`](https://www.cockroachlabs.com/docs/v21.1/create-table-as) syntax where the `AS` clause selects from the `crdb_internal.node_statement_statistics`, `crdb_internal.node_transaction_statistics`, or `crdb_internal.node_txn_stats` virtual tables. [#65542][#65542]
-- Fixed a bug which allowed [index definitions](https://www.cockroachlabs.com/docs/v21.1/indexes) with redundant columns, which led to unnecessary storage usage. This bug can notably manifest with [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v21.1/alter-table) statements which alter the primary index on a partitioned table. The bug has been present in theory for a long time, but in practice would only appear in CockroachDB since version 21.1.0. [#65482][#65482]
-- Fixed a bug where binary [`TIMETZ`](https://www.cockroachlabs.com/docs/v21.1/time) values were not decoded correctly when sent as a parameter in the wire protocol. [#65341][#65341]
-- Fixed a race condition during transaction cleanup that could leave old transaction records behind until [MVCC garbage collection](https://www.cockroachlabs.com/docs/v21.1/architecture/storage-layer#mvcc). [#65383][#65383]
-- Improved [transaction cleanup](https://www.cockroachlabs.com/docs/v21.1/architecture/transaction-layer) for disconnected clients, to reduce intent buildup. [#65383][#65383]
-- Added the ability to change the `COMMENT` on a column after using [`ALTER TYPE`](https://www.cockroachlabs.com/docs/v21.1/alter-type) on that column. [#65698][#65698]
-- [Scheduled backups](https://www.cockroachlabs.com/docs/v21.1/create-schedule-for-backup) with [interleaved tables](https://www.cockroachlabs.com/docs/v21.1/interleave-in-parent) can now be created with the `include_deprecated_interleaves` option. [#65731][#65731]
-- Fixed a bug where `ST_Node` on a [`LINESTRING`](https://www.cockroachlabs.com/docs/v21.1/linestring) with the same repeated points resulted in an error. [#65700][#65700]
-- Calling [`get_bit` or `set_bit`](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators) on a byte array argument now goes to the correct index of the underlying bit string, to match the behavior of PostgreSQL. [#65786][#65786]
-- Fixed a bug where [`ALTER DATABASE ... CONVERT TO SCHEMA`](https://www.cockroachlabs.com/docs/v21.1/convert-to-schema) could copy privileges from the database that are invalid on a schema, leaving the schema's privilege descriptor in an invalid state. [#65810][#65810]
-- CockroachDB now renders the `CACHE` clause for [sequences](https://www.cockroachlabs.com/docs/v21.1/create-sequence) which use a cache. [#65805][#65805]
-- Fixed a bug that could cause a node to crash in rare cases if a [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup) writing to Google Cloud Storage failed. [#65802][#65802]
-- Fixed a bug introduced in 21.1 where [cluster restores](https://www.cockroachlabs.com/docs/v21.1/restore) would report inaccurate progress, erroneously showing 100%. [#65803][#65803]
-- Fixed a crash when performing a cluster [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup) with revision history on a cluster upgraded from 20.1 to 20.2 to 21.1 which contains tables that were truncated by 20.1. [#65860][#65860]
-- Fixed a bug that caused incorrect results for queries where [`CHAR` and `VARCHAR`](https://www.cockroachlabs.com/docs/v21.1/string#related-types) columns are filtered by constant string values. The bug was present since version v21.1.0. [#66101][#66101]
-
-

Performance improvements

-
-- The [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) can now avoid full table scans for queries with a [`LIMIT`](https://www.cockroachlabs.com/docs/v21.1/limit-offset) and [`ORDER BY`](https://www.cockroachlabs.com/docs/v21.1/order-by) clause in some additional cases where the `ORDER BY` columns are not a prefix of an index. [#65392][#65392]
-- The [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) now generates query plans that scan indexes on virtual collated string columns, regardless of the casing or formatting of the collated locale in the query. [#65531][#65531]
-- CockroachDB now reduces the number of round-trips required to call `pg_table_is_visible` in the context of [`pg_catalog` queries](https://www.cockroachlabs.com/docs/v21.1/pg-catalog). [#65807][#65807]
-
-

Contributors

-
-This release includes 58 merged PRs by 34 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Max Neverov
-- Rupesh Harode
-
-
-
-[#64590]: https://github.com/cockroachdb/cockroach/pull/64590
-[#65341]: https://github.com/cockroachdb/cockroach/pull/65341
-[#65383]: https://github.com/cockroachdb/cockroach/pull/65383
-[#65387]: https://github.com/cockroachdb/cockroach/pull/65387
-[#65388]: https://github.com/cockroachdb/cockroach/pull/65388
-[#65392]: https://github.com/cockroachdb/cockroach/pull/65392
-[#65462]: https://github.com/cockroachdb/cockroach/pull/65462
-[#65471]: https://github.com/cockroachdb/cockroach/pull/65471
-[#65482]: https://github.com/cockroachdb/cockroach/pull/65482
-[#65531]: https://github.com/cockroachdb/cockroach/pull/65531
-[#65534]: https://github.com/cockroachdb/cockroach/pull/65534
-[#65542]: https://github.com/cockroachdb/cockroach/pull/65542
-[#65554]: https://github.com/cockroachdb/cockroach/pull/65554
-[#65605]: https://github.com/cockroachdb/cockroach/pull/65605
-[#65635]: https://github.com/cockroachdb/cockroach/pull/65635
-[#65636]: https://github.com/cockroachdb/cockroach/pull/65636
-[#65642]: https://github.com/cockroachdb/cockroach/pull/65642
-[#65671]: https://github.com/cockroachdb/cockroach/pull/65671
-[#65674]: https://github.com/cockroachdb/cockroach/pull/65674
-[#65698]: https://github.com/cockroachdb/cockroach/pull/65698
-[#65700]: https://github.com/cockroachdb/cockroach/pull/65700
-[#65723]: https://github.com/cockroachdb/cockroach/pull/65723
-[#65731]: https://github.com/cockroachdb/cockroach/pull/65731
-[#65752]: https://github.com/cockroachdb/cockroach/pull/65752
-[#65786]: https://github.com/cockroachdb/cockroach/pull/65786
-[#65802]: https://github.com/cockroachdb/cockroach/pull/65802
-[#65803]: https://github.com/cockroachdb/cockroach/pull/65803
-[#65805]: https://github.com/cockroachdb/cockroach/pull/65805
-[#65806]: https://github.com/cockroachdb/cockroach/pull/65806
-[#65807]: https://github.com/cockroachdb/cockroach/pull/65807
-[#65810]: https://github.com/cockroachdb/cockroach/pull/65810
-[#65833]: https://github.com/cockroachdb/cockroach/pull/65833
-[#65834]: https://github.com/cockroachdb/cockroach/pull/65834
-[#65860]: https://github.com/cockroachdb/cockroach/pull/65860
-[#66101]: https://github.com/cockroachdb/cockroach/pull/66101
diff --git a/src/current/_includes/releases/v21.1/v21.1.20.md b/src/current/_includes/releases/v21.1/v21.1.20.md
deleted file mode 100644
index 3788bd5f4ba..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.20.md
+++ /dev/null
@@ -1,17 +0,0 @@
-## v21.1.20
-
-Release Date: September 12, 2022
-
-
-

Bug fixes

-
-- Fixed a rare crash which could occur when [restarting a node](https://www.cockroachlabs.com/docs/v21.1/cockroach-start) after dropping tables. [#80568][#80568]
-- Fixed a bug where the rollback of [materialized view creation](https://www.cockroachlabs.com/docs/v21.1/views#materialized-views) left references inside dependent objects. [#82098][#82098]
-
-

Contributors

-
-This release includes 7 merged PRs by 6 authors.
-
-[#80568]: https://github.com/cockroachdb/cockroach/pull/80568
-[#82098]: https://github.com/cockroachdb/cockroach/pull/82098
diff --git a/src/current/_includes/releases/v21.1/v21.1.21.md b/src/current/_includes/releases/v21.1/v21.1.21.md
deleted file mode 100644
index d1fb467f2cd..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.21.md
+++ /dev/null
@@ -1,15 +0,0 @@
-## v21.1.21
-
-Release Date: September 15, 2022
-
-
-

Command-line changes

-
-- Added the `--log-config-vars` flag to the [`cockroach` CLI](https://www.cockroachlabs.com/docs/v21.1/cockroach-commands), which allows for environment variables to be specified for expansion within the logging configuration file. This change allows for a single logging configuration file to service an array of sinks without further manipulation of the configuration file. [#85173][#85173]
-
-

Contributors

-
-This release includes 1 merged PR by 2 authors.
-
-[#85173]: https://github.com/cockroachdb/cockroach/pull/85173
diff --git a/src/current/_includes/releases/v21.1/v21.1.3.md b/src/current/_includes/releases/v21.1/v21.1.3.md
deleted file mode 100644
index ce7616973a7..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.3.md
+++ /dev/null
@@ -1,114 +0,0 @@
-## v21.1.3
-
-Release Date: June 21, 2021
-
-
-

Security updates

-
-- Syntax errors in the host-based authentication (HBA) configuration in the cluster setting `server.host_based_authentication.configuration` are now logged on the [OPS channel](https://www.cockroachlabs.com/docs/v21.1/logging). [#66128][#66128] {% comment %}doc{% endcomment %}
-- The `User` and `ApplicationName` fields of structured events pertaining to SQL queries are now marked as non-sensitive when they contain certain values (`root`/`node` for `User`, and values starting with `$` for application names). [#66443][#66443] {% comment %}doc{% endcomment %}
-
-

Enterprise edition changes

-
-- Changefeeds with custom Kafka client configurations (using the `kafka_sink_config` object) that could lead to long delays in flushing messages will now produce an error. [#66265][#66265] {% comment %}doc{% endcomment %}
-- The `kafka_sink_config` object now supports a `version` configuration item to specify Kafka server versions. This is likely only necessary for old (Kafka 0.11/Confluent 3.3 or earlier) Kafka servers. Additionally, settings not specified in `kafka_sink_config` now retain their default values. [#66314][#66314] {% comment %}doc{% endcomment %}
-
-

SQL language changes

-
-- [`TRUNCATE`](https://www.cockroachlabs.com/docs/v21.1/truncate) is now less disruptive on tables with a large amount of concurrent traffic. [#65940][#65940] {% comment %}doc{% endcomment %}
-- Creating `STORED` or `VIRTUAL` [computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns) with expressions that reference [foreign key](https://www.cockroachlabs.com/docs/v21.1/foreign-key) columns is now allowed. [#66168][#66168] {% comment %}doc{% endcomment %}
-- The new function [`crdb_internal.get_vmodule`](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#system-info-functions) returns the current `vmodule` configuration on the node processing the request. [#63545][#63545] {% comment %}doc{% endcomment %}
-- The description string for the `random()` function now clarifies that there are at most 53 bits of randomness available; that is, the function is unsuitable for generating 64-bit random integer values. This behavior is similar to that of PostgreSQL. [#66128][#66128] {% comment %}doc{% endcomment %}
-- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) now displays information about the regions on which a statement was executed. [#66368][#66368] {% comment %}doc{% endcomment %}
-
-
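The computed-column change above can be sketched as follows. The schema is hypothetical and only illustrates that a `STORED` computed column may now reference a foreign key column:

```sql
CREATE TABLE customers (id INT PRIMARY KEY);

CREATE TABLE orders (
  id INT PRIMARY KEY,
  -- Foreign key column.
  customer_id INT REFERENCES customers (id),
  -- Stored computed column whose expression references the FK column;
  -- creating this was previously disallowed.
  customer_label STRING AS ('customer-' || customer_id::STRING) STORED
);
```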

Operational changes

-
-- Added a configurable limit to the number of intents collected by a `scan` before aborting, to prevent out-of-memory errors. The setting `storage.mvcc.max_intents_per_error` replaces `storage.sst_export.max_intents_per_error` and covers both `scan` and `export` commands. [#65923][#65923] {% comment %}doc{% endcomment %}
-- [`BACKUP`](https://www.cockroachlabs.com/docs/v21.1/backup) now puts backup data files in a `data` sub-directory of the `BACKUP` path instead of directly in the backup path. [#66161][#66161] {% comment %}doc{% endcomment %}
-
-

Command-line changes

-
-- The informational messages printed when [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) starts have been updated to clarify that certain information is only needed when accessing the demo cluster from another tool. [#66129][#66129] {% comment %}doc{% endcomment %}
-- [`cockroach demo`](https://www.cockroachlabs.com/docs/v21.1/cockroach-demo) and [`cockroach sql`](https://www.cockroachlabs.com/docs/v21.1/cockroach-sql) can now run client-side commands via the `-e` command-line flag. This makes it possible to use commands like `\dt` or `\hf` from a shell script. [#66326][#66326] {% comment %}doc{% endcomment %}
-
-

DB Console changes

-
-- Users can now reset SQL stats from the [DB Console](https://www.cockroachlabs.com/docs/v21.1/ui-overview). [#65916][#65916] {% comment %}doc{% endcomment %}
-- Removed shading on line graphs, improving legibility when viewing more than a few series on the same plot. [#66032][#66032] {% comment %}doc{% endcomment %}
-- Drag-to-zoom on metrics graphs now supports time ranges under 10 minutes. [#66032][#66032] {% comment %}doc{% endcomment %}
-- In some cases, the Execution Stats page would show a very high Overhead latency for a statement. This could happen due to multiple statements being parsed together or due to statement execution being retried. To avoid this, we no longer consider the time between when parsing ends and execution begins when determining service latency. [#66108][#66108] {% comment %}doc{% endcomment %}
-- Improved the style of the password input field for Safari. [#66134][#66134]
-- The metrics chart under Overview was renamed from `SQL Queries` to `SQL Statements` to match the naming used under SQL Metrics. [#66364][#66364] {% comment %}doc{% endcomment %}
-
-

Bug fixes

-
-- Fixed a bug which prevented adding columns to tables which contain data and use `NOT NULL` virtual columns. [#65973][#65973]
-- Fixed a bug in the DB Console where graphs for clusters with decommissioned nodes could show an empty series and data could be incorrectly attributed to the wrong nodes. [#66032][#66032]
-- Fixed a bug where queries on `REGIONAL BY ROW` tables could fail in the brief window in which a `DROP REGION` operation is in progress. [#65984][#65984]
-- Fixed a bug where a schema's privilege descriptor could be corrupted upon executing `ALTER DATABASE ... CONVERT TO SCHEMA`: privileges invalid on a schema were copied from the database, rendering the schema unusable. [#65993][#65993]
-- Fixed the error classification for duplicate index names where the later index was a `UNIQUE` index. [#64000][#64000]
-- Fixed the error classification for `ALTER TABLE ... ADD CONSTRAINT ... UNIQUE` with the same name as an existing index. [#64000][#64000]
-- Fixed a bug that made it less likely for range merges to succeed on clusters using multiple stores per node. [#65889][#65889]
-- Improved `TRUNCATE` operations to prevent contention issues. [#65940][#65940]
-- Improved garbage collection of stale replicas by proactively checking certain replicas that have lost contact with other voting replicas. [#65186][#65186]
-- Fixed a bug in `SHOW RANGES` which misattributed localities to nodes when using multiple stores. [#66037][#66037]
-- Queries run through the `EXECUTE` statement can now generate statement diagnostic bundles as expected. [#66098][#66098]
-- Previously, an `INSERT` causing a foreign key violation could result in an internal error in rare cases. The bug only affected the error response; any affected `INSERT`s (which would have been foreign-key violations) did not succeed. This bug, present since v21.1.0, has been fixed. [#66300][#66300]
-- `BACKUP` and other operations can now reuse a previously created S3 client session when operating on the same bucket, which can avoid `NoCredentialProviders` errors on EC2 when iterating with large incremental backups. [#66259][#66259]
-- The command exit status of `cockroach demo` and `cockroach sql` is now properly set to non-zero (error) after an error is encountered in a client-side command. Additionally, `cockroach sql` and `cockroach demo` now properly stop upon encountering an invalid configuration with `--set`, instead of starting to execute SQL statements after the invalid configuration. [#66326][#66326]
-- Improved the availability of the jobs table for reads in large, global clusters by running background tasks at low priority. [#66344][#66344]
-- Backups no longer risk blocking conflicting writes while being rate limited by the `kv.bulk_io_write.concurrent_export_requests` concurrency limit. [#66408][#66408]
-- The `soundex` and `st_difference` [built-in functions](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#built-in-functions) for fuzzy string matching now correctly handle `NULL` values. [#66302][#66302]
-
-

Performance improvements

-
-- Fixed an issue in the optimizer that prevented spatial predicates of the form `(column && value) = true` from being index-accelerated. These queries can now use a [spatial index](https://www.cockroachlabs.com/docs/v21.1/spatial-indexes) if one is available. [#65986][#65986]
-- The optimizer is now more efficient when planning queries on tables that have many columns and indexes. [#66304][#66304]
-- The `COCKROACHDB_REGISTRY` file is no longer rewritten whenever a new unencrypted file is created. [#66423][#66423] {% comment %}doc{% endcomment %}
-- After improvements, queries use up to 1MB less system memory per scan, [lookup join, index join, zigzag join, or inverted join](https://www.cockroachlabs.com/docs/v21.1/joins) in their query plans. This will result in improved memory performance for workloads with concurrent OLAP-style queries. [#66145][#66145]
-- Made improvements to prevent intra-query leaks during disk spilling that could cause the database to run out of memory, especially on tables with wide rows. [#66145][#66145]
-
-

Contributors

-
-This release includes 64 merged PRs by 35 authors.
-
-
-[#63545]: https://github.com/cockroachdb/cockroach/pull/63545
-[#64000]: https://github.com/cockroachdb/cockroach/pull/64000
-[#65186]: https://github.com/cockroachdb/cockroach/pull/65186
-[#65889]: https://github.com/cockroachdb/cockroach/pull/65889
-[#65916]: https://github.com/cockroachdb/cockroach/pull/65916
-[#65923]: https://github.com/cockroachdb/cockroach/pull/65923
-[#65940]: https://github.com/cockroachdb/cockroach/pull/65940
-[#65973]: https://github.com/cockroachdb/cockroach/pull/65973
-[#65984]: https://github.com/cockroachdb/cockroach/pull/65984
-[#65986]: https://github.com/cockroachdb/cockroach/pull/65986
-[#65993]: https://github.com/cockroachdb/cockroach/pull/65993
-[#66022]: https://github.com/cockroachdb/cockroach/pull/66022
-[#66032]: https://github.com/cockroachdb/cockroach/pull/66032
-[#66037]: https://github.com/cockroachdb/cockroach/pull/66037
-[#66098]: https://github.com/cockroachdb/cockroach/pull/66098
-[#66108]: https://github.com/cockroachdb/cockroach/pull/66108
-[#66128]: https://github.com/cockroachdb/cockroach/pull/66128
-[#66129]: https://github.com/cockroachdb/cockroach/pull/66129
-[#66134]: https://github.com/cockroachdb/cockroach/pull/66134
-[#66145]: https://github.com/cockroachdb/cockroach/pull/66145
-[#66161]: https://github.com/cockroachdb/cockroach/pull/66161
-[#66168]: https://github.com/cockroachdb/cockroach/pull/66168
-[#66259]: https://github.com/cockroachdb/cockroach/pull/66259
-[#66265]: https://github.com/cockroachdb/cockroach/pull/66265
-[#66300]: https://github.com/cockroachdb/cockroach/pull/66300
-[#66302]: https://github.com/cockroachdb/cockroach/pull/66302
-[#66304]: https://github.com/cockroachdb/cockroach/pull/66304
-[#66314]: https://github.com/cockroachdb/cockroach/pull/66314
-[#66326]: https://github.com/cockroachdb/cockroach/pull/66326
-[#66344]: https://github.com/cockroachdb/cockroach/pull/66344
-[#66364]: https://github.com/cockroachdb/cockroach/pull/66364
-[#66368]: https://github.com/cockroachdb/cockroach/pull/66368
-[#66408]: https://github.com/cockroachdb/cockroach/pull/66408
-[#66423]: https://github.com/cockroachdb/cockroach/pull/66423
-[#66443]: https://github.com/cockroachdb/cockroach/pull/66443
-[#66453]: https://github.com/cockroachdb/cockroach/pull/66453
-[#66508]: https://github.com/cockroachdb/cockroach/pull/66508
-[25c3d10a0]: https://github.com/cockroachdb/cockroach/commit/25c3d10a0
diff --git a/src/current/_includes/releases/v21.1/v21.1.4.md b/src/current/_includes/releases/v21.1/v21.1.4.md
deleted file mode 100644
index 91d0860e279..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.4.md
+++ /dev/null
@@ -1,88 +0,0 @@
-## v21.1.4
-
-Release Date: June 29, 2021
-
-{{site.data.alerts.callout_danger}}
-We recommend upgrading from this release to the [v21.1.5]({% link releases/v21.1.md %}#v21-1-5) bug fix release as soon as possible.
-{{site.data.alerts.end}}
-
-
-

Security updates

-
-- Previously, all the [logging](https://www.cockroachlabs.com/docs/v21.1/logging-overview) output to files or network sinks was disabled temporarily while an operator was using the `/debug/logspy` HTTP API, resulting in lost entries and a breach of auditability guarantees. This behavior has been corrected. [#66328][#66328]
-- CockroachDB now configures a maximum number of concurrent password verifications in the server process, across UI and SQL clients. This limit reduces the risk of DoS attacks or accidents due to misbehaving clients. By default, the maximum amount of concurrency is ~12% of the number of allocated CPU cores (as per `GOMAXPROCS`), with a minimum of 1 core. This default can be overridden using the environment variable `COCKROACH_MAX_BCRYPT_CONCURRENCY`. [#66367][#66367] {% comment %}doc{% endcomment %}
-
-

SQL language changes

-
-- Implemented the `ST_HasArc` [built-in function](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#spatial-functions). This adds better out-of-the-box support for [GeoServer](https://www.cockroachlabs.com/docs/v21.1/geoserver). [#66531][#66531] {% comment %}doc{% endcomment %}
-- Added a new internal [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), `jobs.cancel_update_limit`, for controlling how many [jobs](https://www.cockroachlabs.com/docs/v21.1/show-jobs) are cleaned up concurrently after query cancellation. [#66488][#66488] {% comment %}doc{% endcomment %}
-
-
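The two changes above can be sketched from a SQL shell. The geometry literal and the setting value below are illustrative examples only:

```sql
-- ST_HasArc reports whether a geometry contains a circular arc.
SELECT ST_HasArc('CIRCULARSTRING(0 0, 1 1, 2 0)'::GEOMETRY);

-- Adjust the internal cluster setting named above; 100 is an
-- arbitrary example value, not a recommendation.
SET CLUSTER SETTING jobs.cancel_update_limit = 100;
```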

Command-line changes

-
-- The [SQL shell](https://www.cockroachlabs.com/docs/v21.1/cockroach-sql) now formats times with time zones so that the minutes and seconds offsets are only shown if they are non-zero. Also, infinite floating point values are now formatted as `Infinity` rather than `Inf`. [#66130][#66130] {% comment %}doc{% endcomment %}
-- When [log entries](https://www.cockroachlabs.com/docs/v21.1/logging-overview) are written to disk, the first few header lines written at the start of every new file now report the configured [logging format](https://www.cockroachlabs.com/docs/v21.1/log-formats). [#66328][#66328] {% comment %}doc{% endcomment %}
-
-

API endpoint changes

-
-- The `/debug/logspy` HTTP API has changed. The endpoint now returns JSON data by default. If the previous format is desired, the user can pass the query argument `&flatten=1` to the `logspy` URL to obtain the previous flat text format (`crdb-v1`) instead. [#66328][#66328] This change is motivated as follows: {% comment %}doc{% endcomment %}
-    - The previous format, `crdb-v1`, cannot be parsed reliably.
-    - Using JSON entries guarantees that the text of each entry all fits on a single line of output (newline characters inside the messages are escaped). This makes filtering easier and more reliable.
-    - Using JSON enables the user to apply `jq` on the output, for example via `curl -s .../debug/logspy | jq ...`.
-- The `/debug/logspy` API no longer enables maximum logging verbosity automatically. To change the verbosity, use the new `/debug/vmodule` endpoint or pass the `&vmodule=` query parameter to the `/debug/logspy` endpoint. [#66328][#66328] For example, suppose you wish to run a 20s logspy session: {% comment %}doc{% endcomment %}
-    - Before: `curl 'https://.../debug/logspy?duration=20s&...'`.
-    - Now: `curl 'https://.../debug/logspy?duration=20s&vmodule=...'` OR `curl 'https://.../debug/vmodule?duration=22s&vmodule=...'` followed by `curl 'https://.../debug/logspy?duration=20s'`.
-    - As for the regular `vmodule` command-line flag, the maximum verbosity across all the source code can be selected with the pattern `*=4`.
-    - Note: at most one in-flight HTTP API request is allowed to modify the `vmodule` parameter. This maintains the invariant that the configuration restored at the end of each request is the same as when the request started.
-- The new `/debug/vmodule` API makes it possible for an operator to configure the logging verbosity in a similar way as the SQL [built-in function](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators) `crdb_internal.set_vmodule()`, or to query the current configuration as in `crdb_internal.get_vmodule()`. Additionally, any configuration change performed via this API can be automatically reverted after a configurable delay. [#66328][#66328] The API forms are: {% comment %}doc{% endcomment %}
-    - `/debug/vmodule`: Retrieve the current configuration.
-    - `/debug/vmodule?set=[vmodule config]&duration=[duration]`: Change the configuration to `[vmodule config]`. The previous configuration at the time the `/debug/vmodule` request started is restored after `[duration]`. This duration, if not specified, defaults to twice the default duration of a `logspy` request (currently, the `logspy` default duration is 5s, so the `vmodule` default duration is 10s). If the duration is zero or negative, the previous configuration is never restored.
-
-

DB Console changes

-
-- Fixed an issue with displaying more than 100 hours of remaining time on the [Jobs page](https://www.cockroachlabs.com/docs/v21.1/ui-jobs-page). [#66596][#66596]
-
-

Bug fixes

-
-- Minute timezone offsets are now only displayed in the wire protocol for `TimestampTZ` and `TimeTZ` values if they are non-zero. Previously, they would always be displayed. [#66130][#66130]
-- Fixed a bug where binary `TimeTZ` values were not decoded correctly when sent as a parameter in the wire protocol. [#66130][#66130]
-- CockroachDB's [SQL shell](https://www.cockroachlabs.com/docs/v21.1/cockroach-sql) now properly displays results of common array types, for example arrays of floats or arrays of strings. [#66130][#66130]
-- Fixed a bug where the `--log='file-defaults: {format: crdb-v1}'` flag was not handled properly. This bug existed since v21.1.0. [#66328][#66328]
-- Fixed a bug where log entries could be lost while the `/debug/logspy` HTTP API was being used. This bug had existed since CockroachDB v1.1. [#66328][#66328]
-- The binary encoding of decimals no longer has negative `dscale` values, which was preventing [Npgsql](https://www.npgsql.org) from reading some binary decimals from CockroachDB. [#66532][#66532]
-- Fixed a bug which prevented the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) from producing plans with [partial indexes](https://www.cockroachlabs.com/docs/v21.1/partial-indexes) when executing some prepared statements that contained placeholders, stable functions, or casts. This bug was present since partial indexes were added in v20.2.0. [#66634][#66634]
-- Fixed a bug which could have prevented [backups](https://www.cockroachlabs.com/docs/v21.1/backup) from being successfully [restored](https://www.cockroachlabs.com/docs/v21.1/restore). [#66616][#66616]
-- Fixed a bug where CockroachDB could crash when executing [`EXPLAIN (VEC)`](https://www.cockroachlabs.com/docs/v21.1/explain#vec-option) on some mutations. The bug is present only in the v21.1.1-v21.1.3 releases. [#66573][#66573]
-- Fixed a bug where CockroachDB could encounter an internal error when computing [window functions](https://www.cockroachlabs.com/docs/v21.1/window-functions) with the `ROWS` mode of window framing if the offsets were very large for the `OFFSET FOLLOWING` boundary type. [#66446][#66446]
-- Fixed a bug where using [`ADD COLUMN UNIQUE`](https://www.cockroachlabs.com/docs/v21.1/add-column) on [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v21.1/set-locality#regional-by-row) did not correctly add the [zone configs](https://www.cockroachlabs.com/docs/v21.1/configure-replication-zones) for the newly created column index. [#66696][#66696]
-- Fixed a bug where reading from Google Cloud Storage did not use the resuming reader, as a result of which some retryable errors were not treated as such, and the read would fail. [#66190][#66190]
-- Fixed a deadlock during [backups](https://www.cockroachlabs.com/docs/v21.1/backup) and [imports](https://www.cockroachlabs.com/docs/v21.1/import). [#66773][#66773]
-- Fixed incorrect accounting for statement/transaction sampled [execution statistics](https://www.cockroachlabs.com/docs/v21.1/explain-analyze). [#66790][#66790]
-- Fixed a bug causing [transactions](https://www.cockroachlabs.com/docs/v21.1/transactions) to be spuriously aborted in rare circumstances. [#66567][#66567]
-
-

DB Console

-
-- Fixed a CSS width calculation which was causing the multibar to not be visible in the [DB Console](https://www.cockroachlabs.com/docs/v21.1/ui-overview). [#66739][#66739]
-
-

Contributors

-
-This release includes 28 merged PRs by 22 authors.
-
-[#66130]: https://github.com/cockroachdb/cockroach/pull/66130
-[#66190]: https://github.com/cockroachdb/cockroach/pull/66190
-[#66328]: https://github.com/cockroachdb/cockroach/pull/66328
-[#66367]: https://github.com/cockroachdb/cockroach/pull/66367
-[#66446]: https://github.com/cockroachdb/cockroach/pull/66446
-[#66473]: https://github.com/cockroachdb/cockroach/pull/66473
-[#66488]: https://github.com/cockroachdb/cockroach/pull/66488
-[#66531]: https://github.com/cockroachdb/cockroach/pull/66531
-[#66532]: https://github.com/cockroachdb/cockroach/pull/66532
-[#66567]: https://github.com/cockroachdb/cockroach/pull/66567
-[#66573]: https://github.com/cockroachdb/cockroach/pull/66573
-[#66596]: https://github.com/cockroachdb/cockroach/pull/66596
-[#66616]: https://github.com/cockroachdb/cockroach/pull/66616
-[#66634]: https://github.com/cockroachdb/cockroach/pull/66634
-[#66696]: https://github.com/cockroachdb/cockroach/pull/66696
-[#66739]: https://github.com/cockroachdb/cockroach/pull/66739
-[#66773]: https://github.com/cockroachdb/cockroach/pull/66773
-[#66790]: https://github.com/cockroachdb/cockroach/pull/66790
diff --git a/src/current/_includes/releases/v21.1/v21.1.5.md b/src/current/_includes/releases/v21.1/v21.1.5.md
deleted file mode 100644
index 138358895e9..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.5.md
+++ /dev/null
@@ -1,15 +0,0 @@
-## v21.1.5
-
-Release Date: July 2, 2021
-
-{{site.data.alerts.callout_danger}}
-We recommend upgrading from [v21.1.4]({% link releases/v21.1.md %}#v21-1-4) to this bug fix release as soon as possible.
-{{site.data.alerts.end}}
-
-
-

Bug fixes

-
-- Fixed a panic that could occur in the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) when executing a prepared plan with placeholders. This could happen when one of the tables used by the query had [computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns) or a [partial index](https://www.cockroachlabs.com/docs/v21.1/partial-indexes). [#66883][#66883]
-
-[#66883]: https://github.com/cockroachdb/cockroach/pull/66883
diff --git a/src/current/_includes/releases/v21.1/v21.1.6.md b/src/current/_includes/releases/v21.1/v21.1.6.md
deleted file mode 100644
index 1c9dea1c4b0..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.6.md
+++ /dev/null
@@ -1,84 +0,0 @@
-## v21.1.6
-
-Release Date: July 20, 2021
-
-
-

General changes

-
-- Switched the `release-21.1` branch to point to a fork of the Google Cloud SDK at version 0.45.1 with an [upstream bug fix](https://github.com/googleapis/google-cloud-go/pull/4226). [#66808][#66808]
-
-

Enterprise edition changes

-
-- Improved the internal buffering of [changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) to be robust to bursts in traffic that exceed the throughput to the sink. [#67205][#67205]
-- [Incremental backups](https://www.cockroachlabs.com/docs/v21.1/take-full-and-incremental-backups#incremental-backups) to a cloud storage location that already contains large existing backups now find their derived destination without listing as many remote files. [#67286][#67286]
-
-

Operational changes

-
-- Added logs for important events during the server draining/shutdown process: (1) log when the server closes an existing connection while draining, (2) log when the server rejects a new connection while draining, and (3) log when the server cancels in-flight queries after waiting for the [`server.shutdown.query_wait`](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) duration to elapse while draining. [#66874][#66874] {% comment %}doc{% endcomment %}
-
-
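As a sketch of how the drain grace period mentioned above might be tuned, using the `server.shutdown.query_wait` cluster setting named in the note (the `15s` value is an arbitrary illustration, not a recommendation):

```sql
-- Illustrative only: give in-flight queries up to 15 seconds to finish
-- before the draining server cancels them (and logs that it did so).
SET CLUSTER SETTING server.shutdown.query_wait = '15s';

-- Inspect the current value.
SHOW CLUSTER SETTING server.shutdown.query_wait;
```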

DB Console changes

-
-- Fixed a bug preventing the [Custom Chart debug page](https://www.cockroachlabs.com/docs/v21.1/ui-custom-chart-debug-page) from loading on initial page load. [#66897][#66897]
-- Added a drag-to-select timescale feature to the [Custom Chart debug page](https://www.cockroachlabs.com/docs/v21.1/ui-custom-chart-debug-page). [#66594][#66594] {% comment %}doc{% endcomment %}
-
-

Bug fixes

-
-- CockroachDB now allows a node with lease preferences to drain gracefully. [#66712][#66712]
-- Fixed a panic that could occur in the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) when executing a prepared plan with placeholders. This could happen when one of the tables used by the query had computed columns or a partial index. [#66833][#66833]
-- Fixed a bug that caused graceful drain to call `time.sleep` multiple times, which cut into the time needed for range lease transfers. [#66851][#66851]
-- Reduced CPU usage of idle `cockroach` processes. [#66894][#66894]
-- Fixed an error that backup would produce in some [full backups with revision history](https://www.cockroachlabs.com/docs/v21.1/take-backups-with-revision-history-and-restore-from-a-point-in-time). Previously, some full backups would erroneously produce an error in the form of `batch timestamp must be after replica GC threshold`. [#66904][#66904]
-- CockroachDB now avoids interacting with [decommissioned nodes](https://www.cockroachlabs.com/docs/v21.1/remove-nodes#how-it-works) during DistSQL planning and consistency checking. [#66950][#66950]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) no longer interact poorly with large, abandoned transactions. Previously, this combination could result in a cascade of work during transaction cleanup that could starve out foreground traffic. [#66813][#66813]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) now properly invalidate cached range descriptors and retry when encountering [decommissioned nodes](https://www.cockroachlabs.com/docs/v21.1/remove-nodes#how-it-works). [#67013][#67013]
-- Fixed a bug where [metrics pages](https://www.cockroachlabs.com/docs/v21.1/ui-overview-dashboard) would lose their scroll position on chart data updates. [#67089][#67089]
-- Fixed a bug that caused internal errors when running an [`IMPORT TABLE`](https://www.cockroachlabs.com/docs/v21.1/import) statement with a virtual computed column to import a CSV file. This bug has been present since virtual computed columns were introduced in v21.1.0. [#67043][#67043]
-- Previously, the CLI would attribute some of the time spent running schema changes to network latency. This has now been fixed. Additionally, verbose timings in the CLI now show time spent running schema changes in a separate bucket. [#65719][#65719]
-- Fixed a statement buffer memory leak when using suspended portals. [#67372][#67372]
-- [Changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) on tables with array columns now support Avro. [#67433][#67433]
-- Avro encoding now supports collated string types. [#67433][#67433]
-- [`ENUM`](https://www.cockroachlabs.com/docs/v21.1/enum) columns can now be encoded in Avro [changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds). [#67433][#67433]
-- Avro [changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) now support `BIT` and `VARBIT` columns. [#67433][#67433]
-- [`INTERVAL`](https://www.cockroachlabs.com/docs/v21.1/interval) columns are now supported in Avro [changefeeds](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds). [#67433][#67433]
-
-

Performance improvements

-
-- CockroachDB now continues to generate histograms when table statistics collection reaches memory limits, instead of disabling histogram generation. [#67057][#67057] {% comment %}doc{% endcomment %}
-- The [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) now prefers performing a reverse scan over a forward scan + sort if the reverse scan eliminates the need for a sort and the plans are otherwise equivalent. This was already the behavior in most cases; some edge cases involving a small number of rows have been fixed. [#67388][#67388]
-- When choosing between index scans that are estimated to have the same number of rows, the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) now prefers indexes for which it has higher certainty about the maximum number of rows over indexes with more uncertainty in the estimated row count. This helps avoid choosing suboptimal plans for small tables or when statistics are stale. [#67388][#67388]
-
-
-
-

Contributors

-
-This release includes 47 merged PRs by 34 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- joesankey (first-time contributor)
-
-
-
-[#65719]: https://github.com/cockroachdb/cockroach/pull/65719
-[#66594]: https://github.com/cockroachdb/cockroach/pull/66594
-[#66712]: https://github.com/cockroachdb/cockroach/pull/66712
-[#66808]: https://github.com/cockroachdb/cockroach/pull/66808
-[#66813]: https://github.com/cockroachdb/cockroach/pull/66813
-[#66833]: https://github.com/cockroachdb/cockroach/pull/66833
-[#66851]: https://github.com/cockroachdb/cockroach/pull/66851
-[#66874]: https://github.com/cockroachdb/cockroach/pull/66874
-[#66894]: https://github.com/cockroachdb/cockroach/pull/66894
-[#66897]: https://github.com/cockroachdb/cockroach/pull/66897
-[#66904]: https://github.com/cockroachdb/cockroach/pull/66904
-[#66950]: https://github.com/cockroachdb/cockroach/pull/66950
-[#67013]: https://github.com/cockroachdb/cockroach/pull/67013
-[#67043]: https://github.com/cockroachdb/cockroach/pull/67043
-[#67057]: https://github.com/cockroachdb/cockroach/pull/67057
-[#67089]: https://github.com/cockroachdb/cockroach/pull/67089
-[#67205]: https://github.com/cockroachdb/cockroach/pull/67205
-[#67286]: https://github.com/cockroachdb/cockroach/pull/67286
-[#67356]: https://github.com/cockroachdb/cockroach/pull/67356
-[#67372]: https://github.com/cockroachdb/cockroach/pull/67372
-[#67388]: https://github.com/cockroachdb/cockroach/pull/67388
-[#67433]: https://github.com/cockroachdb/cockroach/pull/67433
-[a8ebcb0af]: https://github.com/cockroachdb/cockroach/commit/a8ebcb0af
diff --git a/src/current/_includes/releases/v21.1/v21.1.7.md b/src/current/_includes/releases/v21.1/v21.1.7.md
deleted file mode 100644
index aff262277cc..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.7.md
+++ /dev/null
@@ -1,92 +0,0 @@
-## v21.1.7
-
-Release Date: August 9, 2021
-
-
-

Security updates

-
-- The `--cert-principal-map` flag passed to [`cockroach` commands](https://www.cockroachlabs.com/docs/v21.1/cockroach-commands) now allows the certificate principal name to contain colons. [#67810][#67810] {% comment %}doc{% endcomment %}
-
-

General changes

-
-- Added a new [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) (`kv.transaction.reject_over_max_intents_budget`) that controls the behavior of CockroachDB when a transaction exceeds the locks-tracking memory budget set by the `kv.transaction.max_intents_bytes` cluster setting. If `kv.transaction.reject_over_max_intents_budget` is set to `true`, CockroachDB rejects the query that would push its transaction over the memory budget with an error (error code 53400 - "configuration limit exceeded"). Transactions that don't track their locks precisely are potentially destabilizing for the cluster, since cleaning up their locks can take considerable resources. Transactions that change many rows have the potential to run into this memory budget issue. [#67967][#67967] {% comment %}doc{% endcomment %}
-
-
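The interaction between the two settings above can be sketched as follows, using the setting names exactly as given in the note (the 4 MiB budget is an arbitrary illustration, not a recommendation):

```sql
-- Illustrative only: cap precise per-transaction lock tracking at 4 MiB...
SET CLUSTER SETTING kv.transaction.max_intents_bytes = 4194304;

-- ...and reject (error code 53400) any statement that would push its
-- transaction past that budget, rather than letting lock tracking
-- become imprecise and expensive to clean up.
SET CLUSTER SETTING kv.transaction.reject_over_max_intents_budget = true;
```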

SQL language changes

-
-- The remote [DistSQL](https://www.cockroachlabs.com/docs/v21.1/architecture/sql-layer#distsql) flows are now eagerly canceled if they were queued up and the query was canceled. [#66331][#66331]
-- Added a new cluster setting (`changefeed.slow_span_log_threshold`) that allows setting a cluster-wide default for slow span logging. [#68106][#68106] {% comment %}doc{% endcomment %}
-- Added a new [session variable](https://www.cockroachlabs.com/docs/v21.1/set-vars) (`enable_copying_partitioning_when_deinterleaving_table`) which will change the behavior of [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.1/alter-primary-key) when performing a change which retains the same primary key but removes an [`INTERLEAVE INTO` clause](https://www.cockroachlabs.com/docs/v21.1/interleave-in-parent). When this variable is set to `true` and an `ALTER PRIMARY KEY` is run that only removes an `INTERLEAVE INTO` clause, the [partitioning](https://www.cockroachlabs.com/docs/v21.1/partitioning) and [zone configuration](https://www.cockroachlabs.com/docs/v21.1/configure-zone) which applied to the root of the interleave will be applied to the new primary index. The default value for `enable_copying_partitioning_when_deinterleaving_table` is equal to the value set for the new cluster setting `sql.defaults.copy_partitioning_when_deinterleaving_table.enabled`. [#68114][#68114] {% comment %}doc{% endcomment %}
-
-
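A minimal sketch of the deinterleaving behavior described above; the table and column names are hypothetical, and only the session-variable usage is taken from the note:

```sql
-- Hypothetical illustration of the session variable described above.
SET enable_copying_partitioning_when_deinterleaving_table = true;

-- An ALTER PRIMARY KEY that keeps the same key columns but drops the
-- INTERLEAVE INTO clause; with the variable enabled, the interleave
-- root's partitioning and zone configuration are copied to the new
-- primary index.
ALTER TABLE child ALTER PRIMARY KEY USING COLUMNS (parent_id, id);
```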

Operational changes

-
-- [Histogram metrics](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer#control-histogram-collection) now store the total number of observations over time. [#68106][#68106] {% comment %}doc{% endcomment %}
-
-

DB Console changes

-
-- Fixed a bug causing the Summary Panel on the [Overview Dashboard](https://www.cockroachlabs.com/docs/v21.1/ui-overview-dashboard) to flicker. [#67365][#67365]
-- Fixed a bug preventing a redirect to the originally-requested DB Console page after user login. [#67858][#67858]
-- Users can now see time series metrics for disk spilling on the [Advanced Debug page](https://www.cockroachlabs.com/docs/v21.1/ui-debug-pages). [#68112][#68112] {% comment %}doc{% endcomment %}
-- Fixed a color mismatch of the node status badge on the [Cluster Overview page](https://www.cockroachlabs.com/docs/v21.1/ui-cluster-overview-page). [#68056][#68056]
-- Chart titles on the [Replication Dashboard](https://www.cockroachlabs.com/docs/v21.1/ui-replication-dashboard) were previously mislabeled as "per Store" when they were in fact "per Node". This has been fixed. [#67847][#67847] {% comment %}doc{% endcomment %}
-
-

Bug fixes

-
-- Fixed a bug causing the `ST_GeneratePoints` [built-in function](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators) to return a garbage value or an error when given an empty geometry or a negative nPoints input. [#67580][#67580]
-- Fixed a bug where [`DROP DATABASE`](https://www.cockroachlabs.com/docs/v21.1/drop-database) could return errors if the database contained [temporary views](https://www.cockroachlabs.com/docs/v21.1/views#temporary-views) in use in another session. [#67172][#67172]
-- Fixed a [storage-level](https://www.cockroachlabs.com/docs/v21.1/architecture/storage-layer) bug where Pebble would occasionally create excessively large SSTables, causing poor compaction performance and high read-amplification. This was especially likely after a manual offline compaction. [#67610][#67610]
-- [Correlated subqueries](https://www.cockroachlabs.com/docs/v21.1/subqueries#correlated-subqueries) that couldn't be decorrelated and that have their own subqueries are now executed correctly when supported. Note that this is an edge case of an edge case, so it's unlikely that users have hit this bug (it was found by randomized testing). [#67570][#67570]
-- Fixed a very rare, unexpected "index out of bounds" error from the [vectorized engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) when evaluating a `CASE` operator. [#67779][#67779]
-- Catching up [Raft](https://www.cockroachlabs.com/docs/v21.1/architecture/replication-layer#raft) followers on the Raft log is now more efficient in the presence of many large Raft log entries. This helps avoid situations where Raft leaders struggle to retain leadership while catching up their followers. [#67127][#67127]
-- Fixed a bug that allowed rows to be inserted into a table with a [`CHECK` constraint](https://www.cockroachlabs.com/docs/v21.1/check) that always evaluated to `false` (e.g., `CHECK (false)`). This bug was present since v21.1.0. [#67341][#67341]
-- Fixed a bug causing [changefeeds](https://www.cockroachlabs.com/docs/v21.1/changefeed-for) to sometimes get stuck. [#67968][#67968]
-- Previously, CockroachDB nodes would crash whenever the cluster setting `sql.trace.txn.enable_threshold` was changed to a non-zero value. The bug was introduced in v21.1.0. [#68027][#68027]
-- Fixed a deadlock that could occur when many replicas were rapidly queued for removal. [#65859][#65859]
-- Fixed two bugs which affected [geospatial queries](https://www.cockroachlabs.com/docs/v21.1/spatial-features) with the `st_distance` function. The first bug caused errors for filters of the form `st_distance(g1, g2, use_spheroid) = 0`. The second could cause incorrect results in some cases; it incorrectly transformed filters of the form `st_distance(g1, g2) = 0`, when `g1` and `g2` were geographies, to `st_intersects(g1, g2)`. This is not a valid transformation because `st_distance` makes spheroid-based calculations by default while `st_intersects` only makes sphere-based calculations. [#67392][#67392]
-- Fixed a bug causing a prepared statement to incorrectly reuse the query plan of a different prepared statement that had similar, but not identical, type hints. [#67688][#67688]
-- Fixed an issue with statistics estimation in the optimizer that could cause it to over-estimate the number of rows for some expressions and thus choose a sub-optimal plan. This issue could happen when multi-column statistics were used in combination with histograms, the query contained a predicate on two or more columns where the columns were highly correlated, and the selected values were very common according to the histograms. [#67998][#67998]
-- Previously, CockroachDB could encounter an internal error or crash when casting a [`NULL`](https://www.cockroachlabs.com/docs/v21.1/null-handling) [`JSON`](https://www.cockroachlabs.com/docs/v21.1/jsonb) value to [Geography or Geometry types](https://www.cockroachlabs.com/docs/v21.1/spatial-data). This is now fixed. [#67902][#67902]
-- [`INSERT`](https://www.cockroachlabs.com/docs/v21.1/insert) and [`UPDATE`](https://www.cockroachlabs.com/docs/v21.1/update) statements which operate on larger rows can now be split into batches using the `sql.mutations.mutation_batch_byte_size` setting. [#67958][#67958]
-- A rare bug that could result in a crash while creating a [debug.zip file](https://www.cockroachlabs.com/docs/v21.1/cockroach-debug-zip) has been fixed. The bug was only possible to hit if a debug.zip file was captured during a period of rapid lease movement. [#67728][#67728]
-- Previously, the [`GRANT`](https://www.cockroachlabs.com/docs/v21.1/grant) and [`REVOKE`](https://www.cockroachlabs.com/docs/v21.1/revoke) commands would incorrectly handle role names: CockroachDB treats role names as case-insensitive, but these commands did not normalize them. Now, `GRANT` and `REVOKE` normalize the names and are case-insensitive. [#67901][#67901] {% comment %}doc{% endcomment %}
-
-

Performance improvements

-
-- [Vectorized flows](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) can use less memory when sending and receiving data over the network. [#67609][#67609]
-- Range merges are no longer considered if a range has seen significant load over the previous 5 minutes, instead of being considered as long as a range had low load over the previous second. This change improves stability, as load-based splits will no longer rapidly disappear during transient throughput dips. [#65362][#65362]
-- A new cluster setting `sql.defaults.optimizer_improve_disjunction_selectivity.enabled` enables more accurate selectivity estimation of query filters with `OR` expressions. This improves query plans in some cases. This cluster setting is disabled by default. [#67730][#67730] {% comment %}doc{% endcomment %}
-
-

Contributors

-
-This release includes 47 merged PRs by 26 authors.
-
-[#65362]: https://github.com/cockroachdb/cockroach/pull/65362
-[#65859]: https://github.com/cockroachdb/cockroach/pull/65859
-[#66331]: https://github.com/cockroachdb/cockroach/pull/66331
-[#67127]: https://github.com/cockroachdb/cockroach/pull/67127
-[#67172]: https://github.com/cockroachdb/cockroach/pull/67172
-[#67341]: https://github.com/cockroachdb/cockroach/pull/67341
-[#67365]: https://github.com/cockroachdb/cockroach/pull/67365
-[#67392]: https://github.com/cockroachdb/cockroach/pull/67392
-[#67570]: https://github.com/cockroachdb/cockroach/pull/67570
-[#67580]: https://github.com/cockroachdb/cockroach/pull/67580
-[#67609]: https://github.com/cockroachdb/cockroach/pull/67609
-[#67610]: https://github.com/cockroachdb/cockroach/pull/67610
-[#67688]: https://github.com/cockroachdb/cockroach/pull/67688
-[#67728]: https://github.com/cockroachdb/cockroach/pull/67728
-[#67730]: https://github.com/cockroachdb/cockroach/pull/67730
-[#67779]: https://github.com/cockroachdb/cockroach/pull/67779
-[#67810]: https://github.com/cockroachdb/cockroach/pull/67810
-[#67847]: https://github.com/cockroachdb/cockroach/pull/67847
-[#67858]: https://github.com/cockroachdb/cockroach/pull/67858
-[#67901]: https://github.com/cockroachdb/cockroach/pull/67901
-[#67902]: https://github.com/cockroachdb/cockroach/pull/67902
-[#67958]: https://github.com/cockroachdb/cockroach/pull/67958
-[#67967]: https://github.com/cockroachdb/cockroach/pull/67967
-[#67968]: https://github.com/cockroachdb/cockroach/pull/67968
-[#67998]: https://github.com/cockroachdb/cockroach/pull/67998
-[#68027]: https://github.com/cockroachdb/cockroach/pull/68027
-[#68056]: https://github.com/cockroachdb/cockroach/pull/68056
-[#68106]: https://github.com/cockroachdb/cockroach/pull/68106
-[#68112]: https://github.com/cockroachdb/cockroach/pull/68112
-[#68114]: https://github.com/cockroachdb/cockroach/pull/68114
diff --git a/src/current/_includes/releases/v21.1/v21.1.8.md b/src/current/_includes/releases/v21.1/v21.1.8.md
deleted file mode 100644
index df7410674b5..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.8.md
+++ /dev/null
@@ -1,87 +0,0 @@
-## v21.1.8
-
-Release Date: August 30, 2021
-
-{% include releases/release-downloads-docker-image.md release=include.release advisory_key="a69874" %}
-

Security updates

-
-- The [node status retrieval endpoints over HTTP](https://www.cockroachlabs.com/docs/v21.1/monitoring-and-alerting) (`/_status/nodes`, `/_status/nodes/`, and the DB Console `/#/reports/nodes`) have been updated to require the [`admin`](https://www.cockroachlabs.com/docs/v21.1/authorization#admin-role) role from the requesting user. This ensures that operational details such as network addresses and command-line flags do not leak to unprivileged users. [#67068][#67068] {% comment %}doc{% endcomment %}
-
-

General changes

-
-- A recent release removed parts of some queries from the [debugging traces](https://www.cockroachlabs.com/docs/v21.1/show-trace) of those queries. This information (i.e., the execution of some low-level RPCs) has been re-included in the traces. [#68923][#68923] {% comment %}doc{% endcomment %}
-
-

Enterprise edition changes

-
-- The `kafka_sink_config` [changefeed option](https://www.cockroachlabs.com/docs/v21.1/create-changefeed) can now include `RequiredAcks`. [#69015][#69015] {% comment %}doc{% endcomment %}
-- Added a new per-node [changefeed](https://www.cockroachlabs.com/docs/v21.1/stream-data-out-of-cockroachdb-using-changefeeds) configuration [`changefeed.node_sink_throttle_config`](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) that can be used to throttle the rate of emission from the memory buffer, making it possible to limit the emission rate from changefeeds. [#68628][#68628] {% comment %}doc{% endcomment %}
-
-

SQL language changes

-
-- Added a new [`EXPLAIN` flag](https://www.cockroachlabs.com/docs/v21.1/explain), `MEMO`, to be used with [`EXPLAIN (OPT)`](https://www.cockroachlabs.com/docs/v21.1/explain#opt-option). When the `MEMO` flag is passed, a representation of the optimizer memo will be printed along with the best plan. The `MEMO` flag can be used in combination with other flags such as `CATALOG` and `VERBOSE`. For example, `EXPLAIN (OPT, MEMO, VERBOSE)` will print the memo along with verbose output for the best plan. [#67775][#67775] {% comment %}doc{% endcomment %}
-- Some queries with [lookup joins](https://www.cockroachlabs.com/docs/v21.1/joins#lookup-joins) and/or top K sorts are now more likely to be executed in a "local" manner with the [`distsql=auto`](https://www.cockroachlabs.com/docs/v21.1/set-vars#parameters) session variable when the newly introduced `sql.distsql.prefer_local_execution.enabled` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) is set to `true` (`false` is the default). [#68613][#68613] {% comment %}doc{% endcomment %}
-
-
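As a sketch of the flag combination described above (the table and predicate are hypothetical examples, not part of the feature):

```sql
-- Print the optimizer memo together with verbose output for the best plan.
-- "users" is a hypothetical table used only for illustration.
EXPLAIN (OPT, MEMO, VERBOSE) SELECT * FROM users WHERE id = 1;
```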

Operational changes

-
-- Introduced the `bulkio.index_backfill.checkpoint_interval` [cluster setting](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) to control the rate at which backfills checkpoint their progress. This is useful for controlling the backfill rate on large tables. [#68287][#68287] {% comment %}doc{% endcomment %}
-
-

Command-line changes

-
-- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v21.1/cockroach-debug-zip) no longer retrieves database and table details into separate files. The schema information is collected by means of `system.descriptors` and `crdb_internal.create_statements`. [#68984][#68984] {% comment %}doc{% endcomment %}
-
-

DB Console changes

-
-- Added a new column picker on the [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page) and [Statements Details pages](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page#statement-details-page) to select which columns to display in the statements table. This includes a new database column option for database name information, which is hidden by default on the [Statements page](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page). [#68294][#68294] {% comment %}doc{% endcomment %}
-- Fixed the [Transactions page](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) to show the correct value for implicit transactions. [#68294][#68294]
-- [Statements Details](https://www.cockroachlabs.com/docs/v21.1/ui-statements-page#statement-details-page), [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), and [Transactions pages](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page) now display information about the node and region on which a statement was executed. [#68317][#68317] {% comment %}doc{% endcomment %}
-- Fixed tooltip behavior on the [Sessions](https://www.cockroachlabs.com/docs/v21.2/ui-sessions-page), [Statements](https://www.cockroachlabs.com/docs/v21.2/ui-statements-page), and [Transactions pages](https://www.cockroachlabs.com/docs/v21.2/ui-transactions-page). [#68474][#68474] {% comment %}doc{% endcomment %}
-
-

Bug fixes

-
-- Fixed missing [foreign key](https://www.cockroachlabs.com/docs/v21.1/foreign-key) checks in some cases when there are multiple checks and the inserted data contains a `NULL` for one of the checks. [#68520][#68520]
-- Fixed a bug where using [`ST_segmentize`](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators#spatial-functions) on infinite coordinates would result in a crash. [#67848][#67848]
-- Fixed the [`COPY CSV`](https://www.cockroachlabs.com/docs/v21.1/copy-from) command so that it handles multiple records separated by newline characters. [#68623][#68623] {% comment %}doc{% endcomment %}
-- Fixed a bug that caused incorrect query results when querying tables with multiple [column families](https://www.cockroachlabs.com/docs/v21.1/column-families) and unique secondary [indexes](https://www.cockroachlabs.com/docs/v21.1/indexes). The bug only occurred if 1) [vectorized execution](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution) was enabled for the query, 2) the query scanned a [unique secondary index](https://www.cockroachlabs.com/docs/v21.1/indexes) that contained columns from more than one column family, and 3) the rows fetched by the query contained `NULL` values for some of the indexed columns. This bug was present since version v20.1. [#68239][#68239]
-- Fixed a crash in the `/debug/closedts-{sender,receiver}` [advanced debug pages](https://www.cockroachlabs.com/docs/v21.1/ui-debug-pages) if the last message of the closed timestamp side transport buffer was removed before rendering. [#68669][#68669]
-- Fixed an issue where terminating a CockroachDB process early in its startup routine might cause it to fail to start again, falsely reporting write-ahead log corruption. [#68897][#68897]
-- Fixed a regression in the [optimizer's cost model](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) that could cause it to choose sub-optimal plans when choosing between two non-unique index scans with different numbers of columns per index. [#68991][#68991]
-- Fixed a bug where CockroachDB could incorrectly evaluate [`LIKE` expressions](https://www.cockroachlabs.com/docs/v21.1/scalar-expressions#string-pattern-matching) when the pattern contained the escape character `\` if the expressions were executed via the [vectorized execution engine](https://www.cockroachlabs.com/docs/v21.1/vectorized-execution). [#68353][#68353]
-- File logs now respect the [`max-group-size`](https://www.cockroachlabs.com/docs/v21.1/configure-logs) configuration parameter again. This parameter had incorrectly become ignored in v21.1, resulting in arbitrarily large log directories. [#69007][#69007]
-- Fixed a bug in [`EXPORT`](https://www.cockroachlabs.com/docs/v21.1/export) where concurrent exports could overwrite each other. [#68392][#68392]
-
-

Performance improvements

-
-- Jobs no longer hold exclusive locks during the duration of their checkpointing transactions, which could result in long wait times when trying to run [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v21.1/show-jobs). [#68244][#68244] {% comment %}doc{% endcomment %}
-- [Lookup joins](https://www.cockroachlabs.com/docs/v21.1/joins#lookup-joins) on indexes with virtual columns are now considered by the optimizer. This should result in more efficient queries in many cases. Most notably, post-query uniqueness checks for unique indexes on virtual columns in [`REGIONAL BY ROW` tables](https://www.cockroachlabs.com/docs/v21.1/regional-tables#regional-by-row-tables) can now use the unique index rather than perform a full-table scan. [#68423][#68423] {% comment %}doc{% endcomment %}
-
-

Contributors

-
-This release includes 40 merged PRs by 26 authors.
-
-[#67068]: https://github.com/cockroachdb/cockroach/pull/67068
-[#67746]: https://github.com/cockroachdb/cockroach/pull/67746
-[#67775]: https://github.com/cockroachdb/cockroach/pull/67775
-[#67848]: https://github.com/cockroachdb/cockroach/pull/67848
-[#67883]: https://github.com/cockroachdb/cockroach/pull/67883
-[#68239]: https://github.com/cockroachdb/cockroach/pull/68239
-[#68244]: https://github.com/cockroachdb/cockroach/pull/68244
-[#68287]: https://github.com/cockroachdb/cockroach/pull/68287
-[#68294]: https://github.com/cockroachdb/cockroach/pull/68294
-[#68317]: https://github.com/cockroachdb/cockroach/pull/68317
-[#68353]: https://github.com/cockroachdb/cockroach/pull/68353
-[#68392]: https://github.com/cockroachdb/cockroach/pull/68392
-[#68423]: https://github.com/cockroachdb/cockroach/pull/68423
-[#68474]: https://github.com/cockroachdb/cockroach/pull/68474
-[#68510]: https://github.com/cockroachdb/cockroach/pull/68510
-[#68520]: https://github.com/cockroachdb/cockroach/pull/68520
-[#68613]: https://github.com/cockroachdb/cockroach/pull/68613
-[#68623]: https://github.com/cockroachdb/cockroach/pull/68623
-[#68628]: https://github.com/cockroachdb/cockroach/pull/68628
-[#68669]: https://github.com/cockroachdb/cockroach/pull/68669
-[#68897]: https://github.com/cockroachdb/cockroach/pull/68897
-[#68923]: https://github.com/cockroachdb/cockroach/pull/68923
-[#68984]: https://github.com/cockroachdb/cockroach/pull/68984
-[#68991]: https://github.com/cockroachdb/cockroach/pull/68991
-[#69007]: https://github.com/cockroachdb/cockroach/pull/69007
-[#69015]: https://github.com/cockroachdb/cockroach/pull/69015
diff --git a/src/current/_includes/releases/v21.1/v21.1.9.md b/src/current/_includes/releases/v21.1/v21.1.9.md
deleted file mode 100644
index e78bb892dfb..00000000000
--- a/src/current/_includes/releases/v21.1/v21.1.9.md
+++ /dev/null
@@ -1,81 +0,0 @@
-## v21.1.9
-
-Release Date: September 20, 2021
-
-
-
-

SQL language changes

- -- Added the [`bulkio.backup.proxy_file_writes.enabled`](https://www.cockroachlabs.com/docs/v21.1/cluster-settings) cluster setting to opt in to writing backup files outside the KV storage layer. [#69250][#69250] -- You can now alter the owner of the `crdb_internal_region` type, which is created when initiating a [multi-region database](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview). [#69759][#69759] - -
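As a sketch of the two changes above (the setting and type names come from this note; the role name `app_owner` is a hypothetical example, and any production change should follow the linked docs):

```sql
-- Opt in to writing backup files outside the KV storage layer
-- (disabled by default; setting name taken from the note above).
SET CLUSTER SETTING bulkio.backup.proxy_file_writes.enabled = true;

-- The ownership change now permitted on the type created for
-- multi-region databases; `app_owner` is a hypothetical role.
ALTER TYPE public.crdb_internal_region OWNER TO app_owner;
```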

Operational changes

- -- A new cluster setting, [`sql.mutations.max_row_size.log`](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), was added, which controls large row logging. Whenever a row larger than this size is written (or a single column family if multiple column families are in use) a `LargeRow` event is logged to the `SQL_PERF` channel (or a `LargeRowInternal` event is logged to `SQL_INTERNAL_PERF` if the row was added by an internal query). This could occur for `INSERT`, `UPSERT`, `UPDATE`, `CREATE TABLE AS`, `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, or `RESTORE` statements. `SELECT`, `DELETE`, `TRUNCATE`, and `DROP` are not affected by this setting. This setting is disabled by default. [#69946][#69946] -- A new cluster setting, [`sql.mutations.max_row_size.err`](https://www.cockroachlabs.com/docs/v21.1/cluster-settings), was added, which limits the size of rows written to the database (or individual column families, if multiple column families are in use). Statements trying to write a row larger than this will fail with a code 54000 (`program_limit_exceeded`) error. Internal queries writing a row larger than this will not fail, but will log a `LargeRowInternal` event to the `SQL_INTERNAL_PERF` channel. This limit is enforced for `INSERT`, `UPSERT`, and `UPDATE` statements. `CREATE TABLE AS`, `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, and `RESTORE` will not fail with an error, but will log `LargeRowInternal` events to the `SQL_INTERNAL_PERF` channel. `SELECT`, `DELETE`, `TRUNCATE`, and `DROP` are not affected by this limit. Note that existing rows violating the limit *cannot* be updated, unless the update shrinks the size of the row below the limit, but *can* be selected, deleted, altered, backed-up, and restored. 
For this reason, we recommend using the accompanying setting `sql.mutations.max_row_size.log` in conjunction with `SELECT pg_column_size()` queries to detect and fix any existing large rows before lowering `sql.mutations.max_row_size.err`. This setting is disabled by default. [#69946][#69946] -- The new cluster settings `sql.mutations.max_row_size.{log|err}` have been renamed to `sql.guardrails.max_row_size_{log|err}` for consistency with other settings and metrics. [#69946][#69946] -- Added four new metrics: `sql.guardrails.max_row_size_log.count`, `sql.guardrails.max_row_size_log.count.internal`, `sql.guardrails.max_row_size_err.count`, and `sql.guardrails.max_row_size_err.count.internal`. These metrics are incremented whenever a large row violates the corresponding `sql.guardrails.max_row_size_{log|err}` limit. [#69946][#69946] - -
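The detect-then-enforce workflow recommended above could look like the following sketch; the table `t`, column `payload`, and the 1 MiB threshold are illustrative assumptions, not part of the release:

```sql
-- 1. Log (but do not reject) rows above the planned threshold.
SET CLUSTER SETTING sql.guardrails.max_row_size_log = '1MiB';

-- 2. Find existing rows that would violate the limit, using
--    pg_column_size() as recommended above; table t and column
--    payload are hypothetical.
SELECT id, pg_column_size(payload) AS payload_bytes
  FROM t
 WHERE pg_column_size(payload) > 1048576;

-- 3. After shrinking or deleting the offending rows, enforce the limit.
SET CLUSTER SETTING sql.guardrails.max_row_size_err = '1MiB';
```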

DB Console changes

- -- A CES survey link component was added to support being able to get client feedback. [#68517][#68517] - -

Bug fixes

- -- Fixed a bug where running [`IMPORT PGDUMP`](https://www.cockroachlabs.com/docs/v21.1/migrate-from-postgres) with a UDT would result in a null pointer exception. This change makes it fail gracefully. [#69249][#69249] -- Fixed a bug where the `schedules.backup.succeeded` and `schedules.backup.failed` metrics would sometimes not be updated. [#69256][#69256] -- The correct format code will now be returned when using [`COPY FROM .. BINARY`](https://www.cockroachlabs.com/docs/v21.1/copy-from). [#69278][#69278] -- Fixed a bug where [`COPY FROM ... BINARY`](https://www.cockroachlabs.com/docs/v21.1/copy-from) would return an error if the input data was split across different messages. [#69278][#69278] -- Fixed a bug where [`COPY FROM ... CSV`](https://www.cockroachlabs.com/docs/v21.1/copy-from) would require each `CopyData` message to be split at the boundary of a record. The `COPY` protocol allows messages to be split at arbitrary points. [#69278][#69278] -- Fixed a bug where [`COPY FROM ... CSV`](https://www.cockroachlabs.com/docs/v21.1/copy-from) did not correctly handle octal byte escape sequences such as `\011` when using a [`BYTES` column](https://www.cockroachlabs.com/docs/v21.1/bytes). [#69278][#69278] -- Fixed an oversight in the data generator for TPC-H which was causing a smaller number of distinct values to be generated for p_type and p_container in the part table than the spec calls for. [#68710][#68710] -- Fixed a bug that was introduced in [v21.1.5]({% link releases/v21.1.md %}#v21-1-5), which prevented [nodes from being decommissioned](https://www.cockroachlabs.com/docs/v21.1/remove-nodes) in a cluster if the cluster had multiple nodes intermittently miss their liveness heartbeat. [#68552][#68552] -- Fixed a bug introduced in v21.1 where CockroachDB could return an internal error when performing streaming aggregation in some edge cases. 
[#69181][#69181] -- Fixed a bug that created non-partial unique constraints when a user attempted to create a partial unique constraint in [`ALTER TABLE` statements](https://www.cockroachlabs.com/docs/v21.1/alter-table). [#68745][#68745] -- Fixed a bug where a [`DROP VIEW ... CASCADE`](https://www.cockroachlabs.com/docs/v21.1/drop-view) could incorrectly result in "table ... is already being dropped" errors. [#68618][#68618] -- Fixed a bug introduced in v21.1 where the output of [`SHOW CREATE TABLE`](https://www.cockroachlabs.com/docs/v21.1/show-create) on tables with [hash-sharded indexes](https://www.cockroachlabs.com/docs/v21.1/hash-sharded-indexes) was not round-trippable. Executing the output would not create an identical table. This has been fixed by showing `CHECK` constraints that are automatically created for these indexes in the output of `SHOW CREATE TABLE`. [#69695][#69695] -- Fixed an internal error or "invalid cast" error in some cases involving cascading [updates](https://www.cockroachlabs.com/docs/v21.1/update). [#69180][#69180] -- Fixed a bug with cardinality estimation in the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) that was introduced in v21.1.0. This bug could cause inaccurate row count estimates in queries involving tables with a large number of null values. As a result, it was possible that the optimizer could choose a suboptimal plan. [#69125][#69125] -- Fixed a bug introduced in v20.2 that caused internal errors with set operations, like [`UNION`](https://www.cockroachlabs.com/docs/v21.1/selection-queries#union-combine-two-queries), and columns with tuple types that contained constant `NULL` values. [#69271][#69271] -- Added backwards compatibility between v21.1.x cluster versions and the [v21.1.8 cluster version]({% link releases/v21.1.md %}#v21-1-8).
[#69894][#69894] -- Fixed a bug where table stats collection issued via [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v21.1/explain-analyze) statements or via [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v21.1/create-statistics) statements without specifying the [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v21.1/as-of-system-time) option could run into `flow: memory budget exceeded`. [#69588][#69588] -- Fixed a bug where an internal error or a crash could occur when some [`crdb_internal` built-in functions](https://www.cockroachlabs.com/docs/v21.1/functions-and-operators) took string-like type arguments (e.g. `name`). [#69993][#69993] -- Fixed all broken links to the documentation from the DB Console. [#70117][#70117] -- Previously, when using [`ALTER PRIMARY KEY`](https://www.cockroachlabs.com/docs/v21.1/alter-primary-key) on a [`REGIONAL BY ROW`](https://www.cockroachlabs.com/docs/v21.1/multiregion-overview) table, the copied unique index from the old primary key would not have the correct [zone configurations](https://www.cockroachlabs.com/docs/v21.1/configure-zone) applied. This bug is now fixed. Users who have encountered this bug should re-create the index. [#69681][#69681] - -

Performance improvements

- -- Lookup joins on [partial indexes](https://www.cockroachlabs.com/docs/v21.1/partial-indexes) with [virtual computed columns](https://www.cockroachlabs.com/docs/v21.1/computed-columns) are now considered by the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer), resulting in more efficient query plans in some cases. [#69110][#69110] -- Updated the [optimizer](https://www.cockroachlabs.com/docs/v21.1/cost-based-optimizer) cost model so that, all else being equal, the optimizer prefers plans in which [`LIMIT`](https://www.cockroachlabs.com/docs/v21.1/limit-offset) operators are pushed as far down the tree as possible. This can reduce the number of rows that need to be processed by higher operators in the plan tree, improving performance. [#69977][#69977] - -
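To check where a `LIMIT` lands after the cost-model change above, one could inspect the plan tree with `EXPLAIN`; the table `orders` and its columns are illustrative assumptions:

```sql
-- With the updated cost model, the limit should appear as low in the
-- plan tree as possible, so fewer rows flow to parent operators.
-- Table orders and column ts are hypothetical.
EXPLAIN SELECT * FROM orders ORDER BY ts DESC LIMIT 10;
```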

Contributors

- -This release includes 40 merged PRs by 19 authors. - -[#68509]: https://github.com/cockroachdb/cockroach/pull/68509 -[#68517]: https://github.com/cockroachdb/cockroach/pull/68517 -[#68552]: https://github.com/cockroachdb/cockroach/pull/68552 -[#68618]: https://github.com/cockroachdb/cockroach/pull/68618 -[#68710]: https://github.com/cockroachdb/cockroach/pull/68710 -[#68745]: https://github.com/cockroachdb/cockroach/pull/68745 -[#69110]: https://github.com/cockroachdb/cockroach/pull/69110 -[#69125]: https://github.com/cockroachdb/cockroach/pull/69125 -[#69180]: https://github.com/cockroachdb/cockroach/pull/69180 -[#69181]: https://github.com/cockroachdb/cockroach/pull/69181 -[#69249]: https://github.com/cockroachdb/cockroach/pull/69249 -[#69250]: https://github.com/cockroachdb/cockroach/pull/69250 -[#69256]: https://github.com/cockroachdb/cockroach/pull/69256 -[#69271]: https://github.com/cockroachdb/cockroach/pull/69271 -[#69278]: https://github.com/cockroachdb/cockroach/pull/69278 -[#69305]: https://github.com/cockroachdb/cockroach/pull/69305 -[#69588]: https://github.com/cockroachdb/cockroach/pull/69588 -[#69695]: https://github.com/cockroachdb/cockroach/pull/69695 -[#69759]: https://github.com/cockroachdb/cockroach/pull/69759 -[#69894]: https://github.com/cockroachdb/cockroach/pull/69894 -[#69946]: https://github.com/cockroachdb/cockroach/pull/69946 -[#69977]: https://github.com/cockroachdb/cockroach/pull/69977 -[#69993]: https://github.com/cockroachdb/cockroach/pull/69993 -[#70117]: https://github.com/cockroachdb/cockroach/pull/70117 -[#69681]: https://github.com/cockroachdb/cockroach/pull/69681 -[5c9ab2a9f]: https://github.com/cockroachdb/cockroach/commit/5c9ab2a9f -[df88282e3]: https://github.com/cockroachdb/cockroach/commit/df88282e3 diff --git a/src/current/_includes/releases/v21.2/v21.2.0.md b/src/current/_includes/releases/v21.2/v21.2.0.md index abd77ac5493..ca3ecf3e9de 100644 --- a/src/current/_includes/releases/v21.2/v21.2.0.md +++ 
b/src/current/_includes/releases/v21.2/v21.2.0.md @@ -91,7 +91,7 @@ Core | **Automatic ballast files** | CockroachDB now automatica Before [upgrading to CockroachDB v21.2](https://www.cockroachlabs.com/docs/v21.2/upgrade-cockroach-version), be sure to review the following backward-incompatible changes and adjust your deployment as necessary. -- Interleaved tables and interleaved indexes have been removed. Before upgrading to v21.2, [convert interleaved tables](https://www.cockroachlabs.com/docs/v21.1/interleave-in-parent#convert-interleaved-tables) and [replace interleaved indexes](https://www.cockroachlabs.com/docs/v21.1/interleave-in-parent#replace-interleaved-indexes). Clusters with interleaved tables and indexes cannot finalize the v21.2 upgrade. +- Interleaved tables and interleaved indexes have been removed. Before upgrading to v21.2, convert interleaved tables and replace interleaved indexes. Clusters with interleaved tables and indexes cannot finalize the v21.2 upgrade. - Previously, CockroachDB only supported the YMD format for parsing timestamps from strings. It now also supports the MDY format to better align with PostgreSQL. A timestamp such as `1-1-18`, which was previously interpreted as `2001-01-18`, will now be interpreted as `2018-01-01`. To continue interpreting the timestamp in the YMD format, the first number can be represented with 4 digits, `2001-1-18`. - The deprecated [cluster setting](https://www.cockroachlabs.com/docs/v21.2/cluster-settings) `cloudstorage.gs.default.key` has been removed, and the behavior of the `AUTH` parameter in Google Cloud Storage [`BACKUP`](https://www.cockroachlabs.com/docs/v21.2/backup) and `IMPORT` URIs has been changed. The default behavior is now that of `AUTH=specified`, which uses the credentials passed in the `CREDENTIALS` parameter, and the previous default behavior of using the node's implicit access (via its machine account or role) now requires explicitly passing `AUTH=implicit`. 
- Switched types from `TEXT` to `"char"` for compatibility with PostgreSQL in the following columns: `pg_constraint` (`confdeltype`, `confmatchtype`, `confudptype`, `contype`) `pg_operator` (`oprkind`), `pg_prog` (`proargmodes`), `pg_rewrite` (`ev_enabled`, `ev_type`), and `pg_trigger` (`tgenabled`). diff --git a/src/current/_includes/sidebar-data-v21.1.json b/src/current/_includes/sidebar-data-v21.1.json deleted file mode 100644 index ff6d4ba51f6..00000000000 --- a/src/current/_includes/sidebar-data-v21.1.json +++ /dev/null @@ -1,1316 +0,0 @@ -[ - { - "title": "Docs Home", - "is_top_level": true, - "urls": [ - "/" - ] - }, - { - "title": "Quickstart", - "is_top_level": true, - "urls": [ - "/cockroachcloud/quickstart.html", - "/cockroachcloud/quickstart-trial-cluster.html" - ] - }, - {% include_cached sidebar-data-cockroachcloud.json %}, - { - "title": "CockroachDB Self-Hosted", - "is_top_level": true, - "urls": [ - "/${VERSION}/index.html" - ], - "items": [ - { - "title": "Get Started", - "items": [ - { - "title": "Install CockroachDB", - "urls": [ - "/${VERSION}/install-cockroachdb.html", - "/${VERSION}/install-cockroachdb-mac.html", - "/${VERSION}/install-cockroachdb-linux.html", - "/${VERSION}/install-cockroachdb-windows.html" - ] - }, - { - "title": "Start a Local Cluster", - "items": [ - { - "title": "From Binary", - "urls": [ - "/${VERSION}/secure-a-cluster.html", - "/${VERSION}/start-a-local-cluster.html" - ] - }, - { - "title": "In Kubernetes", - "urls": [ - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes.html", - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes-insecure.html" - ] - }, - { - "title": "In Docker", - "urls": [ - "/${VERSION}/start-a-local-cluster-in-docker-mac.html", - "/${VERSION}/start-a-local-cluster-in-docker-linux.html", - "/${VERSION}/start-a-local-cluster-in-docker-windows.html" - ] - }, - { - "title": "Simulate a Multi-Region Cluster on localhost", - "urls": [ - 
"/${VERSION}/simulate-a-multi-region-cluster-on-localhost.html" - ] - } - ] - }, - { - "title": "Learn CockroachDB SQL", - "urls": [ - "/${VERSION}/learn-cockroachdb-sql.html" - ] - }, - { - "title": "Explore Database Features", - "items": [ - { - "title": "Replication & Rebalancing", - "urls": [ - "/${VERSION}/demo-replication-and-rebalancing.html" - ] - }, - { - "title": "Fault Tolerance & Recovery", - "urls": [ - "/${VERSION}/demo-fault-tolerance-and-recovery.html" - ] - }, - { - "title": "Multi-Region Performance", - "urls": [ - "/${VERSION}/demo-low-latency-multi-region-deployment.html" - ] - }, - { - "title": "Serializable Transactions", - "urls": [ - "/${VERSION}/demo-serializable.html" - ] - }, - { - "title": "Spatial Data", - "urls": [ - "/${VERSION}/spatial-tutorial.html" - ] - }, - { - "title": "Cross-Cloud Migration", - "urls": [ - "/${VERSION}/demo-automatic-cloud-migration.html" - ] - }, - { - "title": "Orchestration with Kubernetes", - "urls": [ - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes.html" - ] - }, - { - "title": "JSON Support", - "urls": [ - "/${VERSION}/demo-json-support.html" - ] - }, - { - "title": "SQL Tuning with EXPLAIN", - "urls": [ - "/${VERSION}/sql-tuning-with-explain.html" - ] - } - ] - } - ] - }, - { - "title": "Develop", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/developer-guide-overview.html" - ] - }, - { - "title": "Connect to CockroachDB", - "items": [ - { - "title": "Install a driver or ORM", - "urls": [ - "/${VERSION}/install-client-drivers.html" - ] - }, - { - "title": "Connect to a Cluster", - "urls": [ - "/${VERSION}/connect-to-the-database.html", - "/${VERSION}/connect-to-the-database-cockroachcloud.html" - ] - }, - { - "title": "Use Connection Pools", - "urls": [ - "/${VERSION}/connection-pooling.html" - ] - } - ] - }, - { - "title": "Design a Database Schema", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/schema-design-overview.html" - ] - }, - { - "title": 
"Create a Database", - "urls": [ - "/${VERSION}/schema-design-database.html" - ] - }, - { - "title": "Create a User-defined Schema", - "urls": [ - "/${VERSION}/schema-design-schema.html" - ] - }, - { - "title": "Create a Table", - "urls": [ - "/${VERSION}/schema-design-table.html" - ] - }, - { - "title": "Add Secondary Indexes", - "urls": [ - "/${VERSION}/schema-design-indexes.html" - ] - }, - { - "title": "Update a Database Schema", - "items": [ - { - "title": "Change and Remove Objects", - "urls": [ - "/${VERSION}/schema-design-update.html" - ] - }, - { - "title": "Online Schema Changes", - "urls": [ - "/${VERSION}/online-schema-changes.html" - ] - } - ] - }, - { - "title": "Advanced Schema Design", - "items": [ - { - "title": "Use Computed Columns", - "urls": [ - "/${VERSION}/computed-columns.html" - ] - }, - { - "title": "Group Columns into Families", - "urls": [ - "/${VERSION}/column-families.html" - ] - }, - { - "title": "Index a Subset of Rows", - "urls": [ - "/${VERSION}/partial-indexes.html" - ] - }, - { - "title": "Index Sequential Keys", - "urls": [ - "/${VERSION}/hash-sharded-indexes.html" - ] - }, - { - "title": "Index JSON and Array Data", - "urls": [ - "/${VERSION}/inverted-indexes.html" - ] - }, - { - "title": "Index Spatial Data", - "urls": [ - "/${VERSION}/spatial-indexes.html" - ] - }, - { - "title": "Scale to Multi-region", - "urls": [ - "/${VERSION}/multiregion-scale-application.html" - ] - } - ] - } - ] - }, - { - "title": "Write Data", - "items": [ - { - "title": "Insert Data", - "urls": [ - "/${VERSION}/insert-data.html" - ] - }, - { - "title": "Update Data", - "urls": [ - "/${VERSION}/update-data.html" - ] - }, - { - "title": "Bulk-update Data", - "urls": [ - "/${VERSION}/bulk-update-data.html" - ] - }, - { - "title": "Delete Data", - "urls": [ - "/${VERSION}/delete-data.html" - ] - }, - { - "title": "Bulk-delete Data", - "urls": [ - "/${VERSION}/bulk-delete-data.html" - ] - } - ] - }, - { - "title": "Read Data", - "items": [ - { - "title": 
"Select Rows of Data", - "urls": [ - "/${VERSION}/query-data.html" - ] - }, - { - "title": "Reuse Query Results", - "items": [ - { - "title": "Reusable Views", - "urls": [ - "/${VERSION}/views.html" - ] - }, - { - "title": "Subqueries", - "urls": [ - "/${VERSION}/subqueries.html" - ] - } - ] - }, - { - "title": "Create Temporary Tables", - "urls": [ - "/${VERSION}/temporary-tables.html" - ] - }, - { - "title": "Paginate Results", - "urls": [ - "/${VERSION}/pagination.html" - ] - }, - { - "title": "Read Historical Data", - "items": [ - { - "title": "AS OF SYSTEM TIME", - "urls": [ - "/${VERSION}/as-of-system-time.html" - ] - }, - { - "title": "Follower Reads", - "urls": [ - "/${VERSION}/follower-reads.html" - ] - } - ] - }, - { - "title": "Query Spatial Data", - "urls": [ - "/${VERSION}/query-spatial-data.html" - ] - } - ] - }, - { - "title": "Transactions", - "urls": [ - "/${VERSION}/transactions.html" - ] - }, - { - "title": "Test Your Application Locally", - "urls": [ - "/${VERSION}/local-testing.html" - ] - }, - { - "title": "Debug Your Application", - "items": - [ - { - "title": "Log Events", - "urls": [ - "/${VERSION}/logging-overview.html" - ] - }, - { - "title": "Monitor CockroachDB Apps in the DB Console", - "urls": [ - "/${VERSION}/ui-overview.html" - ] - }, - { - "title": "Troubleshoot Common Problems", - "urls": [ - "/${VERSION}/error-handling-and-troubleshooting.html" - ] - } - ] - }, - { - "title": "Optimize Performance", - "items": - [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/make-queries-fast.html" - ] - }, - { - "title": "Performance Best Practices", - "urls": [ - "/${VERSION}/performance-best-practices-overview.html" - ] - }, - { - "title": "Use the EXPLAIN statement", - "urls": [ - "/${VERSION}/sql-tuning-with-explain.html" - ] - }, - { - "title": "Performance tuning recipes", - "urls": [ - "/${VERSION}/performance-recipes.html" - ] - } - ] - }, - { - "title": "Example Apps", - "items": [ - { - "title": "Overview", - "urls": [ - 
"/${VERSION}/example-apps.html" - ] - }, - { - "title": "Simple CRUD", - "items": [ - { - "title": "Go", - "urls": [ - "/${VERSION}/build-a-go-app-with-cockroachdb.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-gorm.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-pq.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-upperdb.html" - ] - }, - { - "title": "Java", - "urls": [ - "/${VERSION}/build-a-java-app-with-cockroachdb.html", - "/${VERSION}/build-a-java-app-with-cockroachdb-hibernate.html", - "/${VERSION}/build-a-java-app-with-cockroachdb-jooq.html", - "/${VERSION}/build-a-spring-app-with-cockroachdb-mybatis.html" - ] - }, - { - "title": "JavaScript/TypeScript (Node.js)", - "urls": [ - "/${VERSION}/build-a-nodejs-app-with-cockroachdb.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-sequelize.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-knexjs.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-prisma.html", - "/${VERSION}/build-a-typescript-app-with-cockroachdb.html" - ] - }, - { - "title": "Python", - "urls": [ - "/${VERSION}/build-a-python-app-with-cockroachdb.html", - "/${VERSION}/build-a-python-app-with-cockroachdb-sqlalchemy.html", - "/${VERSION}/build-a-python-app-with-cockroachdb-django.html" - ] - }, - { - "title": "Ruby", - "urls": [ - "/${VERSION}/build-a-ruby-app-with-cockroachdb.html", - "/${VERSION}/build-a-ruby-app-with-cockroachdb-activerecord.html" - ] - }, - { - "title": "C# (.NET)", - "urls": [ - "/${VERSION}/build-a-csharp-app-with-cockroachdb.html" - ] - }, - { - "title": "Rust", - "urls": [ - "/${VERSION}/build-a-rust-app-with-cockroachdb.html" - ] - } - ] - }, - { - "title": "Roach Data", - "items": [ - { - "title": "Spring Boot with JDBC", - "urls": [ - "/${VERSION}/build-a-spring-app-with-cockroachdb-jdbc.html" - ] - }, - { - "title": "Spring Boot with JPA", - "urls": [ - "/${VERSION}/build-a-spring-app-with-cockroachdb-jpa.html" - ] - } - ] - }, - { - "title": "MovR", - "items": [ - { - "title": 
"Overview", - "urls": [ - "/${VERSION}/movr.html" - ] - }, - { - "title": "Global Application", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/movr-flask-overview.html" - ] - }, - { - "title": "Global Application Use-case", - "urls": [ - "/${VERSION}/movr-flask-use-case.html" - ] - }, - { - "title": "Multi-region Database Schema", - "urls": [ - "/${VERSION}/movr-flask-database.html" - ] - }, - { - "title": "Set up a Development Environment", - "urls": [ - "/${VERSION}/movr-flask-setup.html" - ] - }, - { - "title": "Develop a Global Application", - "urls": [ - "/${VERSION}/movr-flask-application.html" - ] - }, - { - "title": "Deploy a Global Application", - "urls": [ - "/${VERSION}/movr-flask-deployment.html" - ] - } - ] - } - ] - } - ] - }, - { - "title": "Third-Party Database Tools", - "items": [ - { - "title": "Supported by Cockroach Labs", - "urls": [ - "/${VERSION}/third-party-database-tools.html" - ] - }, - { - "title": "Supported by the Community", - "urls": [ - "/${VERSION}/community-tooling.html" - ] - }, - { - "title": "Tutorials", - "items": [ - { - "title": "Schema Migration Tools", - "items": [ - { - "title": "Alembic", - "urls": [ - "/${VERSION}/alembic.html" - ] - }, - { - "title": "Flyway", - "urls": [ - "/${VERSION}/flyway.html" - ] - }, - { - "title": "Liquibase", - "urls": [ - "/${VERSION}/liquibase.html" - ] - } - ] - }, - { - "title": "GUIs & IDEs", - "items": [ - { - "title": "DBeaver GUI", - "urls": [ - "/${VERSION}/dbeaver.html" - ] - }, - { - "title": "IntelliJ IDEA", - "urls": [ - "/${VERSION}/intellij-idea.html" - ] - } - ] - }, - { - "title": "Application Deployment Tools", - "items": [ - { - "title": "Google Cloud Run", - "urls": [ - "/${VERSION}/deploy-app-gcr.html" - ] - }, - { - "title": "AWS Lambda", - "urls": [ - "/${VERSION}/deploy-lambda-function.html" - ] - } - ] - } - ] - } - ] - } - ] - }, - { - "title": "Deploy", - "items": [ - { - "title": "Production Checklist", - "urls": [ - 
"/${VERSION}/recommended-production-settings.html" - ] - }, - { - "title": "Multi-region Capabilities", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/multiregion-overview.html" - ] - }, - { - "title": "Choosing a multi-region configuration", - "urls": [ - "/${VERSION}/choosing-a-multi-region-configuration.html" - ] - }, - { - "title": "When to use ZONE vs. REGION Survival Goals", - "urls": [ - "/${VERSION}/when-to-use-zone-vs-region-survival-goals.html" - ] - }, - { - "title": "When to use REGIONAL vs. GLOBAL tables", - "urls": [ - "/${VERSION}/when-to-use-regional-vs-global-tables.html" - ] - }, - { - "title": "Data Domiciling", - "urls": [ - "/${VERSION}/data-domiciling.html" - ] - }, - { - "title": "Migrate to Multi-region SQL", - "urls": [ - "/${VERSION}/migrate-to-multiregion-sql.html" - ] - }, - { - "title": "Topology Patterns", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/topology-patterns.html" - ] - }, - { - "title": "Development", - "urls": [ - "/${VERSION}/topology-development.html" - ] - }, - { - "title": "Basic Production", - "urls": [ - "/${VERSION}/topology-basic-production.html" - ] - }, - { - "title": "Regional Tables", - "urls": [ - "/${VERSION}/regional-tables.html" - ] - }, - { - "title": "Global Tables", - "urls": [ - "/${VERSION}/global-tables.html" - ] - }, - { - "title": "Follower Reads", - "urls": [ - "/${VERSION}/topology-follower-reads.html" - ] - }, - { - "title": "Follow-the-Workload", - "urls": [ - "/${VERSION}/topology-follow-the-workload.html" - ] - } - ] - } - ] - }, - { - "title": "Manual Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/manual-deployment.html" - ] - }, - { - "title": "On-Premises", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-premises.html", - "/${VERSION}/deploy-cockroachdb-on-premises-insecure.html" - ] - }, - { - "title": "AWS", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-aws.html", - 
"/${VERSION}/deploy-cockroachdb-on-aws-insecure.html" - ] - }, - { - "title": "Azure", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure.html", - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure-insecure.html" - ] - }, - { - "title": "Digital Ocean", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-digital-ocean.html", - "/${VERSION}/deploy-cockroachdb-on-digital-ocean-insecure.html" - ] - }, - { - "title": "Google Cloud Platform GCE", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform.html", - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform-insecure.html" - ] - } - ] - }, - { - "title": "Orchestrated Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/orchestration.html" - ] - }, - { - "title": "Kubernetes Single-Cluster Deployment", - "urls": [ - "/${VERSION}/deploy-cockroachdb-with-kubernetes.html", - "/${VERSION}/deploy-cockroachdb-with-kubernetes-insecure.html" - ] - }, - { - "title": "Operate CockroachDB on Kubernetes", - "urls": [ - "/${VERSION}/operate-cockroachdb-kubernetes.html" - ] - }, - { - "title": "Monitor CockroachDB on Kubernetes", - "urls": [ - "/${VERSION}/monitor-cockroachdb-kubernetes.html" - ] - }, - { - "title": "OpenShift Single-Cluster Deployment", - "urls": [ - "/${VERSION}/deploy-cockroachdb-with-kubernetes-openshift.html" - ] - }, - { - "title": "Kubernetes Multi-Cluster Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html" - ] - }, - { - "title": "Kubernetes Performance Optimization", - "urls": [ - "/${VERSION}/kubernetes-performance.html" - ] - } - ] - }, - { - "title": "Back Up and Restore Data", - "items": [ - { - "title": "Full and Incremental Backups", - "urls": [ - "/${VERSION}/take-full-and-incremental-backups.html" - ] - }, - { - "title": "Backups with Revision History and Point-in-time Restore", - "urls": [ - "/${VERSION}/take-backups-with-revision-history-and-restore-from-a-point-in-time.html" - ] - }, - { - 
"title": "Encrypted Backup and Restore", - "urls": [ - "/${VERSION}/take-and-restore-encrypted-backups.html" - ] - }, - { - "title": "Locality-aware Backup and Restore", - "urls": [ - "/${VERSION}/take-and-restore-locality-aware-backups.html" - ] - }, - { - "title": "Scheduled Backups", - "urls": [ - "/${VERSION}/manage-a-backup-schedule.html" - ] - } - ] - }, - { - "title": "File Storage for Bulk Operations", - "items": [ - { - "title": "Cloud Storage", - "urls": [ - "/${VERSION}/use-cloud-storage-for-bulk-operations.html" - ] - }, - { - "title": "Userfile Storage", - "urls": [ - "/${VERSION}/use-userfile-for-bulk-operations.html" - ] - }, - { - "title": "Local File Server", - "urls": [ - "/${VERSION}/use-a-local-file-server-for-bulk-operations.html" - ] - } - ] - }, - { - "title": "Performance", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/performance.html" - ] - }, - { - "title": "Benchmarking Instructions", - "urls": [ - "/${VERSION}/performance-benchmarking-with-tpcc-local.html", - "/${VERSION}/performance-benchmarking-with-tpcc-small.html", - "/${VERSION}/performance-benchmarking-with-tpcc-medium.html", - "/${VERSION}/performance-benchmarking-with-tpcc-large.html" - ] - }, - { - "title": "Tuning Best Practices", - "urls": [ - "/${VERSION}/performance-best-practices-overview.html" - ] - }, - { - "title": "Performance tuning recipes", - "urls": [ - "/${VERSION}/performance-recipes.html" - ] - }, - { - "title": "Improving statement performance", - "urls": [ - "/${VERSION}/make-queries-fast.html" - ] - }, - { - "title": "Tuning with EXPLAIN", - "urls": [ - "/${VERSION}/sql-tuning-with-explain.html" - ] - } - ] - }, - { - "title": "Security", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/security-overview.html" - ] - }, - { - "title": "Authentication", - "urls": [ - "/${VERSION}/authentication.html" - ] - }, - { - "title": "Encryption", - "urls": [ - "/${VERSION}/encryption.html" - ] - }, - { - "title": "Authorization", - 
"urls": [ - "/${VERSION}/authorization.html" - ] - }, - { - "title": "SQL Audit Logging", - "urls": [ - "/${VERSION}/sql-audit-logging.html" - ] - }, - { - "title": "GSSAPI Authentication", - "urls": [ - "/${VERSION}/gssapi_authentication.html" - ] - }, - { - "title": "Single Sign-on", - "urls": [ - "/${VERSION}/sso.html" - ] - } - ] - }, - { - "title": "Monitoring and Alerting", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/monitoring-and-alerting.html" - ] - }, - { - "title": "Enable the Node Map", - "urls": [ - "/${VERSION}/enable-node-map.html" - ] - }, - { - "title": "Use Prometheus and Alertmanager", - "urls": [ - "/${VERSION}/monitor-cockroachdb-with-prometheus.html" - ] - }, - { - "title": "Cluster API", - "urls": [ - "/${VERSION}/cluster-api.html" - ] - }, - { - "title": "Third-Party Monitoring Integrations", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/third-party-monitoring-tools.html" - ] - }, - { - "title": "Datadog", - "urls": [ - "/${VERSION}/datadog.html" - ] - }, - { - "title": "Kibana", - "urls": [ - "/${VERSION}/kibana.html" - ] - } - ] - } - ] - }, - { - "title": "Logging", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/logging-overview.html" - ] - }, - { - "title": "Configure Logs", - "urls": [ - "/${VERSION}/configure-logs.html" - ] - }, - { - "title": "Logging Use Cases", - "urls": [ - "/${VERSION}/logging-use-cases.html" - ] - } - ] - }, - { - "title": "Cluster Maintenance", - "items": [ - { - "title": "Upgrade to CockroachDB v21.1", - "urls": [ - "/${VERSION}/upgrade-cockroach-version.html" - ] - }, - { - "title": "Online Schema Changes", - "urls": [ - "/${VERSION}/online-schema-changes.html" - ] - }, - { - "title": "Manage Long-Running Queries", - "urls": [ - "/${VERSION}/manage-long-running-queries.html" - ] - }, - { - "title": "Decommission Nodes", - "urls": [ - "/${VERSION}/remove-nodes.html" - ] - }, - { - "title": "Rotate Security Certificates", - "urls": [ - 
"/${VERSION}/rotate-certificates.html" - ] - }, - { - "title": "Disaster Recovery", - "urls": [ - "/${VERSION}/disaster-recovery.html" - ] - } - ] - }, - { - "title": "Replication Controls", - "urls": [ - "/${VERSION}/configure-replication-zones.html" - ] - }, - { - "title": "Stream Data Out of CockroachDB", - "urls": [ - "/${VERSION}/stream-data-out-of-cockroachdb-using-changefeeds.html" - ] - }, - { - "title": "Enterprise Features", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/enterprise-licensing.html" - ] - }, - { - "title": "Table Partitioning", - "urls": [ - "/${VERSION}/partitioning.html" - ] - } - ] - } - ] - }, - { - "title": "Migrate", - "items": [ - { - "title": "Migration Overview", - "urls": [ - "/${VERSION}/migration-overview.html" - ] - }, - { - "title": "Migrate from Oracle", - "urls": [ - "/${VERSION}/migrate-from-oracle.html" - ] - }, - { - "title": "Migrate from Postgres", - "urls": [ - "/${VERSION}/migrate-from-postgres.html" - ] - }, - { - "title": "Migrate from MySQL", - "urls": [ - "/${VERSION}/migrate-from-mysql.html" - ] - }, - { - "title": "Migrate from CSV", - "urls": [ - "/${VERSION}/migrate-from-csv.html" - ] - }, - { - "title": "Migrate from Avro", - "urls": [ - "/${VERSION}/migrate-from-avro.html" - ] - }, - { - "title": "Migrate from Shapefiles", - "urls": [ - "/${VERSION}/migrate-from-shapefiles.html" - ] - }, - { - "title": "Migrate from OpenStreetMap", - "urls": [ - "/${VERSION}/migrate-from-openstreetmap.html" - ] - }, - { - "title": "Migrate from GeoJSON", - "urls": [ - "/${VERSION}/migrate-from-geojson.html" - ] - }, - { - "title": "Migrate from GeoPackage", - "urls": [ - "/${VERSION}/migrate-from-geopackage.html" - ] - }, - { - "title": "Export Spatial Data", - "urls": [ - "/${VERSION}/export-spatial-data.html" - ] - }, - { - "title": "Import Performance Best Practices", - "urls": [ - "/${VERSION}/import-performance-best-practices.html" - ] - } - ] - }, - { - "title": "Troubleshoot", - "items": [ - { - 
"title": "Overview", - "urls": [ - "/${VERSION}/troubleshooting-overview.html" - ] - }, - { - "title": "Common Errors", - "urls": [ - "/${VERSION}/common-errors.html" - ] - }, - { - "title": "Troubleshoot Cluster Setup", - "urls": [ - "/${VERSION}/cluster-setup-troubleshooting.html" - ] - }, - { - "title": "Troubleshoot Query Behavior", - "urls": [ - "/${VERSION}/query-behavior-troubleshooting.html" - ] - }, - { - "title": "Understand Debug Logs", - "urls": [ - "/${VERSION}/logging-overview.html" - ] - }, - { - "title": "Replication Reports", - "urls": [ - "/${VERSION}/query-replication-reports.html" - ] - }, - { - "title": "Support Resources", - "urls": [ - "/${VERSION}/support-resources.html" - ] - }, - { - "title": "File an Issue", - "urls": [ - "/${VERSION}/file-an-issue.html" - ] - } - ] - }, - { - "title": "FAQs", - "items": [ - { - "title": "Product FAQs", - "urls": [ - "/${VERSION}/frequently-asked-questions.html" - ] - }, - { - "title": "SQL FAQs", - "urls": [ - "/${VERSION}/sql-faqs.html" - ] - }, - { - "title": "Operational FAQs", - "urls": [ - "/${VERSION}/operational-faqs.html" - ] - }, - { - "title": "Availability FAQs", - "urls": [ - "/${VERSION}/multi-active-availability.html" - ] - }, - { - "title": "Licensing FAQs", - "urls": [ - "/${VERSION}/licensing-faqs.html" - ] - }, - { - "title": "CockroachDB in Comparison", - "urls": [ - "/${VERSION}/cockroachdb-in-comparison.html" - ] - } - ] - }, - {% include_cached sidebar-releases.json %} - ] - }, - {% include_cached v21.1/sidebar-data-reference.json %} -] diff --git a/src/current/_includes/v21.1/app/before-you-begin.md b/src/current/_includes/v21.1/app/before-you-begin.md deleted file mode 100644 index b271d6ff85c..00000000000 --- a/src/current/_includes/v21.1/app/before-you-begin.md +++ /dev/null @@ -1,12 +0,0 @@ -1. [Install CockroachDB](install-cockroachdb.html). -2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster. -3. 
Choose the instructions that correspond to whether your cluster is secure or insecure: - -
- - -
- -
-{% include {{ page.version.version }}/prod-deployment/insecure-flag.md %} -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/app/cc-free-tier-params.md b/src/current/_includes/v21.1/app/cc-free-tier-params.md deleted file mode 100644 index fc04ba4d15b..00000000000 --- a/src/current/_includes/v21.1/app/cc-free-tier-params.md +++ /dev/null @@ -1,10 +0,0 @@ -Where: - -- `{username}` and `{password}` specify the SQL username and password that you created earlier. -- `{globalhost}` is the name of the {{site.data.products.serverless}} host (e.g., `free-tier.gcp-us-central1.cockroachlabs.cloud`). -- `{path to the CA certificate}` is the path to the `cc-ca.crt` file that you downloaded from the CockroachDB {{ site.data.products.cloud }} Console. -- `{cluster_name}` is the name of your cluster. - -{{site.data.alerts.callout_info}} -If you are using the connection string that you [copied from the **Connection info** modal](#set-up-your-cluster-connection), your username, password, hostname, and cluster name will be pre-populated. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.1/app/create-a-database.md b/src/current/_includes/v21.1/app/create-a-database.md deleted file mode 100644 index d24dd3e2cd8..00000000000 --- a/src/current/_includes/v21.1/app/create-a-database.md +++ /dev/null @@ -1,56 +0,0 @@ -
- -1. In the SQL shell, create the `bank` database that your application will use: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - -1. Create a SQL user for your app: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER WITH PASSWORD ; - ~~~ - - Take note of the username and password. You will use it in your application code later. - -1. Give the user the necessary permissions: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT ALL ON DATABASE bank TO ; - ~~~ - -
- -
- -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Start the [built-in SQL shell](cockroach-sql.html) using the connection string you got from the CockroachDB {{ site.data.products.cloud }} Console [earlier](#set-up-your-cluster-connection): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --url='postgres://:@:26257/.defaultdb?sslmode=verify-full&sslrootcert=/cc-ca.crt' - ~~~ - - In the connection string copied from the CockroachDB {{ site.data.products.cloud }} Console, your username, password and cluster name are pre-populated. Replace the `` placeholder with the path to the `certs` directory that you created [earlier](#set-up-your-cluster-connection). - -1. In the SQL shell, create the `bank` database that your application will use: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - -1. Exit the SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v21.1/app/create-maxroach-user-and-bank-database.md deleted file mode 100644 index 1e259b96012..00000000000 --- a/src/current/_includes/v21.1/app/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL shell](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v21.1/app/for-a-complete-example-go.md b/src/current/_includes/v21.1/app/for-a-complete-example-go.md deleted file mode 100644 index 64803f686a9..00000000000 --- a/src/current/_includes/v21.1/app/for-a-complete-example-go.md +++ /dev/null @@ -1,4 +0,0 @@ -For complete examples, see: - -- [Build a Go App with CockroachDB](build-a-go-app-with-cockroachdb.html) (pgx) -- [Build a Go App with CockroachDB and GORM](build-a-go-app-with-cockroachdb.html) diff --git a/src/current/_includes/v21.1/app/for-a-complete-example-java.md b/src/current/_includes/v21.1/app/for-a-complete-example-java.md deleted file mode 100644 index b4c63135ae0..00000000000 --- a/src/current/_includes/v21.1/app/for-a-complete-example-java.md +++ /dev/null @@ -1,4 +0,0 @@ -For complete examples, see: - -- [Build a Java App with CockroachDB](build-a-java-app-with-cockroachdb.html) (JDBC) -- [Build a Java App with CockroachDB and 
Hibernate](build-a-java-app-with-cockroachdb-hibernate.html) diff --git a/src/current/_includes/v21.1/app/for-a-complete-example-python.md b/src/current/_includes/v21.1/app/for-a-complete-example-python.md deleted file mode 100644 index c647ce75df2..00000000000 --- a/src/current/_includes/v21.1/app/for-a-complete-example-python.md +++ /dev/null @@ -1,5 +0,0 @@ -For complete examples, see: - -- [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb.html) (psycopg2) -- [Build a Python App with CockroachDB and SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html) -- [Build a Python App with CockroachDB and Django](build-a-python-app-with-cockroachdb-django.html) diff --git a/src/current/_includes/v21.1/app/init-bank-sample.md b/src/current/_includes/v21.1/app/init-bank-sample.md deleted file mode 100644 index ee3f58df496..00000000000 --- a/src/current/_includes/v21.1/app/init-bank-sample.md +++ /dev/null @@ -1,54 +0,0 @@ - -To initialize the example database, use the [`cockroach sql`](cockroach-sql.html) command to execute the SQL statements in the `dbinit.sql` file: - -
- -{% include_cached copy-clipboard.html %} -~~~ shell -cat dbinit.sql | cockroach sql --url "" -~~~ - -Where `` is the connection string you obtained earlier from the CockroachDB {{ site.data.products.cloud }} Console. - -
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -cat dbinit.sql | cockroach sql --url "postgresql://root@localhost:26257?sslmode=disable" -~~~ - -{{site.data.alerts.callout_info}} -`postgresql://root@localhost:26257?sslmode=disable` is the `sql` connection string you obtained earlier from the `cockroach` welcome text. -{{site.data.alerts.end}} - -
- -The SQL statements in the initialization file should execute: - -~~~ -SET - -Time: 1ms - -SET - -Time: 2ms - -DROP DATABASE - -Time: 1ms - -CREATE DATABASE - -Time: 2ms - -SET - -Time: 10ms - -CREATE TABLE - -Time: 4ms -~~~ diff --git a/src/current/_includes/v21.1/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v21.1/app/insecure/create-maxroach-user-and-bank-database.md deleted file mode 100644 index 0fff36e7545..00000000000 --- a/src/current/_includes/v21.1/app/insecure/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL shell](cockroach-sql.html): - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include_cached copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include_cached copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v21.1/app/insecure/jooq-basic-sample/Sample.java b/src/current/_includes/v21.1/app/insecure/jooq-basic-sample/Sample.java deleted file mode 100644 index d1a54a8ddd2..00000000000 --- a/src/current/_includes/v21.1/app/insecure/jooq-basic-sample/Sample.java +++ /dev/null @@ -1,215 +0,0 @@ -package com.cockroachlabs; - -import com.cockroachlabs.example.jooq.db.Tables; -import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord; -import org.jooq.DSLContext; -import org.jooq.SQLDialect; -import org.jooq.Source; -import org.jooq.conf.RenderQuotedNames; -import org.jooq.conf.Settings; -import org.jooq.exception.DataAccessException; -import org.jooq.impl.DSL; - -import java.io.InputStream; -import 
java.sql.Connection; -import java.sql.DriverManager; -import java.sql.SQLException; -import java.util.*; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; -import java.util.function.Function; - -import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS; - -public class Sample { - - private static final Random RAND = new Random(); - private static final boolean FORCE_RETRY = false; - private static final String RETRY_SQL_STATE = "40001"; - private static final int MAX_ATTEMPT_COUNT = 6; - - private static Function addAccounts() { - return ctx -> { - long rv = 0; - - ctx.delete(ACCOUNTS).execute(); - ctx.batchInsert( - new AccountsRecord(1L, 1000L), - new AccountsRecord(2L, 250L), - new AccountsRecord(3L, 314159L) - ).execute(); - - rv = 1; - System.out.printf("APP: addAccounts() --> %d\n", rv); - return rv; - }; - } - - private static Function transferFunds(long fromId, long toId, long amount) { - return ctx -> { - long rv = 0; - - AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId)); - AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId)); - - if (!(amount > fromAccount.getBalance())) { - fromAccount.setBalance(fromAccount.getBalance() - amount); - toAccount.setBalance(toAccount.getBalance() + amount); - - ctx.batchUpdate(fromAccount, toAccount).execute(); - rv = amount; - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv); - } - - return rv; - }; - } - - // Test our retry handling logic if FORCE_RETRY is true. This - // method is only used to test the retry logic. It is not - // intended for production code. 
- private static Function forceRetryLogic() { - return ctx -> { - long rv = -1; - try { - System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n"); - ctx.execute("SELECT crdb_internal.force_retry('1s')"); - } catch (DataAccessException e) { - System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n"); - throw e; - } - return rv; - }; - } - - private static Function getAccountBalance(long id) { - return ctx -> { - AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id)); - long balance = account.getBalance(); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance); - return balance; - }; - } - - // Run SQL code in a way that automatically handles the - // transaction retry logic so we do not have to duplicate it in - // various places. - private static long runTransaction(DSLContext session, Function fn) { - AtomicLong rv = new AtomicLong(0L); - AtomicInteger attemptCount = new AtomicInteger(0); - - while (attemptCount.get() < MAX_ATTEMPT_COUNT) { - attemptCount.incrementAndGet(); - - if (attemptCount.get() > 1) { - System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get()); - } - - if (session.connectionResult(connection -> { - connection.setAutoCommit(false); - System.out.printf("APP: BEGIN;\n"); - - if (attemptCount.get() == MAX_ATTEMPT_COUNT) { - String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'testRetryLogic()'. - if (FORCE_RETRY) { - session.fetch("SELECT now()"); - } - - try { - rv.set(fn.apply(session)); - if (rv.get() != -1) { - connection.commit(); - System.out.printf("APP: COMMIT;\n"); - return true; - } - } catch (DataAccessException | SQLException e) { - String sqlState = e instanceof SQLException ? 
((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState(); - - if (RETRY_SQL_STATE.equals(sqlState)) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a little - // before trying again. Each time through the - // loop we sleep for a little longer than the last - // time (A.K.A. exponential backoff). - System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get()); - System.out.printf("APP: ROLLBACK;\n"); - connection.rollback(); - int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100); - System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // no-op - } - rv.set(-1L); - } else { - throw e; - } - } - - return false; - })) { - break; - } - } - - return rv.get(); - } - - public static void main(String[] args) throws Exception { - try (Connection connection = DriverManager.getConnection( - "jdbc:postgresql://localhost:26257/bank?sslmode=disable", - "maxroach", - "" - )) { - DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings() - .withExecuteLogging(true) - .withRenderQuotedNames(RenderQuotedNames.NEVER)); - - // Initialise database with db.sql script - try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) { - ctx.parser().parse(Source.of(in).readString()).executeBatch(); - } - - long fromAccountId = 1; - long toAccountId = 2; - long transferAmount = 100; - - if (FORCE_RETRY) { - System.out.printf("APP: About to test retry logic in 'runTransaction'\n"); - runTransaction(ctx, forceRetryLogic()); - } else { - - runTransaction(ctx, addAccounts()); - long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalance = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalance != -1 && toBalance 
!= -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance); - } - - // Transfer $100 from account 1 to account 2 - long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount)); - if (transferResult != -1) { - // Success! - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult); - - long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalanceAfter != -1 && toBalanceAfter != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter); - } - } - } - } - } -} diff --git a/src/current/_includes/v21.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v21.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip deleted file mode 100644 index f11f86b8f43..00000000000 Binary files a/src/current/_includes/v21.1/app/insecure/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ diff --git a/src/current/_includes/v21.1/app/insecure/upperdb-basic-sample/main.go b/src/current/_includes/v21.1/app/insecure/upperdb-basic-sample/main.go deleted file mode 100644 index 5c855356d7e..00000000000 --- a/src/current/_includes/v21.1/app/insecure/upperdb-basic-sample/main.go +++ /dev/null @@ -1,185 +0,0 @@ -package main - -import ( - "fmt" - "log" - "time" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/adapter/cockroachdb" -) - -// The settings variable stores connection details. -var settings = cockroachdb.ConnectionURL{ - Host: "localhost", - Database: "bank", - User: "maxroach", - Options: map[string]string{ - // Insecure node. 
- "sslmode": "disable", - }, -} - -// Accounts is a handy way to represent a collection. -func Accounts(sess db.Session) db.Store { - return sess.Collection("accounts") -} - -// Account is used to represent a single record in the "accounts" table. -type Account struct { - ID uint64 `db:"id,omitempty"` - Balance int64 `db:"balance"` -} - -// Collection is required in order to create a relation between the Account -// struct and the "accounts" table. -func (a *Account) Store(sess db.Session) db.Store { - return Accounts(sess) -} - -// createTables creates all the tables that are neccessary to run this example. -func createTables(sess db.Session) error { - _, err := sess.SQL().Exec(` - CREATE TABLE IF NOT EXISTS accounts ( - ID SERIAL PRIMARY KEY, - balance INT - ) - `) - if err != nil { - return err - } - return nil -} - -// crdbForceRetry can be used to simulate a transaction error and -// demonstrate upper/db's ability to retry the transaction automatically. -// -// By default, upper/db will retry the transaction five times, if you want -// to modify this number use: sess.SetMaxTransactionRetries(n). -// -// This is only used for demonstration purposes and not intended -// for production code. -func crdbForceRetry(sess db.Session) error { - var err error - - // The first statement in a transaction can be retried transparently on the - // server, so we need to add a placeholder statement so that our - // force_retry() statement isn't the first one. - _, err = sess.SQL().Exec(`SELECT 1`) - if err != nil { - return err - } - - // If force_retry is called during the specified interval from the beginning - // of the transaction it returns a retryable error. If not, 0 is returned - // instead of an error. - _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`) - if err != nil { - return err - } - - return nil -} - -func main() { - // Connect to the local CockroachDB node. 
- sess, err := cockroachdb.Open(settings) - if err != nil { - log.Fatal("cockroachdb.Open: ", err) - } - defer sess.Close() - - // Adjust this number to fit your specific needs (set to 5, by default) - // sess.SetMaxTransactionRetries(10) - - // Create the "accounts" table - createTables(sess) - - // Delete all the previous items in the "accounts" table. - err = Accounts(sess).Truncate() - if err != nil { - log.Fatal("Truncate: ", err) - } - - // Create a new account with a balance of 1000. - account1 := Account{Balance: 1000} - err = Accounts(sess).InsertReturning(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Create a new account with a balance of 250. - account2 := Account{Balance: 250} - err = Accounts(sess).InsertReturning(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Change the balance of the first account. - account1.Balance = 500 - err = sess.Save(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Change the balance of the second account. - account2.Balance = 999 - err = sess.Save(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Delete the first record. - err = sess.Delete(&account1) - if err != nil { - log.Fatal("Delete: ", err) - } - - startTime := time.Now() - - // Add a couple of new records within a transaction. - err = sess.Tx(func(tx db.Session) error { - var err error - - if err = tx.Save(&Account{Balance: 887}); err != nil { - return err - } - - if time.Now().Sub(startTime) < time.Second*1 { - // Will fail continuously for 2 seconds. 
- if err = crdbForceRetry(tx); err != nil { - return err - } - } - - if err = tx.Save(&Account{Balance: 342}); err != nil { - return err - } - - return nil - }) - if err != nil { - log.Fatal("Could not commit transaction: ", err) - } - - // Printing records - printRecords(sess) -} - -func printRecords(sess db.Session) { - accounts := []Account{} - err := Accounts(sess).Find().All(&accounts) - if err != nil { - log.Fatal("Find: ", err) - } - log.Printf("Balances:") - for i := range accounts { - fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance) - } -} diff --git a/src/current/_includes/v21.1/app/java-tls-note.md b/src/current/_includes/v21.1/app/java-tls-note.md deleted file mode 100644 index a1fd6f61600..00000000000 --- a/src/current/_includes/v21.1/app/java-tls-note.md +++ /dev/null @@ -1,13 +0,0 @@ -{{site.data.alerts.callout_danger}} -CockroachDB supports TLS 1.2 and 1.3, and uses 1.3 by default. - -[A bug in the TLS 1.3 implementation](https://bugs.openjdk.java.net/browse/JDK-8236039) in Java 11 versions lower than 11.0.7 and Java 13 versions lower than 13.0.3 makes the versions incompatible with CockroachDB. - -If an incompatible version is used, the client may throw the following exception: - -`javax.net.ssl.SSLHandshakeException: extension (5) should not be presented in certificate_request` - -For applications running Java 11 or 13, make sure that you have version 11.0.7 or higher, or 13.0.3 or higher. - -If you cannot upgrade to a version higher than 11.0.7 or 13.0.3, you must configure the application to use TLS 1.2. 
For example, when starting your app, use: `$ java -Djdk.tls.client.protocols=TLSv1.2 appName` -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.1/app/java-version-note.md b/src/current/_includes/v21.1/app/java-version-note.md deleted file mode 100644 index 3d559314262..00000000000 --- a/src/current/_includes/v21.1/app/java-version-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -We recommend using Java versions 8+ with CockroachDB. -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.1/app/jooq-basic-sample/Sample.java b/src/current/_includes/v21.1/app/jooq-basic-sample/Sample.java deleted file mode 100644 index fd71726603e..00000000000 --- a/src/current/_includes/v21.1/app/jooq-basic-sample/Sample.java +++ /dev/null @@ -1,215 +0,0 @@ -package com.cockroachlabs; - -import com.cockroachlabs.example.jooq.db.Tables; -import com.cockroachlabs.example.jooq.db.tables.records.AccountsRecord; -import org.jooq.DSLContext; -import org.jooq.SQLDialect; -import org.jooq.Source; -import org.jooq.conf.RenderQuotedNames; -import org.jooq.conf.Settings; -import org.jooq.exception.DataAccessException; -import org.jooq.impl.DSL; - -import java.io.InputStream; -import java.sql.Connection; -import java.sql.DriverManager; -import java.sql.SQLException; -import java.util.*; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; -import java.util.function.Function; - -import static com.cockroachlabs.example.jooq.db.Tables.ACCOUNTS; - -public class Sample { - - private static final Random RAND = new Random(); - private static final boolean FORCE_RETRY = false; - private static final String RETRY_SQL_STATE = "40001"; - private static final int MAX_ATTEMPT_COUNT = 6; - - private static Function addAccounts() { - return ctx -> { - long rv = 0; - - ctx.delete(ACCOUNTS).execute(); - ctx.batchInsert( - new AccountsRecord(1L, 1000L), - new 
AccountsRecord(2L, 250L), - new AccountsRecord(3L, 314159L) - ).execute(); - - rv = 1; - System.out.printf("APP: addAccounts() --> %d\n", rv); - return rv; - }; - } - - private static Function transferFunds(long fromId, long toId, long amount) { - return ctx -> { - long rv = 0; - - AccountsRecord fromAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(fromId)); - AccountsRecord toAccount = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(toId)); - - if (!(amount > fromAccount.getBalance())) { - fromAccount.setBalance(fromAccount.getBalance() - amount); - toAccount.setBalance(toAccount.getBalance() + amount); - - ctx.batchUpdate(fromAccount, toAccount).execute(); - rv = amount; - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv); - } - - return rv; - }; - } - - // Test our retry handling logic if FORCE_RETRY is true. This - // method is only used to test the retry logic. It is not - // intended for production code. - private static Function forceRetryLogic() { - return ctx -> { - long rv = -1; - try { - System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n"); - ctx.execute("SELECT crdb_internal.force_retry('1s')"); - } catch (DataAccessException e) { - System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n"); - throw e; - } - return rv; - }; - } - - private static Function getAccountBalance(long id) { - return ctx -> { - AccountsRecord account = ctx.fetchSingle(ACCOUNTS, ACCOUNTS.ID.eq(id)); - long balance = account.getBalance(); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance); - return balance; - }; - } - - // Run SQL code in a way that automatically handles the - // transaction retry logic so we do not have to duplicate it in - // various places. 
- private static long runTransaction(DSLContext session, Function fn) { - AtomicLong rv = new AtomicLong(0L); - AtomicInteger attemptCount = new AtomicInteger(0); - - while (attemptCount.get() < MAX_ATTEMPT_COUNT) { - attemptCount.incrementAndGet(); - - if (attemptCount.get() > 1) { - System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount.get()); - } - - if (session.connectionResult(connection -> { - connection.setAutoCommit(false); - System.out.printf("APP: BEGIN;\n"); - - if (attemptCount.get() == MAX_ATTEMPT_COUNT) { - String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'testRetryLogic()'. - if (FORCE_RETRY) { - session.fetch("SELECT now()"); - } - - try { - rv.set(fn.apply(session)); - if (rv.get() != -1) { - connection.commit(); - System.out.printf("APP: COMMIT;\n"); - return true; - } - } catch (DataAccessException | SQLException e) { - String sqlState = e instanceof SQLException ? ((SQLException) e).getSQLState() : ((DataAccessException) e).sqlState(); - - if (RETRY_SQL_STATE.equals(sqlState)) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a little - // before trying again. Each time through the - // loop we sleep for a little longer than the last - // time (A.K.A. exponential backoff). 
- System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", sqlState, e.getMessage(), attemptCount.get()); - System.out.printf("APP: ROLLBACK;\n"); - connection.rollback(); - int sleepMillis = (int)(Math.pow(2, attemptCount.get()) * 100) + RAND.nextInt(100); - System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // no-op - } - rv.set(-1L); - } else { - throw e; - } - } - - return false; - })) { - break; - } - } - - return rv.get(); - } - - public static void main(String[] args) throws Exception { - try (Connection connection = DriverManager.getConnection( - "jdbc:postgresql://localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key.pk8&sslcert=certs/client.maxroach.crt", - "maxroach", - "" - )) { - DSLContext ctx = DSL.using(connection, SQLDialect.COCKROACHDB, new Settings() - .withExecuteLogging(true) - .withRenderQuotedNames(RenderQuotedNames.NEVER)); - - // Initialise database with db.sql script - try (InputStream in = Sample.class.getResourceAsStream("/db.sql")) { - ctx.parser().parse(Source.of(in).readString()).executeBatch(); - } - - long fromAccountId = 1; - long toAccountId = 2; - long transferAmount = 100; - - if (FORCE_RETRY) { - System.out.printf("APP: About to test retry logic in 'runTransaction'\n"); - runTransaction(ctx, forceRetryLogic()); - } else { - - runTransaction(ctx, addAccounts()); - long fromBalance = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalance = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalance != -1 && toBalance != -1) { - // Success! 
- System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance); - } - - // Transfer $100 from account 1 to account 2 - long transferResult = runTransaction(ctx, transferFunds(fromAccountId, toAccountId, transferAmount)); - if (transferResult != -1) { - // Success! - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult); - - long fromBalanceAfter = runTransaction(ctx, getAccountBalance(fromAccountId)); - long toBalanceAfter = runTransaction(ctx, getAccountBalance(toAccountId)); - if (fromBalanceAfter != -1 && toBalanceAfter != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter); - } - } - } - } - } -} diff --git a/src/current/_includes/v21.1/app/jooq-basic-sample/jooq-basic-sample.zip b/src/current/_includes/v21.1/app/jooq-basic-sample/jooq-basic-sample.zip deleted file mode 100644 index 859305478c0..00000000000 Binary files a/src/current/_includes/v21.1/app/jooq-basic-sample/jooq-basic-sample.zip and /dev/null differ diff --git a/src/current/_includes/v21.1/app/pkcs8-gen.md b/src/current/_includes/v21.1/app/pkcs8-gen.md deleted file mode 100644 index 411d262e970..00000000000 --- a/src/current/_includes/v21.1/app/pkcs8-gen.md +++ /dev/null @@ -1,8 +0,0 @@ -You can pass the [`--also-generate-pkcs8-key` flag](cockroach-cert.html#flag-pkcs8) to [`cockroach cert`](cockroach-cert.html) to generate a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. 
For example, if you have the user `max`: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client max --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key -~~~ - -The generated PKCS8 key will be named `client.max.key.pk8`. diff --git a/src/current/_includes/v21.1/app/python/sqlalchemy/sqlalchemy-large-txns.py b/src/current/_includes/v21.1/app/python/sqlalchemy/sqlalchemy-large-txns.py deleted file mode 100644 index 7a6ef82c2e3..00000000000 --- a/src/current/_includes/v21.1/app/python/sqlalchemy/sqlalchemy-large-txns.py +++ /dev/null @@ -1,57 +0,0 @@ -from sqlalchemy import create_engine, Column, Float, Integer -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.orm import sessionmaker -from cockroachdb.sqlalchemy import run_transaction -from random import random - -Base = declarative_base() - -# The code below assumes you have run the following SQL statements. - -# CREATE DATABASE pointstore; - -# USE pointstore; - -# CREATE TABLE points ( -# id INT PRIMARY KEY DEFAULT unique_rowid(), -# x FLOAT NOT NULL, -# y FLOAT NOT NULL, -# z FLOAT NOT NULL -# ); - -engine = create_engine( - # For cockroach demo: - 'cockroachdb://:@:/bank?sslmode=require', - echo=True # Log SQL queries to stdout -) - - -class Point(Base): - __tablename__ = 'points' - id = Column(Integer, primary_key=True) - x = Column(Float) - y = Column(Float) - z = Column(Float) - - -def add_points(num_points): - chunk_size = 1000 # Tune this based on object sizes. 
- - def add_points_helper(sess, chunk, num_points): - points = [] - for i in range(chunk, min(chunk + chunk_size, num_points)): - points.append( - Point(x=random()*1024, y=random()*1024, z=random()*1024) - ) - sess.bulk_save_objects(points) - - for chunk in range(0, num_points, chunk_size): - run_transaction( - sessionmaker(bind=engine), - lambda s: add_points_helper( - s, chunk, min(chunk + chunk_size, num_points) - ) - ) - - -add_points(10000) diff --git a/src/current/_includes/v21.1/app/retry-errors.md b/src/current/_includes/v21.1/app/retry-errors.md deleted file mode 100644 index 5f219f53e12..00000000000 --- a/src/current/_includes/v21.1/app/retry-errors.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Your application should [use a retry loop to handle transaction errors](error-handling-and-troubleshooting.html#transaction-retry-errors) that can occur under contention. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/app/sample-setup.md b/src/current/_includes/v21.1/app/sample-setup.md deleted file mode 100644 index 913e8d460c3..00000000000 --- a/src/current/_includes/v21.1/app/sample-setup.md +++ /dev/null @@ -1,48 +0,0 @@ - -
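The chunked-insert pattern in the deleted `sqlalchemy-large-txns.py` above can be sketched independently of SQLAlchemy. In this hedged sketch, `save_batch` is a hypothetical stand-in for the `run_transaction(...)` + `sess.bulk_save_objects(...)` call in the original; the point is only the loop structure: a large insert is split into fixed-size chunks so that each transaction stays small.

```python
from random import random

CHUNK_SIZE = 1000  # tune based on object sizes, as in the original example


def make_points(start, end):
    # Build one chunk of (x, y, z) tuples, mirroring the Point rows above.
    return [(random() * 1024, random() * 1024, random() * 1024)
            for _ in range(start, end)]


def add_points(num_points, save_batch):
    # Walk the full range in CHUNK_SIZE steps; each save_batch() call
    # corresponds to one transaction in the real example.
    for chunk_start in range(0, num_points, CHUNK_SIZE):
        chunk_end = min(chunk_start + CHUNK_SIZE, num_points)
        save_batch(make_points(chunk_start, chunk_end))
```

For 2,500 points this produces three batches of 1,000, 1,000, and 500 rows, so no single transaction ever carries more than `CHUNK_SIZE` inserts.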
- - -
- -
- -### Create a free cluster - -{% include cockroachcloud/quickstart/create-a-free-cluster.md %} - -### Set up your cluster connection - -1. Navigate to the cluster's **Overview** page, and select **Connect**. - -1. Under the **Connection String** tab, download the cluster certificate. - -1. Take note of the connection string provided. You'll use it to connect to the database later in this tutorial. - -
- -
- -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Run the [`cockroach start-single-node`](cockroach-start-single-node.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node --advertise-addr 'localhost' --insecure - ~~~ - - This starts an insecure, single-node cluster. -1. Take note of the following connection information in the SQL shell welcome text: - - ~~~ - CockroachDB node starting at 2021-08-30 17:25:30.06524 +0000 UTC (took 4.3s) - build: CCL v21.1.6 @ 2021/07/20 15:33:43 (go1.15.11) - webui: http://localhost:8080 - sql: postgresql://root@localhost:26257?sslmode=disable - ~~~ - - You'll use the `sql` connection string to connect to the cluster later in this tutorial. - - -{% include {{ page.version.version }}/prod-deployment/insecure-flag.md %} - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/app/see-also-links.md b/src/current/_includes/v21.1/app/see-also-links.md deleted file mode 100644 index ee55292e744..00000000000 --- a/src/current/_includes/v21.1/app/see-also-links.md +++ /dev/null @@ -1,9 +0,0 @@ -You might also be interested in the following pages: - -- [Client Connection Parameters](connection-parameters.html) -- [Connection Pooling](connection-pooling.html) -- [Data Replication](demo-replication-and-rebalancing.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Replication & Rebalancing](demo-replication-and-rebalancing.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html) diff --git a/src/current/_includes/v21.1/app/start-cockroachdb-no-cert.md b/src/current/_includes/v21.1/app/start-cockroachdb-no-cert.md deleted file mode 100644 index 2dfa468922b..00000000000 --- a/src/current/_includes/v21.1/app/start-cockroachdb-no-cert.md +++ /dev/null @@ -1,53 +0,0 @@ -Choose whether to run a temporary local cluster or a free CockroachDB cluster on CockroachDB {{ site.data.products.serverless }}. The instructions below will adjust accordingly. - -
- - -
- -
- -### Create a free cluster - -{% include cockroachcloud/quickstart/create-a-free-cluster.md %} - -### Set up your cluster connection - -The **Connection info** dialog shows information about how to connect to your cluster. - -1. Click the **Connection string** tab in the **Connection info** dialog. -1. Copy the connection string provided to a secure location. - - {{site.data.alerts.callout_info}} - The connection string is pre-populated with your username, password, cluster name, and other details. Your password, in particular, will be provided *only once*. Save it in a secure place (Cockroach Labs recommends a password manager) to connect to your cluster in the future. If you forget your password, you can reset it by going to the **SQL Users** page for the cluster, found at `https://cockroachlabs.cloud/cluster//users`. - {{site.data.alerts.end}} - -
- -
- -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Run the [`cockroach demo`](cockroach-demo.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo \ - --no-example-database - ~~~ - - This starts a temporary, in-memory cluster and opens an interactive SQL shell to the cluster. Any changes to the database will not persist after the cluster is stopped. - - {{site.data.alerts.callout_info}} - If `cockroach demo` fails due to SSL authentication, make sure you have cleared any previously downloaded CA certificates from the directory `~/.postgresql`. - {{site.data.alerts.end}} - -1. Take note of the `(sql)` connection string in the SQL shell welcome text: - - ~~~ - # Connection parameters: - # (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - # (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - # (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - ~~~ - -
diff --git a/src/current/_includes/v21.1/app/start-cockroachdb.md b/src/current/_includes/v21.1/app/start-cockroachdb.md deleted file mode 100644 index 223eff5f55c..00000000000 --- a/src/current/_includes/v21.1/app/start-cockroachdb.md +++ /dev/null @@ -1,46 +0,0 @@ -Choose whether to run a temporary local cluster or a free CockroachDB cluster on CockroachDB {{ site.data.products.serverless }}. The instructions below will adjust accordingly. - -
- - -
- -
- -### Create a free cluster - -{% include cockroachcloud/quickstart/create-a-free-cluster.md %} - -### Set up your cluster connection - -{% include cockroachcloud/quickstart/set-up-your-cluster-connection.md %} - -
- -
- -1. If you haven't already, [download the CockroachDB binary](install-cockroachdb.html). -1. Run the [`cockroach demo`](cockroach-demo.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach demo \ - --no-example-database - ~~~ - - This starts a temporary, in-memory cluster and opens an interactive SQL shell to the cluster. Any changes to the database will not persist after the cluster is stopped. - - {{site.data.alerts.callout_info}} - If `cockroach demo` fails due to SSL authentication, make sure you have cleared any previously downloaded CA certificates from the directory `~/.postgresql`. - {{site.data.alerts.end}} - -1. Take note of the `(sql)` connection string in the SQL shell welcome text: - - ~~~ - # Connection parameters: - # (webui) http://127.0.0.1:8080/demologin?password=demo76950&username=demo - # (sql) postgres://demo:demo76950@127.0.0.1:26257?sslmode=require - # (sql/unix) postgres://demo:demo76950@?host=%2Fvar%2Ffolders%2Fc8%2Fb_q93vjj0ybfz0fz0z8vy9zc0000gp%2FT%2Fdemo070856957&port=26257 - ~~~ - -
diff --git a/src/current/_includes/v21.1/app/upperdb-basic-sample/main.go b/src/current/_includes/v21.1/app/upperdb-basic-sample/main.go deleted file mode 100644 index 3e838fe43e2..00000000000 --- a/src/current/_includes/v21.1/app/upperdb-basic-sample/main.go +++ /dev/null @@ -1,187 +0,0 @@ -package main - -import ( - "fmt" - "log" - "time" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/adapter/cockroachdb" -) - -// The settings variable stores connection details. -var settings = cockroachdb.ConnectionURL{ - Host: "localhost", - Database: "bank", - User: "maxroach", - Options: map[string]string{ - // Secure node. - "sslrootcert": "certs/ca.crt", - "sslkey": "certs/client.maxroach.key", - "sslcert": "certs/client.maxroach.crt", - }, -} - -// Accounts is a handy way to represent a collection. -func Accounts(sess db.Session) db.Store { - return sess.Collection("accounts") -} - -// Account is used to represent a single record in the "accounts" table. -type Account struct { - ID uint64 `db:"id,omitempty"` - Balance int64 `db:"balance"` -} - -// Collection is required in order to create a relation between the Account -// struct and the "accounts" table. -func (a *Account) Store(sess db.Session) db.Store { - return Accounts(sess) -} - -// createTables creates all the tables that are neccessary to run this example. -func createTables(sess db.Session) error { - _, err := sess.SQL().Exec(` - CREATE TABLE IF NOT EXISTS accounts ( - ID SERIAL PRIMARY KEY, - balance INT - ) - `) - if err != nil { - return err - } - return nil -} - -// crdbForceRetry can be used to simulate a transaction error and -// demonstrate upper/db's ability to retry the transaction automatically. -// -// By default, upper/db will retry the transaction five times, if you want -// to modify this number use: sess.SetMaxTransactionRetries(n). -// -// This is only used for demonstration purposes and not intended -// for production code. 
-func crdbForceRetry(sess db.Session) error { - var err error - - // The first statement in a transaction can be retried transparently on the - // server, so we need to add a placeholder statement so that our - // force_retry() statement isn't the first one. - _, err = sess.SQL().Exec(`SELECT 1`) - if err != nil { - return err - } - - // If force_retry is called during the specified interval from the beginning - // of the transaction it returns a retryable error. If not, 0 is returned - // instead of an error. - _, err = sess.SQL().Exec(`SELECT crdb_internal.force_retry('1s'::INTERVAL)`) - if err != nil { - return err - } - - return nil -} - -func main() { - // Connect to the local CockroachDB node. - sess, err := cockroachdb.Open(settings) - if err != nil { - log.Fatal("cockroachdb.Open: ", err) - } - defer sess.Close() - - // Adjust this number to fit your specific needs (set to 5, by default) - // sess.SetMaxTransactionRetries(10) - - // Create the "accounts" table - createTables(sess) - - // Delete all the previous items in the "accounts" table. - err = Accounts(sess).Truncate() - if err != nil { - log.Fatal("Truncate: ", err) - } - - // Create a new account with a balance of 1000. - account1 := Account{Balance: 1000} - err = Accounts(sess).InsertReturning(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Create a new account with a balance of 250. - account2 := Account{Balance: 250} - err = Accounts(sess).InsertReturning(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Change the balance of the first account. - account1.Balance = 500 - err = sess.Save(&account1) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Change the balance of the second account. - account2.Balance = 999 - err = sess.Save(&account2) - if err != nil { - log.Fatal("sess.Save: ", err) - } - - // Printing records - printRecords(sess) - - // Delete the first record. 
- err = sess.Delete(&account1) - if err != nil { - log.Fatal("Delete: ", err) - } - - startTime := time.Now() - - // Add a couple of new records within a transaction. - err = sess.Tx(func(tx db.Session) error { - var err error - - if err = tx.Save(&Account{Balance: 887}); err != nil { - return err - } - - if time.Now().Sub(startTime) < time.Second*1 { - // Will fail continuously for 2 seconds. - if err = crdbForceRetry(tx); err != nil { - return err - } - } - - if err = tx.Save(&Account{Balance: 342}); err != nil { - return err - } - - return nil - }) - if err != nil { - log.Fatal("Could not commit transaction: ", err) - } - - // Printing records - printRecords(sess) -} - -func printRecords(sess db.Session) { - accounts := []Account{} - err := Accounts(sess).Find().All(&accounts) - if err != nil { - log.Fatal("Find: ", err) - } - log.Printf("Balances:") - for i := range accounts { - fmt.Printf("\taccounts[%d]: %d\n", accounts[i].ID, accounts[i].Balance) - } -} diff --git a/src/current/_includes/v21.1/backups/advanced-examples-list.md b/src/current/_includes/v21.1/backups/advanced-examples-list.md deleted file mode 100644 index d6ace4c8a31..00000000000 --- a/src/current/_includes/v21.1/backups/advanced-examples-list.md +++ /dev/null @@ -1,9 +0,0 @@ -For examples of advanced `BACKUP` and `RESTORE` use cases, see: - -- [Incremental backups with a specified destination](take-full-and-incremental-backups.html#incremental-backups-with-explicitly-specified-destinations) -- [Backup with revision history and point-in-time restore](take-backups-with-revision-history-and-restore-from-a-point-in-time.html) -- [Locality-aware backup and restore](take-and-restore-locality-aware-backups.html) -- [Encrypted backup and restore](take-and-restore-encrypted-backups.html) -- [Restore into a different database](restore.html#restore-tables-into-a-different-database) -- [Remove the foreign key before restore](restore.html#remove-the-foreign-key-before-restore) -- [Restoring users from 
`system.users` backup](restore.html#restoring-users-from-system-users-backup) diff --git a/src/current/_includes/v21.1/backups/aws-auth-note.md b/src/current/_includes/v21.1/backups/aws-auth-note.md deleted file mode 100644 index 759a8ad1d3a..00000000000 --- a/src/current/_includes/v21.1/backups/aws-auth-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The examples in this section use the **default** `AUTH=specified` parameter. For more detail on how to use `implicit` authentication with Amazon S3 buckets, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/backups/backup-options.md b/src/current/_includes/v21.1/backups/backup-options.md deleted file mode 100644 index 2a05bae97ed..00000000000 --- a/src/current/_includes/v21.1/backups/backup-options.md +++ /dev/null @@ -1,7 +0,0 @@ - Option | Value | Description ------------------------------------------------------------------+-------------------------+------------------------------ -`revision_history` | N/A | Create a backup with full [revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html), which records every change made to the cluster within the garbage collection period leading up to and including the given timestamp. -`encryption_passphrase` | [`STRING`](string.html) | The passphrase used to [encrypt the files](take-and-restore-encrypted-backups.html) (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same passphrase is needed to decrypt the file when it is used to [restore](take-and-restore-encrypted-backups.html) and to list the contents of the backup when using [`SHOW BACKUP`](show-backup.html). There is no practical limit on the length of the passphrase. -`DETACHED` | N/A | When a backup runs in `DETACHED` mode, it will execute asynchronously. 
The job ID will be returned after the backup job creation completes. Note that with `DETACHED` specified, further job information and the job completion status will not be returned. For more on the differences between the returned job data, see the [example](backup.html#run-a-backup-asynchronously) below. To check on the job status, use the [`SHOW JOBS`](show-jobs.html) statement.

To run a backup within a [transaction](transactions.html), use the `DETACHED` option. -`kms` | [`STRING`](string.html) | The [key management service (KMS) URI](take-and-restore-encrypted-backups.html#aws-kms-uri-format) (or a [comma-separated list of URIs](take-and-restore-encrypted-backups.html#take-a-backup-with-multi-region-encryption)) used to encrypt the files (`BACKUP` manifest and data files) that the `BACKUP` statement generates. This same KMS URI is needed to decrypt the file when it is used to [restore](take-and-restore-encrypted-backups.html#restore-from-an-encrypted-backup-with-aws-kms) and to list the contents of the backup when using [`SHOW BACKUP`](show-backup.html).

Currently, only AWS KMS is supported. -`INCLUDE_DEPRECATED_INTERLEAVES` | N/A | **New in v21.1:** Include [interleaved tables and indexes](interleave-in-parent.html) in the backup. If this option is not specified, and the cluster, database, or table being backed up includes interleaved data, CockroachDB will return an error. diff --git a/src/current/_includes/v21.1/backups/bulk-auth-options.md b/src/current/_includes/v21.1/backups/bulk-auth-options.md deleted file mode 100644 index ab02410dcac..00000000000 --- a/src/current/_includes/v21.1/backups/bulk-auth-options.md +++ /dev/null @@ -1,4 +0,0 @@ -The following examples make use of: - -- Amazon S3 connection strings. For guidance on connecting to other storage options or using other authentication parameters instead, read [Use Cloud Storage for Bulk Operations](use-cloud-storage-for-bulk-operations.html#example-file-urls). -- The **default** `AUTH=specified` parameter. For guidance on using `AUTH=implicit` authentication with Amazon S3 buckets instead, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). \ No newline at end of file diff --git a/src/current/_includes/v21.1/backups/destination-file-privileges.md b/src/current/_includes/v21.1/backups/destination-file-privileges.md deleted file mode 100644 index 89e3960716e..00000000000 --- a/src/current/_includes/v21.1/backups/destination-file-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The destination file URL does **not** require the `ADMIN` role in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. 
-- [Userfile](use-userfile-for-bulk-operations.html) - -The destination file URL **does** require the `ADMIN` role in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). diff --git a/src/current/_includes/v21.1/backups/encrypted-backup-description.md b/src/current/_includes/v21.1/backups/encrypted-backup-description.md deleted file mode 100644 index f0c39d2551a..00000000000 --- a/src/current/_includes/v21.1/backups/encrypted-backup-description.md +++ /dev/null @@ -1,11 +0,0 @@ -You can encrypt full or incremental backups with a passphrase by using the [`encryption_passphrase` option](backup.html#with-encryption-passphrase). Files written by the backup (including `BACKUP` manifests and data files) are encrypted using the specified passphrase to derive a key. To restore the encrypted backup, the same `encryption_passphrase` option (with the same passphrase) must be included in the [`RESTORE`](restore.html) statement. - -When used with [incremental backups](take-full-and-incremental-backups.html#incremental-backups), the `encryption_passphrase` option is applied to all the [backup file URLs](backup.html#backup-file-urls), which means the same passphrase must be used when appending another incremental backup to an existing backup. Similarly, when used with [locality-aware backups](take-and-restore-locality-aware-backups.html), the passphrase provided is applied to files in all localities. - -Encryption is done using [AES-256-GCM](https://en.wikipedia.org/wiki/Galois/Counter_Mode), and GCM is used to both encrypt and authenticate the files. 
A random [salt](https://en.wikipedia.org/wiki/Salt_(cryptography)) is used to derive a once-per-backup [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) key from the specified passphrase, and then a random [initialization vector](https://en.wikipedia.org/wiki/Initialization_vector) is used per-file. CockroachDB uses [PBKDF2](https://en.wikipedia.org/wiki/PBKDF2) with 64,000 iterations for the key derivation. - -{{site.data.alerts.callout_info}} -`BACKUP` and `RESTORE` will use more memory when using encryption, as both the plain-text and cipher-text of a given file are held in memory during encryption and decryption. -{{site.data.alerts.end}} - -For an example of an encrypted backup, see [Create an encrypted backup](take-and-restore-encrypted-backups.html#take-an-encrypted-backup-using-a-passphrase). diff --git a/src/current/_includes/v21.1/backups/gcs-auth-note.md b/src/current/_includes/v21.1/backups/gcs-auth-note.md deleted file mode 100644 index 360ea21cb63..00000000000 --- a/src/current/_includes/v21.1/backups/gcs-auth-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The examples in this section use the `AUTH=specified` parameter, which will be the default behavior in v21.2 and beyond for connecting to Google Cloud Storage. For more detail on how to pass your Google Cloud Storage credentials with this parameter, or, how to use `implicit` authentication, read [Use Cloud Storage for Bulk Operations — Authentication](use-cloud-storage-for-bulk-operations.html#authentication). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/backups/gcs-default-deprec.md b/src/current/_includes/v21.1/backups/gcs-default-deprec.md deleted file mode 100644 index f88c4215de3..00000000000 --- a/src/current/_includes/v21.1/backups/gcs-default-deprec.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -**Deprecation notice:** Currently, GCS connections default to the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html). This default behavior will no longer be supported in v21.2. If you are relying on this default behavior, we recommend adjusting your queries and scripts to now specify the `AUTH` parameter you want to use. Similarly, if you are using the `cloudstorage.gs.default.key` cluster setting to authorize your GCS connection, we recommend switching to use `AUTH=specified` or `AUTH=implicit`. `AUTH=specified` will be **the default behavior in v21.2** and beyond. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/backups/no-multiregion-table-backups.md b/src/current/_includes/v21.1/backups/no-multiregion-table-backups.md deleted file mode 100644 index ec578492530..00000000000 --- a/src/current/_includes/v21.1/backups/no-multiregion-table-backups.md +++ /dev/null @@ -1,5 +0,0 @@ -CockroachDB does not currently support [`BACKUP`s](backup.html) of individual tables that are part of [multi-region databases](multiregion-overview.html). For example, you cannot backup a [`GLOBAL`](global-tables.html) or [`REGIONAL`](regional-tables.html) table individually. - -To work around this limitation, you must [back up the database](backup.html#backup-a-database) or [the entire cluster](backup.html#backup-a-cluster). 
- -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/61133) diff --git a/src/current/_includes/v21.1/cdc/avro-bit-varbit.md b/src/current/_includes/v21.1/cdc/avro-bit-varbit.md deleted file mode 100644 index a7d2856533b..00000000000 --- a/src/current/_includes/v21.1/cdc/avro-bit-varbit.md +++ /dev/null @@ -1,25 +0,0 @@ - For efficiency, CockroachDB encodes `BIT` and `VARBIT` bitfield types as arrays of 64-bit integers. That is, [base-2 (binary format)](https://en.wikipedia.org/wiki/Binary_number#Conversion_to_and_from_other_numeral_systems) `BIT` and `VARBIT` data types are converted to base 10 and stored in arrays. Encoding in CockroachDB is [big-endian](https://en.wikipedia.org/wiki/Endianness), therefore the last value may have many trailing zeroes. For this reason, the first value of each array is the number of bits that are used in the last value of the array. - - For instance, if the bitfield is 129 bits long, there will be 4 integers in the array. The first integer will be `1`; representing the number of bits in the last value, the second integer will be the first 64 bits, the third integer will be bits 65–128, and the last integer will either be `0` or `9223372036854775808` (i.e. the integer with only the first bit set, or `1000000000000000000000000000000000000000000000000000000000000000` when base 2). - - This example is base-10 encoded into an array as follows: - - ~~~ - {"array": [1, , , 0 or 9223372036854775808]} - ~~~ - - For downstream processing, it is necessary to base-2 encode every element in the array (except for the first element). The first number in the array gives you the number of bits to take from the last base-2 number — that is, the most significant bits. So, in the example above this would be `1`. Finally, all the base-2 numbers can be appended together, which will result in the original number of bits, 129. 
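The unpacking procedure described above can be sketched in Python. This is a hypothetical helper, not part of CockroachDB; it assumes the array layout exactly as described: the first element is the count of significant bits in the last 64-bit chunk, and every other element is a big-endian 64-bit integer.

```python
def decode_bitfield(arr):
    # arr[0]: number of bits used in the final chunk.
    # arr[1:]: big-endian 64-bit integer chunks of the bitfield.
    bits_in_last = arr[0]
    chunks = arr[1:]
    # Full chunks contribute all 64 bits.
    bits = "".join(format(c, "064b") for c in chunks[:-1])
    # The last chunk contributes only its most significant bits.
    bits += format(chunks[-1], "064b")[:bits_in_last]
    return bits


# The 129-bit case from the text: two full chunks plus a single bit,
# where the last element is either 0 or 9223372036854775808
# (an integer with only its first bit set).
print(len(decode_bitfield([1, 0, 0, 9223372036854775808])))  # 129
```

Applying the same helper to a four-element array whose first element is `8` yields 64 + 64 + 8 = 136 bits, matching the second worked example below.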
- - In a different example of this process where the bitfield is 136 bits long, the array would be similar to the following when base-10 encoded: - - ~~~ - {"array": [8, 18293058736425533439, 18446744073709551615, 13690942867206307840]} - ~~~ - - To then work with this data, you would convert each of the elements in the array to base-2 numbers, besides the first element. For the above array, this would convert to: - - ~~~ - [8, 1111110111011011111111111111111111111111111111111111111111111111, 1111111111111111111111111111111111111111111111111111111111111111, 1011111000000000000000000000000000000000000000000000000000000000] - ~~~ - - Next, you use the first element in the array to take the number of bits from the last base-2 element, `10111110`. Finally, you append each of the base-2 numbers together — in the above array, the second, third, and truncated last element. This results in 136 bits, the original number of bits. diff --git a/src/current/_includes/v21.1/cdc/client-key-encryption.md b/src/current/_includes/v21.1/cdc/client-key-encryption.md deleted file mode 100644 index c7c7be4c38c..00000000000 --- a/src/current/_includes/v21.1/cdc/client-key-encryption.md +++ /dev/null @@ -1 +0,0 @@ -**Note:** Client keys are often encrypted. You will receive an error if you pass an encrypted client key in your changefeed statement. To decrypt the client key, run: `openssl rsa -in key.pem -out key.decrypt.pem -passin pass:{PASSWORD}`. Once decrypted, be sure to update your changefeed statement to use the new `key.decrypt.pem` file instead. 
\ No newline at end of file diff --git a/src/current/_includes/v21.1/cdc/configure-all-changefeed.md b/src/current/_includes/v21.1/cdc/configure-all-changefeed.md deleted file mode 100644 index 2b993f9f41a..00000000000 --- a/src/current/_includes/v21.1/cdc/configure-all-changefeed.md +++ /dev/null @@ -1,19 +0,0 @@ -It is useful to be able to pause all running changefeeds during troubleshooting, testing, or when a decrease in CPU load is needed. - -To pause all running changefeeds: - -{% include_cached copy-clipboard.html %} -~~~sql -PAUSE JOBS (SELECT job_id FROM [SHOW JOBS] WHERE job_type='CHANGEFEED' AND status IN ('running')); -~~~ - -This will change the status for each of the running changefeeds to `paused`, which can be verified with [`SHOW JOBS`](stream-data-out-of-cockroachdb-using-changefeeds.html#using-show-jobs). - -To resume all running changefeeds: - -{% include_cached copy-clipboard.html %} -~~~sql -RESUME JOBS (SELECT job_id FROM [SHOW JOBS] WHERE job_type='CHANGEFEED' AND status IN ('paused')); -~~~ - -This will resume the changefeeds and update the status for each of the changefeeds to `running`. diff --git a/src/current/_includes/v21.1/cdc/core-csv.md b/src/current/_includes/v21.1/cdc/core-csv.md deleted file mode 100644 index 4ee6bfc587d..00000000000 --- a/src/current/_includes/v21.1/cdc/core-csv.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. 
To set the display format, use the [`--format=csv` flag](cockroach-sql.html#sql-flag-format) when starting the [built-in SQL client](cockroach-sql.html), or set the [`\set display_format=csv` option](cockroach-sql.html#client-side-options) once the SQL client is open. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/cdc/core-url.md b/src/current/_includes/v21.1/cdc/core-url.md deleted file mode 100644 index 7241e203aa7..00000000000 --- a/src/current/_includes/v21.1/cdc/core-url.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](cancel-query.html) statement on a separate connection. Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/cdc/create-core-changefeed-avro.md b/src/current/_includes/v21.1/cdc/create-core-changefeed-avro.md deleted file mode 100644 index 147c652e475..00000000000 --- a/src/current/_includes/v21.1/cdc/create-core-changefeed-avro.md +++ /dev/null @@ -1,104 +0,0 @@ -In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas. - -1. 
Use the [`cockroach start-single-node`](cockroach-start-single-node.html) command to start a single-node cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node \ - --insecure \ - --listen-addr=localhost \ - --background - ~~~ - -2. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/). - -3. Move into the extracted `confluent-` directory and start Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local services start - ~~~ - - Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives) and the [Quick Start Guide](https://docs.confluent.io/platform/current/quickstart/ce-quickstart.html#ce-quickstart). - -4. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv - ~~~ - - {% include {{ page.version.version }}/cdc/core-url.md %} - - {% include {{ page.version.version }}/cdc/core-csv.md %} - -5. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -6. Create table `bar`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bar (a INT PRIMARY KEY); - ~~~ - -7. Insert a row into the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bar VALUES (0); - ~~~ - -8. Start the core changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > EXPERIMENTAL CHANGEFEED FOR bar WITH format = experimental_avro, confluent_schema_registry = 'http://localhost:8081'; - ~~~ - - ~~~ - table,key,value - bar,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000 - ~~~ - -9. 
In a new terminal, add another row: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)" - ~~~ - -10. Back in the terminal where the core changefeed is streaming, the output will appear: - - ~~~ - bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002 - ~~~ - - Note that records may take a couple of seconds to display in the core changefeed. - -11. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running. - -12. To stop `cockroach`, run: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach quit --insecure - ~~~ - -13. To stop Confluent, move into the extracted `confluent-` directory and stop Confluent: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local services stop - ~~~ - - To terminate all Confluent processes, use: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent local destroy - ~~~ diff --git a/src/current/_includes/v21.1/cdc/create-core-changefeed.md b/src/current/_includes/v21.1/cdc/create-core-changefeed.md deleted file mode 100644 index 6fbe1d3500d..00000000000 --- a/src/current/_includes/v21.1/cdc/create-core-changefeed.md +++ /dev/null @@ -1,80 +0,0 @@ -In this example, you'll set up a core changefeed for a single-node cluster. - -1. In a terminal window, start `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start-single-node \ - --insecure \ - --listen-addr=localhost \ - --background - ~~~ - -2. As the `root` user, open the [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --url="postgresql://root@127.0.0.1:26257?sslmode=disable" \ - --format=csv - ~~~ - - {% include {{ page.version.version }}/cdc/core-url.md %} - - {% include {{ page.version.version }}/cdc/core-csv.md %} - -3. 
Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -4. Create table `foo`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE foo (a INT PRIMARY KEY); - ~~~ - -5. Insert a row into the table: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO foo VALUES (0); - ~~~ - -6. Start the core changefeed: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > EXPERIMENTAL CHANGEFEED FOR foo; - ~~~ - ~~~ - table,key,value - foo,[0],"{""after"": {""a"": 0}}" - ~~~ - -7. In a new terminal, add another row: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)" - ~~~ - -8. Back in the terminal where the core changefeed is streaming, the following output will appear: - - ~~~ - foo,[1],"{""after"": {""a"": 1}}" - ~~~ - - Note that records may take a couple of seconds to display in the core changefeed. - -9. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running. - -10. To stop `cockroach`, run: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach quit --insecure - ~~~ diff --git a/src/current/_includes/v21.1/cdc/print-key.md b/src/current/_includes/v21.1/cdc/print-key.md deleted file mode 100644 index ab0b0924d30..00000000000 --- a/src/current/_includes/v21.1/cdc/print-key.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -This example only prints the value. To print both the key and value of each message in the changefeed (e.g., to observe what happens with `DELETE`s), use the `--property print.key=true` flag.
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/cdc/url-encoding.md b/src/current/_includes/v21.1/cdc/url-encoding.md deleted file mode 100644 index 2a681d7f913..00000000000 --- a/src/current/_includes/v21.1/cdc/url-encoding.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -Parameters should always be URI-encoded before they are included in the changefeed's URI, as they often contain special characters. Use JavaScript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/computed-columns/add-computed-column.md b/src/current/_includes/v21.1/computed-columns/add-computed-column.md deleted file mode 100644 index 5eff580e575..00000000000 --- a/src/current/_includes/v21.1/computed-columns/add-computed-column.md +++ /dev/null @@ -1,55 +0,0 @@ -In this example, create a table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE x ( - a INT NULL, - b INT NULL AS (a * 2) STORED, - c INT NULL AS (a + 4) STORED, - FAMILY "primary" (a, b, rowid, c) - ); -~~~ - -Then, insert a row of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO x VALUES (6); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ -+---+----+----+ -| a | b | c | -+---+----+----+ -| 6 | 12 | 10 | -+---+----+----+ -(1 row) -~~~ - -Now add a virtual computed column to the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE x ADD COLUMN d INT AS (a // 2) VIRTUAL; -~~~ - -The `d` column is added to the table and computed as the `a` column divided by 2.
- -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ -+---+----+----+---+ -| a | b | c | d | -+---+----+----+---+ -| 6 | 12 | 10 | 3 | -+---+----+----+---+ -(1 row) -~~~ diff --git a/src/current/_includes/v21.1/computed-columns/alter-computed-column.md b/src/current/_includes/v21.1/computed-columns/alter-computed-column.md deleted file mode 100644 index 0c554f1c630..00000000000 --- a/src/current/_includes/v21.1/computed-columns/alter-computed-column.md +++ /dev/null @@ -1,76 +0,0 @@ -To alter the formula for a computed column, you must [`DROP`](drop-column.html) and [`ADD`](add-column.html) the column back with the new definition. Take the following table for instance: - -{% include_cached copy-clipboard.html %} -~~~sql -> CREATE TABLE x ( -a INT NULL, -b INT NULL AS (a * 2) STORED, -c INT NULL AS (a + 4) STORED, -FAMILY "primary" (a, b, rowid, c) -); -~~~ -~~~ -CREATE TABLE - - -Time: 4ms total (execution 4ms / network 0ms) -~~~ - -Add a computed column `d`: - -{% include_cached copy-clipboard.html %} -~~~sql -> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED; -~~~ -~~~ -ALTER TABLE - - -Time: 199ms total (execution 199ms / network 0ms) -~~~ - -If you try to alter it, you'll get an error: - -{% include_cached copy-clipboard.html %} -~~~sql -> ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED; -~~~ -~~~ -invalid syntax: statement ignored: at or near "int": syntax error -SQLSTATE: 42601 -DETAIL: source SQL: -ALTER TABLE x ALTER COLUMN d INT AS (a // 3) STORED - ^ -HINT: try \h ALTER TABLE -~~~ - -However, you can drop it and then add it with the new definition: - -{% include_cached copy-clipboard.html %} -~~~sql -> SET sql_safe_updates = false; -> ALTER TABLE x DROP COLUMN d; -> ALTER TABLE x ADD COLUMN d INT AS (a // 3) STORED; -> SET sql_safe_updates = true; -~~~ -~~~ -SET - - -Time: 1ms total (execution 0ms / network 0ms) - -ALTER TABLE - - -Time: 195ms total (execution 195ms / network 0ms) - -ALTER TABLE - - -Time: 186ms total 
(execution 185ms / network 0ms) - -SET - - -Time: 0ms total (execution 0ms / network 0ms) -~~~ diff --git a/src/current/_includes/v21.1/computed-columns/convert-computed-column.md b/src/current/_includes/v21.1/computed-columns/convert-computed-column.md deleted file mode 100644 index 2c9897b8319..00000000000 --- a/src/current/_includes/v21.1/computed-columns/convert-computed-column.md +++ /dev/null @@ -1,108 +0,0 @@ -You can convert a stored, computed column into a regular column by using `ALTER TABLE`. - -In this example, create a simple table with a computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - first_name STRING, - last_name STRING, - full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED - ); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO office_dogs (id, first_name, last_name) VALUES - (1, 'Petee', 'Hirata'), - (2, 'Carl', 'Kimball'), - (3, 'Ernie', 'Narayan'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM office_dogs; -~~~ - -~~~ -+----+------------+-----------+---------------+ -| id | first_name | last_name | full_name | -+----+------------+-----------+---------------+ -| 1 | Petee | Hirata | Petee Hirata | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -+----+------------+-----------+---------------+ -(3 rows) -~~~ - -The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). 
You can view the column details with the [`SHOW COLUMNS`](show-columns.html) statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM office_dogs; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -| id | INT | false | NULL | | {"primary"} | -| first_name | STRING | true | NULL | | {} | -| last_name | STRING | true | NULL | | {} | -| full_name | STRING | true | NULL | concat(first_name, ' ', last_name) | {} | -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -(4 rows) -~~~ - -Now, convert the computed column (`full_name`) to a regular column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED; -~~~ - -Check that the computed column was converted: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM office_dogs; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| id | INT | false | NULL | | {"primary"} | -| first_name | STRING | true | NULL | | {} | -| last_name | STRING | true | NULL | | {} | -| full_name | STRING | true | NULL | | {} | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -(4 rows) -~~~ - -The computed column is now a regular column and can be updated as such: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is 
not computed'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM office_dogs; -~~~ - -~~~ -+----+------------+-----------+----------------------+ -| id | first_name | last_name | full_name | -+----+------------+-----------+----------------------+ -| 1 | Petee | Hirata | Petee Hirata | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -| 4 | Lola | McDog | This is not computed | -+----+------------+-----------+----------------------+ -(4 rows) -~~~ diff --git a/src/current/_includes/v21.1/computed-columns/jsonb.md b/src/current/_includes/v21.1/computed-columns/jsonb.md deleted file mode 100644 index 6b0ca92f80c..00000000000 --- a/src/current/_includes/v21.1/computed-columns/jsonb.md +++ /dev/null @@ -1,70 +0,0 @@ -In this example, create a table with a `JSONB` column and a stored computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE student_profiles ( - id STRING PRIMARY KEY AS (profile->>'id') STORED, - profile JSONB -); -~~~ - -You can also add a computed column after creating the table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> ALTER TABLE student_profiles ADD COLUMN age INT AS ( (profile->>'age')::INT) STORED; -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO student_profiles (profile) VALUES - ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'), - ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'), - ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM student_profiles; -~~~ -~~~ -+--------+---------------------------------------------------------------------------------------------------------------------+------+ -| id | profile | age |
----------+---------------------------------------------------------------------------------------------------------------------+------+ -| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"} | 16 | -| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"} | 15 | -| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} | NULL | -+--------+---------------------------------------------------------------------------------------------------------------------+------+ -~~~ - -The primary key `id` is computed as a field from the `profile` column, and the `age` column is likewise computed from the `profile` column's data. - -This example shows how to add a stored computed column with a [coerced type](scalar-expressions.html#explicit-type-coercions): - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE json_data ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - json_info JSONB -); -INSERT INTO json_data (json_info) VALUES ('{"amount": "123.45"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE json_data ADD COLUMN amount DECIMAL AS ((json_info->>'amount')::DECIMAL) STORED; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM json_data; -~~~ - -~~~ - id | json_info | amount ----------------------------------------+----------------------+--------- - e7c3d706-1367-4d77-bfb4-386dfdeb10f9 | {"amount": "123.45"} | 123.45 -(1 row) -~~~ diff --git a/src/current/_includes/v21.1/computed-columns/secondary-index.md b/src/current/_includes/v21.1/computed-columns/secondary-index.md deleted file mode 100644 index 8b78325e695..00000000000 --- a/src/current/_includes/v21.1/computed-columns/secondary-index.md +++ /dev/null @@ -1,63 +0,0 @@ -In this example, create a table with a virtual computed column and an index on that column: - -{% 
include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE gymnastics ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - athlete STRING, - vault DECIMAL, - bars DECIMAL, - beam DECIMAL, - floor DECIMAL, - combined_score DECIMAL AS (vault + bars + beam + floor) VIRTUAL, - INDEX total (combined_score DESC) - ); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES - ('Simone Biles', 15.933, 14.800, 15.300, 15.800), - ('Gabby Douglas', 0, 15.766, 0, 0), - ('Laurie Hernandez', 15.100, 0, 15.233, 14.833), - ('Madison Kocian', 0, 15.933, 0, 0), - ('Aly Raisman', 15.833, 0, 15.000, 15.366); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM gymnastics; -~~~ -~~~ -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -| id | athlete | vault | bars | beam | floor | combined_score | -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman | 15.833 | 0 | 15.000 | 15.366 | 46.199 | -| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian | 0 | 15.933 | 0 | 0 | 15.933 | -| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas | 0 | 15.766 | 0 | 0 | 15.766 | -| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 | 0 | 15.233 | 14.833 | 45.166 | -| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles | 15.933 | 14.800 | 15.300 | 15.800 | 61.833 | -+--------------------------------------+------------------+--------+--------+--------+--------+----------------+ -~~~ - -Now, run a query using the secondary index: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC; -~~~ -~~~ -+------------------+----------------+ -| athlete | combined_score | -+------------------+----------------+ -| Simone
Biles | 61.833 | -| Aly Raisman | 46.199 | -| Laurie Hernandez | 45.166 | -| Madison Kocian | 15.933 | -| Gabby Douglas | 15.766 | -+------------------+----------------+ -~~~ - -The athlete with the highest combined score of 61.833 is Simone Biles. diff --git a/src/current/_includes/v21.1/computed-columns/simple.md b/src/current/_includes/v21.1/computed-columns/simple.md deleted file mode 100644 index 24a86a59481..00000000000 --- a/src/current/_includes/v21.1/computed-columns/simple.md +++ /dev/null @@ -1,40 +0,0 @@ -In this example, let's create a simple table with a computed column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - city STRING, - first_name STRING, - last_name STRING, - full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED, - address STRING, - credit_card STRING, - dl STRING UNIQUE CHECK (LENGTH(dl) < 8) -); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users (first_name, last_name) VALUES - ('Lola', 'McDog'), - ('Carl', 'Kimball'), - ('Ernie', 'Narayan'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ -~~~ - id | city | first_name | last_name | full_name | address | credit_card | dl -+--------------------------------------+------+------------+-----------+---------------+---------+-------------+------+ - 5740da29-cc0c-47af-921c-b275d21d4c76 | NULL | Ernie | Narayan | Ernie Narayan | NULL | NULL | NULL - e7e0b748-9194-4d71-9343-cd65218848f0 | NULL | Lola | McDog | Lola McDog | NULL | NULL | NULL - f00e4715-8ca7-4d5a-8de5-ef1d5d8092f3 | NULL | Carl | Kimball | Carl Kimball | NULL | NULL | NULL -(3 rows) -~~~ - -The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). 
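The computed-column expressions above can be sanity-checked outside the database. This is a minimal Python sketch (illustrative only, not CockroachDB code; the `full_name` helper is hypothetical) that mirrors the `CONCAT(first_name, ' ', last_name)` expression from the `users` example:

```python
# Hypothetical helper mirroring the computed-column expression
# CONCAT(first_name, ' ', last_name) from the example above.
def full_name(first_name, last_name):
    # Approximate SQL NULL handling by treating None as an empty string.
    parts = [first_name or "", " ", last_name or ""]
    return "".join(parts)

rows = [("Lola", "McDog"), ("Carl", "Kimball"), ("Ernie", "Narayan")]
print([full_name(f, l) for f, l in rows])
# → ['Lola McDog', 'Carl Kimball', 'Ernie Narayan']
```

Because the column is `STORED`, CockroachDB evaluates the equivalent expression once at write time and persists the result with the row.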
diff --git a/src/current/_includes/v21.1/computed-columns/virtual.md b/src/current/_includes/v21.1/computed-columns/virtual.md deleted file mode 100644 index 7d873440328..00000000000 --- a/src/current/_includes/v21.1/computed-columns/virtual.md +++ /dev/null @@ -1,41 +0,0 @@ -In this example, create a table with a `JSONB` column and virtual computed columns: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE student_profiles ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - profile JSONB, - full_name STRING AS (concat_ws(' ',profile->>'firstName', profile->>'lastName')) VIRTUAL, - birthday TIMESTAMP AS (parse_timestamp(profile->>'birthdate')) VIRTUAL -); -~~~ - -Then, insert a few rows of data: - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO student_profiles (profile) VALUES - ('{"id": "d78236", "firstName": "Arthur", "lastName": "Read", "birthdate": "2010-01-25", "school": "PVPHS", "credits": 120, "sports": "none"}'), - ('{"firstName": "Buster", "lastName": "Bunny", "birthdate": "2011-11-07", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'), - ('{"firstName": "Ernie", "lastName": "Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM student_profiles; -~~~ -~~~ - id | profile | full_name | birthday ----------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+---------------+---------------------- - 0e420282-105d-473b-83e2-3b082e7033e4 | {"birthdate": "2011-11-07", "clubs": "MUN", "credits": 67, "firstName": "Buster", "id": "f98112", "lastName": "Bunny", "school": "THS"} | Buster Bunny | 2011-11-07 00:00:00 - 6e9b77cd-ec67-41ae-b346-7b3d89902c72 | {"birthdate": "2010-01-25", "credits": 120, "firstName": "Arthur", "id": "d78236", "lastName": "Read", "school": 
"PVPHS", "sports": "none"} | Arthur Read | 2010-01-25 00:00:00 - f74b21e3-dc1e-49b7-a648-3c9b9024a70f | {"clubs": "Chess", "firstName": "Ernie", "id": "t63512", "lastName": "Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} | Ernie Narayan | NULL -(3 rows) - - -Time: 2ms total (execution 2ms / network 0ms) -~~~ - -The virtual column `full_name` is computed as a field from the `profile` column's data. The first name and last name are concatenated and separated by a single whitespace character using the [`concat_ws` string function](functions-and-operators.html#string-and-byte-functions). - -The virtual column `birthday` is parsed as a `TIMESTAMP` value from the `profile` column's `birthdate` string value. The [`parse_timestamp` function](functions-and-operators.html) is used to parse strings in `TIMESTAMP` format. diff --git a/src/current/_includes/v21.1/demo_movr.md b/src/current/_includes/v21.1/demo_movr.md deleted file mode 100644 index cde6c211213..00000000000 --- a/src/current/_includes/v21.1/demo_movr.md +++ /dev/null @@ -1,10 +0,0 @@ -Start the [MovR database](movr.html) on a 3-node CockroachDB demo cluster with a larger data set. 
- -{% include_cached copy-clipboard.html %} -~~~ shell -cockroach demo movr --num-histories 250000 --num-promo-codes 250000 --num-rides 125000 --num-users 12500 --num-vehicles 3750 --nodes 3 -~~~ - -{% comment %} -This is a test -{% endcomment %} diff --git a/src/current/_includes/v21.1/faq/auto-generate-unique-ids.html b/src/current/_includes/v21.1/faq/auto-generate-unique-ids.html deleted file mode 100644 index 20679a9acab..00000000000 --- a/src/current/_includes/v21.1/faq/auto-generate-unique-ids.html +++ /dev/null @@ -1,107 +0,0 @@ -To auto-generate unique row identifiers, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID NOT NULL DEFAULT gen_random_uuid(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+-------+---------+-------------+ - cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL | NULL - 2382564e-702f-42d9-a139-b6df535ae00a | seattle | Eric | NULL | NULL - 7d27e40b-263a-4891-b29b-d59135e55650 | seattle | Dan | NULL | NULL -(3 rows) -~~~ - -Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead: - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users2 ( - id BYTES DEFAULT uuid_v4(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, 
- credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users2 (name, city) VALUES ('Anna', 'new york'), ('Jonah', 'seattle'), ('Terry', 'chicago'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users2; -~~~ - -~~~ - id | city | name | address | credit_card -+------------------------------------------------+----------+-------+---------+-------------+ - 4\244\277\323/\261M\007\213\275*\0060\346\025z | chicago | Terry | NULL | NULL - \273*t=u.F\010\274f/}\313\332\373a | new york | Anna | NULL | NULL - \004\\\364nP\024L)\252\364\222r$\274O0 | seattle | Jonah | NULL | NULL -(3 rows) -~~~ - -In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 512 MiB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load. - -This approach has the disadvantage of creating a primary key that may not be useful in a query directly, which can require a join with another table or a secondary index.
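To build intuition for why 128-bit random IDs make collisions vanishingly unlikely, here is an illustrative Python sketch using the standard library's `uuid.uuid4()`, a rough client-side analogue of `gen_random_uuid()` (this is an assumption-free demonstration of UUID properties, not part of the CockroachDB examples above):

```python
import uuid

# uuid4() returns a random version-4 UUID: 16 bytes, i.e., 128 bits,
# analogous to the gen_random_uuid() default value above.
ids = [uuid.uuid4() for _ in range(10_000)]

assert all(u.version == 4 for u in ids)  # all version-4 (random) UUIDs
assert len(set(ids)) == len(ids)         # no collisions among 10,000 IDs
print(len(ids[0].bytes))                 # → 16 (bytes, i.e., 128 bits)
```

With 122 random bits per value, the chance of any collision among even billions of IDs remains negligible, which is why no coordination between nodes is needed at generation time.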
- -If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html): - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users3 ( - id INT DEFAULT unique_rowid(), - city STRING NOT NULL, - name STRING NULL, - address STRING NULL, - credit_card STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - FAMILY "primary" (id, city, name, address, credit_card) -); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> INSERT INTO users3 (name, city) VALUES ('Blake', 'chicago'), ('Hannah', 'seattle'), ('Bobby', 'seattle'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users3; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------+---------+--------+---------+-------------+ - 469048192112197633 | chicago | Blake | NULL | NULL - 469048192112263169 | seattle | Hannah | NULL | NULL - 469048192112295937 | seattle | Bobby | NULL | NULL -(3 rows) -~~~ - -Upon insert or upsert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Also, there can be gaps and the order is not completely guaranteed. diff --git a/src/current/_includes/v21.1/faq/clock-synchronization-effects.md b/src/current/_includes/v21.1/faq/clock-synchronization-effects.md deleted file mode 100644 index 7fae7e0e72d..00000000000 --- a/src/current/_includes/v21.1/faq/clock-synchronization-effects.md +++ /dev/null @@ -1,27 +0,0 @@ -CockroachDB requires moderate levels of clock synchronization to preserve data consistency. 
For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed, it spontaneously shuts down. This offset defaults to 500ms but can be changed via the [`--max-offset`](cockroach-start.html#flags-max-offset) flag when starting each node. - -While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -In very rare cases, CockroachDB can momentarily run with a stale clock. This can happen when using vMotion, which can suspend a VM running CockroachDB, migrate it to different hardware, and resume it. This will cause CockroachDB to be out of sync for a short period before it jumps to the correct time. During this window, it would be possible for a client to read stale data and write data derived from stale reads. By enabling the `server.clock.forward_jump_check_enabled` [cluster setting](cluster-settings.html), you can be alerted when the CockroachDB clock jumps forward, indicating it had been running with a stale clock. To protect against this on vMotion, however, use the [`--clock-device`](cockroach-start.html#general) flag to specify a [PTP hardware clock](https://www.kernel.org/doc/html/latest/driver-api/ptp.html) for CockroachDB to use when querying the current time. When doing so, you should not enable `server.clock.forward_jump_check_enabled` because forward jumps will be expected and harmless. For more information on how `--clock-device` interacts with vMotion, see [this blog post](https://core.vmware.com/blog/cockroachdb-vmotion-support-vsphere-7-using-precise-timekeeping). 
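The 80% shutdown rule described above is simple arithmetic. This illustrative Python sketch (the function name is hypothetical; CockroachDB performs the comparison internally) shows the threshold for the default 500ms `--max-offset`:

```python
def exceeds_shutdown_threshold(measured_offset_ms, max_offset_ms=500.0):
    # Per the text above, a node shuts down when its clock offset from at
    # least half of the other nodes reaches 80% of the configured maximum.
    return measured_offset_ms >= 0.8 * max_offset_ms

print(exceeds_shutdown_threshold(350.0))  # → False (under the 400 ms threshold)
print(exceeds_shutdown_threshold(450.0))  # → True (over the 400 ms threshold)
```

Raising `--max-offset` widens this margin but also weakens the guarantees that depend on bounded clock skew, so tune it with care.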
- -### Considerations - -When setting up clock synchronization: - -- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing). -- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should. -- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine. -- Do not run more than one clock sync service on VMs where `cockroach` is running. 
-- {% include v21.1/misc/multiregion-max-offset.md %} - -### Tutorials - -For guidance on synchronizing clocks, see the tutorial for your deployment environment: - -Environment | Featured Approach -------------|--------------------- -[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service. -[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service. -[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service. -[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service. -[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service. diff --git a/src/current/_includes/v21.1/faq/clock-synchronization-monitoring.html b/src/current/_includes/v21.1/faq/clock-synchronization-monitoring.html deleted file mode 100644 index 7fb82e4d188..00000000000 --- a/src/current/_includes/v21.1/faq/clock-synchronization-monitoring.html +++ /dev/null @@ -1,8 +0,0 @@ -As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus timeseries database.
Two of these metrics export how close each node's clock is to the clock of all other nodes: - -Metric | Definition --------|----------- -`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds -`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds - -As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset. diff --git a/src/current/_includes/v21.1/faq/differences-between-numberings.md b/src/current/_includes/v21.1/faq/differences-between-numberings.md deleted file mode 100644 index 741ec4f8066..00000000000 --- a/src/current/_includes/v21.1/faq/differences-between-numberings.md +++ /dev/null @@ -1,11 +0,0 @@ - -| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences | -|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------| -| Size | 16 bytes | 8 bytes | 1 to 8 bytes | -| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered | -| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause contention | -| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values | -| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local | -| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher | -| `INSERT` throughput when used as key | Highest | Limited by max 
throughput on 1 node | Limited by max throughput on 1 node | -| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited | diff --git a/src/current/_includes/v21.1/faq/planned-maintenance.md b/src/current/_includes/v21.1/faq/planned-maintenance.md deleted file mode 100644 index b1e9d60110b..00000000000 --- a/src/current/_includes/v21.1/faq/planned-maintenance.md +++ /dev/null @@ -1,22 +0,0 @@ -By default, if a node stays offline for more than 5 minutes, the cluster will consider it dead and will rebalance its data to other nodes. Before temporarily stopping nodes for planned maintenance (e.g., upgrading system software), if you expect any nodes to be offline for longer than 5 minutes, you can prevent the cluster from unnecessarily rebalancing data off the nodes by increasing the `server.time_until_store_dead` [cluster setting](cluster-settings.html) to match the estimated maintenance window. - -For example, let's say you want to maintain a group of servers, and the nodes running on the servers may be offline for up to 15 minutes as a result. Before shutting down the nodes, you would change the `server.time_until_store_dead` cluster setting as follows: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING server.time_until_store_dead = '15m0s'; -~~~ - -After completing the maintenance work and [restarting the nodes](cockroach-start.html), you would then change the setting back to its default: - -{% include_cached copy-clipboard.html %} -~~~ sql -> RESET CLUSTER SETTING server.time_until_store_dead; -~~~ - -It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. 
For example: - -{% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING server.shutdown.drain_wait = '10s'; - ~~~ diff --git a/src/current/_includes/v21.1/faq/sequential-numbers.md b/src/current/_includes/v21.1/faq/sequential-numbers.md deleted file mode 100644 index 5b79c97566c..00000000000 --- a/src/current/_includes/v21.1/faq/sequential-numbers.md +++ /dev/null @@ -1,8 +0,0 @@ -Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations: - -- Unless you need roughly-ordered numbers, use [`UUID`](uuid.html) values instead. See the [previous -FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details. -- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that -consumes a lower sequence number commits after a transaction that consumes a higher number). -- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause contention on few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers. -- {% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %} diff --git a/src/current/_includes/v21.1/faq/sequential-transactions.md b/src/current/_includes/v21.1/faq/sequential-transactions.md deleted file mode 100644 index 684f2ce5d2a..00000000000 --- a/src/current/_includes/v21.1/faq/sequential-transactions.md +++ /dev/null @@ -1,19 +0,0 @@ -Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly -solutions instead. 
For example, CockroachDB's [time travel queries (`AS OF SYSTEM -TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following: - -- Paginating through all the changes to a table or dataset -- Determining the order of changes to data over time -- Determining the state of data at some point in the past -- Determining the changes to data between two points of time - -Consider also that the values generated by `unique_rowid()`, described in the previous FAQ entries, also provide an approximate time ordering. - -However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows: - -- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);` -- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;` - -This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result. - -If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs. diff --git a/src/current/_includes/v21.1/faq/simulate-key-value-store.html b/src/current/_includes/v21.1/faq/simulate-key-value-store.html deleted file mode 100644 index 4772fa5358c..00000000000 --- a/src/current/_includes/v21.1/faq/simulate-key-value-store.html +++ /dev/null @@ -1,13 +0,0 @@ -CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. 
Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key: - -~~~ sql -> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES); -~~~ - -When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation: - -~~~ sql -> UPSERT INTO kv VALUES (1, b'hello') -~~~ - -This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises. diff --git a/src/current/_includes/v21.1/import-table-deprecate.md b/src/current/_includes/v21.1/import-table-deprecate.md deleted file mode 100644 index 715a37c6f4e..00000000000 --- a/src/current/_includes/v21.1/import-table-deprecate.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -As of v21.2, certain `IMPORT TABLE` statements that defined the table schema inline are **deprecated**. To import data into a new table, use [`CREATE TABLE`](create-table.html) followed by [`IMPORT INTO`](import-into.html). For an example, read [Import into a new table from a CSV file](import-into.html#import-into-a-new-table-from-a-csv-file). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/json/json-sample.go b/src/current/_includes/v21.1/json/json-sample.go deleted file mode 100644 index d5953a71ee2..00000000000 --- a/src/current/_includes/v21.1/json/json-sample.go +++ /dev/null @@ -1,79 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "io/ioutil" - "net/http" - "time" - - _ "github.com/lib/pq" -) - -func main() { - db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257") - if err != nil { - panic(err) - } - - // The Reddit API wants us to tell it where to start from. The first request - // we just say "null" to say "from the start", subsequent requests will use - // the value received from the last call. - after := "null" - - for i := 0; i < 41; i++ { - after, err = makeReq(db, after) - if err != nil { - panic(err) - } - // Reddit limits to 30 requests per minute, so do not do any more than that. - time.Sleep(2 * time.Second) - } -} - -func makeReq(db *sql.DB, after string) (string, error) { - // First, make a request to reddit using the appropriate "after" string. - client := &http.Client{} - req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil) - - req.Header.Add("User-Agent", `Go`) - - resp, err := client.Do(req) - if err != nil { - return "", err - } - - res, err := ioutil.ReadAll(resp.Body) - if err != nil { - return "", err - } - - // We've gotten back our JSON from reddit, we can use a couple SQL tricks to - // accomplish multiple things at once. - // The JSON reddit returns looks like this: - // { - // "data": { - // "children": [ ... ] - // }, - // "after": ... - // } - // We structure our query so that we extract the `children` field, and then - // expand that and insert each individual element into the database as a - // separate row. We then return the "after" field so we know how to make the - // next request. 
- r, err := db.Query(` - INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements($1->'data'->'children') - RETURNING $1->'data'->'after'`, - string(res)) - if err != nil { - return "", err - } - - // Since we did a RETURNING, we need to grab the result of our query. - r.Next() - var newAfter string - r.Scan(&newAfter) - - return newAfter, nil -} diff --git a/src/current/_includes/v21.1/json/json-sample.py b/src/current/_includes/v21.1/json/json-sample.py deleted file mode 100644 index 49e302613e0..00000000000 --- a/src/current/_includes/v21.1/json/json-sample.py +++ /dev/null @@ -1,44 +0,0 @@ -import json -import psycopg2 -import requests -import time - -conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257) -conn.set_session(autocommit=True) -cur = conn.cursor() - -# The Reddit API wants us to tell it where to start from. The first request -# we just say "null" to say "from the start"; subsequent requests will use -# the value received from the last call. -url = "https://www.reddit.com/r/programming.json" -after = {"after": "null"} - -for n in range(41): - # First, make a request to reddit using the appropriate "after" string. - req = requests.get(url, params=after, headers={"User-Agent": "Python"}) - - # Decode the JSON and set "after" for the next request. - resp = req.json() - after = {"after": str(resp['data']['after'])} - - # Convert the JSON to a string to send to the database. - data = json.dumps(resp) - - # The JSON reddit returns looks like this: - # { - # "data": { - # "children": [ ... ] - # }, - # "after": ... - # } - # We structure our query so that we extract the `children` field, and then - # expand that and insert each individual element into the database as a - # separate row. - cur.execute("""INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements(%s->'data'->'children')""", (data,)) - - # Reddit limits to 30 requests per minute, so do not do any more than that. 
- time.sleep(2) - -cur.close() -conn.close() diff --git a/src/current/_includes/v21.1/known-limitations/backup-interleaved.md b/src/current/_includes/v21.1/known-limitations/backup-interleaved.md deleted file mode 100644 index 6f5ebee3772..00000000000 --- a/src/current/_includes/v21.1/known-limitations/backup-interleaved.md +++ /dev/null @@ -1,3 +0,0 @@ -Interleaved tables are now disabled by default in v21.1. Your backup will fail if your cluster includes interleaved data. To include interleaved tables, use the [`INCLUDE_DEPRECATED_INTERLEAVES` option](backup.html#include-deprecated-interleaves). Note that interleaved tables will be [permanently removed from CockroachDB](interleave-in-parent.html#deprecation) in a future release, so you will be unable to `RESTORE` backups containing interleaved tables to any future versions. - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/52009) diff --git a/src/current/_includes/v21.1/known-limitations/cdc.md b/src/current/_includes/v21.1/known-limitations/cdc.md deleted file mode 100644 index bc760a38e3d..00000000000 --- a/src/current/_includes/v21.1/known-limitations/cdc.md +++ /dev/null @@ -1,7 +0,0 @@ -- Changefeeds only work on tables with a single [column family](column-families.html) (which is the default for new tables). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/28667) -- Changefeeds cannot be [backed up](backup.html) or [restored](restore.html). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73434) -- Changefeeds cannot be altered. To alter, cancel the changefeed and [create a new one with updated settings from where it left off](create-changefeed.html#start-a-new-changefeed-where-another-ended). [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/28668) -- Changefeed target options are limited to tables.
[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73435) -- Using a [cloud storage sink](create-changefeed.html#cloud-storage-sink) only works with `JSON` and emits [newline-delimited json](http://ndjson.org) files. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73432) -- {{ site.data.products.enterprise }} changefeeds are currently disabled for [CockroachDB {{ site.data.products.serverless }} clusters](../cockroachcloud/quickstart.html). Core changefeeds are enabled. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/73429) -- Changefeeds will emit [`NULL` values](null-handling.html) for [`VIRTUAL` computed columns](computed-columns.html) and not the column's computed value. [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/74688) diff --git a/src/current/_includes/v21.1/known-limitations/copy-from-clients.md b/src/current/_includes/v21.1/known-limitations/copy-from-clients.md deleted file mode 100644 index 4428aaf74f7..00000000000 --- a/src/current/_includes/v21.1/known-limitations/copy-from-clients.md +++ /dev/null @@ -1,5 +0,0 @@ -The built-in SQL shell provided with CockroachDB ([`cockroach sql`](cockroach-sql.html) / [`cockroach demo`](cockroach-demo.html)) does not currently support importing data with the `COPY` statement. - -To load data into CockroachDB, we recommend that you use an [`IMPORT`](import.html) statement. If you must use a `COPY` statement, you can issue the statement from the [`psql` client](https://www.postgresql.org/docs/current/app-psql.html) command provided with PostgreSQL, or from another third-party client.
- -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/16392) \ No newline at end of file diff --git a/src/current/_includes/v21.1/known-limitations/copy-syntax.md b/src/current/_includes/v21.1/known-limitations/copy-syntax.md deleted file mode 100644 index fb38157814f..00000000000 --- a/src/current/_includes/v21.1/known-limitations/copy-syntax.md +++ /dev/null @@ -1,9 +0,0 @@ -CockroachDB does not yet support the following `COPY` syntax: - -- `COPY ... TO`. To copy data from a CockroachDB cluster to a file, use an [`EXPORT`](export.html) statement. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/41608) - -- `COPY ... FROM ... WHERE ` - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/54580) \ No newline at end of file diff --git a/src/current/_includes/v21.1/known-limitations/correlated-ctes.md b/src/current/_includes/v21.1/known-limitations/correlated-ctes.md deleted file mode 100644 index b4ec6c99fa4..00000000000 --- a/src/current/_includes/v21.1/known-limitations/correlated-ctes.md +++ /dev/null @@ -1,20 +0,0 @@ -CockroachDB does not support correlated common table expressions. This means that a CTE cannot refer to a variable defined outside the scope of that CTE. - -For example, the following query returns an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users - WHERE id = - (WITH rides_home AS - (SELECT revenue FROM rides - WHERE end_address = address) - SELECT rider_id FROM rides_home); -~~~ - -~~~ -ERROR: CTEs may not be correlated -SQLSTATE: 0A000 -~~~ - -This query returns an error because the `WITH rides_home` clause references a column (`address`) returned by the `SELECT` statement at the top level of the query, outside the `rides_home` CTE definition. 
\ No newline at end of file diff --git a/src/current/_includes/v21.1/known-limitations/drop-single-partition.md b/src/current/_includes/v21.1/known-limitations/drop-single-partition.md deleted file mode 100644 index 3d8166fdc04..00000000000 --- a/src/current/_includes/v21.1/known-limitations/drop-single-partition.md +++ /dev/null @@ -1 +0,0 @@ -CockroachDB does not currently support dropping a single partition from a table. In order to remove partitions, you can [repartition]({% unless page.name == "partitioning.md" %}partitioning.html{% endunless %}#repartition-a-table) the table. diff --git a/src/current/_includes/v21.1/known-limitations/drop-unique-index-from-create-table.md b/src/current/_includes/v21.1/known-limitations/drop-unique-index-from-create-table.md deleted file mode 100644 index 698a24c24ef..00000000000 --- a/src/current/_includes/v21.1/known-limitations/drop-unique-index-from-create-table.md +++ /dev/null @@ -1 +0,0 @@ -[`UNIQUE` indexes](create-index.html) created as part of a [`CREATE TABLE`](create-table.html) statement cannot be removed without using [`CASCADE`]({% unless page.name == "drop-index.md" %}drop-index.html{% endunless %}#remove-an-index-and-dependent-objects-with-cascade). Unique indexes created with [`CREATE INDEX`](create-index.html) do not have this limitation. diff --git a/src/current/_includes/v21.1/known-limitations/import-high-disk-contention.md b/src/current/_includes/v21.1/known-limitations/import-high-disk-contention.md deleted file mode 100644 index 0e016ecaac5..00000000000 --- a/src/current/_includes/v21.1/known-limitations/import-high-disk-contention.md +++ /dev/null @@ -1,6 +0,0 @@ -[`IMPORT`](import.html) can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. 
This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB'; -~~~ diff --git a/src/current/_includes/v21.1/known-limitations/old-multi-col-stats.md b/src/current/_includes/v21.1/known-limitations/old-multi-col-stats.md deleted file mode 100644 index 595be9c7209..00000000000 --- a/src/current/_includes/v21.1/known-limitations/old-multi-col-stats.md +++ /dev/null @@ -1,3 +0,0 @@ -When a column is dropped from a multi-column index, the {% if page.name == "cost-based-optimizer.md" %} optimizer {% else %} [optimizer](cost-based-optimizer.html) {% endif %} will not collect new statistics for the deleted column. However, the optimizer never deletes the old [multi-column statistics](create-statistics.html#create-statistics-on-multiple-columns). This can cause a buildup of statistics in `system.table_statistics`, leading the optimizer to use stale statistics, which could result in sub-optimal plans. To work around this issue and avoid these scenarios, explicitly [delete those statistics](create-statistics.html#delete-statistics) from the `system.table_statistics` table. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67407) diff --git a/src/current/_includes/v21.1/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v21.1/known-limitations/partitioning-with-placeholders.md deleted file mode 100644 index b3c3345200d..00000000000 --- a/src/current/_includes/v21.1/known-limitations/partitioning-with-placeholders.md +++ /dev/null @@ -1 +0,0 @@ -When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause.
diff --git a/src/current/_includes/v21.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md b/src/current/_includes/v21.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md deleted file mode 100644 index cf101f88ad9..00000000000 --- a/src/current/_includes/v21.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md +++ /dev/null @@ -1,64 +0,0 @@ -Schema change [DDL](https://en.wikipedia.org/wiki/Data_definition_language#ALTER_statement) statements that run inside a multi-statement transaction with non-DDL statements can fail at [`COMMIT`](commit-transaction.html) time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded. - -If such a failure occurs, CockroachDB will emit a new CockroachDB-specific error code, `XXA00`, and the following error message: - -``` -transaction committed but schema change aborted with error: -HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed. -Manual inspection may be required to determine the actual state of the database. -``` - -{{site.data.alerts.callout_info}} -This limitation exists in versions of CockroachDB prior to 19.2. In these older versions, CockroachDB returned the Postgres error code `40003`, `"statement completion unknown"`. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -If you must execute schema change DDL statements inside a multi-statement transaction, we **strongly recommend** checking for this error code and handling it appropriately every time you execute such transactions. -{{site.data.alerts.end}} - -This error will occur in various scenarios, including but not limited to: - -- Creating a unique index fails because values aren't unique. -- The evaluation of a computed value fails. 
-- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column. - -To see an example of this error, start by creating the following table. - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE T(x INT); -INSERT INTO T(x) VALUES (1), (2), (3); -~~~ - -Then, enter the following multi-statement transaction, which will trigger the error. - -{% include_cached copy-clipboard.html %} -~~~ sql -BEGIN; -ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x); -INSERT INTO T(x) VALUES (3); -COMMIT; -~~~ - -~~~ -pq: transaction committed but schema change aborted with error: (23505): duplicate key value (x)=(3) violates unique constraint "unique_x" -HINT: Some of the non-DDL statements may have committed successfully, but some of the DDL statement(s) failed. -Manual inspection may be required to determine the actual state of the database. -~~~ - -In this example, the [`INSERT`](insert.html) statement committed, but the [`ALTER TABLE`](alter-table.html) statement adding a [`UNIQUE` constraint](unique.html) failed. We can verify this by looking at the data in table `t` and seeing that the additional non-unique value `3` was successfully inserted. - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM t; -~~~ - -~~~ - x -+---+ - 1 - 2 - 3 - 3 -(4 rows) -~~~ diff --git a/src/current/_includes/v21.1/known-limitations/schema-changes-between-prepared-statements.md b/src/current/_includes/v21.1/known-limitations/schema-changes-between-prepared-statements.md deleted file mode 100644 index 736fe99df61..00000000000 --- a/src/current/_includes/v21.1/known-limitations/schema-changes-between-prepared-statements.md +++ /dev/null @@ -1,33 +0,0 @@ -When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. 
For example, adding a column to a table referenced in a prepared statement with a `SELECT *` clause will result in an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE users (id INT PRIMARY KEY); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -PREPARE prep1 AS SELECT * FROM users; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE users ADD COLUMN name STRING; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO users VALUES (1, 'Max Roach'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -EXECUTE prep1; -~~~ - -~~~ -ERROR: cached plan must not change result type -SQLSTATE: 0A000 -~~~ - -It's therefore recommended to explicitly list result columns instead of using `SELECT *` in prepared statements, when possible. diff --git a/src/current/_includes/v21.1/known-limitations/schema-changes-within-transactions.md b/src/current/_includes/v21.1/known-limitations/schema-changes-within-transactions.md deleted file mode 100644 index 05d4f29ff1a..00000000000 --- a/src/current/_includes/v21.1/known-limitations/schema-changes-within-transactions.md +++ /dev/null @@ -1,13 +0,0 @@ -Within a single [transaction](transactions.html): - -- DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions. For more details, [see examples of unsupported statements](online-schema-changes.html#examples-of-statements-that-fail). -- As of version v2.1, you can run schema changes inside the same transaction as a [`CREATE TABLE`](create-table.html) statement. For more information, [see this example](online-schema-changes.html#run-schema-changes-inside-a-transaction-with-create-table). -- A `CREATE TABLE` statement containing [`FOREIGN KEY`](foreign-key.html) or [`INTERLEAVE`](interleave-in-parent.html) clauses cannot be followed by statements that reference the new table. -- Database, schema, table, and user-defined type names cannot be reused. 
For example, you cannot drop a table named `a` and then create (or rename) a different table with the name `a`. Similarly, you cannot rename a database named `a` to `b` and then create (or rename) a different database with the name `a`. As a workaround, split `RENAME TO`, `DROP`, and `CREATE` statements that reuse object names into separate transactions. -- [Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed](#schema-change-ddl-statements-inside-a-multi-statement-transaction-can-fail-while-other-statements-succeed). -- As of v19.1, some schema changes can be used in combination in a single `ALTER TABLE` statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically). -- [`DROP COLUMN`](drop-column.html) can result in data loss if one of the other schema changes in the transaction fails or is canceled. To work around this, move the `DROP COLUMN` statement to its own explicit transaction or run it in a single statement outside the existing transaction. - -{{site.data.alerts.callout_info}} -If a schema change within a transaction fails, manual intervention may be needed to determine which has failed. After determining which schema change(s) failed, you can then retry the schema changes. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/known-limitations/set-transaction-no-rollback.md b/src/current/_includes/v21.1/known-limitations/set-transaction-no-rollback.md deleted file mode 100644 index 4ab3661f4f7..00000000000 --- a/src/current/_includes/v21.1/known-limitations/set-transaction-no-rollback.md +++ /dev/null @@ -1,17 +0,0 @@ -{% if page.name == "set-vars.md" %} `SET` {% else %} [`SET`](set-vars.html) {% endif %} does not properly apply [`ROLLBACK`](rollback-transaction.html) within a transaction. 
For example, in the following transaction, showing the `TIME ZONE` [variable](set-vars.html#supported-variables) does not return `2` as expected after the rollback: - -~~~sql -SET TIME ZONE +2; -BEGIN; -SET TIME ZONE +3; -ROLLBACK; -SHOW TIME ZONE; -~~~ - -~~~sql -timezone ------------- -3 -~~~ - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/69396) diff --git a/src/current/_includes/v21.1/known-limitations/single-col-stats-deletion.md b/src/current/_includes/v21.1/known-limitations/single-col-stats-deletion.md deleted file mode 100644 index b8baa46c5d2..00000000000 --- a/src/current/_includes/v21.1/known-limitations/single-col-stats-deletion.md +++ /dev/null @@ -1,3 +0,0 @@ -[Single-column statistics](create-statistics.html#create-statistics-on-a-single-column) are not deleted when columns are dropped, which could cause minor performance issues. - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/67407) diff --git a/src/current/_includes/v21.1/known-limitations/stats-refresh-upgrade.md b/src/current/_includes/v21.1/known-limitations/stats-refresh-upgrade.md deleted file mode 100644 index f54a08b3754..00000000000 --- a/src/current/_includes/v21.1/known-limitations/stats-refresh-upgrade.md +++ /dev/null @@ -1,3 +0,0 @@ -The [automatic statistics refresher](cost-based-optimizer.html#control-statistics-refresh-rate) automatically checks whether it needs to refresh statistics for every table in the database upon startup of each node in the cluster. If statistics for a table have not been refreshed in a while, this will trigger collection of statistics for that table. If statistics have been refreshed recently, it will not force a refresh. As a result, the automatic statistics refresher does not necessarily perform a refresh of statistics after an [upgrade](upgrade-cockroach-version.html). 
This could cause a problem, for example, if the upgrade moves from a version without [histograms](cost-based-optimizer.html#control-histogram-collection) to a version with histograms. To refresh statistics manually, use [`CREATE STATISTICS`](create-statistics.html). - - [Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/54816) diff --git a/src/current/_includes/v21.1/known-limitations/unordered-distinct-operations.md b/src/current/_includes/v21.1/known-limitations/unordered-distinct-operations.md deleted file mode 100644 index 5c3e932c4bb..00000000000 --- a/src/current/_includes/v21.1/known-limitations/unordered-distinct-operations.md +++ /dev/null @@ -1,8 +0,0 @@ -Disk spilling [isn't supported](https://github.com/cockroachdb/cockroach/issues/61411) when running `UPSERT` statements that have `nulls are distinct` and `error on duplicate` markers. You can check this by using `EXPLAIN` and looking at the statement plan. - -~~~ - ├── distinct | | - │ │ | distinct on | ... - │ │ | nulls are distinct | - │ │ | error on duplicate | -~~~ diff --git a/src/current/_includes/v21.1/known-limitations/userfile-upload-non-recursive.md b/src/current/_includes/v21.1/known-limitations/userfile-upload-non-recursive.md deleted file mode 100644 index d873b1f5e33..00000000000 --- a/src/current/_includes/v21.1/known-limitations/userfile-upload-non-recursive.md +++ /dev/null @@ -1 +0,0 @@ -- `cockroach userfile upload` does not currently allow for recursive uploads from a directory. This feature will be present with the `--recursive` flag in future versions. 
[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/pull/65307) diff --git a/src/current/_includes/v21.1/metric-names.md b/src/current/_includes/v21.1/metric-names.md deleted file mode 100644 index 4bc47082ec6..00000000000 --- a/src/current/_includes/v21.1/metric-names.md +++ /dev/null @@ -1,250 +0,0 @@ -Name | Help ------|----- -`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas) -`addsstable.copies` | Number of SSTable ingestions that required copying files during application -`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders) -`build.timestamp` | Build information -`capacity.available` | Available storage capacity -`capacity.reserved` | Capacity reserved for snapshots -`capacity.used` | Used storage capacity -`capacity` | Total storage capacity -`changefeed.failures` | Total number of changefeed jobs which have failed -`changefeed.running` | Number of currently running changefeeds, including sinkless -`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds -`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds -`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges -`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine -`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine -`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions -`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue -`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted -`distsender.batches.partial` | Number of partial batches processed -`distsender.batches` | Number of batches processed -`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered 
-`distsender.rpc.sent.local` | Number of local RPCs sent -`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors -`distsender.rpc.sent` | Number of RPCs sent -`exec.error` | Number of batch KV requests that failed to execute on this node -`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node -`exec.success` | Number of batch KV requests executed successfully on this node -`gcbytesage` | Cumulative age of non-live data in seconds -`gossip.bytes.received` | Number of received gossip bytes -`gossip.bytes.sent` | Number of sent gossip bytes -`gossip.connections.incoming` | Number of active incoming gossip connections -`gossip.connections.outgoing` | Number of active outgoing gossip connections -`gossip.connections.refused` | Number of refused incoming gossip connections -`gossip.infos.received` | Number of received gossip Info objects -`gossip.infos.sent` | Number of sent gossip Info objects -`intentage` | Cumulative age of intents in seconds -`intentbytes` | Number of bytes in intent KV pairs -`intentcount` | Count of intent keys -`keybytes` | Number of bytes taken up by keys -`keycount` | Count of all keys -`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated -`leases.epoch` | Number of replica leaseholders using epoch-based leases -`leases.error` | Number of failed lease requests -`leases.expiration` | Number of replica leaseholders using expiration-based leases -`leases.success` | Number of successful lease requests -`leases.transfers.error` | Number of failed lease transfers -`leases.transfers.success` | Number of successful lease transfers -`livebytes` | Number of bytes of live data (keys plus values) -`livecount` | Count of live keys -`liveness.epochincrements` | Number of times this node has incremented its liveness epoch -`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node -`liveness.heartbeatlatency` | Node 
liveness heartbeat latency in nanoseconds -`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node -`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live) -`node-id` | node ID with labels for advertised RPC and HTTP addresses -`queue.consistency.pending` | Number of pending replicas in the consistency checker queue -`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue -`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue -`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue -`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal -`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal -`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine -`queue.gc.info.intentsconsidered` | Number of 'old' intents -`queue.gc.info.intenttxns` | Number of associated distinct transactions -`queue.gc.info.numkeysaffected` | Number of keys with GC'able data -`queue.gc.info.pushtxn` | Number of attempted pushes -`queue.gc.info.resolvesuccess` | Number of successful intent resolutions -`queue.gc.info.resolvetotal` | Number of attempted intent resolutions -`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns -`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns -`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns -`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine -`queue.gc.pending` | Number of pending replicas in the GC queue -`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue 
-`queue.gc.process.success` | Number of replicas successfully processed by the GC queue -`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue -`queue.raftlog.pending` | Number of pending replicas in the Raft log queue -`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue -`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue -`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue -`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue -`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue -`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue -`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue -`queue.replicagc.pending` | Number of pending replicas in the replica GC queue -`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue -`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue -`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue -`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue -`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue -`queue.replicate.pending` | Number of pending replicas in the replicate queue -`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue -`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue -`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue -`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting 
allocation options -`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue -`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage) -`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition) -`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue -`queue.split.pending` | Number of pending replicas in the split queue -`queue.split.process.failure` | Number of replicas which failed processing in the split queue -`queue.split.process.success` | Number of replicas successfully processed by the split queue -`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue -`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue -`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue -`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue -`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue -`raft.commandsapplied` | Count of Raft commands applied -`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue -`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced -`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands -`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries -`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick() -`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working -`raft.rcvd.app` | Number of 
MsgApp messages received by this store -`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store -`raft.rcvd.dropped` | Number of dropped incoming Raft messages -`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store -`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store -`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store -`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store -`raft.rcvd.prop` | Number of MsgProp messages received by this store -`raft.rcvd.snap` | Number of MsgSnap messages received by this store -`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store -`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store -`raft.rcvd.vote` | Number of MsgVote messages received by this store -`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store -`raft.ticks` | Number of Raft ticks queued -`raftlog.behind` | Number of Raft log entries followers on other stores are behind -`raftlog.truncated` | Number of Raft log entries truncated -`range.adds` | Number of range additions -`range.raftleadertransfers` | Number of raft leader transfers -`range.removes` | Number of range removals -`range.snapshots.generated` | Number of generated snapshots -`range.snapshots.normal-applied` | Number of applied snapshots -`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots -`range.splits` | Number of range splits -`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum -`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target -`ranges` | Number of ranges -`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions 
-`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined -`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined -`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined -`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue -`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue -`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue -`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree -`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue -`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store -`replicas.leaders` | Number of raft leaders -`replicas.leaseholders` | Number of lease holders -`replicas.quiescent` | Number of quiesced replicas -`replicas.reserved` | Number of replicas reserved for snapshots -`replicas` | Number of replicas -`requests.backpressure.split` | Number of backpressured writes waiting on a Range split -`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue -`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender -`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease -`requests.slow.raft` | Number of requests that have been stuck for a long time in raft -`rocksdb.block.cache.hits` | Count of block cache hits -`rocksdb.block.cache.misses` | Count of block cache misses -`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache -`rocksdb.block.cache.usage` | Bytes used by the block cache -`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked 
-`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation -`rocksdb.compactions` | Number of table compactions -`rocksdb.flushes` | Number of table flushes -`rocksdb.memtable.total-size` | Current size of memtable in bytes -`rocksdb.num-sstables` | Number of storage engine SSTables -`rocksdb.read-amplification` | Number of disk reads per query -`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks -`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds -`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error. -`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error. -`sql.bytesin` | Number of sql bytes received -`sql.bytesout` | Number of sql bytes sent -`sql.conns` | Number of active sql connections -`sql.ddl.count` | Number of SQL DDL statements -`sql.delete.count` | Number of SQL DELETE statements -`sql.distsql.exec.latency` | Latency in nanoseconds of SQL statement executions running on the distributed execution engine. This metric does not include the time to parse and plan the statement. -`sql.distsql.flows.active` | Number of distributed SQL flows currently active -`sql.distsql.flows.total` | Number of distributed SQL flows executed -`sql.distsql.queries.active` | Number of distributed SQL queries currently active -`sql.distsql.queries.total` | Number of distributed SQL queries executed -`sql.distsql.select.count` | Number of DistSQL SELECT statements -`sql.distsql.service.latency` | Latency in nanoseconds of SQL statement executions running on the distributed execution engine, including the time to parse and plan the statement. -`sql.exec.latency` | Latency in nanoseconds of all SQL statement executions. This metric does not include the time to parse and plan the statement. 
-`sql.guardrails.max_row_size_err.count` | Number of times a large row violates the corresponding `sql.guardrails.max_row_size_err` limit. -`sql.guardrails.max_row_size_log.count` | Number of times a large row violates the corresponding `sql.guardrails.max_row_size_log` limit. -`sql.insert.count` | Number of SQL INSERT statements -`sql.mem.current` | Current sql statement memory usage -`sql.mem.distsql.current` | Current sql statement memory usage for distsql -`sql.mem.distsql.max` | Memory usage per sql statement for distsql -`sql.mem.max` | Memory usage per sql statement -`sql.mem.session.current` | Current sql session memory usage -`sql.mem.session.max` | Memory usage per sql session -`sql.mem.txn.current` | Current sql transaction memory usage -`sql.mem.txn.max` | Memory usage per sql transaction -`sql.misc.count` | Number of other SQL statements -`sql.query.count` | Number of SQL queries -`sql.select.count` | Number of SQL SELECT statements -`sql.service.latency` | Latency in nanoseconds of SQL request execution, including the time to parse and plan the statement. 
-`sql.txn.abort.count` | Number of SQL transaction ABORT statements -`sql.txn.begin.count` | Number of SQL transaction BEGIN statements -`sql.txn.commit.count` | Number of SQL transaction COMMIT statements -`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements -`sql.update.count` | Number of SQL UPDATE statements -`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo -`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released -`sys.cgocalls` | Total number of cgo calls -`sys.cpu.sys.ns` | Total system cpu time in nanoseconds -`sys.cpu.sys.percent` | Current system cpu percentage -`sys.cpu.user.ns` | Total user cpu time in nanoseconds -`sys.cpu.user.percent` | Current user cpu percentage -`sys.fd.open` | Process open file descriptors -`sys.fd.softlimit` | Process open FD soft limit -`sys.gc.count` | Total number of GC runs -`sys.gc.pause.ns` | Total GC pause in nanoseconds -`sys.gc.pause.percent` | Current GC pause percentage -`sys.go.allocbytes` | Current bytes of memory allocated by go -`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released -`sys.goroutines` | Current number of goroutines -`sys.rss` | Current process RSS -`sys.uptime` | Process uptime in seconds -`sysbytes` | Number of bytes in system KV pairs -`syscount` | Count of system KV pairs -`timeseries.write.bytes` | Total size in bytes of metric samples written to disk -`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk -`timeseries.write.samples` | Total number of metric samples written to disk -`totalbytes` | Total number of bytes taken up by keys and values including non-live data -`tscache.skl.read.pages` | Number of pages in the read timestamp cache -`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache -`tscache.skl.write.pages` | Number of pages in the write timestamp cache -`tscache.skl.write.rotations` | Number of page rotations in the write timestamp
cache -`txn.abandons` | Number of abandoned KV transactions -`txn.aborts` | Number of aborted KV transactions -`txn.autoretries` | Number of automatic retries to avoid serializable restarts -`txn.commits1PC` | Number of committed one-phase KV transactions -`txn.commits` | Number of committed KV transactions (including 1PC) -`txn.durations` | KV transaction durations in nanoseconds -`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command -`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer -`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE -`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first -`txn.restarts` | Number of restarted KV transactions -`valbytes` | Number of bytes taken up by values -`valcount` | Count of all values diff --git a/src/current/_includes/v21.1/misc/available-capacity-metric.md b/src/current/_includes/v21.1/misc/available-capacity-metric.md deleted file mode 100644 index 61dbcb9cbf2..00000000000 --- a/src/current/_includes/v21.1/misc/available-capacity-metric.md +++ /dev/null @@ -1 +0,0 @@ -If you are testing your deployment locally with multiple CockroachDB nodes running on a single machine (this is [not recommended in production](recommended-production-settings.html#topology)), you must explicitly [set the store size](cockroach-start.html#store) per node in order to display the correct capacity. Otherwise, the machine's actual disk capacity will be counted as a separate store for each node, thus inflating the computed capacity. 
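A rough numeric sketch of the capacity inflation described above, assuming three local nodes sharing one 512 GiB disk and an explicit 150 GiB store size per node (both values are illustrative, not defaults):

```python
GIB = 1 << 30  # bytes per GiB

disk_capacity = 512 * GIB  # the machine's single physical disk
nodes = 3                  # CockroachDB nodes running on that one machine

# Without an explicit store size, each node reports the whole disk as its
# own store, so the cluster's computed capacity triple-counts the disk.
inflated_capacity = nodes * disk_capacity

# With an explicit per-node store size (e.g., --store=path=...,size=150GiB),
# the computed capacity reflects what each node is actually allowed to use.
per_node_store = 150 * GIB
correct_capacity = nodes * per_node_store

print(inflated_capacity // GIB)  # -> 1536, far more than the disk holds
print(correct_capacity // GIB)   # -> 450
```

The `--store=path=...,size=...` flag in the comment is the per-node setting the paragraph refers to; the store path is elided here.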
\ No newline at end of file diff --git a/src/current/_includes/v21.1/misc/aws-locations.md b/src/current/_includes/v21.1/misc/aws-locations.md deleted file mode 100644 index 8b073c1f230..00000000000 --- a/src/current/_includes/v21.1/misc/aws-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`| -| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -82.907123)` | -| US West (N. California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` | -| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` | -| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` | -| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` | -| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` | -| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` | -| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` | -| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` | -| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` | -| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` | -| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` | -| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` | -| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region',
'ap-south-1', 19.075984, 72.877656)` | -| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v21.1/misc/azure-locations.md b/src/current/_includes/v21.1/misc/azure-locations.md deleted file mode 100644 index 7119ff8b7cb..00000000000 --- a/src/current/_includes/v21.1/misc/azure-locations.md +++ /dev/null @@ -1,30 +0,0 @@ -| Location | SQL Statement | -| -------- | ------------- | -| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` | -| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` | -| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` | -| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` | -| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` | -| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` | -| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` | -| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` | -| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` | -| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` | -| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` | -| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` | -| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` | -| australiaeast (Australia East) | `INSERT into 
system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` | -| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` | -| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` | -| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` | -| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` | -| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` | -| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` | -| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` | -| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` | -| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` | -| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` | -| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` | -| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` | -| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` | -| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` | diff --git a/src/current/_includes/v21.1/misc/basic-terms.md b/src/current/_includes/v21.1/misc/basic-terms.md deleted file mode 100644 index afba9a87cb5..00000000000 --- a/src/current/_includes/v21.1/misc/basic-terms.md +++ /dev/null @@ -1,9 +0,0 @@ -Term | Definition ------|------------ -**Cluster** | Your 
CockroachDB deployment, which acts as a single logical application. -**Node** | An individual machine running CockroachDB. One or more nodes comprise a cluster. -**Range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into contiguous chunks called "ranges," such that every key is found in one range.

From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as that range reaches 512 MiB in size ([the default](../configure-replication-zones.html#range-max-bytes)), it splits into two ranges. This process continues for these new ranges as the table and its indexes continue growing. -**Replica** | CockroachDB replicates each range (3 times [by default](../configure-replication-zones.html#num_replicas)) and stores each replica on a different node. -**Leaseholder** | For each range, one of the replicas holds the "range lease." This replica, referred to as the "leaseholder," is the one that receives and coordinates all read and write requests for the range.

Unlike writes, read requests access the leaseholder and send the results to the client without needing to coordinate with any of the other range replicas. This reduces the network round trips involved and is possible because the leaseholder is guaranteed to be up-to-date due to the fact that all write requests also go to the leaseholder. -**Raft Leader** | For each range, one of the replicas is the "leader" for write requests. Via the [Raft consensus protocol](replication-layer.html#raft), this replica ensures that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder. -**Raft Log** | For each range, there is a time-ordered log of writes to the range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication. diff --git a/src/current/_includes/v21.1/misc/chrome-localhost.md b/src/current/_includes/v21.1/misc/chrome-localhost.md deleted file mode 100644 index d794ff339d0..00000000000 --- a/src/current/_includes/v21.1/misc/chrome-localhost.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If you are using Google Chrome, and you are getting an error about not being able to reach `localhost` because its certificate has been revoked, go to chrome://flags/#allow-insecure-localhost, enable "Allow invalid certificates for resources loaded from localhost", and then restart the browser. Enabling this Chrome feature degrades security for all sites running on `localhost`, not just CockroachDB's DB Console, so be sure to enable the feature only temporarily. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/misc/client-side-intervention-example.md b/src/current/_includes/v21.1/misc/client-side-intervention-example.md deleted file mode 100644 index d0bbfc33695..00000000000 --- a/src/current/_includes/v21.1/misc/client-side-intervention-example.md +++ /dev/null @@ -1,28 +0,0 @@ -The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic](advanced-client-side-transaction-retries.html), so it can be used from any programming language or environment. In particular, your retry loop must: - -- Raise an error if the `max_retries` limit is reached -- Retry on `40001` error codes -- [`COMMIT`](commit-transaction.html) at the end of the `try` block -- Implement [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) logic as shown below for best performance - -~~~ python -while true: - n++ - if n == max_retries: - throw Error("did not succeed within N retries") - try: - # add logic here to run all your statements - conn.exec('COMMIT') - break - catch error: - if error.code != "40001": - throw error - else: - # This is a retry error, so we roll back the current transaction - # and sleep for a bit before retrying. The sleep time increases - # for each failed transaction. Adapted from - # https://colintemple.com/2017/03/java-exponential-backoff/ - conn.exec('ROLLBACK'); - sleep_ms = int(((2**n) * 100) + rand( 100 - 1 ) + 1) - sleep(sleep_ms) # Assumes your sleep() takes milliseconds -~~~ diff --git a/src/current/_includes/v21.1/misc/csv-import-callout.md b/src/current/_includes/v21.1/misc/csv-import-callout.md deleted file mode 100644 index 60555c5d0b6..00000000000 --- a/src/current/_includes/v21.1/misc/csv-import-callout.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The column order in your schema must match the column order in the file being imported. 
-{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.1/misc/customizing-the-savepoint-name.md b/src/current/_includes/v21.1/misc/customizing-the-savepoint-name.md deleted file mode 100644 index ed895f906f3..00000000000 --- a/src/current/_includes/v21.1/misc/customizing-the-savepoint-name.md +++ /dev/null @@ -1,5 +0,0 @@ -Set the `force_savepoint_restart` [session variable](set-vars.html#supported-variables) to `true` to enable using a custom name for the [retry savepoint](advanced-client-side-transaction-retries.html#retry-savepoints). - -Once this variable is set, the [`SAVEPOINT`](savepoint.html) statement will accept any name for the retry savepoint, not just `cockroach_restart`. In addition, it causes every savepoint name to be equivalent to `cockroach_restart`, therefore disallowing the use of [nested transactions](transactions.html#nested-transactions). - -This feature exists to support applications that want to use the [advanced client-side transaction retry protocol](advanced-client-side-transaction-retries.html), but cannot customize the name of savepoints to be `cockroach_restart`. For example, this may be necessary because you are using an ORM that requires its own names for savepoints. diff --git a/src/current/_includes/v21.1/misc/debug-subcommands.md b/src/current/_includes/v21.1/misc/debug-subcommands.md deleted file mode 100644 index 5c7e7b7761d..00000000000 --- a/src/current/_includes/v21.1/misc/debug-subcommands.md +++ /dev/null @@ -1,3 +0,0 @@ -While the `cockroach debug` command has a few subcommands, users are expected to use only the [`zip`](cockroach-debug-zip.html), [`encryption-active-key`](cockroach-debug-encryption-active-key.html), [`merge-logs`](cockroach-debug-merge-logs.html), [`list-files`](cockroach-debug-list-files.html), and [`ballast`](cockroach-debug-ballast.html) subcommands. - -The other `debug` subcommands are useful only to CockroachDB's developers and contributors. 
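The application-level retry loop in the deleted `client-side-intervention-example.md` include above is deliberately pseudocode, but it translates directly into runnable Python. The sketch below is illustrative, not part of the docs: `run_with_retries` and `RetryableError` are hypothetical names, and a plain callable stands in for a real driver connection (a real implementation would issue `ROLLBACK` in the retry branch and `COMMIT` inside the callable, exactly as the pseudocode shows):

```python
import random
import time


class RetryableError(Exception):
    """Hypothetical stand-in for a driver error carrying a SQLSTATE code."""

    def __init__(self, code):
        super().__init__(code)
        self.code = code


def run_with_retries(txn, max_retries=5):
    """Run txn(), retrying on serialization failures (SQLSTATE 40001)
    with exponential backoff, mirroring the pseudocode's structure."""
    for n in range(1, max_retries + 1):
        try:
            return txn()  # in a real app: BEGIN, statements, COMMIT
        except RetryableError as err:
            if err.code != "40001":
                raise  # not a retry error, so surface it
            # Retry error: back off before the next attempt. The delay
            # doubles with each failed attempt, plus a little jitter.
            sleep_ms = (2 ** n) * 100 + random.randint(1, 100)
            time.sleep(sleep_ms / 1000.0)
    raise Exception("did not succeed within %d retries" % max_retries)
```

A transaction that fails twice with `40001` and then succeeds is paused roughly 200 ms and then 400 ms before the third attempt commits.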
diff --git a/src/current/_includes/v21.1/misc/delete-statistics.md deleted file mode 100644 index 6341dd93af3..00000000000 --- a/src/current/_includes/v21.1/misc/delete-statistics.md +++ /dev/null @@ -1,17 +0,0 @@ -To delete statistics for all tables in all databases: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM system.table_statistics WHERE true; -~~~ - -To delete a named set of statistics (e.g., one named "users_stats"), run a query like the following: - -{% include_cached copy-clipboard.html %} -~~~ sql -> DELETE FROM system.table_statistics WHERE name = 'users_stats'; -~~~ - -After deleting statistics, restart the nodes in your cluster to clear the statistics caches. - -For more information about the `DELETE` statement, see [`DELETE`](delete.html). diff --git a/src/current/_includes/v21.1/misc/diagnostics-callout.html deleted file mode 100644 index a969a8cf152..00000000000 --- a/src/current/_includes/v21.1/misc/diagnostics-callout.html +++ /dev/null @@ -1 +0,0 @@ -{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/misc/enterprise-features.md deleted file mode 100644 index 670bd4869be..00000000000 --- a/src/current/_includes/v21.1/misc/enterprise-features.md +++ /dev/null @@ -1,10 +0,0 @@ -Feature | Description ---------+------------------------- -[Multi-Region Capabilities](multiregion-overview.html) | This feature gives you row-level control of how and where your data is stored to dramatically reduce read and write latencies and assist in meeting regulatory requirements in multi-region deployments. 
-[Follower Reads](follower-reads.html) | This feature reduces read latency in multi-region deployments by using the closest replica at the expense of reading slightly historical data. -[`BACKUP`](backup.html) | This feature creates backups of your cluster's schema and data that are consistent as of a given timestamp, stored on a service such as AWS S3, Google Cloud Storage, NFS, or HTTP storage.

[Incremental backups](take-full-and-incremental-backups.html), [backups with revision history](take-backups-with-revision-history-and-restore-from-a-point-in-time.html), [locality-aware backups](take-and-restore-locality-aware-backups.html), and [encrypted backups](take-and-restore-encrypted-backups.html) require an Enterprise license. [Full backups](take-full-and-incremental-backups.html) do not require an Enterprise license. -[Changefeeds into a Configurable Sink](create-changefeed.html) | This feature targets an allowlist of tables. For every change, it emits a record to a configurable sink, either Apache Kafka or a cloud-storage sink, for downstream processing such as reporting, caching, or full-text indexing. -[Node Map](enable-node-map.html) | This feature visualizes the geographical configuration of a cluster by plotting node localities on a world map. -[Encryption at Rest](encryption.html#encryption-at-rest-enterprise) | Supplementing CockroachDB's encryption in flight capabilities, this feature provides transparent encryption of a node's data on the local disk. It allows encryption of all files on disk using AES in counter mode, with all key sizes allowed. -[GSSAPI with Kerberos Authentication](gssapi_authentication.html) | CockroachDB supports the Generic Security Services API (GSSAPI) with Kerberos authentication, which lets you use an external enterprise directory system that supports Kerberos, such as Active Directory. -[Single Sign-on (SSO)](sso.html) | This feature lets you use an external identity provider for user access to the DB Console in a secure cluster. diff --git a/src/current/_includes/v21.1/misc/experimental-warning.md b/src/current/_includes/v21.1/misc/experimental-warning.md deleted file mode 100644 index d38a9755593..00000000000 --- a/src/current/_includes/v21.1/misc/experimental-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -**This is an experimental feature**. 
The interface and output are subject to change. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/misc/explore-benefits-see-also.md b/src/current/_includes/v21.1/misc/explore-benefits-see-also.md deleted file mode 100644 index 6b1a3afed71..00000000000 --- a/src/current/_includes/v21.1/misc/explore-benefits-see-also.md +++ /dev/null @@ -1,7 +0,0 @@ -- [Replication & Rebalancing](demo-replication-and-rebalancing.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Low Latency Multi-Region Deployment](demo-low-latency-multi-region-deployment.html) -- [Serializable Transactions](demo-serializable.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html) -- [JSON Support](demo-json-support.html) diff --git a/src/current/_includes/v21.1/misc/force-index-selection.md b/src/current/_includes/v21.1/misc/force-index-selection.md deleted file mode 100644 index 386d23492b9..00000000000 --- a/src/current/_includes/v21.1/misc/force-index-selection.md +++ /dev/null @@ -1,128 +0,0 @@ -By using the explicit index annotation, you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) when reading from a named table. - -{{site.data.alerts.callout_info}} -Index selection can impact [performance](performance-best-practices-overview.html), but does not change the result of a query. 
-{{site.data.alerts.end}} - -The syntax to force a scan of a specific index is: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@my_idx; -~~~ - -This is equivalent to the longer expression: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@{FORCE_INDEX=my_idx}; -~~~ - -The syntax to force a **reverse scan** of a specific index is: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@{FORCE_INDEX=my_idx,DESC}; -~~~ - -Forcing a reverse scan is sometimes useful during [performance tuning](performance-best-practices-overview.html). For reference, the full syntax for choosing an index and its scan direction is - -{% include_cached copy-clipboard.html %} -~~~ sql -SELECT * FROM table@{FORCE_INDEX=idx[,DIRECTION]} -~~~ - -where the optional `DIRECTION` is either `ASC` (ascending) or `DESC` (descending). - -When a direction is specified, that scan direction is forced; otherwise the [cost-based optimizer](cost-based-optimizer.html) is free to choose the direction it calculates will result in the best performance. - -You can verify that the optimizer is choosing your desired scan direction using [`EXPLAIN (OPT)`](explain.html#opt-option). For example, given the table - -{% include_cached copy-clipboard.html %} -~~~ sql -> CREATE TABLE users (k INT PRIMARY KEY, v INT); -~~~ - -you can check the scan direction with: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN (opt) SELECT * FROM users@{FORCE_INDEX=primary,DESC}; -~~~ - -~~~ - text -+-------------------------------------+ - scan users,rev - └── flags: force-index=primary,rev -(2 rows) -~~~ - -To force a [partial index scan](partial-indexes.html), your statement must have a `WHERE` clause that implies the partial index filter. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE t ( - a INT, - INDEX idx (a) WHERE a > 0); -INSERT INTO t(a) VALUES (5); -SELECT * FROM t@idx WHERE a > 0; -~~~ - -~~~ -CREATE TABLE - -Time: 13ms total (execution 12ms / network 0ms) - -INSERT 1 - -Time: 22ms total (execution 21ms / network 0ms) - - a ------ - 5 -(1 row) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -To force a [partial GIN index](inverted-indexes.html#partial-gin-indexes) scan, your statement must have a `WHERE` clause that: - -- Implies the partial index. -- Constrains the GIN index scan. - -{% include_cached copy-clipboard.html %} -~~~ sql -DROP TABLE t; -CREATE TABLE t ( - j JSON, - INVERTED INDEX idx (j) WHERE j->'a' = '1'); -INSERT INTO t(j) - VALUES ('{"a": 1}'), - ('{"a": 3, "b": 2}'), - ('{"a": 1, "b": 2}'); -SELECT * FROM t@idx WHERE j->'a' = '1' AND j->'b' = '2'; -~~~ - -~~~ -DROP TABLE - -Time: 68ms total (execution 22ms / network 45ms) - -CREATE TABLE - -Time: 10ms total (execution 10ms / network 0ms) - -INSERT 3 - -Time: 22ms total (execution 22ms / network 0ms) - - j --------------------- - {"a": 1, "b": 2} -(1 row) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -To see all indexes available on a table, use [`SHOW INDEXES`](show-index.html). diff --git a/src/current/_includes/v21.1/misc/gce-locations.md b/src/current/_includes/v21.1/misc/gce-locations.md deleted file mode 100644 index 22122aae78d..00000000000 --- a/src/current/_includes/v21.1/misc/gce-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` | -| us-east4 (N. 
Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` | -| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` | -| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` | -| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` | -| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` | -| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` | -| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` | -| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` | -| europe-west6 (Zürich) | `INSERT into system.locations VALUES ('region', 'europe-west6', 47.3769, 8.5417)` | -| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` | -| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` | -| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` | -| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` | -| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` | -| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v21.1/misc/geojson_geometry_note.md b/src/current/_includes/v21.1/misc/geojson_geometry_note.md deleted file mode 100644 index ba5fe199657..00000000000 --- 
a/src/current/_includes/v21.1/misc/geojson_geometry_note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -The screenshots in these examples were generated using [geojson.io](http://geojson.io), but they are designed to showcase the shapes, not the map. Representing `GEOMETRY` data in GeoJSON can lead to unexpected results if using geometries with [SRIDs](spatial-glossary.html#srid) other than 4326 (as shown below). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/misc/haproxy.md b/src/current/_includes/v21.1/misc/haproxy.md deleted file mode 100644 index 375af8e937d..00000000000 --- a/src/current/_includes/v21.1/misc/haproxy.md +++ /dev/null @@ -1,39 +0,0 @@ -By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly: - - ~~~ - global - maxconn 4096 - - defaults - mode tcp - # Timeout values should be configured for your specific use. - # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect - timeout connect 10s - timeout client 1m - timeout server 1m - # TCP keep-alive on client side. Server already enables them. - option clitcpka - - listen psql - bind :26257 - mode tcp - balance roundrobin - option httpchk GET /health?ready=1 - server cockroach1 :26257 check port 8080 - server cockroach2 :26257 check port 8080 - server cockroach3 :26257 check port 8080 - ~~~ - - The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster: - - Field | Description - ------|------------ - `timeout connect`
`timeout client`
`timeout server` | Timeout values that should be suitable for most deployments. - `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.

This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node. - `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms. - `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests. - `server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the [`--advertise-addr` flag](cockroach-start.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy. - - {{site.data.alerts.callout_info}} - For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html). - {{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/misc/import-perf.md deleted file mode 100644 index b0520a9c392..00000000000 --- a/src/current/_includes/v21.1/misc/import-perf.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -For best practices for optimizing import performance in CockroachDB, see [Import Performance Best Practices](import-performance-best-practices.html). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/misc/install-next-steps.html b/src/current/_includes/v21.1/misc/install-next-steps.html deleted file mode 100644 index 228da86edd1..00000000000 --- a/src/current/_includes/v21.1/misc/install-next-steps.html +++ /dev/null @@ -1,16 +0,0 @@ - diff --git a/src/current/_includes/v21.1/misc/interleave-deprecation-note.md b/src/current/_includes/v21.1/misc/interleave-deprecation-note.md deleted file mode 100644 index 7bff77893bd..00000000000 --- a/src/current/_includes/v21.1/misc/interleave-deprecation-note.md +++ /dev/null @@ -1 +0,0 @@ -{{site.data.alerts.callout_danger}}Interleaving data was deprecated in v20.2, disabled by default in v21.1, and permanently removed in v21.2. For details, see the [interleaving deprecation notice](interleave-in-parent.html#deprecation).{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.1/misc/linux-binary-prereqs.md b/src/current/_includes/v21.1/misc/linux-binary-prereqs.md deleted file mode 100644 index 541183fe71b..00000000000 --- a/src/current/_includes/v21.1/misc/linux-binary-prereqs.md +++ /dev/null @@ -1 +0,0 @@ -

The CockroachDB binary for Linux requires glibc, libncurses, and tzdata, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.

diff --git a/src/current/_includes/v21.1/misc/logging-defaults.md b/src/current/_includes/v21.1/misc/logging-defaults.md deleted file mode 100644 index 1a7ae68a536..00000000000 --- a/src/current/_includes/v21.1/misc/logging-defaults.md +++ /dev/null @@ -1,3 +0,0 @@ -By default, this command logs messages to `stderr`. This includes events with `WARNING` [severity](logging.html#logging-levels-severities) and higher. - -If you need to troubleshoot this command's behavior, you can [customize its logging behavior](configure-logs.html). \ No newline at end of file diff --git a/src/current/_includes/v21.1/misc/logging-flags.md b/src/current/_includes/v21.1/misc/logging-flags.md deleted file mode 100644 index 63c1aa396d8..00000000000 --- a/src/current/_includes/v21.1/misc/logging-flags.md +++ /dev/null @@ -1,11 +0,0 @@ -Flag | Description ------|------------ -`--log` | **New in v21.1:** Configure logging parameters by specifying a YAML payload. For details, see [Configure logs](configure-logs.html#flag). If a YAML configuration is not specified, the [default configuration](configure-logs.html#default-logging-configuration) is used.

`--log-config-file` can also be used.

**Note:** The deprecated logging flags below cannot be combined with `--log`, and can be defined instead in the YAML payload. -`--log-config-file` | **New in v21.1:** Configure logging parameters by specifying a path to a YAML file. For details, see [Configure logs](configure-logs.html#flag). If a YAML configuration is not specified, the [default configuration](configure-logs.html#default-logging-configuration) is used.

`--log` can also be used.

**Note:** The deprecated logging flags below cannot be combined with `--log-config-file`, and can be defined instead in the YAML payload. -`--log-dir` | **Deprecated.** To enable logging to files and write logs to the specified directory, use [`--log`](configure-logs.html#flag) and set `dir` in the YAML configuration.

Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory. -`--log-group-max-size` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. After the logging group (i.e., `cockroach`, `cockroach-sql-audit`, `cockroach-auth`, `cockroach-sql-exec`, `cockroach-pebble`) reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-group-max-size=1GiB`.

**Default**: 100MiB -`--log-file-max-size` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.

**Default**: 10MiB -`--log-file-verbosity` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. Only writes messages to log files if they are at or above the specified [severity level](logging.html#logging-levels-severities), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.

**Default**: `INFO` -`--logtostderr` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. Enable logging to `stderr` for messages at or above the specified [severity level](logging.html#logging-levels-severities), such as `--logtostderr=ERROR`

If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.

Setting `--logtostderr=NONE` disables logging to `stderr`. -`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.

When set to `false`, messages logged to `stderr` are colorized based on [severity level](logging.html#logging-levels-severities).

**Default:** `false` -`--sql-audit-dir` | **Deprecated.** This is now configured with [`--log`](configure-logs.html#flag) or [`--log-config-file`](configure-logs.html#flag) and a YAML payload. If non-empty, output the `SENSITIVE_ACCESS` [logging channel](logging-overview.html#logging-channels) to this directory.

Note that enabling `SENSITIVE_ACCESS` logs can negatively impact performance. As a result, we recommend using the `SENSITIVE_ACCESS` channel for security purposes only. For more information, see [Logging use cases](logging-use-cases.html#security-and-audit-monitoring). diff --git a/src/current/_includes/v21.1/misc/movr-live-demo.md b/src/current/_includes/v21.1/misc/movr-live-demo.md deleted file mode 100644 index f8cfb24cb21..00000000000 --- a/src/current/_includes/v21.1/misc/movr-live-demo.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -For a live demo of the deployed example application, see [https://movr.cloud](https://movr.cloud). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.1/misc/movr-schema.md b/src/current/_includes/v21.1/misc/movr-schema.md deleted file mode 100644 index 9e2b99b4425..00000000000 --- a/src/current/_includes/v21.1/misc/movr-schema.md +++ /dev/null @@ -1,12 +0,0 @@ -The six tables in the `movr` database store user, vehicle, and ride data for MovR: - -Table | Description ---------|---------------------------- -`users` | People registered for the service. -`vehicles` | The pool of vehicles available for the service. -`rides` | When and where users have rented a vehicle. -`promo_codes` | Promotional codes for users. -`user_promo_codes` | Promotional codes in use by users. -`vehicle_location_histories` | Vehicle location history. - -Geo-partitioning schema diff --git a/src/current/_includes/v21.1/misc/movr-workflow.md b/src/current/_includes/v21.1/misc/movr-workflow.md deleted file mode 100644 index 948d95dc1de..00000000000 --- a/src/current/_includes/v21.1/misc/movr-workflow.md +++ /dev/null @@ -1,76 +0,0 @@ -The workflow for MovR is as follows: - -1. A user loads the app and sees the 25 closest vehicles. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT id, city, status FROM vehicles WHERE city='amsterdam' limit 25; - ~~~ - -2. 
The user signs up for the service. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO users (id, name, address, city, credit_card) - VALUES ('66666666-6666-4400-8000-00000000000f', 'Mariah Lam', '88194 Angela Gardens Suite 60', 'amsterdam', '123245696'); - ~~~ - - {{site.data.alerts.callout_info}}Usually a Universally Unique Identifier (UUID) would be generated automatically, but for the sake of this walkthrough we use predetermined UUIDs so they are easier to track across the examples.{{site.data.alerts.end}} - -3. In some cases, the user adds their own vehicle to share. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO vehicles (id, city, type, owner_id,creation_time,status, current_location, ext) - VALUES ('ffffffff-ffff-4400-8000-00000000000f', 'amsterdam', 'skateboard', '66666666-6666-4400-8000-00000000000f', current_timestamp(), 'available', '88194 Angela Gardens Suite 60', '{"color": "blue"}'); - ~~~ -4. More often, the user reserves a vehicle and starts a ride, applying a promo code, if available and valid. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT code FROM user_promo_codes WHERE user_id ='66666666-6666-4400-8000-00000000000f'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE vehicles SET status = 'in_use' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO rides (id, city, vehicle_city, rider_id, vehicle_id, start_address,end_address, start_time, end_time, revenue) - VALUES ('cd032f56-cf1a-4800-8000-00000000066f', 'amsterdam', 'amsterdam', '66666666-6666-4400-8000-00000000000f', 'bbbbbbbb-bbbb-4800-8000-00000000000b', '70458 Mary Crest', '', TIMESTAMP '2020-10-01 10:00:00.123456', NULL, 0.0); - ~~~ - -5. During the ride, MovR tracks the location of the vehicle. 
- - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO vehicle_location_histories (city, ride_id, timestamp, lat, long) - VALUES ('amsterdam', 'cd032f56-cf1a-4800-8000-00000000066f', current_timestamp(), -101, 60); - ~~~ - -6. The user ends the ride and releases the vehicle. - - For example: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE vehicles SET status = 'available' WHERE id='bbbbbbbb-bbbb-4800-8000-00000000000b'; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > UPDATE rides SET end_address ='33862 Charles Junctions Apt. 49', end_time=TIMESTAMP '2020-10-01 10:30:00.123456', revenue=88.6 - WHERE id='cd032f56-cf1a-4800-8000-00000000066f'; - ~~~ diff --git a/src/current/_includes/v21.1/misc/multiregion-max-offset.md b/src/current/_includes/v21.1/misc/multiregion-max-offset.md deleted file mode 100644 index 93a2faccba2..00000000000 --- a/src/current/_includes/v21.1/misc/multiregion-max-offset.md +++ /dev/null @@ -1 +0,0 @@ -For new clusters using the [multi-region SQL abstractions](multiregion-overview.html), we recommend lowering the [`--max-offset`](cockroach-start.html#flags-max-offset) setting to `250ms`. This is especially helpful for lowering the write latency of [global tables](multiregion-overview.html#global-tables). Note that this will require restarting all of the nodes in your cluster at the same time; it cannot be done with a rolling restart. diff --git a/src/current/_includes/v21.1/misc/non-http-source-privileges.md b/src/current/_includes/v21.1/misc/non-http-source-privileges.md deleted file mode 100644 index 9f0f5a880a0..00000000000 --- a/src/current/_includes/v21.1/misc/non-http-source-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The source file URL does **not** require the [`admin` role](authorization.html#admin-role) in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. Azure is always `SPECIFIED` by default. 
-- [Userfile](use-userfile-for-bulk-operations.html) - -The source file URL **does** require the [`admin` role](authorization.html#admin-role) in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). diff --git a/src/current/_includes/v21.1/misc/schema-change-stmt-note.md b/src/current/_includes/v21.1/misc/schema-change-stmt-note.md deleted file mode 100644 index b522b658652..00000000000 --- a/src/current/_includes/v21.1/misc/schema-change-stmt-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -This statement performs a schema change. For more information about how online schema changes work in CockroachDB, see [Online Schema Changes](online-schema-changes.html). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/misc/schema-change-view-job.md b/src/current/_includes/v21.1/misc/schema-change-view-job.md deleted file mode 100644 index 8861174d621..00000000000 --- a/src/current/_includes/v21.1/misc/schema-change-view-job.md +++ /dev/null @@ -1 +0,0 @@ -This schema change statement is registered as a job. You can view long-running jobs with [`SHOW JOBS`](show-jobs.html). 
diff --git a/src/current/_includes/v21.1/misc/session-vars.html deleted file mode 100644 index d8c07c55589..00000000000 --- a/src/current/_includes/v21.1/misc/session-vars.html +++ /dev/null @@ -1,811 +0,0 @@ [811 deleted lines of HTML table markup; the tags were stripped in extraction, leaving only the session-variable cell text that follows]
Variable name | Description | Initial value | Modify with `SET`? | View with `SHOW`?
--------------|-------------|---------------|--------------------|------------------
`application_name` | The current application name for statistics collection. | Empty string, or `cockroach` for sessions from the built-in SQL client. | Yes | Yes
`bytea_output` | The mode for conversions from `STRING` to `BYTES`. | `hex` | Yes | Yes
`client_min_messages` | The severity level of notices displayed in the SQL shell. Accepted values include `debug5`, `debug4`, `debug3`, `debug2`, `debug1`, `log`, `notice`, `warning`, and `error`. | `notice` | Yes | Yes
`crdb_version` | The version of CockroachDB. | `CockroachDB OSS version` | No | Yes
`database` | The current database. | Database in connection string, or empty if not specified. | Yes | Yes
`default_int_size` | The size, in bytes, of an `INT` type. | `8` | Yes | Yes
`default_transaction_isolation` | All transactions execute with `SERIALIZABLE` isolation. See Transactions: Isolation levels. | `SERIALIZABLE` | No | Yes
`default_transaction_priority` | The default transaction priority for the current session. The supported options include `LOW`, `NORMAL`, and `HIGH`. | `NORMAL` | Yes | Yes
`default_transaction_read_only` | The default transaction access mode for the current session. If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See `SET TRANSACTION` for more details. | `off` | Yes | Yes
`default_transaction_use_follower_reads` | {% include_cached new-in.html version="v21.1" %} If set to `on`, all read-only transactions use `AS OF SYSTEM TIME follower_read_timestamp()`, to allow the transaction to use follower reads. If set to `off`, read-only transactions will only use follower reads if an `AS OF SYSTEM TIME` clause is specified in the statement, with an interval of at least 4.8 seconds. | `off` | Yes | Yes
`disallow_full_table_scans` | If set to `on`, all queries that have planned a full table or full secondary index scan will return an error message. This setting does not apply to internal queries, which may plan full table or index scans without checking the session variable. | `off` | Yes | Yes
`distsql` | The query distribution mode for the session. By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node. | `auto` | Yes | Yes
`enable_drop_enum_value` | Indicates whether `DROP VALUE` clauses are enabled for `ALTER TYPE` statements. | `off` | Yes | Yes
- - enable_implicit_select_for_update - Indicates whether UPDATE and UPSERT statements acquire locks using the FOR UPDATE locking mode during their initial row scan, which improves performance for contended workloads. -
For more information about how FOR UPDATE locking works, see the documentation for SELECT FOR UPDATE.
- on - YesYes
- enable_insert_fast_path - Indicates whether CockroachDB will use a specialized execution operator for inserting into a table. We recommend leaving this setting on. - on - YesYes
- enable_zigzag_join - Indicates whether the cost-based optimizer will plan certain queries using a zig-zag merge join algorithm, which searches for the desired intersection by jumping back and forth between the indexes based on the fact that after constraining indexes, they share an ordering. - on - YesYes
- extra_float_digits - The number of digits displayed for floating-point values. -
Only values between -15 and 3 are supported.
- 0 - YesYes
force_savepoint_restartWhen set to true, allows the SAVEPOINT statement to accept any name for a savepoint. - off - YesYes
foreign_key_cascades_limit Limits the number of cascading operations that run as part of a single query. - 10000 - YesYes
idle_in_session_timeoutAutomatically terminates sessions that idle past the specified threshold.
When set to 0, the session will not timeout.
The value set by the sql.defaults.idle_in_session_timeout cluster setting (0s, by default).YesYes
- idle_in_transaction_session_timeout - Automatically terminates sessions that are idle in a transaction past the specified threshold.
When set to 0, the session will not timeout.
The value set by the sql.defaults.idle_in_transaction_session_timeout cluster setting (0s, by default).YesYes
- large_full_scan_rows - Determines which tables are considered "large" such that disallow_full_table_scans rejects full table or index scans of "large" tables. The default value is 1000. To reject all full table or index scans, set to 0.User-dependentNoYes
- locality - The location of the node.
For more information, see Locality.
Node-dependentNoYes
- node_id - The ID of the node currently connected to.
-
This variable is particularly useful for verifying load balanced connections.
Node-dependentNoYes
- optimizer_use_histograms - If on, the optimizer uses collected histograms for cardinality estimation. - on - NoYes
- optimizer_use_multicol_stats - If on, the optimizer uses collected multi-column statistics for cardinality estimation. - on - NoYes
- prefer_lookup_joins_for_fks - If on, the optimizer prefers lookup joins to merge joins when performing foreign key checks. - off - YesYes
- reorder_joins_limit - Maximum number of joins that the optimizer will attempt to reorder when searching for an optimal query execution plan. -
For more information, see Join reordering.
- 4 - YesYes
- results_buffer_size - The default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. -
This can also be set for all connections using the sql.defaults.results_buffer_size cluster setting. Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retriable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Setting to 0 disables any buffering. -
- 16384 - YesYes
- require_explicit_primary_keys - If on, CockroachDB throws on error for all tables created without an explicit primary key defined. - - off - YesYes
- search_path - A list of schemas that will be searched to resolve unqualified table or function names. -
For more details, see SQL name resolution.
- public - YesYes
- serial_normalization - Specifies the default handling of SERIAL in table definitions. Valid options include 'rowid', 'virtual_sequence', sql_sequence, and sql_sequence_cached. -
If set to 'virtual_sequence', the SERIAL type auto-creates a sequence for better compatibility with Hibernate sequences. -
If set to sql_sequence_cached, the sql.defaults.serial_sequences_cache_size cluster setting can be used to control the number of values to cache in a user's session, with a default of 256.
- 'rowid' - YesYes
- server_version - The version of PostgreSQL that CockroachDB emulates.Version-dependentNoYes
- server_version_num - The version of PostgreSQL that CockroachDB emulates.Version-dependentYesYes
- session_id - The ID of the current session.Session-dependentNoYes
- session_user - The user connected for the current session.User in connection stringNoYes
- sql_safe_updates - If false, potentially unsafe SQL statements are allowed, including DROP of a non-empty database and all dependent objects, DELETE without a WHERE clause, UPDATE without a WHERE clause, and ALTER TABLE .. DROP COLUMN. -
See Allow Potentially Unsafe SQL Statements for more details.
- true for interactive sessions from the built-in SQL client,
false for sessions from other clients
YesYes
- statement_timeout - The amount of time a statement can run before being stopped. -
This value can be an int (e.g., 10) and will be interpreted as milliseconds. It can also be an interval or string argument, where the string can be parsed as a valid interval (e.g., '4s'). -
A value of 0 turns it off.
The value set by the sql.defaults.statement_timeout cluster setting (0s, by default).YesYes
- stub_catalog_tables - {% include_cached new-in.html version="v21.1" %} If off, querying an unimplemented, empty pg_catalog table will result in an error, as is the case in v20.2 and earlier. -
If on, querying an unimplemented, empty pg_catalog table simply returns no rows.
- on - YesYes
- timezone - The default time zone for the current session. -
This session variable was named "time zone" (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
- UTC - YesYes
- tracing - The trace recording state. - off - - Yes
- transaction_isolation - All transactions execute with SERIALIZABLE isolation. -
See Transactions: Isolation levels. -
This session variable was called transaction isolation level (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
- SERIALIZABLE - NoYes
- transaction_priority - The priority of the current transaction. -
See Transactions: Transaction priorities for more details. -
This session variable was called transaction priority (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
- NORMAL - YesYes
- transaction_read_only - The access mode of the current transaction. -
See Set Transaction for more details.
- off - YesYes
- transaction_rows_read_err - The limit for the number of rows read by a SQL transaction. If this value is exceeded the transaction will fail (or the event will be logged to SQL_INTERNAL_PERF for internal transactions). - 0 - YesYes
- transaction_rows_read_log - The threshold for the number of rows read by a SQL transaction. If this value is exceeded, the event will be logged to SQL_PERF (or SQL_INTERNAL_PERF for internal transactions). - 0 - YesYes
- transaction_rows_written_err - The limit for the number of rows written by a SQL transaction. If this value is exceeded the transaction will fail (or the event will be logged to SQL_INTERNAL_PERF for internal transactions). - 0 - YesYes
- transaction_rows_written_log - The threshold for the number of rows written by a SQL transaction. If this value is exceeded, the event will be logged to SQL_PERF (or SQL_INTERNAL_PERF for internal transactions). - 0 - YesYes
- transaction_status - The state of the current transaction. -
See Transactions for more details. -
This session variable was called transaction status (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL.
- NoTxn - NoYes
- vectorize - The vectorized execution engine mode. -
Options include on and off. -
For more details, see Configuring vectorized execution for CockroachDB. -
- on - YesYes
- vectorize_row_count_threshold - The minimum number of rows required to use the vectorized engine to execute a query plan. - - 1000 - YesYes
- client_encoding - (Reserved; exposed only for ORM compatibility.) - UTF8 - NoYes
- datestyle - (Reserved; exposed only for ORM compatibility.) - ISO - NoYes
- default_tablespace - (Reserved; exposed only for ORM compatibility.) - - NoYes
- enable_seqscan - (Reserved; exposed only for ORM compatibility.) - on - YesYes
- escape_string_warning - (Reserved; exposed only for ORM compatibility.) - on - NoYes
- integer_datetimes - (Reserved; exposed only for ORM compatibility.) - on - NoYes
- intervalstyle - (Reserved; exposed only for ORM compatibility.) - postgres - NoYes
- lock_timeout - (Reserved; exposed only for ORM compatibility.) - 0 - NoYes
- max_identifier_length - (Reserved; exposed only for ORM compatibility.) - 128 - NoYes
- max_index_keys - (Reserved; exposed only for ORM compatibility.) - 32 - NoYes
- row_security - (Reserved; exposed only for ORM compatibility.) - off - NoYes
- standard_conforming_strings - (Reserved; exposed only for ORM compatibility.) - on - NoYes
- server_encoding - (Reserved; exposed only for ORM compatibility.) - UTF8 - YesYes
- synchronize_seqscans - (Reserved; exposed only for ORM compatibility.) - on - NoYes
- synchronous_commit - (Reserved; exposed only for ORM compatibility.) - on - YesYes
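The session variables documented in the deleted `session-vars.html` include are read and modified with standard statements. A brief sketch, assuming a v21.1 SQL shell (`application_name` is just one example of a `SET`-modifiable variable):

```sql
-- Set a session variable for the current connection only.
SET application_name = 'myapp';

-- Inspect a single variable.
SHOW application_name;

-- List every session variable with its current value.
SHOW ALL;
```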
diff --git a/src/current/_includes/v21.1/misc/set-enterprise-license.md b/src/current/_includes/v21.1/misc/set-enterprise-license.md deleted file mode 100644 index 55d71273c32..00000000000 --- a/src/current/_includes/v21.1/misc/set-enterprise-license.md +++ /dev/null @@ -1,16 +0,0 @@ -As the CockroachDB `root` user, open the [built-in SQL shell](cockroach-sql.html) in insecure or secure mode, as per your CockroachDB setup. In the following example, we assume that CockroachDB is running in insecure mode. Then use the [`SET CLUSTER SETTING`](set-cluster-setting.html) command to set the name of your organization and the license key: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING cluster.organization = 'Acme Company'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx'; -~~~ diff --git a/src/current/_includes/v21.1/misc/sorting-delete-output.md b/src/current/_includes/v21.1/misc/sorting-delete-output.md deleted file mode 100644 index a67c7cb3229..00000000000 --- a/src/current/_includes/v21.1/misc/sorting-delete-output.md +++ /dev/null @@ -1,9 +0,0 @@ -To sort the output of a `DELETE` statement, use: - -{% include_cached copy-clipboard.html %} -~~~ sql -> WITH a AS (DELETE ... RETURNING ...) - SELECT ... FROM a ORDER BY ... -~~~ - -For an example, see [Sort and return deleted rows](delete.html#sort-and-return-deleted-rows). diff --git a/src/current/_includes/v21.1/misc/source-privileges.md b/src/current/_includes/v21.1/misc/source-privileges.md deleted file mode 100644 index cb45a4ace92..00000000000 --- a/src/current/_includes/v21.1/misc/source-privileges.md +++ /dev/null @@ -1,12 +0,0 @@ -The source file URL does _not_ require the [`admin` role](authorization.html#admin-role) in the following scenarios: - -- S3 and GS using `SPECIFIED` (and not `IMPLICIT`) credentials. 
Azure is always `SPECIFIED` by default. -- [Userfile](use-userfile-for-bulk-operations.html) - -The source file URL _does_ require the [`admin` role](authorization.html#admin-role) in the following scenarios: - -- S3 or GS using `IMPLICIT` credentials -- Use of a [custom endpoint](https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/) on S3 -- [Nodelocal](cockroach-nodelocal-upload.html), [HTTP](use-a-local-file-server-for-bulk-operations.html), or [HTTPS](use-a-local-file-server-for-bulk-operations.html) - -We recommend using [cloud storage for bulk operations](use-cloud-storage-for-bulk-operations.html). diff --git a/src/current/_includes/v21.1/misc/tooling.md b/src/current/_includes/v21.1/misc/tooling.md deleted file mode 100644 index ba0949383c1..00000000000 --- a/src/current/_includes/v21.1/misc/tooling.md +++ /dev/null @@ -1,69 +0,0 @@ -## Support levels - -Cockroach Labs has partnered with open-source projects, vendors, and individuals to offer the following levels of support with third-party tools: - -- **Full support** indicates that Cockroach Labs is committed to maintaining compatibility with the vast majority of the tool's features. CockroachDB is regularly tested against the latest version documented in the table below. -- **Partial support** indicates that Cockroach Labs is working towards full support for the tool. The primary features of the tool are compatible with CockroachDB (e.g., connecting and basic database operations), but full integration may require additional steps, lack support for all features, or exhibit unexpected behavior. - -{{site.data.alerts.callout_info}} -Unless explicitly stated, support for a [driver](#drivers) or [data access framework](#data-access-frameworks-e-g-orms) does not include [automatic, client-side transaction retry handling](transactions.html#client-side-intervention). For client-side transaction retry handling samples, see [Example Apps](example-apps.html). 
-{{site.data.alerts.end}} - -If you encounter problems using CockroachDB with any of the tools listed on this page, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward better support. - -For a list of tools supported by the CockroachDB community, see [Third-Party Tools Supported by the Community](community-tooling.html). -## Drivers - -| Language | Driver | Latest tested version | Support level | CockroachDB adapter | Tutorial | -|----------+--------+-----------------------+---------------------+---------------------+----------| -| C | [libpq](http://www.postgresql.org/docs/13/static/libpq.html)| PostgreSQL 13 | Beta | N/A | N/A | -| C# (.NET) | [Npgsql](https://www.nuget.org/packages/Npgsql/) | 4.1.3.1 | Beta | N/A | [Build a C# App with CockroachDB (Npgsql)](build-a-csharp-app-with-cockroachdb.html) | -| Go | [pgx](https://github.com/jackc/pgx/releases)


[pq](https://github.com/lib/pq) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/pgx.go ||var supportedPGXTag = "||"\n\n %}
(use latest version of CockroachDB adapter)
{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/libpq.go ||var libPQSupportedTag = "||"\n\n %} | Full


Full | [`crdbpgx`](https://pkg.go.dev/github.com/cockroachdb/cockroach-go/crdb/crdbpgx)
(includes client-side transaction retry handling)
N/A | [Build a Go App with CockroachDB (pgx)](build-a-go-app-with-cockroachdb.html)


[Build a Go App with CockroachDB (pq)](build-a-go-app-with-cockroachdb-pq.html) | -| Java | [JDBC](https://jdbc.postgresql.org/download/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/pgjdbc.go ||var supportedPGJDBCTag = "||"\n\n %} | Full | N/A | [Build a Java App with CockroachDB (JDBC)](build-a-java-app-with-cockroachdb.html) | -| JavaScript | [pg](https://www.npmjs.com/package/pg) | 8.2.1 | Full | N/A | [Build a Node.js App with CockroachDB (pg)](build-a-nodejs-app-with-cockroachdb.html) | -| Python | [psycopg2](https://www.psycopg.org/docs/install.html) | 2.8.6 | Full | N/A | [Build a Python App with CockroachDB (psycopg2)](build-a-python-app-with-cockroachdb.html) | -| Ruby | [pg](https://rubygems.org/gems/pg) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/ruby_pg.go ||var rubyPGVersion = "||"\n\n %} | Full | N/A | [Build a Ruby App with CockroachDB (pg)](build-a-ruby-app-with-cockroachdb.html) | -| Rust | [rust-postgres](https://github.com/sfackler/rust-postgres) | 0.19.2 | Beta | N/A | [Build a Rust App with CockroachDB](build-a-rust-app-with-cockroachdb.html) | - -## Data access frameworks (e.g., ORMs) - -| Language | Framework | Latest tested version | Support level | CockroachDB adapter | Tutorial | -|----------+-----------+-----------------------+---------------+---------------------+----------| -| Go | [GORM](https://github.com/jinzhu/gorm/releases)


[go-pg](https://github.com/go-pg/pg)
[upper/db](https://github.com/upper/db) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/gorm.go ||var gormSupportedTag = "||"\n\n %}


{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/gopg.go ||var gopgSupportedTag = "||"\n\n %}
v4 | Full


Full
Full | [`crdbgorm`](https://pkg.go.dev/github.com/cockroachdb/cockroach-go/crdb/crdbgorm)
(includes client-side transaction retry handling)
N/A
N/A | [Build a Go App with CockroachDB (GORM)](build-a-go-app-with-cockroachdb-gorm.html)


N/A
[Build a Go App with CockroachDB (upper/db)](build-a-go-app-with-cockroachdb-upperdb.html) | -| Java | [Hibernate](https://hibernate.org/orm/)
(including [Hibernate Spatial](https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#spatial))
[jOOQ](https://www.jooq.org/)
[MyBatis](https://mybatis.org/mybatis-3/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/hibernate.go ||var supportedHibernateTag = "||"\n\n %} (must be 5.4.19)


3.13.2 (must be 3.13.0)
3.5.5| Full


Full
Full | N/A


N/A
N/A | [Build a Java App with CockroachDB (Hibernate)](build-a-java-app-with-cockroachdb-hibernate.html)


[Build a Java App with CockroachDB (jOOQ)](build-a-java-app-with-cockroachdb-jooq.html)
[Build a Spring App with CockroachDB (MyBatis)](build-a-spring-app-with-cockroachdb-mybatis.html) | -| JavaScript/TypeScript | [Sequelize](https://www.npmjs.com/package/sequelize)


[Knex.js](https://knexjs.org/)
[Prisma](https://prisma.io)
[TypeORM](https://www.npmjs.com/package/typeorm) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/sequelize.go ||var supportedSequelizeCockroachDBRelease = "||"\n\n %}
(use latest version of CockroachDB adapter)
{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/knex.go ||const supportedKnexTag = "||"\n\n %}
3.9.0
0.3.17 {% comment %}{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/typeorm.go ||const supportedTypeORMRelease = "||"\n %}{% endcomment %} | Full


Full
Full
Full | [`sequelize-cockroachdb`](https://www.npmjs.com/package/sequelize-cockroachdb)


N/A
N/A
N/A | [Build a Node.js App with CockroachDB (Sequelize)](build-a-nodejs-app-with-cockroachdb-sequelize.html)


[Build a Node.js App with CockroachDB (Knex.js)](build-a-nodejs-app-with-cockroachdb-knexjs.html)
[Build a Node.js App with CockroachDB (Prisma)](build-a-nodejs-app-with-cockroachdb-prisma.html)
[Build a TypeScript App with CockroachDB (TypeORM)](build-a-typescript-app-with-cockroachdb.html) | -| Ruby | [ActiveRecord](https://rubygems.org/gems/activerecord)
[RGeo/RGeo-ActiveRecord](https://github.com/cockroachdb/activerecord-cockroachdb-adapter#working-with-spatial-data) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/activerecord.go ||var supportedRailsVersion = "||"\nvar %}
(use latest version of CockroachDB adapter) | Full | [`activerecord-cockroachdb-adapter`](https://rubygems.org/gems/activerecord-cockroachdb-adapter)
(includes client-side transaction retry handling) | [Build a Ruby App with CockroachDB (ActiveRecord)](build-a-ruby-app-with-cockroachdb-activerecord.html) | -| Python | [Django](https://pypi.org/project/Django/)
(including [GeoDjango](https://docs.djangoproject.com/en/3.1/ref/contrib/gis/))
[peewee](https://github.com/coleifer/peewee/)
[SQLAlchemy](https://www.sqlalchemy.org/) | {% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/master/pkg/cmd/roachtest/tests/django.go ||var djangoSupportedTag = "cockroach-||"\nvar %}
(use latest version of CockroachDB adapter)

3.13.3
0.7.13
1.4.17
(use latest version of CockroachDB adapter) | Full


Full
Full
Full | [`django-cockroachdb`](https://pypi.org/project/django-cockroachdb/)


N/A
N/A
[`sqlalchemy-cockroachdb`](https://pypi.org/project/sqlalchemy-cockroachdb)
(includes client-side transaction retry handling) | [Build a Python App with CockroachDB (Django)](build-a-python-app-with-cockroachdb-django.html)


N/A (See [peewee docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#cockroach-database).)
[Build a Python App with CockroachDB (SQLAlchemy)](build-a-python-app-with-cockroachdb-sqlalchemy.html) | - -## Application frameworks - -| Framework | Data access | Latest tested version | Support level | Tutorial | -|-----------+-------------+-----------------------+---------------+----------| -| Spring | [JDBC](build-a-spring-app-with-cockroachdb-jdbc.html)
[JPA (Hibernate)](build-a-spring-app-with-cockroachdb-jpa.html)
[MyBatis](build-a-spring-app-with-cockroachdb-mybatis.html) | See individual Java ORM or [driver](#drivers) for data access version support. | See individual Java ORM or [driver](#drivers) for data access support level. | [Build a Spring App with CockroachDB (JDBC)](build-a-spring-app-with-cockroachdb-jdbc.html)
[Build a Spring App with CockroachDB (JPA)](build-a-spring-app-with-cockroachdb-jpa.html)
[Build a Spring App with CockroachDB (MyBatis)](build-a-spring-app-with-cockroachdb-mybatis.html) - -## Graphical user interfaces (GUIs) - -| GUI | Latest tested version | Support level | Tutorial | -|-----+-----------------------+---------------+----------| -| [DBeaver](https://dbeaver.com/) | 5.2.3 | Full | [Visualize CockroachDB Schemas with DBeaver](dbeaver.html) - -## Integrated development environments (IDEs) - -| IDE | Latest tested version | Support level | Tutorial | -|-----+-----------------------+---------------+----------| -| [DataGrip](https://www.jetbrains.com/datagrip/) | 2021.1 | Full | N/A -| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | 2021.1 | Full | [Use IntelliJ IDEA with CockroachDB](intellij-idea.html) - -## Schema migration tools - -| Tool | Latest tested version | Support level | Tutorial | -|-----+------------------------+----------------+----------| -| [Alembic](https://alembic.sqlalchemy.org/en/latest/) | 1.7 | Full | [Migrate CockroachDB Schemas with Alembic](alembic.html) -| [Flyway](https://flywaydb.org/documentation/commandline/#download-and-installation) | 7.1.0 | Full | [Migrate CockroachDB Schemas with Flyway](flyway.html) -| [Liquibase](https://www.liquibase.org/download) | 4.2.0 | Full | [Migrate CockroachDB Schemas with Liquibase](liquibase.html) - -## Other tools - -| Tool | Latest tested version | Support level | Tutorial | -|-----+------------------------+---------------+----------| -| [Flowable](https://github.com/flowable/flowable-engine) | 6.4.2 | Full | [Getting Started with Flowable and CockroachDB (external)](https://blog.flowable.org/2019/07/11/getting-started-with-flowable-and-cockroachdb/) diff --git a/src/current/_includes/v21.1/misc/userfile.md b/src/current/_includes/v21.1/misc/userfile.md deleted file mode 100644 index 1a23d5d2c39..00000000000 --- a/src/current/_includes/v21.1/misc/userfile.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} - CockroachDB now supports uploading files to a 
[user-scoped file storage](use-userfile-for-bulk-operations.html) using a SQL connection. We recommend using `userfile` instead of `nodelocal`, as it is user-scoped and more secure. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-basic-sql.md b/src/current/_includes/v21.1/orchestration/kubernetes-basic-sql.md deleted file mode 100644 index f7cfbd76641..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-basic-sql.md +++ /dev/null @@ -1,44 +0,0 @@ -1. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts VALUES (1, 1000.50); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +----+---------+ - 1 | 1000.50 - (1 row) - ~~~ - -1. [Create a user with a password](create-user.html#create-a-user-with-a-password): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS'; - ~~~ - - You will need this username and password to access the DB Console later. - -1. Exit the SQL shell and pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-cockroach-cert.md b/src/current/_includes/v21.1/orchestration/kubernetes-cockroach-cert.md deleted file mode 100644 index ff44cf183a4..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-cockroach-cert.md +++ /dev/null @@ -1,90 +0,0 @@ -{{site.data.alerts.callout_info}} -The below steps use [`cockroach cert` commands](cockroach-cert.html) to quickly generate and sign the CockroachDB node and client certificates. 
Read our [Authentication](authentication.html#using-digital-certificates-with-cockroachdb) docs to learn about other methods of signing certificates. -{{site.data.alerts.end}} - -1. Create two directories: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir certs my-safe-directory - ~~~ - - Directory | Description - ----------|------------ - `certs` | You'll generate your CA certificate and all node and client certificates and keys in this directory. - `my-safe-directory` | You'll generate your CA key in this directory and then reference the key when generating node and client certificates. - -1. Create the CA certificate and key pair: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Create a client certificate and key pair for the root user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. Upload the client certificate and key to the Kubernetes cluster as a secret: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create secret \ - generic cockroachdb.client.root \ - --from-file=certs - ~~~ - - ~~~ - secret/cockroachdb.client.root created - ~~~ - -1. Create the certificate and key pair for your CockroachDB nodes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - localhost 127.0.0.1 \ - cockroachdb-public \ - cockroachdb-public.default \ - cockroachdb-public.default.svc.cluster.local \ - *.cockroachdb \ - *.cockroachdb.default \ - *.cockroachdb.default.svc.cluster.local \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -1. 
Upload the node certificate and key to the Kubernetes cluster as a secret: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create secret \ - generic cockroachdb.node \ - --from-file=certs - ~~~ - - ~~~ - secret/cockroachdb.node created - ~~~ - -1. Check that the secrets were created on the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get secrets - ~~~ - - ~~~ - NAME TYPE DATA AGE - cockroachdb.client.root Opaque 3 41m - cockroachdb.node Opaque 5 14s - default-token-6qjdb kubernetes.io/service-account-token 3 4m - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-expand-disk-helm.md b/src/current/_includes/v21.1/orchestration/kubernetes-expand-disk-helm.md deleted file mode 100644 index b09fd93e68c..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-expand-disk-helm.md +++ /dev/null @@ -1,118 +0,0 @@ -You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes -) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. - -{{site.data.alerts.callout_info}} -These steps assume you followed the tutorial [Deploy CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html). -{{site.data.alerts.end}} - -1. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. 
In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe storageclass standard - ~~~ - - ~~~ - Name: standard - IsDefaultClass: Yes - Annotations: storageclass.kubernetes.io/is-default-class=true - Provisioner: kubernetes.io/gce-pd - Parameters: type=pd-standard - AllowVolumeExpansion: False - MountOptions: - ReclaimPolicy: Delete - VolumeBindingMode: Immediate - Events: - ~~~ - - If necessary, edit the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}' - ~~~ - - ~~~ - storageclass.storage.k8s.io/standard patched - ~~~ - -1. Edit one of the persistent volume claims to request more space: - - {{site.data.alerts.callout_info}} - The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}' - ~~~ - - ~~~ - persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched - ~~~ - -1. Check the capacity of the persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m - ~~~ - - If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. 
You will need to start or restart a pod in order to have it reflect the new capacity. - - {{site.data.alerts.callout_success}} - Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`. - {{site.data.alerts.end}} - -1. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - Waiting for user to (re-)start a pod to finish file system resize of volume on node. - ~~~ - -1. Delete the corresponding pod to restart it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod my-release-cockroachdb-0 - ~~~ - - The `FileSystemResizePending` condition and message will be removed. - -1. View the updated persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m - ~~~ - -1. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount. 
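The resize patch used in the steps above can be built from a single size variable rather than typed inline; a minimal sketch, in which the `new_size` value and the claim name are illustrative assumptions and the `kubectl` call is left commented out because it requires a live cluster:

~~~ shell
# Sketch: build the PVC resize patch once, then apply it to one claim at
# a time (restarting the pod between claims, as the steps above describe).
new_size="200Gi"   # assumed target size; must be larger than the current size
patch=$(printf '{"spec": {"resources": {"requests": {"storage": "%s"}}}}' "$new_size")
echo "$patch"
# To apply against a live cluster:
#   kubectl patch pvc datadir-my-release-cockroachdb-0 -p "$patch"
~~~

Keeping the size in one variable makes it easy to apply an identical increase to each remaining volume, which the final step requires.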
\ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-expand-disk-manual.md b/src/current/_includes/v21.1/orchestration/kubernetes-expand-disk-manual.md deleted file mode 100644 index 1c42463dfaf..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-expand-disk-manual.md +++ /dev/null @@ -1,118 +0,0 @@ -You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes -) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. - -{{site.data.alerts.callout_info}} -These steps assume you followed the tutorial [Deploy CockroachDB on Kubernetes](deploy-cockroachdb-with-kubernetes.html). -{{site.data.alerts.end}} - -1. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. In order to expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. 
Examine the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe storageclass standard - ~~~ - - ~~~ - Name: standard - IsDefaultClass: Yes - Annotations: storageclass.kubernetes.io/is-default-class=true - Provisioner: kubernetes.io/gce-pd - Parameters: type=pd-standard - AllowVolumeExpansion: False - MountOptions: - ReclaimPolicy: Delete - VolumeBindingMode: Immediate - Events: - ~~~ - - If necessary, edit the storage class: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}' - ~~~ - - ~~~ - storageclass.storage.k8s.io/standard patched - ~~~ - -1. Edit one of the persistent volume claims to request more space: - - {{site.data.alerts.callout_info}} - The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}' - ~~~ - - ~~~ - persistentvolumeclaim/datadir-cockroachdb-0 patched - ~~~ - -1. Check the capacity of the persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m - ~~~ - - If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity. 
- - {{site.data.alerts.callout_success}} - Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`. - {{site.data.alerts.end}} - -1. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - Waiting for user to (re-)start a pod to finish file system resize of volume on node. - ~~~ - -1. Delete the corresponding pod to restart it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-0 - ~~~ - - The `FileSystemResizePending` condition and message will be removed. - -1. View the updated persistent volume claim: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m - ~~~ - -1. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount. \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-limitations.md b/src/current/_includes/v21.1/orchestration/kubernetes-limitations.md deleted file mode 100644 index ec07a00b692..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-limitations.md +++ /dev/null @@ -1,35 +0,0 @@ -#### Kubernetes version - -To deploy CockroachDB {{page.version.version}}, Kubernetes 1.18 or higher is required. Cockroach Labs strongly recommends that you use a Kubernetes version that is [eligible for patch support by the Kubernetes project](https://kubernetes.io/releases/). 
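The 1.18 minimum can be checked mechanically before deploying; a sketch in which the `major`/`minor` values are hard-coded assumptions (in practice they would be parsed from `kubectl version -o json`):

~~~ shell
# Sketch: compare a Kubernetes server version against the 1.18 minimum.
# major and minor are assumed values for illustration.
major=1
minor=20
if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 18 ]; }; then
  echo "Kubernetes $major.$minor meets the 1.18 minimum"
else
  echo "Kubernetes $major.$minor is too old; upgrade before deploying"
fi
~~~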
- -#### Kubernetes Operator - -The CockroachDB Kubernetes Operator currently deploys clusters in a single region. For multi-region deployments using manual configs, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters](orchestrate-cockroachdb-with-kubernetes-multi-cluster.html). - -#### Helm version - -The CockroachDB Helm chart requires Helm 3.0 or higher. If you attempt to use an incompatible Helm version, an error like the following occurs: - -~~~ shell -Error: UPGRADE FAILED: template: cockroachdb/templates/tests/client.yaml:6:14: executing "cockroachdb/templates/tests/client.yaml" at <.Values.networkPolicy.enabled>: nil pointer evaluating interface {}.enabled -~~~ - -The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier. - -The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs. - -A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation. - -#### Network - -Server Name Indication (SNI) is an extension to the TLS protocol that allows a client to indicate which hostname it is attempting to connect to at the start of the TLS handshake. The server can present multiple certificates on the same IP address and TCP port number, and one server can serve multiple secure websites or API services even if they use different certificates. - -Due to its order of operations, the PostgreSQL wire protocol's implementation of TLS is not compatible with SNI-based routing in the Kubernetes ingress controller. Instead, use a TCP load balancer for CockroachDB that is not shared with other services.
- -#### Resources - -When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB** of memory, and provision at least **2 vCPUs** and **8 GiB** of memory to CockroachDB per pod. These minimum settings are used by default in this deployment guide, and are appropriate for testing purposes only. On a production deployment, you should adjust the resource settings for your workload. For details, see [Operate CockroachDB on Kubernetes](operate-cockroachdb-kubernetes.html#allocate-resources). - -#### Storage - -At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. For high-performance use cases on a private Kubernetes cluster, consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-helm.md b/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-helm.md deleted file mode 100644 index e8c73d8d6de..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-helm.md +++ /dev/null @@ -1,127 +0,0 @@ -Before removing a node from your cluster, you must first decommission the node. This lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html). -{{site.data.alerts.end}} - -1. Use the [`cockroach node status`](cockroach-node.html) command to get the internal IDs of nodes.
For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node status \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. - -1. 
Use the [`cockroach node decommission`](cockroach-node.html) command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID `4` because its address is `my-release-cockroachdb-3`): - - {{site.data.alerts.callout_info}} - You must decommission the node with the highest number in its address, because Kubernetes removes that node's pod when you reduce the replica count. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node decommission 4 \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 73 | true | false - (1 row) - ~~~ - - Once the node has been fully decommissioned and stopped, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 0 | true | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -1. Once the node has been decommissioned, scale down your StatefulSet: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=3 \ - --reuse-values - ~~~ - -1. Verify that the pod was successfully removed: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 51m - my-release-cockroachdb-1 1/1 Running 0 47m - my-release-cockroachdb-2 1/1 Running 0 3m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1.
You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-3 Bound pvc-75e561ba-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. Verify that the PVC with the highest number in its name is no longer mounted to a pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-my-release-cockroachdb-3 - ~~~ - - ~~~ - Name: datadir-my-release-cockroachdb-3 - ... - Mounted By: - ~~~ - -1. Remove the persistent volume by deleting the PVC: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pvc datadir-my-release-cockroachdb-3 - ~~~ - - ~~~ - persistentvolumeclaim "datadir-my-release-cockroachdb-3" deleted - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-insecure.md b/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-insecure.md deleted file mode 100644 index a8b061fc02d..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-insecure.md +++ /dev/null @@ -1,130 +0,0 @@ -To safely remove a node from your cluster, you must first decommission the node and only then adjust the `spec.replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. 
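The "highest number in its address" rule used in the decommissioning steps can be made concrete with a few lines of shell; a sketch with a hard-coded, illustrative pod list (in practice the names would come from `kubectl get pods`):

~~~ shell
# Sketch: find the StatefulSet pod with the highest ordinal, since that is
# the pod Kubernetes deletes when the replica count is reduced.
# The pod list is hard-coded for illustration.
pods="cockroachdb-0 cockroachdb-1 cockroachdb-2 cockroachdb-3"
highest=$(echo "$pods" | tr ' ' '\n' | sed 's/.*-//' | sort -n | tail -n 1)
echo "decommission first: cockroachdb-$highest"
~~~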
- -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html). -{{site.data.alerts.end}} - -1. Launch a temporary interactive pod and use the `cockroach node status` command to get the internal IDs of nodes: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node status \ - --insecure \ - --host=cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node status \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ -
- -2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](cockroach-node.html) command to decommission it: - - {{site.data.alerts.callout_info}} - It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node. - {{site.data.alerts.end}} - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node decommission \ - --insecure \ - --host=cockroachdb-public - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node decommission \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ -
- - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 73 | true | false - (1 row) - ~~~ - - Once the node has been fully decommissioned and stopped, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 0 | true | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -3. Once the node has been decommissioned, remove a pod from your StatefulSet: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset "cockroachdb" scaled - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=3 \ - --reuse-values - ~~~ -
diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-manual.md b/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-manual.md deleted file mode 100644 index 058ed9bf141..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-remove-nodes-manual.md +++ /dev/null @@ -1,127 +0,0 @@ -Before removing a node from your cluster, you must first decommission the node. This lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html). -{{site.data.alerts.end}} - -1. Use the [`cockroach node status`](cockroach-node.html) command to get the internal IDs of nodes. For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node status \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 
16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. - -1. Use the [`cockroach node decommission`](cockroach-node.html) command to decommission the node with the highest number in its address, specifying its ID (in this example, node ID `4` because its address is `cockroachdb-3`): - - {{site.data.alerts.callout_info}} - You must decommission the node with the highest number in its address, because Kubernetes removes that node's pod when you reduce the replica count. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node decommission 4 \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 73 | true | false - (1 row) - ~~~ - - Once the node has been fully decommissioned and stopped, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 0 | true | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -1.
Once the node has been decommissioned, scale down your StatefulSet: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset.apps/cockroachdb scaled - ~~~ - -1. Verify that the pod was successfully removed: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 51m - cockroachdb-1 1/1 Running 0 47m - cockroachdb-2 1/1 Running 0 3m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You should also remove the persistent volume that was mounted to the pod. Get the persistent volume claims for the volumes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-3 Bound pvc-75e561ba-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ - -1. Verify that the PVC with the highest number in its name is no longer mounted to a pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-cockroachdb-3 - ~~~ - - ~~~ - Name: datadir-cockroachdb-3 - ... - Mounted By: - ~~~ - -1. 
Remove the persistent volume by deleting the PVC: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pvc datadir-cockroachdb-3 - ~~~ - - ~~~ - persistentvolumeclaim "datadir-cockroachdb-3" deleted - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-scale-cluster-helm.md b/src/current/_includes/v21.1/orchestration/kubernetes-scale-cluster-helm.md deleted file mode 100644 index 8556b822651..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-scale-cluster-helm.md +++ /dev/null @@ -1,118 +0,0 @@ -Before scaling CockroachDB, ensure that your Kubernetes cluster has enough worker nodes to host the number of pods you want to add. This is to ensure that two pods are not placed on the same worker node, as recommended in our [production guidance](recommended-production-settings.html#topology). - -For example, if you want to scale from 3 CockroachDB nodes to 4, your Kubernetes cluster should have at least 4 worker nodes. You can verify the size of your Kubernetes cluster by running `kubectl get nodes`. - -1. Edit your StatefulSet configuration to add another pod for the new CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.replicas=4 \ - --reuse-values - ~~~ - - ~~~ - Release "my-release" has been upgraded. Happy Helming! - LAST DEPLOYED: Tue May 14 14:06:43 2019 - NAMESPACE: default - STATUS: DEPLOYED - - RESOURCES: - ==> v1beta1/PodDisruptionBudget - NAME AGE - my-release-cockroachdb-budget 51m - - ==> v1/Pod(related) - - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 38m - my-release-cockroachdb-1 1/1 Running 0 39m - my-release-cockroachdb-2 1/1 Running 0 39m - my-release-cockroachdb-3 0/1 Pending 0 0s - my-release-cockroachdb-init-nwjkh 0/1 Completed 0 39m - - ... - ~~~ - -1. 
Get the name of the `Pending` CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-3 2m system:serviceaccount:default:default Pending - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued - ... - ~~~ - - If you do not see a `Pending` CSR, wait a minute and try again. - -1. Examine the CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl describe csr default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - Name: default.node.my-release-cockroachdb-3 - Labels: - Annotations: - CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500 - Requesting User: system:serviceaccount:default:default - Status: Pending - Subject: - Common Name: node - Serial Number: - Organization: Cockroach - Subject Alternative Names: - DNS Names: localhost - my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local - my-release-cockroachdb-1.my-release-cockroachdb - my-release-cockroachdb-public - my-release-cockroachdb-public.default.svc.cluster.local - IP Addresses: 127.0.0.1 - 10.48.1.6 - Events: - ~~~ - -1. 
If everything looks correct, approve the CSR for the new pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-3 approved - ~~~ - -1. Verify that the new pod started successfully: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 51m - my-release-cockroachdb-1 1/1 Running 0 47m - my-release-cockroachdb-2 1/1 Running 0 3m - my-release-cockroachdb-3 1/1 Running 0 1m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You can also open the [**Node List**](ui-cluster-overview-page.html#node-list) in the DB Console to ensure that the fourth node successfully joined the cluster. \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-scale-cluster-manual.md b/src/current/_includes/v21.1/orchestration/kubernetes-scale-cluster-manual.md deleted file mode 100644 index f42775704d3..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-scale-cluster-manual.md +++ /dev/null @@ -1,51 +0,0 @@ -Before scaling up CockroachDB, note the following [topology recommendations](recommended-production-settings.html#topology): - -- Each CockroachDB node (running in its own pod) should run on a separate Kubernetes worker node. -- Each availability zone should have the same number of CockroachDB nodes. - -If your cluster has 3 CockroachDB nodes distributed across 3 availability zones (as in our [deployment example](deploy-cockroachdb-with-kubernetes.html?filters=manual)), we recommend scaling up by a multiple of 3 to retain an even distribution of nodes. You should therefore scale up to a minimum of 6 CockroachDB nodes, with 2 nodes in each zone. - -1. Run `kubectl get nodes` to list the worker nodes in your Kubernetes cluster. 
There should be at least as many worker nodes as pods you plan to add. This ensures that no more than one pod will be placed on each worker node. - -1. Add worker nodes if necessary: - - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster). If you deployed a [regional cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster) as we recommended, you will use `--num-nodes` to specify the desired number of worker nodes in each zone. For example: - - {% include_cached copy-clipboard.html %} - ~~~ shell - gcloud container clusters resize {cluster-name} --region {region-name} --num-nodes 2 - ~~~ - - On EKS, resize your [Worker Node Group](https://eksctl.io/usage/managing-nodegroups/#scaling). - - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/). - - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html). - -1. Edit your StatefulSet configuration to add pods for each new CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=6 - ~~~ - - ~~~ - statefulset.apps/cockroachdb scaled - ~~~ - -1. Verify that the new pods started successfully: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 51m - cockroachdb-1 1/1 Running 0 47m - cockroachdb-2 1/1 Running 0 3m - cockroachdb-3 1/1 Running 0 1m - cockroachdb-4 1/1 Running 0 1m - cockroachdb-5 1/1 Running 0 1m - cockroachdb-client-secure 1/1 Running 0 15m - ... - ~~~ - -1. You can also open the [**Node List**](ui-cluster-overview-page.html#node-list) in the DB Console to ensure that the new nodes successfully joined the cluster. 
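To make the worker-node check above concrete, here is a minimal sketch of comparing the worker-node count against the number of pods you plan to run. The node names and the captured `kubectl get nodes` output are illustrative stand-ins for a live cluster:

```shell
# Sketch: verify there are at least as many worker nodes as planned pods.
# Sample `kubectl get nodes` output stands in for a live cluster (assumption).
nodes_output='NAME     STATUS   ROLES    AGE   VERSION
node-a   Ready    <none>   10d   v1.21.2
node-b   Ready    <none>   10d   v1.21.2
node-c   Ready    <none>   10d   v1.21.2'

planned_pods=6

# Count the data rows, skipping the header line.
worker_count=$(printf '%s\n' "$nodes_output" | tail -n +2 | wc -l)

if [ "$worker_count" -lt "$planned_pods" ]; then
  echo "add $((planned_pods - worker_count)) worker nodes before scaling"
else
  echo "enough worker nodes"
fi
```

Against a real cluster, the sample output would be replaced by `kubectl get nodes` itself.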
\ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v21.1/orchestration/kubernetes-simulate-failure.md deleted file mode 100644 index 5738885935e..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-simulate-failure.md +++ /dev/null @@ -1,79 +0,0 @@ -Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage. - -To see this in action: - -1. Terminate one of the CockroachDB nodes: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-2 - ~~~ - - ~~~ - pod "cockroachdb-2" deleted - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-2 - ~~~ - - ~~~ - pod "cockroachdb-2" deleted - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod my-release-cockroachdb-2 - ~~~ - - ~~~ - pod "my-release-cockroachdb-2" deleted - ~~~ -
- - -2. In the DB Console, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy. - -3. Back in the terminal, verify that the pod was automatically restarted: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-2 1/1 Running 0 12s - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-2 1/1 Running 0 12s - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pod my-release-cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-2 1/1 Running 0 44s - ~~~ -
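A scripted version of this restart check can be sketched as follows; the pod name and the captured `kubectl get pod` output are illustrative stand-ins for a live cluster:

```shell
# Sketch: confirm the replacement pod is Running with a fresh age.
# Sample `kubectl get pod` output stands in for a live cluster (assumption).
pod_output='NAME                       READY   STATUS    RESTARTS   AGE
my-release-cockroachdb-2   1/1     Running   0          44s'

# Pull STATUS and AGE from the data row (row 2; row 1 is the header).
status=$(printf '%s\n' "$pod_output" | awk 'NR == 2 { print $3 }')
age=$(printf '%s\n' "$pod_output" | awk 'NR == 2 { print $5 }')

if [ "$status" = "Running" ]; then
  echo "pod restarted and running (age $age)"
else
  echo "pod not healthy yet: $status"
fi
```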
diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-stop-cluster.md b/src/current/_includes/v21.1/orchestration/kubernetes-stop-cluster.md deleted file mode 100644 index afc17479b82..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-stop-cluster.md +++ /dev/null @@ -1,145 +0,0 @@ -To shut down the CockroachDB cluster: - -
-{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -1. Delete the previously created custom resource: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete -f example.yaml - ~~~ - -1. Remove the Operator: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - This will delete the CockroachDB cluster being run by the Operator. It will *not* delete the persistent volumes that were attached to the pods. - - {{site.data.alerts.callout_danger}} - If you want to delete the persistent volumes and free up the storage used by CockroachDB, be sure you have a backup copy of your data. Data **cannot** be recovered once the persistent volumes are deleted. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/#persistent-volumes). - {{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). -{{site.data.alerts.end}} -
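Because the persistent volumes survive deletion, you may want to see which ones still hold CockroachDB data before deciding whether to remove them. A sketch follows; the volume names and the captured `kubectl get pv` output are illustrative, not from a real cluster:

```shell
# Sketch: list persistent volumes left behind by the deleted cluster.
# Sample `kubectl get pv` output stands in for a live cluster (assumption).
pv_output='NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM
pvc-0001   60Gi       RWO            Retain           Released   default/datadir-cockroachdb-0
pvc-0002   60Gi       RWO            Retain           Released   default/datadir-cockroachdb-1
pvc-0003   60Gi       RWO            Retain           Bound      default/other-app-data'

# CockroachDB volumes whose claims are gone but whose data remains:
leftover=$(printf '%s\n' "$pv_output" |
  awk 'NR > 1 && $5 == "Released" && $6 ~ /cockroachdb/ { print $1 }')
echo "$leftover"
```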
- -
-1. Delete the resources associated with the `cockroachdb` label, including the logs and Prometheus and Alertmanager resources: - - {{site.data.alerts.callout_danger}} - This does not include deleting the persistent volumes that were attached to the pods. If you want to delete the persistent volumes and free up the storage used by CockroachDB, be sure you have a backup copy of your data. Data **cannot** be recovered once the persistent volumes are deleted. For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/#persistent-volumes). - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pods,statefulsets,services,poddisruptionbudget,jobs,rolebinding,clusterrolebinding,role,clusterrole,serviceaccount,alertmanager,prometheus,prometheusrule,serviceMonitor -l app=cockroachdb - ~~~ - - ~~~ - pod "cockroachdb-0" deleted - pod "cockroachdb-1" deleted - pod "cockroachdb-2" deleted - statefulset.apps "alertmanager-cockroachdb" deleted - statefulset.apps "prometheus-cockroachdb" deleted - service "alertmanager-cockroachdb" deleted - service "cockroachdb" deleted - service "cockroachdb-public" deleted - poddisruptionbudget.policy "cockroachdb-budget" deleted - job.batch "cluster-init-secure" deleted - rolebinding.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrolebinding.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted - role.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrole.rbac.authorization.k8s.io "cockroachdb" deleted - clusterrole.rbac.authorization.k8s.io "prometheus" deleted - serviceaccount "cockroachdb" deleted - serviceaccount "prometheus" deleted - alertmanager.monitoring.coreos.com "cockroachdb" deleted - prometheus.monitoring.coreos.com "cockroachdb" deleted - prometheusrule.monitoring.coreos.com "prometheus-cockroachdb-rules" deleted - 
servicemonitor.monitoring.coreos.com "cockroachdb" deleted - ~~~ - -1. Delete the pod created for `cockroach` client commands, if you didn't do so earlier: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-client-secure - ~~~ - - ~~~ - pod "cockroachdb-client-secure" deleted - ~~~ - -{{site.data.alerts.callout_info}} -This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). -{{site.data.alerts.end}} -
- -
-1. Uninstall the release: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm uninstall my-release - ~~~ - - ~~~ - release "my-release" deleted - ~~~ - -1. Delete the pod created for `cockroach` client commands, if you didn't do so earlier: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-client-secure - ~~~ - - ~~~ - pod "cockroachdb-client-secure" deleted - ~~~ - -1. Get the names of any CSRs for the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-0 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-1 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-2 1h system:serviceaccount:default:default Approved,Issued - default.node.my-release-cockroachdb-3 12m system:serviceaccount:default:default Approved,Issued - node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 1h kubelet Approved,Issued - node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 1h kubelet Approved,Issued - node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 1h kubelet Approved,Issued - ... - ~~~ - -1. 
Delete any CSRs that you created: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete csr default.client.root default.node.my-release-cockroachdb-0 default.node.my-release-cockroachdb-1 default.node.my-release-cockroachdb-2 default.node.my-release-cockroachdb-3 - ~~~ - - ~~~ - certificatesigningrequest "default.client.root" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-0" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-1" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-2" deleted - certificatesigningrequest "default.node.my-release-cockroachdb-3" deleted - ~~~ - - {{site.data.alerts.callout_info}} - This does not delete any secrets you may have created. For more information on managing secrets, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl). - {{site.data.alerts.end}} -
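Rather than typing each CSR name, the list can be derived from `kubectl get csr` output. A sketch, assuming the `default.` naming prefix used by this deployment; the captured sample below stands in for a live cluster:

```shell
# Sketch: collect this deployment's CSR names for deletion.
# Sample `kubectl get csr` output stands in for a live cluster (assumption).
csr_output='NAME                                                   AGE   REQUESTOR                               CONDITION
default.client.root                                    1h    system:serviceaccount:default:default   Approved,Issued
default.node.my-release-cockroachdb-0                  1h    system:serviceaccount:default:default   Approved,Issued
node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4   1h    kubelet                                 Approved,Issued'

# Keep only CSRs created for this deployment (names starting with "default."),
# skipping the header row and the kubelet-generated node CSRs.
to_delete=$(printf '%s\n' "$csr_output" | awk 'NR > 1 && $1 ~ /^default\./ { print $1 }')
echo "$to_delete"
# Against a live cluster, you would then run:
#   kubectl delete csr $to_delete
```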
diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-upgrade-cluster-helm.md b/src/current/_includes/v21.1/orchestration/kubernetes-upgrade-cluster-helm.md deleted file mode 100644 index f7c83e53024..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-upgrade-cluster-helm.md +++ /dev/null @@ -1,253 +0,0 @@ -1. Verify that you can upgrade. - - To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](../releases/index.html) and not a testing release (alpha/beta). - - Therefore, in order to upgrade to v21.1, you must be on a production release of v20.2. - - 1. If you are upgrading to v21.1 from a production release earlier than v20.2, or from a testing release (alpha/beta), first [upgrade to a production release of v20.2](../v20.2/orchestrate-cockroachdb-with-kubernetes.html#upgrade-the-cluster). Be sure to complete all the steps. - - 1. Then return to this page and perform a second upgrade to v21.1. - - 1. If you are upgrading from any production release of v20.2, or from any earlier v21.1 release, you do not have to go through intermediate releases; continue to step 2. - -1. Verify the overall health of your cluster using the [DB Console](ui-overview.html). On the **Overview**: - - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](#remove-nodes) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually). - - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. 
Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to [identify and resolve the cause of range under-replication and/or unavailability](cluster-setup-troubleshooting.html#replication-issues) before beginning your upgrade. - - In the **Node List**: - - Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over. - - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](#add-nodes) to your cluster before beginning your upgrade. - -1. Review the [backward-incompatible changes in v21.1](../releases/v21.1.html#v21-1-0-backward-incompatible-changes) and [deprecated features](../releases/v21.1.html#v21-1-0-deprecations). If any affect your deployment, make the necessary changes before starting the rolling upgrade to v21.1. - -1. Decide how the upgrade will be finalized. - - By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in v21.1](upgrade-cockroach-version.html#features-that-require-upgrade-finalization). After finalization, however, it will no longer be possible to perform a downgrade to v20.2. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a [backup](take-full-and-incremental-backups.html) created prior to the upgrade. 
For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step. - - {{site.data.alerts.callout_info}} - Finalization only applies when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) can always be downgraded. - {{site.data.alerts.end}} - - {% if page.secure == true %} - - 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - {% endif %} - - 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.preserve_downgrade_option = '20.2'; - ~~~ - - 1. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. 
For a cluster with 3 pods (e.g., `my-release-cockroachdb-0`, `my-release-cockroachdb-1`, `my-release-cockroachdb-2`), the partition value should be 2: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.updateStrategy.rollingUpdate.partition=2 - ~~~ - -1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet: - - {{site.data.alerts.callout_info}} - For Helm, before the cluster version can be changed, you must delete the cluster initialization job that was created when the cluster was deployed. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl delete job my-release-cockroachdb-init - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set image.tag={{page.release_info.version}} \ - --reuse-values - ~~~ - -1. Check the status of your cluster's pods. You should see one of them being restarted: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 2m - my-release-cockroachdb-1 1/1 Running 0 3m - my-release-cockroachdb-2 0/1 ContainerCreating 0 25s - my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s - ... - ~~~ - - {{site.data.alerts.callout_info}} - Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster. - {{site.data.alerts.end}} - -1. 
After the pod has been restarted with the new image, start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% if page.secure == true %} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - {% endif %} - -1. Run the following SQL query to verify that the number of under-replicated ranges is zero: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status; - ~~~ - - ~~~ - ranges_underreplicated - -------------------------- - 0 - (1 row) - ~~~ - - This indicates that it is safe to proceed to the next pod. - -1. Exit the SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Decrement the partition value by 1 to allow the next pod in the cluster to update: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set statefulset.updateStrategy.rollingUpdate.partition=1 - ~~~ - -1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`). - -1. 
Check the image of each pod to confirm that all have been upgraded: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods \ - -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}' - ~~~ - - ~~~ - my-release-cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}} - my-release-cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}} - my-release-cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}} - ... - ~~~ - - You can also check the CockroachDB version of each node in the [DB Console](ui-cluster-overview-page.html#node-details). - - -1. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day). - - If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary. - - {{site.data.alerts.callout_info}} - This is only possible when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) are auto-finalized. - {{site.data.alerts.end}} - - To finalize the upgrade, re-enable auto-finalization: - - {% if page.secure == true %} - - 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - {% endif %} - - 2. 
Re-enable auto-finalization: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > RESET CLUSTER SETTING cluster.preserve_downgrade_option; - ~~~ - - 3. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v21.1/orchestration/kubernetes-upgrade-cluster-manual.md b/src/current/_includes/v21.1/orchestration/kubernetes-upgrade-cluster-manual.md deleted file mode 100644 index 2abc0bad94e..00000000000 --- a/src/current/_includes/v21.1/orchestration/kubernetes-upgrade-cluster-manual.md +++ /dev/null @@ -1,242 +0,0 @@ -1. Verify that you can upgrade. - - To upgrade to a new major version, you must first be on a production release of the previous version. The release does not need to be the latest production release of the previous version, but it must be a production [release](../releases/index.html) and not a testing release (alpha/beta). - - Therefore, in order to upgrade to v21.1, you must be on a production release of v20.2. - - 1. If you are upgrading to v21.1 from a production release earlier than v20.2, or from a testing release (alpha/beta), first [upgrade to a production release of v20.2](../v20.2/orchestrate-cockroachdb-with-kubernetes.html?filters=manual#upgrade-the-cluster). Be sure to complete all the steps. - - 1. Then return to this page and perform a second upgrade to v21.1. - - 1. If you are upgrading from any production release of v20.2, or from any earlier v21.1 release, you do not have to go through intermediate releases; continue to step 2. - -1. Verify the overall health of your cluster using the [DB Console](ui-overview.html). On the **Overview**: - - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](#remove-nodes) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually). 
- - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to [identify and resolve the cause of range under-replication and/or unavailability](cluster-setup-troubleshooting.html#replication-issues) before beginning your upgrade. - - In the **Node List**: - - Make sure all nodes are on the same version. If not all nodes are on the same version, upgrade them to the cluster's highest current version first, and then start this process over. - - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](#add-nodes) to your cluster before beginning your upgrade. - -1. Review the [backward-incompatible changes in v21.1](../releases/v21.1.html#v21-1-0-backward-incompatible-changes) and [deprecated features](../releases/v21.1.html#v21-1-0-deprecations). If any affect your deployment, make the necessary changes before starting the rolling upgrade to v21.1. - -1. Decide how the upgrade will be finalized. - - By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in v21.1](upgrade-cockroach-version.html#features-that-require-upgrade-finalization). After finalization, however, it will no longer be possible to perform a downgrade to v20.2. In the event of a catastrophic failure or corruption, the only option is to start a new cluster using the old binary and then restore from a [backup](take-full-and-incremental-backups.html) created prior to the upgrade. 
For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in a later step. - - {{site.data.alerts.callout_info}} - Finalization only applies when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) can always be downgraded. - {{site.data.alerts.end}} - - {% if page.secure == true %} - - 1. Start the CockroachDB [built-in SQL client](cockroach-sql.html). For example, if you followed the steps in [Deploy CockroachDB with Kubernetes](deploy-cockroachdb-with-kubernetes.html?filters=manual#step-3-use-the-built-in-sql-client) to launch a secure client pod, get a shell into the `cockroachdb-client-secure` pod: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \-- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ - - {% endif %} - - 1. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html) to the version you are upgrading from: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.preserve_downgrade_option = '20.2'; - ~~~ - - 1. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. 
Add a [partition](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update) to the update strategy defined in the StatefulSet. Only the pods numbered greater than or equal to the partition value will be updated. For a cluster with 3 pods (e.g., `cockroachdb-0`, `cockroachdb-1`, `cockroachdb-2`) the partition value should be 2: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch statefulset cockroachdb \ - -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}' - ~~~ - - ~~~ - statefulset.apps/cockroachdb patched - ~~~ - -1. Kick off the upgrade process by changing the Docker image used in the CockroachDB StatefulSet: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch statefulset cockroachdb \ - --type='json' \ - -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:{{page.release_info.version}}"}]' - ~~~ - - ~~~ - statefulset.apps/cockroachdb patched - ~~~ - -1. Check the status of your cluster's pods. You should see one of them being restarted: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 2m - cockroachdb-1 1/1 Running 0 2m - cockroachdb-2 0/1 Terminating 0 1m - ... - ~~~ - -1. After the pod has been restarted with the new image, start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% if page.secure == true %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \-- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - {% else %} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ - - {% endif %} - -1. 
Run the following SQL query to verify that the number of under-replicated ranges is zero: - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT sum((metrics->>'ranges.underreplicated')::DECIMAL)::INT AS ranges_underreplicated FROM crdb_internal.kv_store_status; - ~~~ - - ~~~ - ranges_underreplicated - -------------------------- - 0 - (1 row) - ~~~ - - This indicates that it is safe to proceed to the next pod. - -1. Exit the SQL shell: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -1. Decrement the partition value by 1 to allow the next pod in the cluster to update: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl patch statefulset cockroachdb \ - -p='{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}' - ~~~ - - ~~~ - statefulset.apps/cockroachdb patched - ~~~ - -1. Repeat steps 4-8 until all pods have been restarted and are running the new image (the final partition value should be `0`). - -1. Check the image of each pod to confirm that all have been upgraded: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods \ - -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}' - ~~~ - - ~~~ - cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}} - cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}} - cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}} - ... - ~~~ - - You can also check the CockroachDB version of each node in the [DB Console](ui-cluster-overview-page.html#node-details). - -1. If you disabled auto-finalization earlier, monitor the stability and performance of your cluster until you are comfortable with the upgrade (generally at least a day). - - If you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary. 
- - {{site.data.alerts.callout_info}} - This is only possible when performing a major version upgrade (for example, from v20.2.x to v21.1). Patch version upgrades (for example, within the v21.1.x series) are auto-finalized. - {{site.data.alerts.end}} - - To finalize the upgrade, re-enable auto-finalization: - - {% if page.secure == true %} - - 1. Start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ - - {% endif %} - - 2. Re-enable auto-finalization: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > RESET CLUSTER SETTING cluster.preserve_downgrade_option; - ~~~ - - 3. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v21.1/orchestration/local-start-kubernetes.md b/src/current/_includes/v21.1/orchestration/local-start-kubernetes.md deleted file mode 100644 index e504d052dbe..00000000000 --- a/src/current/_includes/v21.1/orchestration/local-start-kubernetes.md +++ /dev/null @@ -1,24 +0,0 @@ -## Before you begin - -Before getting started, it's helpful to review some Kubernetes-specific terminology: - -Feature | Description ---------|------------ -[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. 
In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. -[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. -[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. - -## Step 1. Start Kubernetes - -1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation. - - {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the `maxUnavailable` field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}} - -2. Start a local Kubernetes cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ minikube start - ~~~ diff --git a/src/current/_includes/v21.1/orchestration/monitor-cluster.md b/src/current/_includes/v21.1/orchestration/monitor-cluster.md deleted file mode 100644 index 5cadf9609a3..00000000000 --- a/src/current/_includes/v21.1/orchestration/monitor-cluster.md +++ /dev/null @@ -1,95 +0,0 @@ -To access the cluster's [DB Console](ui-overview.html): - -{% if page.secure == true %} - -1. On secure clusters, [certain pages of the DB Console](ui-overview.html#db-console-access) can only be accessed by `admin` users. - - Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach/cockroach-certs \ - --host=cockroachdb-public - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ -
- -1. Assign `roach` to the `admin` role (you only need to do this once): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > GRANT admin TO roach; - ~~~ - -1. Exit the SQL shell and pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -{% endif %} - -1. In a new terminal window, port-forward from your local machine to the `cockroachdb-public` service: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/cockroachdb-public 8080 - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/cockroachdb-public 8080 - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward service/my-release-cockroachdb-public 8080 - ~~~ -
- - ~~~ - Forwarding from 127.0.0.1:8080 -> 8080 - ~~~ - - {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the DB Console. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}} - -{% if page.secure == true %} - -1. Go to https://localhost:8080 and log in with the username and password you created earlier. - - {% include {{ page.version.version }}/misc/chrome-localhost.md %} - -{% else %} - -1. Go to http://localhost:8080. - -{% endif %} - -1. In the UI, verify that the cluster is running as expected: - - View the [Node List](ui-cluster-overview-page.html#node-list) to ensure that all nodes successfully joined the cluster. - - Click the **Databases** tab on the left to verify that `bank` is listed. diff --git a/src/current/_includes/v21.1/orchestration/operator-check-namespace.md b/src/current/_includes/v21.1/orchestration/operator-check-namespace.md deleted file mode 100644 index d6c70aa03dc..00000000000 --- a/src/current/_includes/v21.1/orchestration/operator-check-namespace.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -All `kubectl` steps should be performed in the [namespace where you installed the Operator](deploy-cockroachdb-with-kubernetes.html#install-the-operator). By default, this is `cockroach-operator-system`. 
-{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/start-cockroachdb-helm-insecure.md b/src/current/_includes/v21.1/orchestration/start-cockroachdb-helm-insecure.md deleted file mode 100644 index d1417b2dbd2..00000000000 --- a/src/current/_includes/v21.1/orchestration/start-cockroachdb-helm-insecure.md +++ /dev/null @@ -1,116 +0,0 @@ -{{site.data.alerts.callout_danger}} -The CockroachDB Helm chart is undergoing maintenance for compatibility with Kubernetes versions 1.17 through 1.21 (the latest version as of this writing). No new feature development is currently planned. For new production and local deployments, we currently recommend using a manual configuration (**Configs** option). If you are experiencing issues with a Helm deployment on production, contact our [Support team](https://support.cockroachlabs.com/). -{{site.data.alerts.end}} - -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -1. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -1. Modify our Helm chart's [`values.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml) parameters for your deployment scenario. - - Create a `my-values.yaml` file to override the defaults in `values.yaml`, substituting your own values in this example based on the guidelines below. 
- - {% include_cached copy-clipboard.html %} - ~~~ - statefulset: - resources: - limits: - memory: "8Gi" - requests: - memory: "8Gi" - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - ~~~ - - 1. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`. - - {{site.data.alerts.callout_success}} - For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`. - {{site.data.alerts.end}} - - 1. For an insecure deployment, set `tls.enabled` to `false`. For clarity, this example includes the example configuration from the previous steps. - - {% include_cached copy-clipboard.html %} - ~~~ - statefulset: - resources: - limits: - memory: "8Gi" - requests: - memory: "8Gi" - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - tls: - enabled: false - ~~~ - - 1. You may want to modify `storage.persistentVolume.size` and `storage.persistentVolume.storageClass` for your use case. This chart defaults to 100Gi of disk space per pod. For more details on customizing disks for performance, see [these instructions](kubernetes-performance.html#disk-type). - - {{site.data.alerts.callout_info}} - If necessary, you can [expand disk size](/docs/{{site.versions["stable"]}}/configure-cockroachdb-kubernetes.html?filters=helm#expand-disk-size) after the cluster is live. - {{site.data.alerts.end}} - - -1. Install the CockroachDB Helm chart. - - Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`. 
- - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -1. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -1. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to logs for a pod, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/orchestration/start-cockroachdb-helm-secure.md b/src/current/_includes/v21.1/orchestration/start-cockroachdb-helm-secure.md deleted file mode 100644 index 608bf70daac..00000000000 --- a/src/current/_includes/v21.1/orchestration/start-cockroachdb-helm-secure.md +++ /dev/null @@ -1,121 +0,0 @@ -The CockroachDB Helm chart is compatible with Kubernetes versions 1.22 and earlier. - -The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by addressing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs. - -A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation. - -{{site.data.alerts.callout_danger}} -If you are running a secure Helm deployment on Kubernetes 1.22 and later, you must migrate away from using the Kubernetes CA for cluster authentication. 
For details, see [Operate CockroachDB on Kubernetes](operate-cockroachdb-kubernetes.html?filters=helm#migration-to-self-signer). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -Secure CockroachDB deployments on Amazon EKS via Helm are [not yet supported](https://github.com/cockroachdb/cockroach/issues/38847). -{{site.data.alerts.end}} - -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -1. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -1. The cluster configuration is set in the Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml). - - {{site.data.alerts.callout_info}} - By default, the Helm chart specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Operate CockroachDB on Kubernetes](operate-cockroachdb-kubernetes.html?filters=helm#allocate-resources). - {{site.data.alerts.end}} - - Before deploying, modify some parameters in our Helm chart's [values file](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/values.yaml): - - 1. Create a local YAML file (e.g., `my-values.yaml`) to specify your custom values. These will be used to override the defaults in `values.yaml`. - - 1. 
To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you *must* set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting `conf.cache` and `conf.max-sql-memory` each to 1/4 of the `memory` allocation specified in `statefulset.resources.requests` and `statefulset.resources.limits`. - - {{site.data.alerts.callout_success}} - For example, if you are allocating 8Gi of `memory` to each CockroachDB node, allocate 2Gi to `cache` and 2Gi to `max-sql-memory`. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ - conf: - cache: "2Gi" - max-sql-memory: "2Gi" - ~~~ - - The Helm chart defaults to a secure deployment by automatically setting `tls.enabled` to `true`. - - {{site.data.alerts.callout_info}} - By default, the Helm chart will generate and sign 1 client and 1 node certificate to secure the cluster. To authenticate using your own CA, see [Certificate management](/docs/{{site.versions["stable"]}}/secure-cockroachdb-kubernetes.html?filters=helm#use-a-custom-ca). - {{site.data.alerts.end}} - -1. Install the CockroachDB Helm chart, specifying your custom values file. - - Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in `my-values.yaml`. - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. 
- {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release --values {custom-values}.yaml cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -1. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -1. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -1. 
Check that the secrets were created on the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get secrets - ~~~ - - ~~~ - crdb-cockroachdb-ca-secret Opaque 2 23s - crdb-cockroachdb-client-secret kubernetes.io/tls 3 22s - crdb-cockroachdb-node-secret kubernetes.io/tls 3 23s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to logs for a pod, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/orchestration/start-cockroachdb-insecure.md b/src/current/_includes/v21.1/orchestration/start-cockroachdb-insecure.md deleted file mode 100644 index e602cbe4203..00000000000 --- a/src/current/_includes/v21.1/orchestration/start-cockroachdb-insecure.md +++ /dev/null @@ -1,114 +0,0 @@ -1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it. - - Download [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml - ~~~ - - {{site.data.alerts.callout_info}} - By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Operate CockroachDB on Kubernetes](operate-cockroachdb-kubernetes.html?filters=manual). 
- {{site.data.alerts.end}} - - Use the file to create the StatefulSet and start the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset.yaml - ~~~ - - ~~~ - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - - Alternatively, if you'd rather start with a configuration file that has been customized for performance: - - 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml - ~~~ - - 2. Modify the file wherever there is a `TODO` comment. - - 3. Use the file to create the StatefulSet and start the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset-insecure.yaml - ~~~ - -2. Confirm that three pods are `Running` successfully. Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -3. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml - ~~~ - - ~~~ - job.batch/cluster-init created - ~~~ - -5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init - ~~~ - - ~~~ - NAME COMPLETIONS DURATION AGE - cluster-init 1/1 7s 27s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cluster-init-cqf8l 0/1 Completed 0 56s - cockroachdb-0 1/1 Running 0 7m51s - cockroachdb-1 1/1 Running 0 7m51s - cockroachdb-2 1/1 Running 0 7m51s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/orchestration/start-cockroachdb-local-helm-insecure.md b/src/current/_includes/v21.1/orchestration/start-cockroachdb-local-helm-insecure.md deleted file mode 100644 index 494b3e6207e..00000000000 --- a/src/current/_includes/v21.1/orchestration/start-cockroachdb-local-helm-insecure.md +++ /dev/null @@ -1,65 +0,0 @@ -1. [Install the Helm client](https://helm.sh/docs/intro/install) (version 3.0 or higher) and add the `cockroachdb` chart repository: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo add cockroachdb https://charts.cockroachdb.com/ - ~~~ - - ~~~ - "cockroachdb" has been added to your repositories - ~~~ - -2. Update your Helm chart repositories to ensure that you're using the [latest CockroachDB chart](https://github.com/cockroachdb/helm-charts/blob/master/cockroachdb/Chart.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -3. Install the CockroachDB Helm chart. - - Provide a "release" name to identify and track this particular deployment of the chart. - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ helm install my-release cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - -4. 
Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/orchestration/start-cockroachdb-local-insecure.md b/src/current/_includes/v21.1/orchestration/start-cockroachdb-local-insecure.md deleted file mode 100644 index 37fe8e46939..00000000000 --- a/src/current/_includes/v21.1/orchestration/start-cockroachdb-local-insecure.md +++ /dev/null @@ -1,83 +0,0 @@ -1. 
From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml - ~~~ - - ~~~ - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - -2. Confirm that three pods are `Running` successfully. Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -4. 
Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml - ~~~ - - ~~~ - job.batch/cluster-init created - ~~~ - -5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init - ~~~ - - ~~~ - NAME COMPLETIONS DURATION AGE - cluster-init 1/1 7s 27s - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cluster-init-cqf8l 0/1 Completed 0 56s - cockroachdb-0 1/1 Running 0 7m51s - cockroachdb-1 1/1 Running 0 7m51s - cockroachdb-2 1/1 Running 0 7m51s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/orchestration/start-cockroachdb-operator-secure.md b/src/current/_includes/v21.1/orchestration/start-cockroachdb-operator-secure.md deleted file mode 100644 index ffaeaee59f4..00000000000 --- a/src/current/_includes/v21.1/orchestration/start-cockroachdb-operator-secure.md +++ /dev/null @@ -1,115 +0,0 @@ -### Install the Operator - -{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -1. 
Apply the [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) for the Operator: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/crds.yaml - ~~~ - - ~~~ - customresourcedefinition.apiextensions.k8s.io/crdbclusters.crdb.cockroachlabs.com created - ~~~ - -1. By default, the Operator is configured to install in the `cockroach-operator-system` namespace and to manage CockroachDB instances for all namespaces on the cluster. - - If you'd like to change either of these defaults: - - 1. Download the Operator manifest: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - 1. To use a custom namespace, replace all instances of `namespace: cockroach-operator-system` with your desired namespace. - - 1. To limit the namespaces that will be monitored, set the `WATCH_NAMESPACE` environment variable in the `Deployment` pod spec. This can be set to a single namespace, or a comma-delimited set of namespaces. When set, only those `CrdbCluster` resources in the supplied namespace(s) will be reconciled. - - 1.
Instead of using the command below, apply your local version of the Operator manifest to the cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f operator.yaml - ~~~ - - If you want to use the default namespace settings: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/install/operator.yaml - ~~~ - - ~~~ - clusterrole.rbac.authorization.k8s.io/cockroach-database-role created - serviceaccount/cockroach-database-sa created - clusterrolebinding.rbac.authorization.k8s.io/cockroach-database-rolebinding created - role.rbac.authorization.k8s.io/cockroach-operator-role created - clusterrolebinding.rbac.authorization.k8s.io/cockroach-operator-rolebinding created - clusterrole.rbac.authorization.k8s.io/cockroach-operator-role created - serviceaccount/cockroach-operator-sa created - rolebinding.rbac.authorization.k8s.io/cockroach-operator-default created - deployment.apps/cockroach-operator created - ~~~ - - -1. Validate that the Operator is running: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroach-operator-6f7b86ffc4-9ppkv 1/1 Running 0 54s - ~~~ - -### Initialize the cluster - -{{site.data.alerts.callout_info}} -By default, the Operator will generate and sign 1 client and 1 node certificate to secure the cluster. To authenticate using your own CA, see [Operate CockroachDB on Kubernetes](operate-cockroachdb-kubernetes.html#use-a-custom-ca). -{{site.data.alerts.end}} - -1. Download `example.yaml`, a custom resource that tells the Operator how to configure the Kubernetes cluster. 
- - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/examples/example.yaml - ~~~ - - {{site.data.alerts.callout_info}} - By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should [substitute values](operate-cockroachdb-kubernetes.html#allocate-resources) that are appropriate for your machines and workload. - {{site.data.alerts.end}} - -1. Apply `example.yaml`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl apply -f example.yaml - ~~~ - - The Operator will create a StatefulSet and initialize the nodes as a cluster. - - ~~~ - crdbcluster.crdb.cockroachlabs.com/cockroachdb created - ~~~ - -1. Check that the pods were created: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroach-operator-6f7b86ffc4-9t9zb 1/1 Running 0 3m22s - cockroachdb-0 1/1 Running 0 2m31s - cockroachdb-1 1/1 Running 0 102s - cockroachdb-2 1/1 Running 0 46s - ~~~ - - Each pod should have `READY` status soon after being created. \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/start-cockroachdb-secure.md b/src/current/_includes/v21.1/orchestration/start-cockroachdb-secure.md deleted file mode 100644 index 8e0541b27a4..00000000000 --- a/src/current/_includes/v21.1/orchestration/start-cockroachdb-secure.md +++ /dev/null @@ -1,106 +0,0 @@ -### Configure the cluster - -1. 
Download and modify our [StatefulSet configuration](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/cockroachdb-statefulset.yaml - ~~~ - -1. Update `secretName` with the name of the corresponding node secret. - - The secret names depend on your method for generating secrets. For example, if you follow the [steps using `cockroach cert`](#create-certificates) below, use this secret name: - - {% include_cached copy-clipboard.html %} - ~~~ yaml - secret: - secretName: cockroachdb.node - ~~~ - -{{site.data.alerts.callout_info}} -By default, this manifest specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see [Operate CockroachDB on Kubernetes](operate-cockroachdb-kubernetes.html?filters=manual). -{{site.data.alerts.end}} - -### Create certificates - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs {pod-name}` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-cockroach-cert.md %} - -### Initialize the cluster - -1.
Use the config file you downloaded to create the StatefulSet that automatically creates 3 pods, each running a CockroachDB node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset.yaml - ~~~ - - ~~~ - serviceaccount/cockroachdb created - role.rbac.authorization.k8s.io/cockroachdb created - rolebinding.rbac.authorization.k8s.io/cockroachdb created - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - -1. Initialize the CockroachDB cluster: - - 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - - 1. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pv - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m - pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m - pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m - ~~~ - - 1. Run `cockroach init` on one of the pods to complete the node startup process and have them join together as a cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-0 \ - -- /cockroach/cockroach init \ - --certs-dir=/cockroach/cockroach-certs - ~~~ - - ~~~ - Cluster successfully initialized - ~~~ - - 1. 
Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 3m - cockroachdb-1 1/1 Running 0 3m - cockroachdb-2 1/1 Running 0 3m - ~~~ \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/start-kubernetes.md b/src/current/_includes/v21.1/orchestration/start-kubernetes.md deleted file mode 100644 index cf4d554244e..00000000000 --- a/src/current/_includes/v21.1/orchestration/start-kubernetes.md +++ /dev/null @@ -1,98 +0,0 @@ -You can use the hosted [Google Kubernetes Engine (GKE)](#hosted-gke) service or the hosted [Amazon Elastic Kubernetes Service (EKS)](#hosted-eks) to quickly start Kubernetes. - -{{site.data.alerts.callout_info}} -GKE or EKS are not required to run CockroachDB on Kubernetes. A manual GCE or AWS cluster with the [minimum recommended Kubernetes version](#kubernetes-version) and at least 3 pods, each presenting [sufficient resources](#resources) to start a CockroachDB node, can also be used. -{{site.data.alerts.end}} - -### Hosted GKE - -1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation. - - This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation. - - {{site.data.alerts.callout_success}} - The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the DB Console using the steps in this guide. - {{site.data.alerts.end}} - -2. 
From your local workstation, start the Kubernetes cluster, specifying one of the available [regions](https://cloud.google.com/compute/docs/regions-zones#available) (e.g., `us-east1`): - - {{site.data.alerts.callout_success}} - Since this region can differ from your default `gcloud` region, be sure to include the `--region` flag to run `gcloud` commands against this cluster. - {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud container clusters create cockroachdb --machine-type n2-standard-4 --region {region-name} --num-nodes 1 - ~~~ - - ~~~ - Creating cluster cockroachdb...done. - ~~~ - - This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--region` flag specifies a [regional three-zone cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster), and `--num-nodes` specifies one Kubernetes worker node in each zone. - - The `--machine-type` flag tells the node pool to use the [`n2-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations). - - The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster. - -3. Get the email address associated with your Google Cloud account: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud info | grep Account - ~~~ - - ~~~ - Account: [your.google.cloud.email@example.org] - ~~~ - - {{site.data.alerts.callout_danger}} - This command returns your email address in all lowercase. However, in the next step, you must enter the address using the accurate capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com. 
- {{site.data.alerts.end}} - -4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create clusterrolebinding $USER-cluster-admin-binding \ - --clusterrole=cluster-admin \ - --user={your.google.cloud.email@example.org} - ~~~ - - ~~~ - clusterrolebinding.rbac.authorization.k8s.io/your.username-cluster-admin-binding created - ~~~ - -### Hosted EKS - -1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation. - - This includes installing and configuring the AWS CLI and `eksctl`, which is the command-line tool used to create and delete Kubernetes clusters on EKS, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation. - - {{site.data.alerts.callout_info}} - If you are running [EKS-Anywhere](https://aws.amazon.com/eks/eks-anywhere/), CockroachDB requires that you [configure your default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/) to auto-provision persistent volumes. Alternatively, you can define a custom storage configuration as required by your install pattern. - {{site.data.alerts.end}} - -2. From your local workstation, start the Kubernetes cluster: - - {{site.data.alerts.callout_success}} - To ensure that all 3 nodes can be placed into a different availability zone, you may want to first [confirm that at least 3 zones are available in the region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#availability-zones-describe) for your account. 
- {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ eksctl create cluster \ - --name cockroachdb \ - --nodegroup-name standard-workers \ - --node-type m5.xlarge \ - --nodes 3 \ - --nodes-min 1 \ - --nodes-max 4 \ - --node-ami auto - ~~~ - - This creates EKS instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations). - - Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster. - -3. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/home) to verify that the stacks `eksctl-cockroachdb-cluster` and `eksctl-cockroachdb-nodegroup-standard-workers` were successfully created. Be sure that your region is selected in the console. \ No newline at end of file diff --git a/src/current/_includes/v21.1/orchestration/test-cluster-insecure.md b/src/current/_includes/v21.1/orchestration/test-cluster-insecure.md deleted file mode 100644 index dd4f47561ae..00000000000 --- a/src/current/_includes/v21.1/orchestration/test-cluster-insecure.md +++ /dev/null @@ -1,72 +0,0 @@ -1. Launch a temporary interactive pod and start the [built-in SQL client](cockroach-sql.html) inside it: - -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ -
- -
- {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ -
- -2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - balance DECIMAL - ); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts (balance) - VALUES - (1000.50), (20000), (380), (500), (55000); - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +--------------------------------------+---------+ - 6f123370-c48c-41ff-b384-2c185590af2b | 380 - 990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50 - ac31c671-40bf-4a7b-8bee-452cff8a4026 | 500 - d58afd93-5be9-42ba-b2e2-dc00dcedf409 | 20000 - e6d8f696-87f5-4d3c-a377-8e152fdc27f7 | 55000 - (5 rows) - ~~~ - -3. Exit the SQL shell and delete the temporary pod: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v21.1/orchestration/test-cluster-secure.md b/src/current/_includes/v21.1/orchestration/test-cluster-secure.md deleted file mode 100644 index 8e72dd5b893..00000000000 --- a/src/current/_includes/v21.1/orchestration/test-cluster-secure.md +++ /dev/null @@ -1,144 +0,0 @@ -To use the CockroachDB SQL client, first launch a secure pod running the `cockroach` binary. - -
- -{% capture latest_operator_version %}{% include_cached latest_operator_version.md %}{% endcapture %} - -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl create \ --f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/v{{ latest_operator_version }}/examples/client-secure-operator.yaml -~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - # Welcome to the CockroachDB SQL shell. - # All statements must be terminated by a semicolon. - # To exit, type: \q. - # - # Server version: CockroachDB CCL v21.1.0 (x86_64-unknown-linux-gnu, built 2021/04/23 13:54:57, go1.13.14) (same version as client) - # Cluster ID: a96791d9-998c-4683-a3d3-edbf425bbf11 - # - # Enter \? for a brief introduction. - # - root@cockroachdb-public:26257/defaultdb> - ~~~ - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
- -
- -{% include_cached copy-clipboard.html %} -~~~ shell -$ kubectl create \ --f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/bring-your-own-certs/client.yaml -~~~ - -~~~ -pod/cockroachdb-client-secure created -~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - # Welcome to the cockroach SQL interface. - # All statements must be terminated by a semicolon. - # To exit: CTRL + D. - # - # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - - # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 - # - # Enter \? for a brief introduction. - # - root@cockroachdb-public:26257/defaultdb> - ~~~ - - {{site.data.alerts.callout_success}} - This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command. - - If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`. - {{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
- -
-From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/helm-charts/blob/master/examples/client-secure.yaml) file to launch a pod and keep it running indefinitely. - -1. Download the file: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O \ - https://raw.githubusercontent.com/cockroachdb/helm-charts/master/examples/client-secure.yaml - ~~~ - -1. In the file, set the following values: - - `spec.serviceAccountName: my-release-cockroachdb` - - `spec.image: cockroachdb/cockroach:{your CockroachDB version}` - - `spec.volumes[0].project.sources[0].secret.name: my-release-cockroachdb-client-secret` - -1. Use the file to launch a pod and keep it running indefinitely: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl create -f client-secure.yaml - ~~~ - - ~~~ - pod "cockroachdb-client-secure" created - ~~~ - -1. Get a shell into the pod and start the CockroachDB [built-in SQL client](cockroach-sql.html): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=./cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - # Welcome to the cockroach SQL interface. - # All statements must be terminated by a semicolon. - # To exit: CTRL + D. - # - # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - - # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 - # - # Enter \? for a brief introduction.
- # - root@my-release-cockroachdb-public:26257/defaultdb> - ~~~ - - {{site.data.alerts.callout_success}} - This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command. - - If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`. - {{site.data.alerts.end}} - -{% include {{ page.version.version }}/orchestration/kubernetes-basic-sql.md %} -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/performance/lease-preference-system-database.md b/src/current/_includes/v21.1/performance/lease-preference-system-database.md deleted file mode 100644 index 4bfbb8b4931..00000000000 --- a/src/current/_includes/v21.1/performance/lease-preference-system-database.md +++ /dev/null @@ -1,8 +0,0 @@ -To reduce latency while making {% if page.name == "online-schema-changes.md" %}online schema changes{% else %}[online schema changes](online-schema-changes.html){% endif %}, we recommend specifying a `lease_preference` [zone configuration](configure-replication-zones.html) on the `system` database to a single region and running all subsequent schema changes from a node within that region. For example, if the majority of online schema changes come from machines that are geographically close to `us-east1`, run the following: - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER DATABASE system CONFIGURE ZONE USING constraints = '{"+region=us-east1": 1}', lease_preferences = '[[+region=us-east1]]'; -~~~ - -Run all subsequent schema changes from a node in the specified region. diff --git a/src/current/_includes/v21.1/performance/statement-contention.md b/src/current/_includes/v21.1/performance/statement-contention.md deleted file mode 100644 index c643e1cbabd..00000000000 --- a/src/current/_includes/v21.1/performance/statement-contention.md +++ /dev/null @@ -1,32 +0,0 @@ -Find the transactions and statements within the transactions that are experiencing contention. CockroachDB has several ways of tracking down transactions that are experiencing contention: - -* The [Transactions page](ui-transactions-page.html) and the [Statements page](ui-statements-page.html) in the DB Console allow you to sort by contention. -* Create views for the information in the `crdb_internal.cluster_contention_events` table to find the tables and indexes that are experiencing contention. 
- - {% include_cached copy-clipboard.html %} - ~~~ sql - CREATE VIEW contended_tables (database_name, schema_name, name, num_contention_events) - AS SELECT DISTINCT database_name, schema_name, name, num_contention_events - FROM crdb_internal.cluster_contention_events - JOIN crdb_internal.tables - ON crdb_internal.cluster_contention_events.table_id = crdb_internal.tables.table_id - ORDER BY num_contention_events desc; - - CREATE VIEW contended_indexes (database_name, schema_name, name, index_name, num_contention_events) AS - SELECT DISTINCT database_name, schema_name, name, index_name, num_contention_events - FROM crdb_internal.cluster_contention_events, crdb_internal.tables, crdb_internal.table_indexes - WHERE (crdb_internal.cluster_contention_events.index_id = crdb_internal.table_indexes.index_id - AND crdb_internal.cluster_contention_events.table_id = crdb_internal.table_indexes.descriptor_id) - AND (crdb_internal.cluster_contention_events.table_id = crdb_internal.tables.table_id) - ORDER BY num_contention_events DESC; - ~~~ - - Then run a select statement from the `contended_tables` or `contended_indexes` view. - - {% include_cached copy-clipboard.html %} - ~~~ sql - SELECT * FROM contended_tables; - SELECT * FROM contended_indexes; - ~~~ - -After identifying the tables and indexes experiencing contention, follow the steps [outlined in our best practices recommendations to avoid contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). diff --git a/src/current/_includes/v21.1/performance/use-hash-sharded-indexes.md b/src/current/_includes/v21.1/performance/use-hash-sharded-indexes.md deleted file mode 100644 index 3167f29d328..00000000000 --- a/src/current/_includes/v21.1/performance/use-hash-sharded-indexes.md +++ /dev/null @@ -1 +0,0 @@ -For performance reasons, we [discourage indexing on sequential keys](indexes.html). 
If, however, you are working with a table that must be indexed on sequential keys, you should use [hash-sharded indexes](hash-sharded-indexes.html). Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hotspots and improving write performance on sequentially-keyed indexes at a small cost to read performance. \ No newline at end of file diff --git a/src/current/_includes/v21.1/prod-deployment/advertise-addr-join.md b/src/current/_includes/v21.1/prod-deployment/advertise-addr-join.md deleted file mode 100644 index 67019d1fcea..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/advertise-addr-join.md +++ /dev/null @@ -1,4 +0,0 @@ -Flag | Description ------|------------ -`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.

This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).

In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). -`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. diff --git a/src/current/_includes/v21.1/prod-deployment/aws-inbound-rules.md b/src/current/_includes/v21.1/prod-deployment/aws-inbound-rules.md deleted file mode 100644 index 8be748205a6..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/aws-inbound-rules.md +++ /dev/null @@ -1,31 +0,0 @@ -#### Inter-node and load balancer-node communication - - Field | Value --------|------------------- - Port Range | **26257** - Source | The ID of your security group (e.g., *sg-07ab277a*) - -#### Application data - - Field | Value --------|------------------- - Port Range | **26257** - Source | Your application's IP ranges - -#### DB Console - - Field | Value --------|------------------- - Port Range | **8080** - Source | Your network's IP ranges - -You can set your network IP by selecting "My IP" in the Source field. - -#### Load balancer-health check communication - - Field | Value --------|------------------- - Port Range | **8080** - Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) - - To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. \ No newline at end of file diff --git a/src/current/_includes/v21.1/prod-deployment/insecure-flag.md b/src/current/_includes/v21.1/prod-deployment/insecure-flag.md deleted file mode 100644 index a13951ba4bc..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecure-flag.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -The `--insecure` flag used in this tutorial is intended for non-production testing only. 
To run CockroachDB in production, use a secure cluster instead. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/prod-deployment/insecure-initialize-cluster.md b/src/current/_includes/v21.1/prod-deployment/insecure-initialize-cluster.md deleted file mode 100644 index 1bf99ee27c0..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecure-initialize-cluster.md +++ /dev/null @@ -1,12 +0,0 @@ -On your local machine, complete the node startup process and have them join together as a cluster: - -1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Run the [`cockroach init`](cockroach-init.html) command, with the `--host` flag set to the address of any node: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host={address of any node}
- ~~~ - - Each node then prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. diff --git a/src/current/_includes/v21.1/prod-deployment/insecure-recommendations.md b/src/current/_includes/v21.1/prod-deployment/insecure-recommendations.md deleted file mode 100644 index e27b3489865..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecure-recommendations.md +++ /dev/null @@ -1,13 +0,0 @@ -- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks: - - Your cluster is open to any client that can access any node's IP addresses. - - Any user, even `root`, can log in without providing a password. - - Any user, connecting as `root`, can read or write any data in your cluster. - - There is no network encryption or authentication, and thus no confidentiality. - -- Decide how you want to access your DB Console: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the DB Console. diff --git a/src/current/_includes/v21.1/prod-deployment/insecure-requirements.md b/src/current/_includes/v21.1/prod-deployment/insecure-requirements.md deleted file mode 100644 index fb2faee26e8..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecure-requirements.md +++ /dev/null @@ -1,9 +0,0 @@ -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. 
- -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your DB Console - -- Carefully review the [Production Checklist](recommended-production-settings.html) and recommended [Topology Patterns](topology-patterns.html). - -{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %} \ No newline at end of file diff --git a/src/current/_includes/v21.1/prod-deployment/insecure-scale-cluster.md b/src/current/_includes/v21.1/prod-deployment/insecure-scale-cluster.md deleted file mode 100644 index 44b630a2310..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecure-scale-cluster.md +++ /dev/null @@ -1,117 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service): - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - - Save the file in the `/etc/systemd/system/` directory - -8. 
Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - -9. Repeat these steps for each additional node that you want in your cluster. - -
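For illustration, after customizing the template, the `ExecStart` line in `insecurecockroachdb.service` might read as follows, assuming hypothetical addresses (`10.0.0.4` for this node, `10.0.0.1`-`10.0.0.3` for the `--join` list); substitute your own:

```ini
# Hypothetical values; substitute your node's address and your --join list.
ExecStart=/usr/local/bin/cockroach start --insecure --advertise-addr=10.0.0.4 --join=10.0.0.1,10.0.0.2,10.0.0.3 --cache=.25 --max-sql-memory=.25
```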
diff --git a/src/current/_includes/v21.1/prod-deployment/insecure-start-nodes.md b/src/current/_includes/v21.1/prod-deployment/insecure-start-nodes.md deleted file mode 100644 index a2f1dc9080e..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecure-start-nodes.md +++ /dev/null @@ -1,188 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. 
Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication. - `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - `--cache`<br>`--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -6. Repeat these steps for each additional node that you want in your cluster. - -
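The port-defaulting behavior of `--advertise-addr` described in the table above can be sketched as a tiny helper (the function name is illustrative, not a real `cockroach` command or flag):

```shell
# Illustrative helper (not a real cockroach command): mimics the port
# defaulting described for --advertise-addr. If no port is given, other
# nodes reach this node on 26257.
normalize_addr() {
  case "$1" in
    *:*) printf '%s\n' "$1" ;;          # port already specified
    *)   printf '%s\n' "$1:26257" ;;    # default to CockroachDB's port
  esac
}
normalize_addr "10.0.0.1"        # -> 10.0.0.1:26257
normalize_addr "10.0.0.2:26258"  # -> 10.0.0.2:26258
```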
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -6. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -7. 
Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - -9. In the sample configuration template, specify values for the following flags: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -10. Start the CockroachDB cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ systemctl start insecurecockroachdb - ~~~ - -11. Repeat these steps for each additional node that you want in your cluster. - -{{site.data.alerts.callout_info}} -`systemd` handles node restarts in case of node failure. 
To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb`. -{{site.data.alerts.end}} - -
diff --git a/src/current/_includes/v21.1/prod-deployment/insecure-test-cluster.md b/src/current/_includes/v21.1/prod-deployment/insecure-test-cluster.md deleted file mode 100644 index 9f1d66fad3b..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecure-test-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. - -When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. - -Use the [built-in SQL client](cockroach-sql.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
- ~~~ - -2. Create an `insecurenodetest` database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE insecurenodetest; - ~~~ - -3. View the cluster's databases, which will include `insecurenodetest`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | insecurenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -4. Use `\q` to exit the SQL shell. diff --git a/src/current/_includes/v21.1/prod-deployment/insecure-test-load-balancing.md b/src/current/_includes/v21.1/prod-deployment/insecure-test-load-balancing.md deleted file mode 100644 index ae47b5cd160..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecure-test-load-balancing.md +++ /dev/null @@ -1,79 +0,0 @@ -CockroachDB comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload. - -{{site.data.alerts.callout_info}} -Be sure that you have configured your network to allow traffic from the application to the load balancer. In this case, you will run the sample workload on one of your machines. The traffic source should therefore be the **internal (private)** IP address of that machine. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -For comprehensive guidance on benchmarking CockroachDB with TPC-C, see [Performance Benchmarking](performance-benchmarking-with-tpcc-local.html). -{{site.data.alerts.end}} - -1. SSH to the machine where you want to run the sample TPC-C workload. - - This should be a machine that is not running a CockroachDB node. - -1. 
Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -1. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -1. Use the [`cockroach workload`](cockroach-workload.html) command to load the initial schema and data, pointing it at the IP address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init tpcc \ - 'postgresql://root@:26257/tpcc?sslmode=disable' - ~~~ - -1. Use the `cockroach workload` command to run the workload for 10 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run tpcc \ - --duration=10m \ - 'postgresql://root@:26257/tpcc?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer - 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer - 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer - 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer - 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer - 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer - 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer - 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer - 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer - 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer - ... 
- ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7 - ~~~ - - {{site.data.alerts.callout_success}} - For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other workloads built into the `cockroach` binary, use `cockroach workload --help`. - {{site.data.alerts.end}} - -1. To monitor the load generator's progress, open the [DB Console](ui-overview.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes. 
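The connection strings passed to the `cockroach workload` commands above follow a fixed shape. A small sketch that builds one from a hypothetical load balancer address (substitute your own):

```shell
# Hypothetical load balancer address; substitute your own.
LB_ADDR="10.0.0.100"
# root user, load balancer host, default port 26257, tpcc database, no TLS
TPCC_URL="postgresql://root@${LB_ADDR}:26257/tpcc?sslmode=disable"
echo "$TPCC_URL"
```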
diff --git a/src/current/_includes/v21.1/prod-deployment/insecurecockroachdb.service b/src/current/_includes/v21.1/prod-deployment/insecurecockroachdb.service deleted file mode 100644 index b027b941009..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/insecurecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --insecure --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=60 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v21.1/prod-deployment/join-flag-multi-region.md b/src/current/_includes/v21.1/prod-deployment/join-flag-multi-region.md deleted file mode 100644 index 93ae34a8716..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/join-flag-multi-region.md +++ /dev/null @@ -1 +0,0 @@ -When starting a multi-region cluster, set more than one `--join` address per region, and select nodes that are spread across failure domains. This ensures [high availability](architecture/replication-layer.html#overview). \ No newline at end of file diff --git a/src/current/_includes/v21.1/prod-deployment/join-flag-single-region.md b/src/current/_includes/v21.1/prod-deployment/join-flag-single-region.md deleted file mode 100644 index 99250cdfee9..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/join-flag-single-region.md +++ /dev/null @@ -1 +0,0 @@ -For a cluster in a single region, set 3-5 `--join` addresses. Each starting node will attempt to contact one of the join hosts. In case a join host cannot be reached, the node will try another address on the list until it can join the gossip network. 
\ No newline at end of file diff --git a/src/current/_includes/v21.1/prod-deployment/monitor-cluster.md b/src/current/_includes/v21.1/prod-deployment/monitor-cluster.md deleted file mode 100644 index 363ef1167c1..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/monitor-cluster.md +++ /dev/null @@ -1,3 +0,0 @@ -Despite CockroachDB's various [built-in safeguards against failure](frequently-asked-questions.html#how-does-cockroachdb-survive-failures), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention. - -For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html). diff --git a/src/current/_includes/v21.1/prod-deployment/node-shutdown.md b/src/current/_includes/v21.1/prod-deployment/node-shutdown.md deleted file mode 100644 index 27066e68d02..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/node-shutdown.md +++ /dev/null @@ -1,8 +0,0 @@ -
- If the node was started with a process manager, gracefully stop the node by sending `SIGTERM` with the process manager. If the node is not shutting down after 1 minute, send `SIGKILL` to terminate the process. When using `systemd`, for example, set `TimeoutStopSec=60` in your configuration template and run `systemctl stop <systemd config filename>` to stop the node without `systemd` restarting it.

    {{site.data.alerts.callout_info}}
    The amount of time you should wait before sending `SIGKILL` can vary depending on your cluster configuration and workload, which affects how long it takes your nodes to complete a graceful shutdown. In certain edge cases, forcefully terminating the process before the node has completed shutdown can result in temporary data unavailability, latency spikes, uncertainty errors, ambiguous commit errors, or query timeouts. If you need maximum cluster availability, you can run `cockroach node drain` prior to node shutdown and actively monitor the draining process instead of automating it.
    {{site.data.alerts.end}}

- If the node was started using [`cockroach start`](cockroach-start.html) and is running in the foreground, press `ctrl-c` in the terminal.

- If the node was started using [`cockroach start`](cockroach-start.html) and the `--background` and `--pid-file` flags, run `kill <pid>`, where `<pid>` is the process ID of the node.
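The process-manager sequence above can be simulated in plain shell, with a `sleep` process standing in for the node. `SIGTERM` requests a graceful stop; a real deployment falls back to `SIGKILL` only after the grace period (e.g., `systemd`'s `TimeoutStopSec=60`) expires:

```shell
# Simulation only: `sleep` stands in for the cockroach process.
sleep 300 &                 # stand-in for the node's process
pid=$!
kill -TERM "$pid"           # graceful shutdown request (what systemctl stop sends)
wait "$pid"                 # block until the process exits
status=$?                   # 143 = terminated by SIGTERM (128 + 15)
echo "exited with status $status"
```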
diff --git a/src/current/_includes/v21.1/prod-deployment/prod-see-also.md b/src/current/_includes/v21.1/prod-deployment/prod-see-also.md deleted file mode 100644 index cb07de7ecb7..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/prod-see-also.md +++ /dev/null @@ -1,7 +0,0 @@ -- [Production Checklist](recommended-production-settings.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Monitoring and Alerting](monitoring-and-alerting.html) -- [Performance Benchmarking](performance-benchmarking-with-tpcc-small.html) -- [Performance Tuning](performance-best-practices-overview.html) -- [Local Deployment](start-a-local-cluster.html) diff --git a/src/current/_includes/v21.1/prod-deployment/secure-generate-certificates.md b/src/current/_includes/v21.1/prod-deployment/secure-generate-certificates.md deleted file mode 100644 index 9870de5b0cf..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/secure-generate-certificates.md +++ /dev/null @@ -1,201 +0,0 @@ -You can use `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands. - -Locally, you'll need to [create the following certificates and keys](cockroach-cert.html): - -- A certificate authority (CA) key pair (`ca.crt` and `ca.key`). -- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers. -- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine. - -{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}} - -1. 
[Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Create two directories: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes. - - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes. - -3. Create the CA certificate and key: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -5. Upload the CA certificate and node certificate and key to the first node: - - {% if page.title contains "Google" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ gcloud compute ssh \ - --project \ - --command "mkdir certs" - ~~~ - - {{site.data.alerts.callout_info}} - `gcloud compute ssh` associates your public SSH key with the GCP project and is only needed when connecting to the first node. See the [GCP docs](https://cloud.google.com/sdk/gcloud/reference/compute/ssh) for more details. 
- {{site.data.alerts.end}} - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% elsif page.title contains "AWS" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh-add /path/.pem - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% else %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - {% endif %} - -6. Delete the local copy of the node certificate and key: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}} - This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag. - {{site.data.alerts.end}} - -7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -8. 
Upload the CA certificate and node certificate and key to the second node: - - {% if page.title contains "AWS" %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% else %} - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - {% endif %} - -9. Repeat steps 6 - 8 for each additional node. - -10. Create a client certificate and key for the `root` user: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/client.root.crt \ - certs/client.root.key \ - @:~/certs - ~~~ - - In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you might also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well. - -{{site.data.alerts.callout_info}} -On accessing the DB Console in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-db-console-for-a-secure-cluster). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/prod-deployment/secure-initialize-cluster.md b/src/current/_includes/v21.1/prod-deployment/secure-initialize-cluster.md deleted file mode 100644 index fc92a82b724..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/secure-initialize-cluster.md +++ /dev/null @@ -1,8 +0,0 @@ -On your local machine, run the [`cockroach init`](cockroach-init.html) command to complete the node startup process and have them join together as a cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach init --certs-dir=certs --host=
-~~~ - -After running this command, each node prints helpful details to the [standard output](cockroach-start.html#standard-output), such as the CockroachDB version, the URL for the DB Console, and the SQL URL for clients. diff --git a/src/current/_includes/v21.1/prod-deployment/secure-recommendations.md b/src/current/_includes/v21.1/prod-deployment/secure-recommendations.md deleted file mode 100644 index 528850dbbb0..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/secure-recommendations.md +++ /dev/null @@ -1,7 +0,0 @@ -- Decide how you want to access your DB Console: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the DB Console. diff --git a/src/current/_includes/v21.1/prod-deployment/secure-requirements.md b/src/current/_includes/v21.1/prod-deployment/secure-requirements.md deleted file mode 100644 index 5c35b0898c8..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/secure-requirements.md +++ /dev/null @@ -1,11 +0,0 @@ -- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates. - -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. 
- -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your DB Console - -- Carefully review the [Production Checklist](recommended-production-settings.html), including supported hardware and software, and the recommended [Topology Patterns](topology-patterns.html). - -{% include {{ page.version.version }}/prod-deployment/topology-recommendations.md %} \ No newline at end of file diff --git a/src/current/_includes/v21.1/prod-deployment/secure-scale-cluster.md b/src/current/_includes/v21.1/prod-deployment/secure-scale-cluster.md deleted file mode 100644 index 14d24ea322f..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/secure-scale-cluster.md +++ /dev/null @@ -1,124 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](cockroach-start.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
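By way of illustration, the secure start command above assembles as follows with hypothetical addresses (`10.0.0.4` for the new node; substitute your own):

```shell
# Hypothetical addresses for illustration only; substitute your machines' IPs.
NEW_NODE="10.0.0.4"
JOIN_LIST="10.0.0.1,10.0.0.2,10.0.0.3"
CMD="cockroach start --certs-dir=certs --advertise-addr=${NEW_NODE} --join=${JOIN_LIST} --cache=.25 --max-sql-memory=.25 --background"
echo "$CMD"
```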
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Move the `certs` directory to the `cockroach` directory. - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mv certs /var/lib/cockroach/ - ~~~ - -7. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown -R cockroach:cockroach /var/lib/cockroach - ~~~ - -8. 
Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save it in the `/etc/systemd/system/` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %} - ~~~ - - If you create the file yourself, save it in the `/etc/systemd/system/` directory. - -9. Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - -10. Repeat these steps for each additional node that you want in your cluster. - -
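For illustration only, after substituting addresses into the template, the `ExecStart` line might read as follows. The addresses here are hypothetical; your own values will differ:

~~~
ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr=10.0.0.4:26257 --join=10.0.0.1:26257,10.0.0.2:26257,10.0.0.3:26257 --cache=.25 --max-sql-memory=.25
~~~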
diff --git a/src/current/_includes/v21.1/prod-deployment/secure-start-nodes.md b/src/current/_includes/v21.1/prod-deployment/secure-start-nodes.md deleted file mode 100644 index 04172e3cf61..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/secure-start-nodes.md +++ /dev/null @@ -1,195 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
- - -
-

- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. 
Run the [`cockroach start`](cockroach-start.html) command: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-addr=<node1 address> \ - --join=<node1 address>,<node2 address>,<node3 address> \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node. - `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). - `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. - `--cache`<br>`--max-sql-memory` | Increases the node's cache size to 25% of available system memory to improve read performance. The capacity for in-memory SQL processing defaults to 25% of system memory but can be raised, if necessary, to increase the number of simultaneous client connections allowed by the node as well as the node's capacity for in-memory processing of rows when using `ORDER BY`, `GROUP BY`, `DISTINCT`, joins, and window functions. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain [{{ site.data.products.enterprise }} features](enterprise-licensing.html). For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=:8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -6. Repeat these steps for each additional node that you want in your cluster. - -
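As a concrete illustration of what `--cache=.25` and `--max-sql-memory=.25` mean, the sketch below assumes a hypothetical machine with 16384 MiB of RAM; each setting would then grant roughly 4 GiB:

~~~ shell
# Each of --cache=.25 and --max-sql-memory=.25 grants 25% of system memory.
# Assuming a hypothetical machine with 16384 MiB of RAM:
TOTAL_MIB=16384
CACHE_MIB=$(awk -v t="${TOTAL_MIB}" 'BEGIN { printf "%d", t * 0.25 }')
echo "${CACHE_MIB} MiB"
~~~

Substitute your machine's actual memory size to see what the fractional settings translate to before raising them.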
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. CockroachDB uses custom-built versions of the [GEOS](spatial-glossary.html#geos) libraries. Copy these libraries to the location where CockroachDB expects to find them: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir -p /usr/local/lib/cockroach - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/ - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Create the Cockroach directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -6. Create a Unix user named `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -7. 
Move the `certs` directory to the `cockroach` directory. - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ mv certs /var/lib/cockroach/ - ~~~ - -8. Change the ownership of the `cockroach` directory to the user `cockroach`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ chown -R cockroach:cockroach /var/lib/cockroach - ~~~ - -9. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include_cached copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %} - ~~~ - -10. In the sample configuration template, specify values for the following flags: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain [{{ site.data.products.enterprise }} features](enterprise-licensing.html). For more details, see [Locality](cockroach-start.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds DB Console HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](cockroach-start.html). - -11. Start the CockroachDB cluster: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ systemctl start securecockroachdb - ~~~ - -11. 
Repeat these steps for each additional node that you want in your cluster. - -{{site.data.alerts.callout_info}} -`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb` -{{site.data.alerts.end}} - -
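The sample service file sets `Restart=always`, so `systemd` restarts the node even after a clean exit. If you want restarts suspended during planned maintenance, one option (a sketch, not part of the original instructions) is a standard `systemd` drop-in override, created with `systemctl edit securecockroachdb`:

~~~
[Service]
Restart=no
~~~

Remove the override and run `systemctl daemon-reload` to restore the original restart behavior.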
diff --git a/src/current/_includes/v21.1/prod-deployment/secure-test-cluster.md b/src/current/_includes/v21.1/prod-deployment/secure-test-cluster.md deleted file mode 100644 index cbd81488b0d..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/secure-test-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. - -When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. - -Use the [built-in SQL client](cockroach-sql.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=
- ~~~ - -2. Create a `securenodetest` database: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE securenodetest; - ~~~ - -3. View the cluster's databases, which will include `securenodetest`: - - {% include_cached copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | securenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -4. Use `\q` to exit the SQL shell. \ No newline at end of file diff --git a/src/current/_includes/v21.1/prod-deployment/secure-test-load-balancing.md b/src/current/_includes/v21.1/prod-deployment/secure-test-load-balancing.md deleted file mode 100644 index 2fb26c9e276..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/secure-test-load-balancing.md +++ /dev/null @@ -1,79 +0,0 @@ -CockroachDB comes with a number of [built-in workloads](cockroach-workload.html) for simulating client traffic. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload. - -{{site.data.alerts.callout_info}} -Be sure that you have configured your network to allow traffic from the application to the load balancer. In this case, you will run the sample workload on one of your machines. The traffic source should therefore be the **internal (private)** IP address of that machine. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -For comprehensive guidance on benchmarking CockroachDB with TPC-C, see [Performance Benchmarking](performance-benchmarking-with-tpcc-local.html). -{{site.data.alerts.end}} - -1. SSH to the machine where you want to run the sample TPC-C workload. - - This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files. - -1. 
Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -1. Copy the binary into the `PATH`: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -1. Use the [`cockroach workload`](cockroach-workload.html) command to load the initial schema and data, pointing it at the IP address of the load balancer: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload init tpcc \ - 'postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key' - ~~~ - -1. Use the `cockroach workload` command to run the workload for 10 minutes: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ cockroach workload run tpcc \ - --duration=10m \ - 'postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer - 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer - 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer - 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer - 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer - 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer - 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer - 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer - 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer - 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer - ... 
- ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7 - ~~~ - - {{site.data.alerts.callout_success}} - For more `tpcc` options, use `cockroach workload run tpcc --help`. For details about other workloads built into the `cockroach` binary, use `cockroach workload --help`. - {{site.data.alerts.end}} - -1. To monitor the load generator's progress, open the [DB Console](ui-overview.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes. 
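The connection strings above embed the load balancer address and the client certificate paths. A sketch of assembling the URL in a shell variable, with a hypothetical load balancer address:

~~~ shell
# Hypothetical load balancer address; substitute your own.
LB_ADDR="10.0.0.100"
# The secure connection URL points at the load balancer and the root client certs.
CONN_URL="postgresql://root@${LB_ADDR}:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
echo "${CONN_URL}"
~~~

The same URL can be reused for both `cockroach workload init tpcc` and `cockroach workload run tpcc`.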
diff --git a/src/current/_includes/v21.1/prod-deployment/securecockroachdb.service b/src/current/_includes/v21.1/prod-deployment/securecockroachdb.service deleted file mode 100644 index 39054cf2e1d..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/securecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=60 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v21.1/prod-deployment/synchronize-clocks.md b/src/current/_includes/v21.1/prod-deployment/synchronize-clocks.md deleted file mode 100644 index 207c7015ef5..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/synchronize-clocks.md +++ /dev/null @@ -1,179 +0,0 @@ -CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node. - -{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well. - -1. SSH to the first machine. - -2. 
Disable `timesyncd`, which tends to be active by default on some Linux distributions: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo timedatectl set-ntp no - ~~~ - - Verify that `timesyncd` is off: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ timedatectl - ~~~ - - Look for `Network time on: no` or `NTP enabled: no` in the output. - -3. Install the `ntp` package: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -4. Stop the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -5. Sync the machine's clock with Google's NTP service: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include_cached copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}} - We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - {{site.data.alerts.end}} - -6. Verify that the machine is using a Google NTP server: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -7. Repeat these steps for each machine where a CockroachDB node will run. 
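As noted at the top of this section, a node shuts itself down when its clock offset from at least half of the other nodes reaches 80% of the maximum allowed offset. With the default maximum of 500ms, that threshold works out to:

~~~ shell
# 80% of the default 500ms maximum offset:
THRESHOLD_MS=$(awk 'BEGIN { printf "%d", 500 * 0.8 }')
echo "${THRESHOLD_MS}ms"
~~~

Keeping `ntpd` offsets in the single-digit milliseconds leaves a wide margin below this threshold.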
- -{% elsif page.title contains "Google" %} - -Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should: - -- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). -- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - -{% elsif page.title contains "AWS" %} - -Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second. - -- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). - - Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out. - - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server. -- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. 
See the [Production Checklist](recommended-production-settings.html#considerations) for details. - -{% elsif page.title contains "Azure" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems. - -1. SSH to the first machine. - -2. Find the ID of the Hyper-V Time Synchronization device: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus - ~~~ - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3 - ~~~ - - ~~~ - VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization] - Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee} - Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee - Rel_ID=12, target_cpu=0 - ~~~ - -3. Unbind the device, using the `Device_ID` from the previous command's output: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ echo | sudo tee /sys/bus/vmbus/drivers/hv_util/unbind - ~~~ - -4. Install the `ntp` package: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -5. Stop the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -6. 
Sync the machine's clock with Google's NTP service: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include_cached copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}} - We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - {{site.data.alerts.end}} - -7. Verify that the machine is using a Google NTP server: - - {% include_cached copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -8. Repeat these steps for each machine where a CockroachDB node will run. - -{% endif %} diff --git a/src/current/_includes/v21.1/prod-deployment/topology-recommendations.md b/src/current/_includes/v21.1/prod-deployment/topology-recommendations.md deleted file mode 100644 index 31384079cec..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/topology-recommendations.md +++ /dev/null @@ -1,19 +0,0 @@ -- Run each node on a separate machine. Since CockroachDB replicates across nodes, running more than one node per machine increases the risk of data loss if a machine fails. Likewise, if a machine has multiple disks or SSDs, run one node with multiple `--store` flags and not one node per disk. 
For more details about stores, see [Start a Node](cockroach-start.html#store). - -- When starting each node, use the [`--locality`](cockroach-start.html#locality) flag to describe the node's location, for example, `--locality=region=west,zone=us-west-1`. The key-value pairs should be ordered from most to least inclusive, and the keys and order of key-value pairs must be the same on all nodes. - -- When deploying in a single availability zone: - - - To be able to tolerate the failure of any 1 node, use at least 3 nodes with the [`default` 3-way replication factor](configure-replication-zones.html#view-the-default-replication-zone). In this case, if 1 node fails, each range retains 2 of its 3 replicas, a majority. - - - To be able to tolerate 2 simultaneous node failures, use at least 5 nodes and [increase the `default` replication factor for user data](configure-replication-zones.html#edit-the-default-replication-zone) to 5. The replication factor for [important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) is 5 by default, so no adjustments are needed for internal data. In this case, if 2 nodes fail at the same time, each range retains 3 of its 5 replicas, a majority. - -- When deploying across multiple availability zones: - - - To be able to tolerate the failure of 1 entire AZ in a region, use at least 3 AZs per region and set `--locality` on each node to spread data evenly across regions and AZs. In this case, if 1 AZ goes offline, the 2 remaining AZs retain a majority of replicas. - - - To ensure that ranges are split evenly across nodes, use the same number of nodes in each AZ. This is to avoid overloading any nodes with excessive resource consumption. - -- When deploying across multiple regions: - - - To be able to tolerate the failure of 1 entire region, use at least 3 regions. 
\ No newline at end of file diff --git a/src/current/_includes/v21.1/prod-deployment/use-cluster.md b/src/current/_includes/v21.1/prod-deployment/use-cluster.md deleted file mode 100644 index 0e65c9fb94c..00000000000 --- a/src/current/_includes/v21.1/prod-deployment/use-cluster.md +++ /dev/null @@ -1,12 +0,0 @@ -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -1. [Create users](create-user.html) and [grant them privileges](grant.html). -1. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node. -1. [Take backups](take-full-and-incremental-backups.html) of your data. - -You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see [Configure Replication Zones](configure-replication-zones.html). - -{{site.data.alerts.callout_danger}} -When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/sidebar-data-reference.json b/src/current/_includes/v21.1/sidebar-data-reference.json deleted file mode 100644 index 0c23d76e550..00000000000 --- a/src/current/_includes/v21.1/sidebar-data-reference.json +++ /dev/null @@ -1,1502 +0,0 @@ -{ - "title": "Reference", - "is_top_level": true, - "items": [ - { - "title": "Architecture", - "urls": [ - "/${VERSION}/architecture/overview.html" - ], - "items": [ - { - "title": "SQL Layer", - "urls": [ - "/${VERSION}/architecture/sql-layer.html" - ] - }, - { - "title": "Transaction Layer", - "urls": [ - "/${VERSION}/architecture/transaction-layer.html" - ] - }, - { - "title": "Distribution Layer", - "urls": [ - "/${VERSION}/architecture/distribution-layer.html" - ] - }, - { - "title": "Replication Layer", - "urls": [ - "/${VERSION}/architecture/replication-layer.html" - ] - }, - { - "title": "Storage Layer", - "urls": [ - "/${VERSION}/architecture/storage-layer.html" - ] - }, - { - "title": "Life of a Distributed Transaction", - "urls": [ - "/${VERSION}/architecture/life-of-a-distributed-transaction.html" - ] - }, - { - "title": "Reads and Writes Overview", - "urls": [ - "/${VERSION}/architecture/reads-and-writes-overview.html" - ] - } - ] - }, - { - "title": "SQL", - "urls": [ - "/${VERSION}/sql-feature-support.html" - ], - "items": [ - { - "title": "SQL Syntax", - "items": [ - { - "title": "Full SQL Grammar", - "urls": [ - "/${VERSION}/sql-grammar.html" - ] - }, - { - "title": "Keywords & Identifiers", - "urls": [ - "/${VERSION}/keywords-and-identifiers.html" - ] - }, - { - "title": "Constants", - "urls": [ - "/${VERSION}/sql-constants.html" - ] - }, - { - "title": "Selection Queries", - "urls": [ - "/${VERSION}/selection-queries.html" - ] - }, - { - "title": "Table Expressions", - "urls": [ - "/${VERSION}/table-expressions.html" - ] - }, - { - "title": "Common Table Expressions", - "urls": [ - "/${VERSION}/common-table-expressions.html" - ] - }, - { - 
"title": "Scalar Expressions", - "urls": [ - "/${VERSION}/scalar-expressions.html" - ] - }, - { - "title": "NULL Handling", - "urls": [ - "/${VERSION}/null-handling.html" - ] - } - ] - }, - { - "title": "SQL Statements", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/sql-statements.html" - ] - }, - { - "title": "ADD COLUMN", - "urls": [ - "/${VERSION}/add-column.html" - ] - }, - { - "title": "ADD CONSTRAINT", - "urls": [ - "/${VERSION}/add-constraint.html" - ] - }, - { - "title": "ADD REGION (Enterprise)", - "urls": [ - "/${VERSION}/add-region.html" - ] - }, - { - "title": "ALTER COLUMN", - "urls": [ - "/${VERSION}/alter-column.html" - ] - }, - { - "title": "ALTER DATABASE", - "urls": [ - "/${VERSION}/alter-database.html" - ] - }, - { - "title": "ALTER INDEX", - "urls": [ - "/${VERSION}/alter-index.html" - ] - }, - { - "title": "ALTER PARTITION (Enterprise)", - "urls": [ - "/${VERSION}/alter-partition.html" - ] - }, - { - "title": "ALTER PRIMARY KEY", - "urls": [ - "/${VERSION}/alter-primary-key.html" - ] - }, - { - "title": "ALTER RANGE", - "urls": [ - "/${VERSION}/alter-range.html" - ] - }, - { - "title": "ALTER ROLE", - "urls": [ - "/${VERSION}/alter-role.html" - ] - }, - { - "title": "ALTER SCHEMA", - "urls": [ - "/${VERSION}/alter-schema.html" - ] - }, - { - "title": "ALTER SEQUENCE", - "urls": [ - "/${VERSION}/alter-sequence.html" - ] - }, - { - "title": "ALTER TABLE", - "urls": [ - "/${VERSION}/alter-table.html" - ] - }, - { - "title": "ALTER TYPE", - "urls": [ - "/${VERSION}/alter-type.html" - ] - }, - { - "title": "ALTER USER", - "urls": [ - "/${VERSION}/alter-user.html" - ] - }, - { - "title": "AS OF SYSTEM TIME", - "urls": [ - "/${VERSION}/as-of-system-time.html" - ] - }, - { - "title": "EXPERIMENTAL_AUDIT", - "urls": [ - "/${VERSION}/experimental-audit.html" - ] - }, - { - "title": "ALTER VIEW", - "urls": [ - "/${VERSION}/alter-view.html" - ] - }, - { - "title": "BACKUP", - "urls": [ - "/${VERSION}/backup.html" - ] - }, - { - 
"title": "BEGIN", - "urls": [ - "/${VERSION}/begin-transaction.html" - ] - }, - { - "title": "CANCEL JOB", - "urls": [ - "/${VERSION}/cancel-job.html" - ] - }, - { - "title": "CANCEL QUERY", - "urls": [ - "/${VERSION}/cancel-query.html" - ] - }, - { - "title": "CANCEL SESSION", - "urls": [ - "/${VERSION}/cancel-session.html" - ] - }, - { - "title": "COMMENT ON", - "urls": [ - "/${VERSION}/comment-on.html" - ] - }, - { - "title": "COMMIT", - "urls": [ - "/${VERSION}/commit-transaction.html" - ] - }, - { - "title": "CONFIGURE ZONE", - "urls": [ - "/${VERSION}/configure-zone.html" - ] - }, - { - "title": "CONVERT TO SCHEMA", - "urls": [ - "/${VERSION}/convert-to-schema.html" - ] - }, - { - "title": "COPY FROM", - "urls": [ - "/${VERSION}/copy-from.html" - ] - }, - { - "title": "CREATE CHANGEFEED (Enterprise)", - "urls": [ - "/${VERSION}/create-changefeed.html" - ] - }, - { - "title": "CREATE DATABASE", - "urls": [ - "/${VERSION}/create-database.html" - ] - }, - { - "title": "CREATE INDEX", - "urls": [ - "/${VERSION}/create-index.html" - ] - }, - { - "title": "CREATE ROLE", - "urls": [ - "/${VERSION}/create-role.html" - ] - }, - { - "title": "CREATE SCHEDULE FOR BACKUP", - "urls": [ - "/${VERSION}/create-schedule-for-backup.html" - ] - }, - { - "title": "CREATE SCHEMA", - "urls": [ - "/${VERSION}/create-schema.html" - ] - }, - { - "title": "CREATE SEQUENCE", - "urls": [ - "/${VERSION}/create-sequence.html" - ] - }, - { - "title": "CREATE STATISTICS", - "urls": [ - "/${VERSION}/create-statistics.html" - ] - }, - { - "title": "CREATE TABLE", - "urls": [ - "/${VERSION}/create-table.html" - ] - }, - { - "title": "CREATE TABLE AS", - "urls": [ - "/${VERSION}/create-table-as.html" - ] - }, - { - "title": "CREATE TYPE", - "urls": [ - "/${VERSION}/create-type.html" - ] - }, - { - "title": "CREATE USER", - "urls": [ - "/${VERSION}/create-user.html" - ] - }, - { - "title": "CREATE VIEW", - "urls": [ - "/${VERSION}/create-view.html" - ] - }, - { - "title": "DELETE", - "urls": [ - 
"/${VERSION}/delete.html" - ] - }, - { - "title": "DROP COLUMN", - "urls": [ - "/${VERSION}/drop-column.html" - ] - }, - { - "title": "DROP CONSTRAINT", - "urls": [ - "/${VERSION}/drop-constraint.html" - ] - }, - { - "title": "DROP DATABASE", - "urls": [ - "/${VERSION}/drop-database.html" - ] - }, - { - "title": "DROP REGION (Enterprise)", - "urls": [ - "/${VERSION}/drop-region.html" - ] - }, - { - "title": "DROP TYPE", - "urls": [ - "/${VERSION}/drop-type.html" - ] - }, - { - "title": "DROP INDEX", - "urls": [ - "/${VERSION}/drop-index.html" - ] - }, - { - "title": "DROP ROLE", - "urls": [ - "/${VERSION}/drop-role.html" - ] - }, - { - "title": "DROP SCHEDULES", - "urls": [ - "/${VERSION}/drop-schedules.html" - ] - }, - { - "title": "DROP SCHEMA", - "urls": [ - "/${VERSION}/drop-schema.html" - ] - }, - { - "title": "DROP SEQUENCE", - "urls": [ - "/${VERSION}/drop-sequence.html" - ] - }, - { - "title": "DROP TABLE", - "urls": [ - "/${VERSION}/drop-table.html" - ] - }, - { - "title": "DROP USER", - "urls": [ - "/${VERSION}/drop-user.html" - ] - }, - { - "title": "DROP VIEW", - "urls": [ - "/${VERSION}/drop-view.html" - ] - }, - { - "title": "EXPERIMENTAL CHANGEFEED FOR", - "urls": [ - "/${VERSION}/changefeed-for.html" - ] - }, - { - "title": "EXPLAIN", - "urls": [ - "/${VERSION}/explain.html" - ] - }, - { - "title": "EXPLAIN ANALYZE", - "urls": [ - "/${VERSION}/explain-analyze.html" - ] - }, - { - "title": "EXPORT", - "urls": [ - "/${VERSION}/export.html" - ] - }, - { - "title": "GRANT", - "urls": [ - "/${VERSION}/grant.html" - ] - }, - { - "title": "IMPORT", - "urls": [ - "/${VERSION}/import.html" - ] - }, - { - "title": "IMPORT INTO", - "urls": [ - "/${VERSION}/import-into.html" - ] - }, - { - "title": "INSERT", - "urls": [ - "/${VERSION}/insert.html" - ] - }, - { - "title": "JOIN", - "urls": [ - "/${VERSION}/joins.html" - ] - }, - { - "title": "LIMIT/OFFSET", - "urls": [ - "/${VERSION}/limit-offset.html" - ] - }, - { - "title": "ORDER BY", - "urls": [ - 
"/${VERSION}/order-by.html" - ] - }, - { - "title": "OWNER TO", - "urls": [ - "/${VERSION}/owner-to.html" - ] - }, - { - "title": "PARTITION BY (Enterprise)", - "urls": [ - "/${VERSION}/partition-by.html" - ] - }, - { - "title": "PAUSE JOB", - "urls": [ - "/${VERSION}/pause-job.html" - ] - }, - { - "title": "PAUSE SCHEDULES", - "urls": [ - "/${VERSION}/pause-schedules.html" - ] - }, - { - "title": "REASSIGN OWNED", - "urls": [ - "/${VERSION}/reassign-owned.html" - ] - }, - { - "title": "REFRESH", - "urls": [ - "/${VERSION}/refresh.html" - ] - }, - { - "title": "RENAME COLUMN", - "urls": [ - "/${VERSION}/rename-column.html" - ] - }, - { - "title": "RENAME CONSTRAINT", - "urls": [ - "/${VERSION}/rename-constraint.html" - ] - }, - { - "title": "RENAME DATABASE", - "urls": [ - "/${VERSION}/rename-database.html" - ] - }, - { - "title": "RENAME INDEX", - "urls": [ - "/${VERSION}/rename-index.html" - ] - }, - { - "title": "RENAME TABLE", - "urls": [ - "/${VERSION}/rename-table.html" - ] - }, - { - "title": "RELEASE SAVEPOINT", - "urls": [ - "/${VERSION}/release-savepoint.html" - ] - }, - { - "title": "RESET <session variable>", - "urls": [ - "/${VERSION}/reset-vars.html" - ] - }, - { - "title": "RESET CLUSTER SETTING", - "urls": [ - "/${VERSION}/reset-cluster-setting.html" - ] - }, - { - "title": "RESTORE", - "urls": [ - "/${VERSION}/restore.html" - ] - }, - { - "title": "RESUME JOB", - "urls": [ - "/${VERSION}/resume-job.html" - ] - }, - { - "title": "RESUME SCHEDULES", - "urls": [ - "/${VERSION}/resume-schedules.html" - ] - }, - { - "title": "REVOKE", - "urls": [ - "/${VERSION}/revoke.html" - ] - }, - { - "title": "ROLLBACK", - "urls": [ - "/${VERSION}/rollback-transaction.html" - ] - }, - { - "title": "SAVEPOINT", - "urls": [ - "/${VERSION}/savepoint.html" - ] - }, - { - "title": "SELECT", - "urls": [ - "/${VERSION}/select-clause.html" - ] - }, - { - "title": "SELECT FOR UPDATE", - "urls": [ - "/${VERSION}/select-for-update.html" - ] - }, - { - "title": "SET <session 
variable>", - "urls": [ - "/${VERSION}/set-vars.html" - ] - }, - { - "title": "SET CLUSTER SETTING", - "urls": [ - "/${VERSION}/set-cluster-setting.html" - ] - }, - { - "title": "SET LOCALITY", - "urls": [ - "/${VERSION}/set-locality.html" - ] - }, - { - "title": "SET PRIMARY REGION (Enterprise)", - "urls": [ - "/${VERSION}/set-primary-region.html" - ] - }, - { - "title": "SET SCHEMA", - "urls": [ - "/${VERSION}/set-schema.html" - ] - }, - { - "title": "SET TRANSACTION", - "urls": [ - "/${VERSION}/set-transaction.html" - ] - }, - { - "title": "SHOW <session variables>", - "urls": [ - "/${VERSION}/show-vars.html" - ] - }, - { - "title": "SHOW BACKUP", - "urls": [ - "/${VERSION}/show-backup.html" - ] - }, - { - "title": "SHOW CLUSTER SETTING", - "urls": [ - "/${VERSION}/show-cluster-setting.html" - ] - }, - { - "title": "SHOW COLUMNS", - "urls": [ - "/${VERSION}/show-columns.html" - ] - }, - { - "title": "SHOW CONSTRAINTS", - "urls": [ - "/${VERSION}/show-constraints.html" - ] - }, - { - "title": "SHOW CREATE", - "urls": [ - "/${VERSION}/show-create.html" - ] - }, - { - "title": "SHOW DATABASES", - "urls": [ - "/${VERSION}/show-databases.html" - ] - }, - { - "title": "SHOW ENUMS", - "urls": [ - "/${VERSION}/show-enums.html" - ] - }, - { - "title": "SHOW FULL TABLE SCANS", - "urls": [ - "/${VERSION}/show-full-table-scans.html" - ] - }, - { - "title": "SHOW GRANTS", - "urls": [ - "/${VERSION}/show-grants.html" - ] - }, - { - "title": "SHOW INDEX", - "urls": [ - "/${VERSION}/show-index.html" - ] - }, - { - "title": "SHOW JOBS", - "urls": [ - "/${VERSION}/show-jobs.html" - ] - }, - { - "title": "SHOW LOCALITY", - "urls": [ - "/${VERSION}/show-locality.html" - ] - }, - { - "title": "SHOW PARTITIONS (Enterprise)", - "urls": [ - "/${VERSION}/show-partitions.html" - ] - }, - { - "title": "SHOW RANGES", - "urls": [ - "/${VERSION}/show-ranges.html" - ] - }, - { - "title": "SHOW RANGE FOR ROW", - "urls": [ - "/${VERSION}/show-range-for-row.html" - ] - }, - { - "title": "SHOW 
REGIONS", - "urls": [ - "/${VERSION}/show-regions.html" - ] - }, - { - "title": "SHOW ROLES", - "urls": [ - "/${VERSION}/show-roles.html" - ] - }, - { - "title": "SHOW SCHEDULES", - "urls": [ - "/${VERSION}/show-schedules.html" - ] - }, - { - "title": "SHOW SCHEMAS", - "urls": [ - "/${VERSION}/show-schemas.html" - ] - }, - { - "title": "SHOW SEQUENCES", - "urls": [ - "/${VERSION}/show-sequences.html" - ] - }, - { - "title": "SHOW SESSIONS", - "urls": [ - "/${VERSION}/show-sessions.html" - ] - }, - { - "title": "SHOW STATEMENTS", - "urls": [ - "/${VERSION}/show-statements.html" - ] - }, - { - "title": "SHOW STATISTICS", - "urls": [ - "/${VERSION}/show-statistics.html" - ] - }, - { - "title": "SHOW SAVEPOINT STATUS", - "urls": [ - "/${VERSION}/show-savepoint-status.html" - ] - }, - { - "title": "SHOW TABLES", - "urls": [ - "/${VERSION}/show-tables.html" - ] - }, - { - "title": "SHOW TRACE FOR SESSION", - "urls": [ - "/${VERSION}/show-trace.html" - ] - }, - { - "title": "SHOW TRANSACTIONS", - "urls": [ - "/${VERSION}/show-transactions.html" - ] - }, - { - "title": "SHOW TYPES", - "urls": [ - "/${VERSION}/show-types.html" - ] - }, - { - "title": "SHOW USERS", - "urls": [ - "/${VERSION}/show-users.html" - ] - }, - { - "title": "SHOW ZONE CONFIGURATIONS", - "urls": [ - "/${VERSION}/show-zone-configurations.html" - ] - }, - { - "title": "SPLIT AT", - "urls": [ - "/${VERSION}/split-at.html" - ] - }, - { - "title": "SURVIVE {ZONE,REGION} FAILURE", - "urls": [ - "/${VERSION}/survive-failure.html" - ] - }, - { - "title": "TRUNCATE", - "urls": [ - "/${VERSION}/truncate.html" - ] - }, - { - "title": "UNSPLIT AT", - "urls": [ - "/${VERSION}/unsplit-at.html" - ] - }, - { - "title": "UPDATE", - "urls": [ - "/${VERSION}/update.html" - ] - }, - { - "title": "UPSERT", - "urls": [ - "/${VERSION}/upsert.html" - ] - }, - { - "title": "VALIDATE CONSTRAINT", - "urls": [ - "/${VERSION}/validate-constraint.html" - ] - }, - { - "title": "WITH", - "urls": [ - 
"/${VERSION}/common-table-expressions.html" - ] - } - ] - }, - { - "title": "SQL Data Types", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/data-types.html" - ] - }, - { - "title": "ARRAY", - "urls": [ - "/${VERSION}/array.html" - ] - }, - { - "title": "BIT", - "urls": [ - "/${VERSION}/bit.html" - ] - }, - { - "title": "BOOL", - "urls": [ - "/${VERSION}/bool.html" - ] - }, - { - "title": "BYTES", - "urls": [ - "/${VERSION}/bytes.html" - ] - }, - { - "title": "COLLATE", - "urls": [ - "/${VERSION}/collate.html" - ] - }, - { - "title": "DATE", - "urls": [ - "/${VERSION}/date.html" - ] - }, - { - "title": "DECIMAL", - "urls": [ - "/${VERSION}/decimal.html" - ] - }, - { - "title": "ENUM", - "urls": [ - "/${VERSION}/enum.html" - ] - }, - { - "title": "FLOAT", - "urls": [ - "/${VERSION}/float.html" - ] - }, - { - "title": "INET", - "urls": [ - "/${VERSION}/inet.html" - ] - }, - { - "title": "INT", - "urls": [ - "/${VERSION}/int.html" - ] - }, - { - "title": "INTERVAL", - "urls": [ - "/${VERSION}/interval.html" - ] - }, - { - "title": "JSONB", - "urls": [ - "/${VERSION}/jsonb.html" - ] - }, - { - "title": "SERIAL", - "urls": [ - "/${VERSION}/serial.html" - ] - }, - { - "title": "STRING", - "urls": [ - "/${VERSION}/string.html" - ] - }, - { - "title": "TIME", - "urls": [ - "/${VERSION}/time.html" - ] - }, - { - "title": "TIMESTAMP", - "urls": [ - "/${VERSION}/timestamp.html" - ] - }, - { - "title": "UUID", - "urls": [ - "/${VERSION}/uuid.html" - ] - } - ] - }, - { - "title": "Constraints", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/constraints.html" - ] - }, - { - "title": "Check", - "urls": [ - "/${VERSION}/check.html" - ] - }, - { - "title": "Default Value", - "urls": [ - "/${VERSION}/default-value.html" - ] - }, - { - "title": "Foreign Key", - "urls": [ - "/${VERSION}/foreign-key.html" - ] - }, - { - "title": "Not Null", - "urls": [ - "/${VERSION}/not-null.html" - ] - }, - { - "title": "Primary Key", - "urls": [ - 
"/${VERSION}/primary-key.html" - ] - }, - { - "title": "Unique", - "urls": [ - "/${VERSION}/unique.html" - ] - } - ] - }, - { - "title": "Functions and Operators", - "urls": [ - "/${VERSION}/functions-and-operators.html" - ] - }, - { - "title": "Window Functions", - "urls": [ - "/${VERSION}/window-functions.html" - ] - }, - { - "title": "Name Resolution", - "urls": [ - "/${VERSION}/sql-name-resolution.html" - ] - }, - { - "title": "System Catalogs", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/system-catalogs.html" - ] - }, - { - "title": "crdb_internal", - "urls": [ - "/${VERSION}/crdb-internal.html" - ] - }, - { - "title": "information_schema", - "urls": [ - "/${VERSION}/information-schema.html" - ] - }, - { - "title": "pg_catalog", - "urls": [ - "/${VERSION}/pg-catalog.html" - ] - }, - { - "title": "pg_extension", - "urls": [ - "/${VERSION}/pg-extension.html" - ] - } - ] - } - ] - }, - { - "title": "Cluster Settings", - "urls": [ - "/${VERSION}/cluster-settings.html" - ] - }, - { - "title": "Performance Optimization", - "items": [ - { - "title": "Indexes", - "urls": [ - "/${VERSION}/indexes.html" - ] - }, - { - "title": "Cost-Based Optimizer", - "urls": [ - "/${VERSION}/cost-based-optimizer.html" - ] - }, - { - "title": "Vectorized Execution Engine", - "urls": [ - "/${VERSION}/vectorized-execution.html" - ] - }, - { - "title": "Load-Based Splitting", - "urls": [ - "/${VERSION}/load-based-splitting.html" - ] - } - ] - }, - { - "title": "CLI", - "urls": [ - "/${VERSION}/cockroach-commands.html" - ], - "items": [ - { - "title": "Client Connection Parameters", - "urls": [ - "/${VERSION}/connection-parameters.html" - ] - }, - { - "title": "cockroach Commands", - "items": [ - { - "title": "cockroach start", - "urls": [ - "/${VERSION}/cockroach-start.html" - ] - }, - { - "title": "cockroach init", - "urls": [ - "/${VERSION}/cockroach-init.html" - ] - }, - { - "title": "cockroach start-single-node", - "urls": [ - 
"/${VERSION}/cockroach-start-single-node.html" - ] - }, - { - "title": "cockroach cert", - "urls": [ - "/${VERSION}/cockroach-cert.html" - ] - }, - { - "title": "cockroach quit", - "urls": [ - "/${VERSION}/cockroach-quit.html" - ] - }, - { - "title": "cockroach sql", - "urls": [ - "/${VERSION}/cockroach-sql.html" - ] - }, - { - "title": "cockroach sqlfmt", - "urls": [ - "/${VERSION}/cockroach-sqlfmt.html" - ] - }, - { - "title": "cockroach node", - "urls": [ - "/${VERSION}/cockroach-node.html" - ] - }, - { - "title": "cockroach nodelocal upload", - "urls": [ - "/${VERSION}/cockroach-nodelocal-upload.html" - ] - }, - { - "title": "cockroach auth-session", - "urls": [ - "/${VERSION}/cockroach-auth-session.html" - ] - }, - { - "title": "cockroach dump", - "urls": [ - "/${VERSION}/cockroach-dump.html" - ] - }, - { - "title": "cockroach demo", - "urls": [ - "/${VERSION}/cockroach-demo.html" - ] - }, - { - "title": "cockroach debug ballast", - "urls": [ - "/${VERSION}/cockroach-debug-ballast.html" - ] - }, - { - "title": "cockroach debug encryption-active-key", - "urls": [ - "/${VERSION}/cockroach-debug-encryption-active-key.html" - ] - }, - { - "title": "cockroach debug list-files", - "urls": [ - "/${VERSION}/cockroach-debug-list-files.html" - ] - }, - { - "title": "cockroach debug merge-logs", - "urls": [ - "/${VERSION}/cockroach-debug-merge-logs.html" - ] - }, - { - "title": "cockroach debug zip", - "urls": [ - "/${VERSION}/cockroach-debug-zip.html" - ] - }, - { - "title": "cockroach statement-diag", - "urls": [ - "/${VERSION}/cockroach-statement-diag.html" - ] - }, - { - "title": "cockroach gen", - "urls": [ - "/${VERSION}/cockroach-gen.html" - ] - }, - { - "title": "cockroach userfile upload", - "urls": [ - "/${VERSION}/cockroach-userfile-upload.html" - ] - }, - { - "title": "cockroach userfile list", - "urls": [ - "/${VERSION}/cockroach-userfile-list.html" - ] - }, - { - "title": "cockroach userfile get", - "urls": [ - "/${VERSION}/cockroach-userfile-get.html" - ] 
- }, - { - "title": "cockroach userfile delete", - "urls": [ - "/${VERSION}/cockroach-userfile-delete.html" - ] - }, - { - "title": "cockroach version", - "urls": [ - "/${VERSION}/cockroach-version.html" - ] - }, - { - "title": "cockroach workload", - "urls": [ - "/${VERSION}/cockroach-workload.html" - ] - }, - { - "title": "cockroach import", - "urls": [ - "/${VERSION}/cockroach-import.html" - ] - } - ] - } - ] - }, - { - "title": "DB Console", - "urls": [ - "/${VERSION}/ui-overview.html" - ], - "items": [ - { - "title": "Cluster Overview Page", - "urls": [ - "/${VERSION}/ui-cluster-overview-page.html" - ] - }, - { - "title": "Metrics Dashboards", - "items": [ - { - "title": "Overview Dashboard", - "urls": [ - "/${VERSION}/ui-overview-dashboard.html" - ] - }, - { - "title": "Hardware Dashboard", - "urls": [ - "/${VERSION}/ui-hardware-dashboard.html" - ] - }, - { - "title": "Runtime Dashboard", - "urls": [ - "/${VERSION}/ui-runtime-dashboard.html" - ] - }, - { - "title": "SQL Dashboard", - "urls": [ - "/${VERSION}/ui-sql-dashboard.html" - ] - }, - { - "title": "Storage Dashboard", - "urls": [ - "/${VERSION}/ui-storage-dashboard.html" - ] - }, - { - "title": "Replication Dashboard", - "urls": [ - "/${VERSION}/ui-replication-dashboard.html" - ] - }, - { - "title": "Changefeeds Dashboard", - "urls": [ - "/${VERSION}/ui-cdc-dashboard.html" - ] - }, - { - "title": "Custom Chart", - "urls": [ - "/${VERSION}/ui-custom-chart-debug-page.html" - ] - } - ] - }, - { - "title": "Databases Page", - "urls": [ - "/${VERSION}/ui-databases-page.html" - ] - }, - { - "title": "Sessions Page", - "urls": [ - "/${VERSION}/ui-sessions-page.html" - ] - }, - { - "title": "Statements Page", - "urls": [ - "/${VERSION}/ui-statements-page.html" - ] - }, - { - "title": "Transactions Page", - "urls": [ - "/${VERSION}/ui-transactions-page.html" - ] - }, - { - "title": "Network Latency Page", - "urls": [ - "/${VERSION}/ui-network-latency-page.html" - ] - }, - { - "title": "Jobs Page", - "urls": [ - 
"/${VERSION}/ui-jobs-page.html" - ] - }, - { - "title": "Advanced Debug Page", - "urls": [ - "/${VERSION}/ui-debug-pages.html" - ] - } - ] - }, - { - "title": "Cluster API", - "urls": [ - "https://www.cockroachlabs.com/docs/api/cluster/v2" - ] - }, - { - "title": "Logging", - "urls": [ - "/${VERSION}/logging.html" - ], - "items": [ - { - "title": "Log formats", - "urls": [ - "/${VERSION}/log-formats.html" - ] - }, - { - "title": "Notable event types", - "urls": [ - "/${VERSION}/eventlog.html" - ] - } - ] - }, - { - "title": "Diagnostics Reporting", - "urls": [ - "/${VERSION}/diagnostics-reporting.html" - ] - }, - { - "title": "Cloud Release Notes", - "urls": [ - "/releases/cloud.html" - ] - } - ] - } diff --git a/src/current/_includes/v21.1/spatial/ogr2ogr-supported-version.md b/src/current/_includes/v21.1/spatial/ogr2ogr-supported-version.md deleted file mode 100644 index ad444257227..00000000000 --- a/src/current/_includes/v21.1/spatial/ogr2ogr-supported-version.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -An `ogr2ogr` version of 3.1.0 or higher is required to generate data that can be imported into CockroachDB. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/spatial/zmcoords.md b/src/current/_includes/v21.1/spatial/zmcoords.md deleted file mode 100644 index 14c41ff0446..00000000000 --- a/src/current/_includes/v21.1/spatial/zmcoords.md +++ /dev/null @@ -1,27 +0,0 @@ -{% include_cached new-in.html version="v21.1" %} You can also store a `{{page.title}}` with the following additional dimensions: - -- A third dimension coordinate `Z` (`{{page.title}}Z`). -- A measure coordinate `M` (`{{page.title}}M`). -- Both a third dimension and a measure coordinate (`{{page.title}}ZM`). 
- -The `Z` and `M` dimensions can be accessed or modified using a number of [built-in functions](functions-and-operators.html#spatial-functions), including: - -- `ST_Z` -- `ST_M` -- `ST_Affine` -- `ST_Zmflag` -- `ST_MakePoint` -- `ST_MakePointM` -- `ST_Force3D` -- `ST_Force3DZ` -- `ST_Force3DM` -- `ST_Force4D` -- `ST_Snap` -- `ST_SnapToGrid` -- `ST_RotateZ` -- `ST_AddMeasure` - -Note that CockroachDB's [spatial indexing](spatial-indexes.html) is still based on the 2D coordinate system. This means that: - -- The Z/M dimension is not index accelerated when using spatial predicates. -- Some spatial functions ignore the Z/M dimension, with transformations discarding the Z/M value. diff --git a/src/current/_includes/v21.1/sql/begin-transaction-as-of-system-time-example.md b/src/current/_includes/v21.1/sql/begin-transaction-as-of-system-time-example.md deleted file mode 100644 index 7f2c11dac77..00000000000 --- a/src/current/_includes/v21.1/sql/begin-transaction-as-of-system-time-example.md +++ /dev/null @@ -1,19 +0,0 @@ -{% include_cached copy-clipboard.html %} -~~~ sql -> BEGIN AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00'; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM products; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ diff --git a/src/current/_includes/v21.1/sql/combine-alter-table-commands.md b/src/current/_includes/v21.1/sql/combine-alter-table-commands.md deleted file mode 100644 index 62839cce017..00000000000 --- a/src/current/_includes/v21.1/sql/combine-alter-table-commands.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -This command can be combined with other `ALTER TABLE` commands in a single statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). 
For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/sql/connection-parameters.md b/src/current/_includes/v21.1/sql/connection-parameters.md deleted file mode 100644 index 50453037ee8..00000000000 --- a/src/current/_includes/v21.1/sql/connection-parameters.md +++ /dev/null @@ -1,9 +0,0 @@ -Flag | Description ------|------------ -`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.

**Env Variable:** `COCKROACH_HOST`
**Default:** `localhost:26257` -`--port`

`-p` | The server port to connect to. Note: The port number can also be specified via `--host`.

**Env Variable:** `COCKROACH_PORT`
**Default:** `26257` -`--user`

`-u` | The [SQL user](create-user.html) that will own the client session.

**Env Variable:** `COCKROACH_USER`
**Default:** `root` -`--insecure` | Use an insecure connection.

**Env Variable:** `COCKROACH_INSECURE`
**Default:** `false` -`--cert-principal-map` | A comma-separated list of `<cert-principal>:<db-principal>` mappings. This allows mapping the principal in a cert to a DB principal such as `node` or `root` or any SQL user. This is intended for use in situations where the certificate management system places restrictions on the `Subject.CommonName` or `SubjectAlternateName` fields in the certificate (e.g., disallowing a `CommonName` like `node` or `root`). If multiple mappings are provided for the same `<cert-principal>`, the last one specified in the list takes precedence. A principal not specified in the map is passed through as-is via the identity function. A cert is allowed to authenticate a DB principal if the DB principal name is contained in the mapped `CommonName` or DNS-type `SubjectAlternateName` fields. -`--certs-dir` | The path to the [certificate directory](cockroach-cert.html) containing the CA and client certificates and client key.

**Env Variable:** `COCKROACH_CERTS_DIR`
**Default:** `${HOME}/.cockroach-certs/` - `--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

**Env Variable:** `COCKROACH_URL`
**Default:** no URL \ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/covering-index.md b/src/current/_includes/v21.1/sql/covering-index.md deleted file mode 100644 index 366d4500b2c..00000000000 --- a/src/current/_includes/v21.1/sql/covering-index.md +++ /dev/null @@ -1 +0,0 @@ -An index that stores all the columns needed by a query is also known as a _covering index_ for that query. When a query has a covering index, CockroachDB can use that index directly instead of doing a [join](joins.html) with the [primary key](primary-key.html), which is likely to be slower. diff --git a/src/current/_includes/v21.1/sql/crdb-internal-partitions-example.md b/src/current/_includes/v21.1/sql/crdb-internal-partitions-example.md deleted file mode 100644 index 680b0adf261..00000000000 --- a/src/current/_includes/v21.1/sql/crdb-internal-partitions-example.md +++ /dev/null @@ -1,43 +0,0 @@ -## Querying partitions programmatically - -The `crdb_internal.partitions` internal table contains information about the partitions in your database. In testing, scripting, and other programmatic environments, we recommend querying this table for partition information instead of using the `SHOW PARTITIONS` statement. 
For example, to get all `us_west` partitions in your database, you can run the following query: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM crdb_internal.partitions WHERE name='us_west'; -~~~ - -~~~ - table_id | index_id | parent_name | name | columns | column_names | list_value | range_value | zone_id | subzone_id -+----------+----------+-------------+---------+---------+--------------+-------------------------------------------------+-------------+---------+------------+ - 53 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 0 | 0 - 54 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 1 - 54 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 54 | 2 - 55 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 1 - 55 | 2 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 2 - 55 | 3 | NULL | us_west | 1 | vehicle_city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 55 | 3 - 56 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 56 | 1 - 58 | 1 | NULL | us_west | 1 | city | ('seattle'), ('san francisco'), ('los angeles') | NULL | 58 | 1 -(8 rows) -~~~ - -Other internal tables, like `crdb_internal.tables`, include information that could be useful in conjunction with `crdb_internal.partitions`. 
- -For example, if you want the output for your partitions to include the name of the table and database, you can perform a join of the two tables: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT - partitions.name AS partition_name, column_names, list_value, tables.name AS table_name, database_name - FROM crdb_internal.partitions JOIN crdb_internal.tables ON partitions.table_id=tables.table_id - WHERE tables.name='users'; -~~~ - -~~~ - partition_name | column_names | list_value | table_name | database_name -+----------------+--------------+-------------------------------------------------+------------+---------------+ - us_west | city | ('seattle'), ('san francisco'), ('los angeles') | users | movr - us_east | city | ('new york'), ('boston'), ('washington dc') | users | movr - europe_west | city | ('amsterdam'), ('paris'), ('rome') | users | movr -(3 rows) -~~~ diff --git a/src/current/_includes/v21.1/sql/crdb-internal-partitions.md b/src/current/_includes/v21.1/sql/crdb-internal-partitions.md deleted file mode 100644 index ebab5abe4ed..00000000000 --- a/src/current/_includes/v21.1/sql/crdb-internal-partitions.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_success}} -In testing, scripting, and other programmatic environments, we recommend querying the `crdb_internal.partitions` internal table for partition information instead of using the `SHOW PARTITIONS` statement. For more information, see [Querying partitions programmatically](show-partitions.html#querying-partitions-programmatically). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/sql/db-terms.md b/src/current/_includes/v21.1/sql/db-terms.md deleted file mode 100644 index 4a8551b12ee..00000000000 --- a/src/current/_includes/v21.1/sql/db-terms.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -To avoid confusion with the general term "[database](https://en.wikipedia.org/wiki/Database)", throughout this guide we refer to the logical object as a *database*, to CockroachDB by name, and to a deployment of CockroachDB as a [*cluster*](architecture/overview.html#terms). -{{site.data.alerts.end}} \ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/dev-schema-change-limits.md b/src/current/_includes/v21.1/sql/dev-schema-change-limits.md deleted file mode 100644 index fbcac99f13c..00000000000 --- a/src/current/_includes/v21.1/sql/dev-schema-change-limits.md +++ /dev/null @@ -1,3 +0,0 @@ -Review the [limitations of online schema changes in CockroachDB](online-schema-changes.html#limitations). Note that CockroachDB has [limited support for schema changes within the same explicit transaction](online-schema-changes.html#limited-support-for-schema-changes-within-transactions). - - We recommend doing schema changes outside explicit transactions, where possible. When a database [schema management tool](third-party-database-tools.html#schema-migration-tools) manages transactions on your behalf, we recommend only including one schema change operation per transaction. \ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/dev-schema-changes.md b/src/current/_includes/v21.1/sql/dev-schema-changes.md deleted file mode 100644 index d1f63bbcff5..00000000000 --- a/src/current/_includes/v21.1/sql/dev-schema-changes.md +++ /dev/null @@ -1 +0,0 @@ -As a general best practice, we discourage the use of client libraries to execute [database schema changes](online-schema-changes.html). 
Instead, use a database schema migration tool, or the [CockroachDB SQL client](cockroach-sql.html). \ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/function-special-forms.md b/src/current/_includes/v21.1/sql/function-special-forms.md deleted file mode 100644 index b9ac987444a..00000000000 --- a/src/current/_includes/v21.1/sql/function-special-forms.md +++ /dev/null @@ -1,29 +0,0 @@ -| Special form | Equivalent to | -|-----------------------------------------------------------|---------------------------------------------| -| `AT TIME ZONE` | `timezone()` | -| `CURRENT_CATALOG` | `current_catalog()` | -| `COLLATION FOR` | `pg_collation_for()` | -| `CURRENT_DATE` | `current_date()` | -| `CURRENT_ROLE` | `current_user()` | -| `CURRENT_SCHEMA` | `current_schema()` | -| `CURRENT_TIMESTAMP` | `current_timestamp()` | -| `CURRENT_TIME` | `current_time()` | -| `CURRENT_USER` | `current_user()` | -| `EXTRACT(<part> FROM <value>)` | `extract("<part>", <value>)` | -| `EXTRACT_DURATION(<part> FROM <value>)` | `extract_duration("<part>", <value>)` | -| `OVERLAY(<text1> PLACING <text2> FROM <int1> FOR <int2>)` | `overlay(<text1>, <text2>, <int1>, <int2>)` | -| `OVERLAY(<text1> PLACING <text2> FROM <int1>)` | `overlay(<text1>, <text2>, <int1>)` | -| `POSITION(<text1> IN <text2>)` | `strpos(<text2>, <text1>)` | -| `SESSION_USER` | `current_user()` | -| `SUBSTRING(<text> FOR <int2> FROM <int1>)` | `substring(<text>, <int1>, <int2>)` | -| `SUBSTRING(<text> FOR <int>)` | `substring(<text>, 1, <int>)` | -| `SUBSTRING(<text> FROM <int1> FOR <int2>)` | `substring(<text>, <int1>, <int2>)` | -| `SUBSTRING(<text> FROM <int>)` | `substring(<text>, <int>)` | -| `TRIM(<text2> FROM <text1>)` | `btrim(<text1>, <text2>)` | -| `TRIM(<text1>, <text2>)` | `btrim(<text1>, <text2>)` | -| `TRIM(FROM <text>)` | `btrim(<text>)` | -| `TRIM(LEADING <text2> FROM <text1>)` | `ltrim(<text1>, <text2>)` | -| `TRIM(LEADING FROM <text>)` | `ltrim(<text>)` | -| `TRIM(TRAILING <text2> FROM <text1>)` | `rtrim(<text1>, <text2>)` | -| `TRIM(TRAILING FROM <text>)` | `rtrim(<text>)` | -| `USER` | `current_user()` | diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/add_column.html b/src/current/_includes/v21.1/sql/generated/diagrams/add_column.html deleted file mode 100644 index f59fd135d0e..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/add_column.html +++ /dev/null @@ -1,52 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -ADD - - -COLUMN - - -IF - - -NOT - - -EXISTS - - - -column_name - - - - -typename - - - - -col_qualification - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/add_constraint.html b/src/current/_includes/v21.1/sql/generated/diagrams/add_constraint.html deleted file mode 100644 index a8f3b1c9c61..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/add_constraint.html +++ /dev/null @@ -1,38 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -ADD - - -CONSTRAINT - - - -constraint_name - - - - -constraint_elem - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_column.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_column.html deleted file mode 100644 index 538c7895cd9..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_column.html +++ /dev/null @@ -1,90 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - -ALTER - - -COLUMN - - -column_name - -SET - - -DEFAULT - - -a_expr - -NOT - - -NULL - - -DATA - - -TYPE - - -typename - -COLLATE - - -collation_name - -USING - - -a_expr - -DROP - - -DEFAULT - - -NOT - - -NULL - - -STORED - - -TYPE - - -typename - -COLLATE - - -collation_name - -USING - - -a_expr - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_add_region.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_add_region.html deleted file mode 100644 index 96911af975f..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_add_region.html +++ /dev/null @@ -1,31 +0,0 @@ -
- - - - -ALTER - - -DATABASE - - -database_name - -ADD - - -REGION - - -IF - - -NOT - - -EXISTS - - -region_name - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_drop_region.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_drop_region.html deleted file mode 100644 index 32390036533..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_drop_region.html +++ /dev/null @@ -1,28 +0,0 @@ -
- - - - -ALTER - - -DATABASE - - -database_name - -DROP - - -REGION - - -IF - - -EXISTS - - -region_name - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_primary_region.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_primary_region.html deleted file mode 100644 index f0dee9cd2cf..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_primary_region.html +++ /dev/null @@ -1,28 +0,0 @@ -
- - - - -ALTER - - -DATABASE - - -database_name - -SET - - -PRIMARY - - -REGION - - -= - - -region_name - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_survival_goal.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_survival_goal.html deleted file mode 100644 index 046ac7f0e84..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_database_survival_goal.html +++ /dev/null @@ -1,26 +0,0 @@ -
- - - - -ALTER - - -DATABASE - - -database_name - -SURVIVE - - -ZONE - - -REGION - - -FAILURE - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_index_partition_by.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_index_partition_by.html deleted file mode 100644 index 55136f4ad23..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_index_partition_by.html +++ /dev/null @@ -1,72 +0,0 @@ -
- - - - -ALTER - - -INDEX - - -IF - - -EXISTS - - -table_name - -@ - - -index_name - -PARTITION - - -BY - - -LIST - - -( - - -name_list - -) - - -( - - -list_partitions - -RANGE - - -( - - -name_list - -) - - -( - - -range_partitions - -) - - -NOTHING - - -, - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_primary_key.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_primary_key.html deleted file mode 100644 index 5996d6b8681..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_primary_key.html +++ /dev/null @@ -1,71 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -ALTER - - -PRIMARY - - -KEY - - -USING - - -COLUMNS - - -( - - - -index_params - - - -) - - -USING - - -HASH - - -WITH - - -BUCKET_COUNT - - -= - - -n_buckets - - - -opt_interleave - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_role.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_role.html deleted file mode 100644 index 6291ec7cca0..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_role.html +++ /dev/null @@ -1,29 +0,0 @@ -
- - - - -ALTER - - -ROLE - - -USER - - -IF - - -EXISTS - - -name - - -opt_with - - -role_options - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_schema.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_schema.html deleted file mode 100644 index 90c8bd0e951..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_schema.html +++ /dev/null @@ -1,35 +0,0 @@ -
- - - - -ALTER - - -SCHEMA - - -name - -. - - -name - -RENAME - - -TO - - -schema_name - -OWNER - - -TO - - -role_spec - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_sequence.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_sequence.html deleted file mode 100644 index c512e684adc..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_sequence.html +++ /dev/null @@ -1,93 +0,0 @@ -
- - - - -ALTER - - -SEQUENCE - - -IF - - -EXISTS - - -sequence_name - -RENAME - - -TO - - -sequence_name - -NO - - -CYCLE - - -MINVALUE - - -MAXVALUE - - -OWNED - - -BY - - -NONE - - -column_name - -CACHE - - -MINVALUE - - -MAXVALUE - - -INCREMENT - - -BY - - -START - - -WITH - - -integer - -VIRTUAL - - -SET - - -SCHEMA - - -schema_name - -OWNER - - -TO - - -role_spec - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_table_locality.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_table_locality.html deleted file mode 100644 index e3f809dcac3..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_table_locality.html +++ /dev/null @@ -1,28 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - -SET - - -LOCALITY - - -locality - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_table_partition_by.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_table_partition_by.html deleted file mode 100644 index 073c8794394..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_table_partition_by.html +++ /dev/null @@ -1,81 +0,0 @@ -
- - - - - - ALTER - - - TABLE - - - IF - - - EXISTS - - - - table_name - - - - PARTITION - - - BY - - - LIST - - - ( - - - - name_list - - - - ) - - - ( - - - - list_partitions - - - - RANGE - - - ( - - - - name_list - - - - ) - - - ( - - - - range_partitions - - - - ) - - - NOTHING - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_type.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_type.html deleted file mode 100644 index f9e696ba0cf..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_type.html +++ /dev/null @@ -1,79 +0,0 @@ -
- - - - -ALTER - - -TYPE - - -type_name - -ADD - - -VALUE - - -IF - - -NOT - - -EXISTS - - -value - -BEFORE - - -AFTER - - -DROP - - -VALUE - - -value - -RENAME - - -VALUE - - -value - -TO - - -value - -TO - - -name - -SET - - -SCHEMA - - -schema_name - -OWNER - - -TO - - -role_spec - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_user_password.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_user_password.html deleted file mode 100644 index 0e014933d1b..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_user_password.html +++ /dev/null @@ -1,31 +0,0 @@ -
- - - - -ALTER - - -USER - - -IF - - -EXISTS - - -name - - -WITH - - -PASSWORD - - -password - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_view.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_view.html deleted file mode 100644 index 0b11538369e..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_view.html +++ /dev/null @@ -1,47 +0,0 @@ -
- - - - -ALTER - - -MATERIALIZED - - -VIEW - - -IF - - -EXISTS - - -view_name - -RENAME - - -TO - - -view_name - -SET - - -SCHEMA - - -schema_name - -OWNER - - -TO - - -role_spec - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_database.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_database.html deleted file mode 100644 index 11eeb471abb..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_database.html +++ /dev/null @@ -1,61 +0,0 @@ -
- - - - -ALTER - - -DATABASE - - -database_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_index.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_index.html deleted file mode 100644 index ef64e2314d3..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_index.html +++ /dev/null @@ -1,66 +0,0 @@ -
- - - - -ALTER - - -INDEX - - -table_name - -@ - - -index_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_partition.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_partition.html deleted file mode 100644 index 69ee2d0eb57..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_partition.html +++ /dev/null @@ -1,84 +0,0 @@ -
- - - - -ALTER - - -PARTITION - - -partition_name - -OF - - -TABLE - - -table_name - -INDEX - - -table_name - -@ - - -index_name - -* - - -index_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_range.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_range.html deleted file mode 100644 index 890dcc7240c..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_range.html +++ /dev/null @@ -1,61 +0,0 @@ -
- - - - -ALTER - - -RANGE - - -range_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_table.html b/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_table.html deleted file mode 100644 index 11c233ebc84..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/alter_zone_table.html +++ /dev/null @@ -1,61 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -table_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/analyze.html b/src/current/_includes/v21.1/sql/generated/diagrams/analyze.html deleted file mode 100644 index 98b63f7a0bd..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/analyze.html +++ /dev/null @@ -1,18 +0,0 @@ -
- - - - -ANALYZE - - -ANALYSE - - - -analyze_target - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/backup.html b/src/current/_includes/v21.1/sql/generated/diagrams/backup.html deleted file mode 100644 index 2d5e006404e..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/backup.html +++ /dev/null @@ -1,85 +0,0 @@ -
- - - - -BACKUP - - -TABLE - - -table_pattern - -, - - -DATABASE - - -database_name - -, - - -INTO - - -subdirectory - -LATEST - - -IN - - -destination - -( - - -partitioned_backup_location - -, - - -) - - -AS - - -OF - - -SYSTEM - - -TIME - - -timestamp - -WITH - - -backup_options - -, - - -OPTIONS - - -( - - -backup_options - -, - - -) - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/begin_transaction.html b/src/current/_includes/v21.1/sql/generated/diagrams/begin_transaction.html deleted file mode 100644 index f5e88bcce00..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/begin_transaction.html +++ /dev/null @@ -1,60 +0,0 @@ -
- - - - -BEGIN - - -TRANSACTION - - -PRIORITY - - -LOW - - -NORMAL - - -HIGH - - -READ - - -ONLY - - -WRITE - - -AS - - -OF - - -SYSTEM - - -TIME - - - -a_expr - - - -NOT - - -DEFERRABLE - - -, - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/cancel_job.html b/src/current/_includes/v21.1/sql/generated/diagrams/cancel_job.html deleted file mode 100644 index 5f740158386..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/cancel_job.html +++ /dev/null @@ -1,22 +0,0 @@ -
- - - - -CANCEL - - -JOB - - -job_id - -JOBS - - -select_stmt - - -for_schedules_clause - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/cancel_query.html b/src/current/_includes/v21.1/sql/generated/diagrams/cancel_query.html deleted file mode 100644 index 612db072eb4..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/cancel_query.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - -CANCEL - - -QUERY - - -IF - - -EXISTS - - -query_id - - -QUERIES - - -IF - - -EXISTS - - - -select_stmt - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/cancel_session.html b/src/current/_includes/v21.1/sql/generated/diagrams/cancel_session.html deleted file mode 100644 index 857f87adb18..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/cancel_session.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - -CANCEL - - -SESSION - - -IF - - -EXISTS - - -session_id - - -SESSIONS - - -IF - - -EXISTS - - - -select_stmt - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/check_column_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/check_column_level.html deleted file mode 100644 index 59eec3e3c15..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/check_column_level.html +++ /dev/null @@ -1,70 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_name - - - - - column_type - - - - CHECK - - - ( - - - - check_expr - - - - ) - - - - column_constraints - - - - , - - - - column_def - - - - - table_constraints - - - - ) - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/check_table_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/check_table_level.html deleted file mode 100644 index 6066d637220..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/check_table_level.html +++ /dev/null @@ -1,60 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_def - - - - , - - - CONSTRAINT - - - - name - - - - CHECK - - - ( - - - - check_expr - - - - ) - - - - table_constraints - - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/col_qualification.html b/src/current/_includes/v21.1/sql/generated/diagrams/col_qualification.html deleted file mode 100644 index 144a044e8b9..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/col_qualification.html +++ /dev/null @@ -1,134 +0,0 @@ -
- - - - -CONSTRAINT - - -constraint_name - -NOT - - -NULL - - -VISIBLE - - -NULL - - -UNIQUE - - -PRIMARY - - -KEY - - -USING - - -HASH - - -WITH - - -BUCKET_COUNT - - -= - - -n_buckets - -CHECK - - -( - - -a_expr - -) - - -DEFAULT - - -b_expr - -REFERENCES - - -table_name - - -opt_name_parens - - -key_match - - -reference_actions - -GENERATED_ALWAYS - - -ALWAYS - - -AS - - -( - - -a_expr - -) - - -STORED - - -VIRTUAL - - -COLLATE - - -collation_name - -FAMILY - - -family_name - -CREATE - - -FAMILY - - -family_name - -IF - - -NOT - - -EXISTS - - -FAMILY - - -family_name - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/column_def.html b/src/current/_includes/v21.1/sql/generated/diagrams/column_def.html deleted file mode 100644 index 284e8dc5838..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/column_def.html +++ /dev/null @@ -1,23 +0,0 @@ - \ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/comment.html b/src/current/_includes/v21.1/sql/generated/diagrams/comment.html deleted file mode 100644 index d9933514585..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/comment.html +++ /dev/null @@ -1,53 +0,0 @@ -
- - - - -COMMENT - - -ON - - -DATABASE - - - -database_name - - - -TABLE - - - -table_name - - - -COLUMN - - - -column_name - - - -INDEX - - - -table_index_name - - - -IS - - - -comment_text - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/commit_transaction.html b/src/current/_includes/v21.1/sql/generated/diagrams/commit_transaction.html deleted file mode 100644 index 12914f3e1cb..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/commit_transaction.html +++ /dev/null @@ -1,17 +0,0 @@ -
- - - - - - COMMIT - - - END - - - TRANSACTION - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/copy_from.html b/src/current/_includes/v21.1/sql/generated/diagrams/copy_from.html deleted file mode 100644 index 99ee6ba0694..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/copy_from.html +++ /dev/null @@ -1,25 +0,0 @@ -
- - - - -COPY - - -table_name - - -opt_column_list - -FROM - - -STDIN - - -WITH - - -copy_options - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_as_col_qual_list.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_as_col_qual_list.html deleted file mode 100644 index 791829c3ba3..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_as_col_qual_list.html +++ /dev/null @@ -1,17 +0,0 @@ -
- - - - -PRIMARY - - -KEY - - -FAMILY - - -family_name - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_as_constraint_def.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_as_constraint_def.html deleted file mode 100644 index 3699d2b1833..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_as_constraint_def.html +++ /dev/null @@ -1,20 +0,0 @@ -
- - - - -PRIMARY - - -KEY - - -( - - -create_as_params - -) - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_changefeed.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_changefeed.html deleted file mode 100644 index 82b77b8360e..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_changefeed.html +++ /dev/null @@ -1,46 +0,0 @@ -
- - - - -CREATE - - -CHANGEFEED - - -FOR - - -TABLE - - -table_name - - -, - - -INTO - - -sink - - -WITH - - -option - - -= - - -value - - -, - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_database.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_database.html deleted file mode 100644 index d9e49737091..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_database.html +++ /dev/null @@ -1,87 +0,0 @@ -
- - - - -CREATE - - -DATABASE - - -IF - - -NOT - - -EXISTS - - -database_name - -WITH - - -opt_template_clause - -ENCODING - - -= - - -encoding - - -opt_lc_collate_clause - - -opt_lc_ctype_clause - -CONNECTION - - -LIMIT - - -= - - -limit - -PRIMARY - - -REGION - - -= - - -region_name - -REGIONS - - -= - - -region_name_list - -SURVIVE - - -= - - -REGION - - -ZONE - - -FAILURE - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_index.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_index.html deleted file mode 100644 index f2c221b81bf..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_index.html +++ /dev/null @@ -1,159 +0,0 @@ -
- - - - -CREATE - - -UNIQUE - - -INDEX - - -CONCURRENTLY - - - -opt_index_name - - - -IF - - -NOT - - -EXISTS - - - -index_name - - - -ON - - - -table_name - - - -USING - - - -name - - - -( - - - -func_expr_windowless - - - -( - - - -a_expr - - - -) - - - -name - - - - -index_elem_options - - - -, - - -) - - -USING - - -HASH - - -WITH - - -BUCKET_COUNT - - -= - - -n_buckets - - -COVERING - - -STORING - - -INCLUDE - - -( - - - -name_list - - - -) - - - -opt_interleave - - - - -opt_partition_by - - - -WITH - - -( - - - -storage_parameter - - - -, - - -) - - - -opt_where_clause - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_inverted_index.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_inverted_index.html deleted file mode 100644 index 23918d53d67..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_inverted_index.html +++ /dev/null @@ -1,214 +0,0 @@ -
- - - - -CREATE - - -UNIQUE - - -INDEX - - -CONCURRENTLY - - - -opt_index_name - - - -IF - - -NOT - - -EXISTS - - - -index_name - - - -ON - - - -table_name - - - - -opt_index_access_method - - - -( - - - -func_expr_windowless - - - -( - - - -a_expr - - - -) - - - -name - - - - -index_elem_options - - - -, - - -) - - - -opt_hash_sharded - - - -INVERTED - - -INDEX - - -CONCURRENTLY - - - -opt_index_name - - - -IF - - -NOT - - -EXISTS - - - -index_name - - - -ON - - - -table_name - - - -( - - - -func_expr_windowless - - - -( - - - -a_expr - - - -) - - - -name - - - - -index_elem_options - - - -, - - -) - - -COVERING - - -STORING - - -INCLUDE - - -( - - - -name_list - - - -) - - - -opt_interleave - - - - -opt_partition_by - - - -WITH - - -( - - - -storage_parameter - - - -, - - -) - - - -opt_where_clause - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_role.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_role.html deleted file mode 100644 index 3c9c43dedf3..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_role.html +++ /dev/null @@ -1,28 +0,0 @@ -
- - - - - - CREATE - - - ROLE - - - IF - - - NOT - - - EXISTS - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_schema.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_schema.html deleted file mode 100644 index b87479154ad..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_schema.html +++ /dev/null @@ -1,41 +0,0 @@ -
- - - - -CREATE - - -SCHEMA - - -IF - - -NOT - - -EXISTS - - -name - -. - - -name - - -name - -. - - -name - -AUTHORIZATION - - -role_spec - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_sequence.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_sequence.html deleted file mode 100644 index d68c0e6d755..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_sequence.html +++ /dev/null @@ -1,74 +0,0 @@ -
- - - - -CREATE - - -opt_temp - -SEQUENCE - - -IF - - -NOT - - -EXISTS - - -sequence_name - -NO - - -CYCLE - - -MINVALUE - - -MAXVALUE - - -OWNED - - -BY - - -NONE - - -column_name - -CACHE - - -MINVALUE - - -MAXVALUE - - -INCREMENT - - -BY - - -START - - -WITH - - -integer - -VIRTUAL - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_stats.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_stats.html deleted file mode 100644 index c02186ee5cb..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_stats.html +++ /dev/null @@ -1,25 +0,0 @@ -
- - - - -CREATE - - -STATISTICS - - -statistics_name - - -opt_stats_columns - -FROM - - -create_stats_target - - -opt_as_of_clause - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_table.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_table.html deleted file mode 100644 index d433c344440..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_table.html +++ /dev/null @@ -1,78 +0,0 @@ - diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_table_as.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_table_as.html deleted file mode 100644 index f7ba5eb5e72..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_table_as.html +++ /dev/null @@ -1,96 +0,0 @@ - diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_type.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_type.html deleted file mode 100644 index 173d1b71097..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_type.html +++ /dev/null @@ -1,37 +0,0 @@ -
- - - - -CREATE - - -TYPE - - -IF - - -NOT - - -EXISTS - - -type_name - -AS - - -ENUM - - -( - - -opt_enum_val_list - -) - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_user.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_user.html deleted file mode 100644 index 1dc78bb289a..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_user.html +++ /dev/null @@ -1,39 +0,0 @@ -
- - - - - - CREATE - - - USER - - - IF - - - NOT - - - EXISTS - - - - name - - - - WITH - - - PASSWORD - - - - password - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/create_view.html b/src/current/_includes/v21.1/sql/generated/diagrams/create_view.html deleted file mode 100644 index f211d2c5c92..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/create_view.html +++ /dev/null @@ -1,68 +0,0 @@ -
- - - - -CREATE - - - -opt_temp - - - -MATERIALIZED - - -VIEW - - -IF - - -NOT - - -EXISTS - - -OR - - -REPLACE - - - -opt_temp - - - -VIEW - - - -view_name - - - -( - - - -name_list - - - -) - - -AS - - - -select_stmt - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/default_value_column_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/default_value_column_level.html deleted file mode 100644 index 0ba9afca9c4..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/default_value_column_level.html +++ /dev/null @@ -1,64 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_name - - - - - column_type - - - - DEFAULT - - - - default_value - - - - - column_constraints - - - - , - - - - column_def - - - - - table_constraints - - - - ) - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/delete.html b/src/current/_includes/v21.1/sql/generated/diagrams/delete.html deleted file mode 100644 index bd3969966e2..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/delete.html +++ /dev/null @@ -1,80 +0,0 @@ -
- - - - -WITH - - -RECURSIVE - - - -common_table_expr - - - -, - - -DELETE - - -FROM - - -ONLY - - - -table_name - - - - -opt_index_flags - - - -* - - -AS - - - -table_alias_name - - - -WHERE - - - -a_expr - - - - -sort_clause - - - - -limit_clause - - - -RETURNING - - - -target_list - - - -NOTHING - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_column.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_column.html deleted file mode 100644 index 384f5219d9d..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/drop_column.html +++ /dev/null @@ -1,43 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -DROP - - -COLUMN - - -IF - - -EXISTS - - -name - - -CASCADE - - -RESTRICT - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_constraint.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_constraint.html deleted file mode 100644 index 77cea230ccd..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/drop_constraint.html +++ /dev/null @@ -1,45 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -DROP - - -CONSTRAINT - - -IF - - -EXISTS - - - -name - - - -CASCADE - - -RESTRICT - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_database.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_database.html deleted file mode 100644 index 038eb0befc1..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/drop_database.html +++ /dev/null @@ -1,31 +0,0 @@ -
- - - - - - DROP - - - DATABASE - - - IF - - - EXISTS - - - - name - - - - CASCADE - - - RESTRICT - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_index.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_index.html deleted file mode 100644 index 3e50bf8d4b1..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/drop_index.html +++ /dev/null @@ -1,41 +0,0 @@ -
- - - - -DROP - - -INDEX - - -CONCURRENTLY - - -IF - - -EXISTS - - - -table_name - - - -@ - - - -index_name - - - -CASCADE - - -RESTRICT - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_role.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_role.html deleted file mode 100644 index 0037ebf56ce..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/drop_role.html +++ /dev/null @@ -1,25 +0,0 @@ -
- - - - - - DROP - - - ROLE - - - IF - - - EXISTS - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_schema.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_schema.html deleted file mode 100644 index 6a4a912d0a6..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/drop_schema.html +++ /dev/null @@ -1,26 +0,0 @@ -
- - - - -DROP - - -SCHEMA - - -IF - - -EXISTS - - -schema_name_list - -CASCADE - - -RESTRICT - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_sequence.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_sequence.html deleted file mode 100644 index 6507f7dec30..00000000000
[deleted railroad diagram: DROP SEQUENCE [IF EXISTS] sequence_name [, ...] [CASCADE | RESTRICT]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_table.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_table.html deleted file mode 100644 index 18ad4fdd502..00000000000
[deleted railroad diagram: DROP TABLE [IF EXISTS] table_name [, ...] [CASCADE | RESTRICT]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_type.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_type.html deleted file mode 100644 index b893a73cbe5..00000000000
[deleted railroad diagram: DROP TYPE [IF EXISTS] type_name_list]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_user.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_user.html deleted file mode 100644 index 57c3db991b9..00000000000
[deleted railroad diagram: DROP USER [IF EXISTS] user_name [, ...]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/drop_view.html b/src/current/_includes/v21.1/sql/generated/diagrams/drop_view.html deleted file mode 100644 index 04a3322891d..00000000000
[deleted railroad diagram: DROP [MATERIALIZED] VIEW [IF EXISTS] table_name [, ...] [CASCADE | RESTRICT]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/experimental_audit.html b/src/current/_includes/v21.1/sql/generated/diagrams/experimental_audit.html deleted file mode 100644 index 46cc527074a..00000000000
[deleted railroad diagram: ALTER TABLE [IF EXISTS] table_name EXPERIMENTAL_AUDIT SET {READ WRITE | OFF}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/explain.html b/src/current/_includes/v21.1/sql/generated/diagrams/explain.html deleted file mode 100644 index b50c6851f8d..00000000000
[deleted railroad diagram: EXPLAIN [( {VERBOSE | TYPES | OPT | DISTSQL | VEC} [, ...] )] preparable_stmt]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/explain_analyze.html b/src/current/_includes/v21.1/sql/generated/diagrams/explain_analyze.html deleted file mode 100644 index dbde7920ff2..00000000000
[deleted railroad diagram: EXPLAIN {ANALYZE | ANALYSE} [( {PLAN | DISTSQL | DEBUG} [, ...] )] preparable_stmt]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/export.html b/src/current/_includes/v21.1/sql/generated/diagrams/export.html deleted file mode 100644 index 05ad8e2a864..00000000000
[deleted railroad diagram: EXPORT INTO CSV file_location [opt_with_options] FROM {select_stmt | TABLE table_name}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/family_def.html b/src/current/_includes/v21.1/sql/generated/diagrams/family_def.html deleted file mode 100644 index 1dda01d9e79..00000000000
[deleted railroad diagram: FAMILY [opt_family_name] ( name [, ...] )]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/for_locking.html b/src/current/_includes/v21.1/sql/generated/diagrams/for_locking.html deleted file mode 100644 index e8e2b503b5c..00000000000
[deleted railroad diagram: FOR {UPDATE | NO KEY UPDATE | SHARE | KEY SHARE} [OF table_name [, ...]] [SKIP LOCKED | NOWAIT]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/foreign_key_column_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/foreign_key_column_level.html deleted file mode 100644 index a963e586425..00000000000
[deleted railroad diagram: CREATE TABLE table_name ( column_name column_type REFERENCES parent_table [( ref_column_name )] [column_constraints] [, column_def ...] [, table_constraints] )]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/foreign_key_table_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/foreign_key_table_level.html deleted file mode 100644 index 2eb3498af46..00000000000
[deleted railroad diagram: CREATE TABLE table_name ( column_def [, ...], CONSTRAINT name FOREIGN KEY ( fk_column_name [, ...] ) REFERENCES parent_table [( ref_column_name [, ...] )] [, table_constraints] )]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/grant.html b/src/current/_includes/v21.1/sql/generated/diagrams/grant.html deleted file mode 100644 index 0883d85cdf1..00000000000
[deleted railroad diagram: GRANT {ALL [PRIVILEGES] | privilege_list} ON targets TO name_list | GRANT privilege_list TO name_list [WITH ADMIN OPTION]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/import.html b/src/current/_includes/v21.1/sql/generated/diagrams/import.html deleted file mode 100644 index 4528fe2a3e2..00000000000
[deleted railroad diagram: IMPORT TABLE table_name {CREATE USING create_table_file | ( table_elem_list )} CSV DATA ( file_to_import [, ...] ) [WITH kv_option [, ...]]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/import_csv.html b/src/current/_includes/v21.1/sql/generated/diagrams/import_csv.html deleted file mode 100644 index 20d4607a90b..00000000000
[deleted railroad diagram: IMPORT TABLE table_name {CREATE USING file_location | ( table_elem_list )} {CSV | AVRO} DATA ( file_location [, ...] ) [WITH kv_option_list]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/import_dump.html b/src/current/_includes/v21.1/sql/generated/diagrams/import_dump.html deleted file mode 100644 index 1c94207f03e..00000000000
[deleted railroad diagram: IMPORT [TABLE table_name FROM] import_format file_location [WITH kv_option_list]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/import_into.html b/src/current/_includes/v21.1/sql/generated/diagrams/import_into.html deleted file mode 100644 index eb5f9622179..00000000000
[deleted railroad diagram: IMPORT INTO table_name [( column_name [, ...] )] {CSV | AVRO | DELIMITED} DATA ( file_location [, ...] ) [WITH option [= value] [, ...]]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/index_def.html b/src/current/_includes/v21.1/sql/generated/diagrams/index_def.html deleted file mode 100644 index ccb29849b31..00000000000
[deleted railroad diagram: [UNIQUE] INDEX [opt_index_name] ( index_elem [, ...] ) [USING HASH WITH BUCKET_COUNT = n_buckets] [{COVERING | STORING | INCLUDE} ( name_list )] [opt_interleave] [opt_partition_by] | INVERTED INDEX [name] ( index_elem [, ...] ) [opt_where_clause]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/insert.html b/src/current/_includes/v21.1/sql/generated/diagrams/insert.html deleted file mode 100644 index c61450e7913..00000000000
[deleted railroad diagram: [WITH [RECURSIVE] common_table_expr [, ...]] INSERT INTO table_name [AS table_alias_name] [( column_name [, ...] )] {select_stmt | DEFAULT VALUES} [on_conflict] [RETURNING {target_elem [, ...] | NOTHING}]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/joined_table.html b/src/current/_includes/v21.1/sql/generated/diagrams/joined_table.html deleted file mode 100644 index 943fce42100..00000000000
[deleted railroad diagram: ( joined_table ) | table_ref CROSS [opt_join_hint] JOIN table_ref | table_ref [NATURAL] [{FULL | LEFT | RIGHT} [OUTER] | INNER] [opt_join_hint] JOIN table_ref [USING ( name [, ...] ) | ON a_expr]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/like_table_option_list.html b/src/current/_includes/v21.1/sql/generated/diagrams/like_table_option_list.html deleted file mode 100644 index 040145768cf..00000000000
[deleted railroad diagram: {INCLUDING | EXCLUDING} {CONSTRAINTS | DEFAULTS | GENERATED | INDEXES | ALL}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/not_null_column_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/not_null_column_level.html deleted file mode 100644 index 52e17e9d57d..00000000000
[deleted railroad diagram: CREATE TABLE table_name ( column_name column_type NOT NULL [column_constraints] [, column_def ...] [, table_constraints] )]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/on_conflict.html b/src/current/_includes/v21.1/sql/generated/diagrams/on_conflict.html deleted file mode 100644 index 1edb951f1b9..00000000000
[deleted railroad diagram: ON CONFLICT [( name [, ...] )] DO {UPDATE SET {column_name = a_expr | ( column_name [, ...] ) = ( {select_stmt | a_expr [, ...]} )} [, ...] | NOTHING}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/opt_frame_clause.html b/src/current/_includes/v21.1/sql/generated/diagrams/opt_frame_clause.html deleted file mode 100644 index 17ebda5d4ef..00000000000
[deleted railroad diagram: {RANGE | ROWS | GROUPS} {frame_bound | BETWEEN frame_bound AND frame_bound} [opt_frame_exclusion]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/opt_locality.html b/src/current/_includes/v21.1/sql/generated/diagrams/opt_locality.html deleted file mode 100644 index e12091ebcf8..00000000000
[deleted railroad diagram: LOCALITY {GLOBAL | REGIONAL [BY {TABLE | ROW [AS column_name]}] [IN {region_name | PRIMARY REGION}]}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/opt_persistence_temp_table.html b/src/current/_includes/v21.1/sql/generated/diagrams/opt_persistence_temp_table.html deleted file mode 100644 index 3e3f2b0036b..00000000000
[deleted railroad diagram: [LOCAL | GLOBAL] {TEMPORARY | TEMP} | UNLOGGED]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/opt_with_storage_parameter_list.html b/src/current/_includes/v21.1/sql/generated/diagrams/opt_with_storage_parameter_list.html deleted file mode 100644 index 6c919de4bdc..00000000000
[deleted railroad diagram: WITH ( storage_parameter [, ...] )]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/pause_job.html b/src/current/_includes/v21.1/sql/generated/diagrams/pause_job.html deleted file mode 100644 index 26b8fb4dfd1..00000000000
[deleted railroad diagram: PAUSE {JOB job_id | JOBS {select_stmt | for_schedules_clause}}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/primary_key_column_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/primary_key_column_level.html deleted file mode 100644 index f938b641654..00000000000
[deleted railroad diagram: CREATE TABLE table_name ( column_name column_type PRIMARY KEY [column_constraints] [, column_def ...] [, table_constraints] )]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/primary_key_table_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/primary_key_table_level.html deleted file mode 100644 index db8ece49c39..00000000000
[deleted railroad diagram: CREATE TABLE table_name ( column_def [, ...], CONSTRAINT name PRIMARY KEY ( column_name [, ...] ) [, table_constraints] )]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/reassign_owned_by.html b/src/current/_includes/v21.1/sql/generated/diagrams/reassign_owned_by.html deleted file mode 100644 index bdf17572e2e..00000000000
[deleted railroad diagram: REASSIGN OWNED BY role_spec_list TO role_spec]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/release_savepoint.html b/src/current/_includes/v21.1/sql/generated/diagrams/release_savepoint.html deleted file mode 100644 index 194ce6573ca..00000000000
[deleted railroad diagram: RELEASE SAVEPOINT name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/rename_column.html b/src/current/_includes/v21.1/sql/generated/diagrams/rename_column.html deleted file mode 100644 index 2d275bc9de7..00000000000
[deleted railroad diagram: ALTER TABLE [IF EXISTS] table_name RENAME COLUMN current_name TO name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/rename_constraint.html b/src/current/_includes/v21.1/sql/generated/diagrams/rename_constraint.html deleted file mode 100644 index 36b2c9dfe1f..00000000000
[deleted railroad diagram: ALTER TABLE [IF EXISTS] table_name RENAME CONSTRAINT current_name TO name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/rename_database.html b/src/current/_includes/v21.1/sql/generated/diagrams/rename_database.html deleted file mode 100644 index ce9ddd3ddba..00000000000
[deleted railroad diagram: ALTER DATABASE name RENAME TO name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/rename_index.html b/src/current/_includes/v21.1/sql/generated/diagrams/rename_index.html deleted file mode 100644 index 82ed2e90255..00000000000
[deleted railroad diagram: ALTER INDEX [IF EXISTS] table_name@index_name RENAME TO index_name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/rename_table.html b/src/current/_includes/v21.1/sql/generated/diagrams/rename_table.html deleted file mode 100644 index 316c56482eb..00000000000
[deleted railroad diagram: ALTER TABLE [IF EXISTS] current_name RENAME TO new_name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/reset_csetting.html b/src/current/_includes/v21.1/sql/generated/diagrams/reset_csetting.html deleted file mode 100644 index 49e120ffc69..00000000000
[deleted railroad diagram: RESET CLUSTER SETTING var_name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/reset_session.html b/src/current/_includes/v21.1/sql/generated/diagrams/reset_session.html deleted file mode 100644 index 0a47ec52d49..00000000000
[deleted railroad diagram: RESET [SESSION] session_var]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/restore.html b/src/current/_includes/v21.1/sql/generated/diagrams/restore.html deleted file mode 100644 index ad14ad7c24a..00000000000
[deleted railroad diagram: RESTORE [{TABLE table_pattern [, ...] | DATABASE database_name [, ...]}] FROM [subdirectory IN] {destination | ( partitioned_backup_location [, ...] )} [AS OF SYSTEM TIME timestamp] [WITH {restore_options_list | OPTIONS ( restore_options_list )}]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/resume_job.html b/src/current/_includes/v21.1/sql/generated/diagrams/resume_job.html deleted file mode 100644 index 30dd723b8d6..00000000000
[deleted railroad diagram: RESUME {JOB job_id | JOBS {select_stmt | for_schedules_clause}}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/revoke.html b/src/current/_includes/v21.1/sql/generated/diagrams/revoke.html deleted file mode 100644 index 7358a4a041a..00000000000
[deleted railroad diagram: REVOKE {ALL [PRIVILEGES] | privilege_list} ON targets FROM name_list | REVOKE [ADMIN OPTION FOR] privilege_list FROM name_list]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/rollback_transaction.html b/src/current/_includes/v21.1/sql/generated/diagrams/rollback_transaction.html deleted file mode 100644 index e981a160929..00000000000
[deleted railroad diagram: ROLLBACK [TO SAVEPOINT savepoint_name]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/savepoint.html b/src/current/_includes/v21.1/sql/generated/diagrams/savepoint.html deleted file mode 100644 index 9b7dc70608b..00000000000
[deleted railroad diagram: SAVEPOINT name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/select.html b/src/current/_includes/v21.1/sql/generated/diagrams/select.html deleted file mode 100644 index 9fc522fe5c9..00000000000
[deleted railroad diagram: content not recoverable]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/select_clause.html b/src/current/_includes/v21.1/sql/generated/diagrams/select_clause.html deleted file mode 100644 index 535480ba3ba..00000000000
[deleted railroad diagram: content not recoverable]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/set_cluster_setting.html b/src/current/_includes/v21.1/sql/generated/diagrams/set_cluster_setting.html deleted file mode 100644 index b6554c7be52..00000000000
[deleted railroad diagram: SET CLUSTER SETTING var_name {= | TO} {var_value | DEFAULT}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/set_operation.html b/src/current/_includes/v21.1/sql/generated/diagrams/set_operation.html deleted file mode 100644 index aa0e63023dc..00000000000
[deleted railroad diagram: select_clause {UNION | INTERSECT | EXCEPT} [ALL | DISTINCT] select_clause]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/set_transaction.html b/src/current/_includes/v21.1/sql/generated/diagrams/set_transaction.html deleted file mode 100644 index 9502afaf6d2..00000000000
[deleted railroad diagram: SET TRANSACTION {PRIORITY {LOW | NORMAL | HIGH} | READ {ONLY | WRITE} | AS OF SYSTEM TIME a_expr | [NOT] DEFERRABLE} [, ...]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/set_var.html b/src/current/_includes/v21.1/sql/generated/diagrams/set_var.html deleted file mode 100644 index 96bb04e7cf6..00000000000
[deleted railroad diagram: SET [SESSION] var_name {TO | =} var_value [, ...]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_backup.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_backup.html deleted file mode 100644 index a4f115d59ca..00000000000
[deleted railroad diagram: SHOW BACKUPS IN location | SHOW BACKUP [SCHEMAS] [subdirectory IN] location [WITH kv_option_list | OPTIONS ( kv_option_list )]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_cluster_setting.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_cluster_setting.html deleted file mode 100644 index 7aeef4c1ad3..00000000000
[deleted railroad diagram: SHOW CLUSTER SETTING {var_name | ALL} | SHOW [ALL | PUBLIC] CLUSTER SETTINGS]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_columns.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_columns.html deleted file mode 100644 index 9da18b6612a..00000000000
[deleted railroad diagram: SHOW COLUMNS FROM table_name [WITH COMMENT]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_constraints.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_constraints.html deleted file mode 100644 index 9c520ae9bc6..00000000000
[deleted railroad diagram: SHOW {CONSTRAINT | CONSTRAINTS} FROM table_name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_create.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_create.html deleted file mode 100644 index aeab19eeb0e..00000000000
[deleted railroad diagram: SHOW CREATE {object_name | ALL TABLES}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_databases.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_databases.html deleted file mode 100644 index 0270318301c..00000000000
[deleted railroad diagram: SHOW DATABASES [WITH COMMENT]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_enums.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_enums.html deleted file mode 100644 index e1b93a00704..00000000000
[deleted railroad diagram: SHOW ENUMS [FROM name [. name]]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_full_scans.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_full_scans.html deleted file mode 100644 index 6892f893296..00000000000
[deleted railroad diagram: SHOW FULL TABLE SCANS]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_indexes.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_indexes.html deleted file mode 100644 index f640cb0b1ee..00000000000
[deleted railroad diagram: SHOW {INDEX | INDEXES | KEYS} FROM {table_name | DATABASE database_name} [WITH COMMENT]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_jobs.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_jobs.html deleted file mode 100644 index 287b8fb877b..00000000000
[deleted railroad diagram: SHOW [AUTOMATIC] JOBS | SHOW JOBS [WHEN COMPLETE] {select_stmt | for_schedules_clause} | SHOW JOB [WHEN COMPLETE] job_id]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_locality.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_locality.html deleted file mode 100644 index 2ca711fbdad..00000000000
[deleted railroad diagram: SHOW LOCALITY]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_partitions.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_partitions.html deleted file mode 100644 index aeb6156fbd3..00000000000
[deleted railroad diagram: SHOW PARTITIONS FROM {TABLE table_name | DATABASE database_name | INDEX table_index_name}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_ranges.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_ranges.html deleted file mode 100644 index bd1553dfe05..00000000000
[deleted railroad diagram: SHOW RANGES FROM {TABLE table_name | INDEX table_index_name | DATABASE database_name}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_regions.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_regions.html deleted file mode 100644 index 388d9609890..00000000000
[deleted railroad diagram: SHOW REGIONS [FROM {CLUSTER | DATABASE database_name | ALL DATABASES}]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_roles.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_roles.html deleted file mode 100644 index fd508395e0b..00000000000
[deleted railroad diagram: SHOW ROLES]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_savepoint_status.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_savepoint_status.html deleted file mode 100644 index 7fc1c8fa52d..00000000000
[deleted railroad diagram: SHOW SAVEPOINT STATUS]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_schemas.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_schemas.html deleted file mode 100644 index efa07764533..00000000000
[deleted railroad diagram: SHOW SCHEMAS [FROM name]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_sequences.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_sequences.html deleted file mode 100644 index 4f3fe915c12..00000000000
[deleted railroad diagram: SHOW SEQUENCES [FROM name]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_sessions.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_sessions.html deleted file mode 100644 index 3b2aa5b16ee..00000000000
[deleted railroad diagram: SHOW [CLUSTER | LOCAL] SESSIONS]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_statements.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_statements.html deleted file mode 100644 index a4c0cee9141..00000000000
[deleted railroad diagram: SHOW [ALL] [CLUSTER | LOCAL] {STATEMENTS | QUERIES}]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_stats.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_stats.html deleted file mode 100644 index 0e350b93c0f..00000000000
[deleted railroad diagram: SHOW STATISTICS FOR TABLE table_name]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_tables.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_tables.html deleted file mode 100644 index 84b221efaf2..00000000000
[deleted railroad diagram: SHOW TABLES [FROM database_name [. schema_name]] [WITH COMMENT]]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_trace.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_trace.html deleted file mode 100644 index 37271dc87b5..00000000000
[deleted railroad diagram: SHOW [COMPACT] [KV] TRACE FOR SESSION]
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_users.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_users.html deleted file mode 100644 index 7c33b7f00b4..00000000000
[deleted railroad diagram: SHOW USERS]
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_var.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_var.html deleted file mode 100644 index fb7ec6f4ce8..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/show_var.html +++ /dev/null @@ -1,20 +0,0 @@ -
- - - - - - SHOW - - - SESSION - - - var_name - - - ALL - - - -
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/show_zone.html b/src/current/_includes/v21.1/sql/generated/diagrams/show_zone.html deleted file mode 100644 index d5d55b5dab2..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/show_zone.html +++ /dev/null @@ -1,89 +0,0 @@ -
- - - - -SHOW - - -ZONE - - -CONFIGURATION - - -FROM - - -RANGE - - -zone_name - -DATABASE - - -database_name - -TABLE - - -table_name - -INDEX - - -table_name - -@ - - -index_name - - -standalone_index_name - -PARTITION - - -partition_name - -PARTITION - - -partition_name - -OF - - -TABLE - - -table_name - -INDEX - - -table_name - -@ - - -index_name - - -standalone_index_name - -CONFIGURATIONS - - -ALL - - -ZONE - - -CONFIGURATIONS - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/simple_select_clause.html b/src/current/_includes/v21.1/sql/generated/diagrams/simple_select_clause.html deleted file mode 100644 index fd45ea37754..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/simple_select_clause.html +++ /dev/null @@ -1,107 +0,0 @@ -
- - - - -SELECT - - -ALL - - -DISTINCT - - -ON - - -( - - - -a_expr - - - -, - - -) - - - -target_elem - - - -, - - -FROM - - - -table_ref - - - -, - - -AS - - -OF - - -SYSTEM - - -TIME - - - -a_expr - - - -WHERE - - - -a_expr - - - -GROUP - - -BY - - - -a_expr - - - -, - - -HAVING - - - -a_expr - - - -WINDOW - - - -window_definition_list - - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/sort_clause.html b/src/current/_includes/v21.1/sql/generated/diagrams/sort_clause.html deleted file mode 100644 index 59aababe951..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/sort_clause.html +++ /dev/null @@ -1,69 +0,0 @@ -
- - - - -ORDER - - -BY - - - -a_expr - - - -ASC - - -DESC - - -NULLS - - -FIRST - - -LAST - - -PRIMARY - - -KEY - - - -table_name - - - -INDEX - - - -table_name - - - -@ - - - -index_name - - - -ASC - - -DESC - - -, - - - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/split_index_at.html b/src/current/_includes/v21.1/sql/generated/diagrams/split_index_at.html deleted file mode 100644 index 51daee7e3c7..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/split_index_at.html +++ /dev/null @@ -1,35 +0,0 @@ -
- - - - -ALTER - - -INDEX - - -table_name - -@ - - -index_name - -SPLIT - - -AT - - -select_stmt - -WITH - - -EXPIRATION - - -a_expr - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/split_table_at.html b/src/current/_includes/v21.1/sql/generated/diagrams/split_table_at.html deleted file mode 100644 index 2b7b43c5a59..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/split_table_at.html +++ /dev/null @@ -1,30 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -table_name - -SPLIT - - -AT - - -select_stmt - -WITH - - -EXPIRATION - - -a_expr - -
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/stmt_block.html b/src/current/_includes/v21.1/sql/generated/diagrams/stmt_block.html deleted file mode 100644 index a612c9c6f07..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/stmt_block.html +++ /dev/null @@ -1,21170 +0,0 @@ -
- - -

stmt_block:

- - - - - - - - stmt - - - - - -

no references


stmt:

- - - - - - - HELPTOKEN - - - - preparable_stmt - - - - - analyze_stmt - - - - - copy_from_stmt - - - - - comment_stmt - - - - - execute_stmt - - - - - deallocate_stmt - - - - - discard_stmt - - - - - grant_stmt - - - - - prepare_stmt - - - - - revoke_stmt - - - - - savepoint_stmt - - - - - reassign_owned_by_stmt - - - - - drop_owned_by_stmt - - - - - release_stmt - - - - - refresh_stmt - - - - - nonpreparable_set_stmt - - - - - transaction_stmt - - - - - close_cursor_stmt - - - - - -

referenced by: -

-


preparable_stmt:

- - - - - - - - alter_stmt - - - - - backup_stmt - - - - - cancel_stmt - - - - - create_stmt - - - - - delete_stmt - - - - - drop_stmt - - - - - explain_stmt - - - - - import_stmt - - - - - insert_stmt - - - - - pause_stmt - - - - - reset_stmt - - - - - restore_stmt - - - - - resume_stmt - - - - - export_stmt - - - - - scrub_stmt - - - - - select_stmt - - - - - preparable_set_stmt - - - - - show_stmt - - - - - truncate_stmt - - - - - update_stmt - - - - - upsert_stmt - - - - - -

referenced by: -

-


analyze_stmt:

- - - - - - - ANALYZE - - - ANALYSE - - - - analyze_target - - - - - -

referenced by: -

-


copy_from_stmt:

- - - - - - - COPY - - - - table_name - - - - - opt_column_list - - - - FROM - - - STDIN - - - - opt_with_copy_options - - - - - opt_where_clause - - - - - -

referenced by: -

-


comment_stmt:

- - - - - - - COMMENT - - - ON - - - DATABASE - - - - database_name - - - - TABLE - - - - table_name - - - - COLUMN - - - - column_path - - - - INDEX - - - - table_index_name - - - - IS - - - - comment_text - - - - - -

referenced by: -

-


execute_stmt:

- - - - - - - EXECUTE - - - - table_alias_name - - - - - execute_param_clause - - - - - -

referenced by: -

-


deallocate_stmt:

- - - - - - - DEALLOCATE - - - PREPARE - - - - name - - - - ALL - - - - -

referenced by: -

-


discard_stmt:

- - - - - - - DISCARD - - - ALL - - - - -

referenced by: -

-


grant_stmt:

- - - - - - - GRANT - - - - privileges - - - - ON - - - - targets - - - - TYPE - - - - target_types - - - - SCHEMA - - - - schema_name_list - - - - TO - - - - name_list - - - - - privilege_list - - - - TO - - - - name_list - - - - WITH - - - ADMIN - - - OPTION - - - - -

referenced by: -

-


prepare_stmt:

- - - - - - - PREPARE - - - - table_alias_name - - - - - prep_type_clause - - - - AS - - - - preparable_stmt - - - - - -

referenced by: -

-


revoke_stmt:

- - - - - - - REVOKE - - - - privileges - - - - ON - - - - targets - - - - TYPE - - - - target_types - - - - SCHEMA - - - - schema_name_list - - - - ADMIN - - - OPTION - - - FOR - - - - privilege_list - - - - FROM - - - - name_list - - - - - -

referenced by: -

-


savepoint_stmt:

- - - - - - - SAVEPOINT - - - - name - - - - - -

referenced by: -

-


reassign_owned_by_stmt:

- - - - - - - REASSIGN - - - OWNED - - - BY - - - - role_spec_list - - - - TO - - - - role_spec - - - - - -

referenced by: -

-


drop_owned_by_stmt:

- - - - - - - DROP - - - OWNED - - - BY - - - - role_spec_list - - - - - opt_drop_behavior - - - - - -

referenced by: -

-


release_stmt:

- - - - - - - RELEASE - - - - savepoint_name - - - - - -

referenced by: -

-


refresh_stmt:

- - - - - - - REFRESH - - - MATERIALIZED - - - VIEW - - - - opt_concurrently - - - - - view_name - - - - - opt_clear_data - - - - - -

referenced by: -

-


nonpreparable_set_stmt:

- - - - - - - - set_transaction_stmt - - - - - -

referenced by: -

-


transaction_stmt:

- - - - - - - - begin_stmt - - - - - commit_stmt - - - - - rollback_stmt - - - - - abort_stmt - - - - - -

referenced by: -

-


close_cursor_stmt:

- - - - - - - CLOSE - - - ALL - - - - -

referenced by: -

-


alter_stmt:

- - - - - - - - alter_ddl_stmt - - - - - alter_role_stmt - - - - - -

referenced by: -

-


backup_stmt:

- - - - - - - BACKUP - - - - opt_backup_targets - - - - INTO - - - - sconst_or_placeholder - - - - LATEST - - - IN - - - - string_or_placeholder_opt_list - - - - - opt_as_of_clause - - - - TO - - - - string_or_placeholder_opt_list - - - - - opt_as_of_clause - - - - - opt_incremental - - - - - opt_with_backup_options - - - - - -

referenced by: -

-


cancel_stmt:

- - - - - - - - cancel_jobs_stmt - - - - - cancel_queries_stmt - - - - - cancel_sessions_stmt - - - - - -

referenced by: -

-


create_stmt:

- - - - - - - - create_role_stmt - - - - - create_ddl_stmt - - - - - create_stats_stmt - - - - - create_schedule_for_backup_stmt - - - - - create_changefeed_stmt - - - - - create_replication_stream_stmt - - - - - create_extension_stmt - - - - - -

referenced by: -

-


delete_stmt:

- - - - - - - - opt_with_clause - - - - DELETE - - - FROM - - - - table_expr_opt_alias_idx - - - - - opt_where_clause - - - - - opt_sort_clause - - - - - opt_limit_clause - - - - - returning_clause - - - - - -

referenced by: -

-


drop_stmt:

- - - - - - - - drop_ddl_stmt - - - - - drop_role_stmt - - - - - drop_schedule_stmt - - - - - -

referenced by: -

-


explain_stmt:

- - - - - - - EXPLAIN - - - ANALYZE - - - ANALYSE - - - ( - - - - explain_option_list - - - - ) - - - - explainable_stmt - - - - - -

referenced by: -

-


import_stmt:

- - - - - - - IMPORT - - - - import_format - - - - - string_or_placeholder - - - - TABLE - - - - table_name - - - - FROM - - - - import_format - - - - - string_or_placeholder - - - - CREATE - - - USING - - - - string_or_placeholder - - - - ( - - - - table_elem_list - - - - ) - - - - import_format - - - - DATA - - - ( - - - - string_or_placeholder_list - - - - ) - - - INTO - - - - table_name - - - - ( - - - - insert_column_list - - - - ) - - - - import_format - - - - DATA - - - ( - - - - string_or_placeholder_list - - - - ) - - - - opt_with_options - - - - - -

referenced by: -

-


insert_stmt:

- - - - - - - - opt_with_clause - - - - INSERT - - - INTO - - - - insert_target - - - - - insert_rest - - - - - on_conflict - - - - - returning_clause - - - - - -

referenced by: -

-


pause_stmt:

- - - - - - - - pause_jobs_stmt - - - - - pause_schedules_stmt - - - - - -

referenced by: -

-


reset_stmt:

- - - - - - - - reset_session_stmt - - - - - reset_csetting_stmt - - - - - -

referenced by: -

-


restore_stmt:

- - - - - - - RESTORE - - - FROM - - - - string_or_placeholder - - - - IN - - - - list_of_string_or_placeholder_opt_list - - - - - opt_as_of_clause - - - - - opt_with_restore_options - - - - - targets - - - - FROM - - - - string_or_placeholder - - - - IN - - - - list_of_string_or_placeholder_opt_list - - - - - opt_as_of_clause - - - - - opt_with_restore_options - - - - REPLICATION - - - STREAM - - - FROM - - - - string_or_placeholder_opt_list - - - - - opt_as_of_clause - - - - - -

referenced by: -

-


resume_stmt:

- - - - - - - - resume_jobs_stmt - - - - - resume_schedules_stmt - - - - - -

referenced by: -

-


export_stmt:

- - - - - - - EXPORT - - - INTO - - - - import_format - - - - - string_or_placeholder - - - - - opt_with_options - - - - FROM - - - - select_stmt - - - - - -

referenced by: -

-


scrub_stmt:

- - - - - - - - scrub_table_stmt - - - - - scrub_database_stmt - - - - - -

referenced by: -

-


select_stmt:

- - - - - - - - select_no_parens - - - - - select_with_parens - - - - - -

referenced by: -

-


preparable_set_stmt:

- - - - - - - - set_session_stmt - - - - - set_csetting_stmt - - - - - use_stmt - - - - - -

referenced by: -

-


show_stmt:

- - - - - - - - show_backup_stmt - - - - - show_columns_stmt - - - - - show_constraints_stmt - - - - - show_create_stmt - - - - - show_csettings_stmt - - - - - show_databases_stmt - - - - - show_enums_stmt - - - - - show_types_stmt - - - - - show_grants_stmt - - - - - show_indexes_stmt - - - - - show_partitions_stmt - - - - - show_jobs_stmt - - - - - show_locality_stmt - - - - - show_schedules_stmt - - - - - show_statements_stmt - - - - - show_ranges_stmt - - - - - show_range_for_row_stmt - - - - - show_regions_stmt - - - - - show_survival_goal_stmt - - - - - show_roles_stmt - - - - - show_savepoint_stmt - - - - - show_schemas_stmt - - - - - show_sequences_stmt - - - - - show_session_stmt - - - - - show_sessions_stmt - - - - - show_stats_stmt - - - - - show_tables_stmt - - - - - show_trace_stmt - - - - - show_transactions_stmt - - - - - show_users_stmt - - - - - show_zone_stmt - - - - - -

referenced by: -

-


truncate_stmt:

- - - - - - - TRUNCATE - - - - opt_table - - - - - relation_expr_list - - - - - opt_drop_behavior - - - - - -

referenced by: -

-


update_stmt:

- - - - - - - - opt_with_clause - - - - UPDATE - - - - table_expr_opt_alias_idx - - - - SET - - - - set_clause_list - - - - - opt_from_list - - - - - opt_where_clause - - - - - opt_sort_clause - - - - - opt_limit_clause - - - - - returning_clause - - - - - -

referenced by: -

-


upsert_stmt:

- - - - - - - - opt_with_clause - - - - UPSERT - - - INTO - - - - insert_target - - - - - insert_rest - - - - - returning_clause - - - - - -

referenced by: -

-


analyze_target:

- - - - - - - - table_name - - - - - -

referenced by: -

-


table_name:

- - - - - - - - db_object_name - - - - - -

referenced by: -

-


opt_column_list:

- - - - - - - ( - - - - name_list - - - - ) - - - - -

referenced by: -

-


opt_with_copy_options:

- - - - - - - - opt_with - - - - - copy_options_list - - - - - -

referenced by: -

-


opt_where_clause:

- - - - - - - - where_clause - - - - - -

referenced by: -

-


database_name:

- - - - - - - - name - - - - - -

referenced by: -

-


comment_text:

- - - - - - - SCONST - - - NULL - - - - -

referenced by: -

-


column_path:

- - - - - - - - name - - - - - prefixed_column_path - - - - - -

referenced by: -

-


table_index_name:

- - - - - - - - table_name - - - - @ - - - - index_name - - - - - standalone_index_name - - - - - -

referenced by: -

-


table_alias_name:

- - - - - - - - name - - - - - -

referenced by: -

-


execute_param_clause:

- - - - - - - ( - - - - expr_list - - - - ) - - - - -

referenced by: -

-


name:

- - - - - - - identifier - - - - unreserved_keyword - - - - - col_name_keyword - - - - - -

referenced by: -

-


privileges:

- - - - - - - ALL - - - - opt_privileges_clause - - - - - privilege_list - - - - - -

referenced by: -

-


targets:

- - - - - - - identifier - - - - col_name_keyword - - - - - unreserved_keyword - - - - - complex_table_pattern - - - - - table_pattern - - - - , - - - TABLE - - - - table_pattern_list - - - - TENANT - - - ICONST - - - DATABASE - - - - name_list - - - - - -

referenced by: -

-


name_list:

- - - - - - - - name - - - - , - - - - -

referenced by: -

-


privilege_list:

- - - - - - - - privilege - - - - , - - - - -

referenced by: -

-


target_types:

- - - - - - - - type_name_list - - - - - -

referenced by: -

-


schema_name_list:

- - - - - - - - qualifiable_schema_name - - - - , - - - - -

referenced by: -

-


prep_type_clause:

- - - - - - - ( - - - - type_list - - - - ) - - - - -

referenced by: -

-


role_spec_list:

- - - - - - - - role_spec - - - - , - - - - -

referenced by: -

-


role_spec:

- - - - - - - - username_or_sconst - - - - CURRENT_USER - - - SESSION_USER - - - - -

referenced by: -

-


opt_drop_behavior:

- - - - - - - CASCADE - - - RESTRICT - - - - -

referenced by: -

-


savepoint_name:

- - - - - - - SAVEPOINT - - - - name - - - - - -

referenced by: -

-


opt_concurrently:

- - - - - - - CONCURRENTLY - - - - -

referenced by: -

-


view_name:

- - - - - - - - table_name - - - - - -

referenced by: -

-


opt_clear_data:

- - - - - - - WITH - - - NO - - - DATA - - - - -

referenced by: -

-


set_transaction_stmt:

- - - - - - - SET - - - SESSION - - - TRANSACTION - - - - transaction_mode_list - - - - - -

referenced by: -

-


begin_stmt:

- - - - - - - BEGIN - - - - opt_transaction - - - - START - - - TRANSACTION - - - - begin_transaction - - - - - -

referenced by: -

-


commit_stmt:

- - - - - - - COMMIT - - - END - - - - opt_transaction - - - - - -

referenced by: -

-


rollback_stmt:

- - - - - - - ROLLBACK - - - - opt_transaction - - - - TO - - - - savepoint_name - - - - - -

referenced by: -

-


abort_stmt:

- - - - - - - ABORT - - - - opt_abort_mod - - - - - -

referenced by: -

-


alter_ddl_stmt:

- - - - - - - - alter_table_stmt - - - - - alter_index_stmt - - - - - alter_view_stmt - - - - - alter_sequence_stmt - - - - - alter_database_stmt - - - - - alter_range_stmt - - - - - alter_partition_stmt - - - - - alter_schema_stmt - - - - - alter_type_stmt - - - - - -

referenced by: -

-


alter_role_stmt:

- - - - - - - ALTER - - - - role_or_group_or_user - - - - IF - - - EXISTS - - - - string_or_placeholder - - - - - opt_role_options - - - - - -

referenced by: -

-


opt_backup_targets:

- - - - - - - - targets - - - - - -

referenced by: -

-


sconst_or_placeholder:

- - - - - - - SCONST - - - PLACEHOLDER - - - - -

referenced by: -

-


string_or_placeholder_opt_list:

- - - - - - - - string_or_placeholder - - - - ( - - - - string_or_placeholder_list - - - - ) - - - - -

referenced by: -

-


opt_as_of_clause:

- - - - - - - - as_of_clause - - - - - -

referenced by: -

-


opt_with_backup_options:

- - - - - - - WITH - - - - backup_options_list - - - - OPTIONS - - - ( - - - - backup_options_list - - - - ) - - - - -

referenced by: -

-


opt_incremental:

- - - - - - - INCREMENTAL - - - FROM - - - - string_or_placeholder_list - - - - - -

referenced by: -

-


cancel_jobs_stmt:

- - - - - - - CANCEL - - - JOB - - - - a_expr - - - - JOBS - - - - select_stmt - - - - - for_schedules_clause - - - - - -

referenced by: -

-


cancel_queries_stmt:

- - - - - - - CANCEL - - - QUERY - - - IF - - - EXISTS - - - - a_expr - - - - QUERIES - - - IF - - - EXISTS - - - - select_stmt - - - - - -

referenced by: -

-


cancel_sessions_stmt:

- - - - - - - CANCEL - - - SESSION - - - IF - - - EXISTS - - - - a_expr - - - - SESSIONS - - - IF - - - EXISTS - - - - select_stmt - - - - - -

referenced by: -

-


create_role_stmt:

- - - - - - - CREATE - - - - role_or_group_or_user - - - - IF - - - NOT - - - EXISTS - - - - string_or_placeholder - - - - - opt_role_options - - - - - -

referenced by: -

-


create_ddl_stmt:

- - - - - - - - create_database_stmt - - - - - create_index_stmt - - - - - create_schema_stmt - - - - - create_table_stmt - - - - - create_table_as_stmt - - - - - create_type_stmt - - - - - create_view_stmt - - - - - create_sequence_stmt - - - - - -

referenced by: -

-


create_stats_stmt:

- - - - - - - CREATE - - - STATISTICS - - - - statistics_name - - - - - opt_stats_columns - - - - FROM - - - - create_stats_target - - - - - opt_create_stats_options - - - - - -

referenced by: -

-


create_schedule_for_backup_stmt:

- - - - - - - CREATE - - - SCHEDULE - - - - opt_description - - - - FOR - - - BACKUP - - - - opt_backup_targets - - - - INTO - - - - string_or_placeholder_opt_list - - - - - opt_with_backup_options - - - - - cron_expr - - - - - opt_full_backup_clause - - - - - opt_with_schedule_options - - - - - -

referenced by: -

-


create_changefeed_stmt:

- - - - - - - CREATE - - - CHANGEFEED - - - FOR - - - - changefeed_targets - - - - - opt_changefeed_sink - - - - - opt_with_options - - - - - -

referenced by: -

-


create_replication_stream_stmt:

- - - - - - - CREATE - - - REPLICATION - - - STREAM - - - FOR - - - - targets - - - - - opt_changefeed_sink - - - - - opt_with_replication_options - - - - - -

referenced by: -

-


create_extension_stmt:

- - - - - - - CREATE - - - EXTENSION - - - IF - - - NOT - - - EXISTS - - - - name - - - - - -

referenced by: -

-


opt_with_clause:

- - - - - - - - with_clause - - - - - -

referenced by: -

-


table_expr_opt_alias_idx:

- - - - - - - - table_name_opt_idx - - - - AS - - - - table_alias_name - - - - - -

referenced by: -

-


opt_sort_clause:

- - - - - - - - sort_clause - - - - - -

referenced by: -

-


opt_limit_clause:

- - - - - - - - limit_clause - - - - - -

referenced by: -

-


returning_clause:

- - - - - - - RETURNING - - - - target_list - - - - NOTHING - - - - -

referenced by: -

-


drop_ddl_stmt:

- - - - - - - - drop_database_stmt - - - - - drop_index_stmt - - - - - drop_table_stmt - - - - - drop_view_stmt - - - - - drop_sequence_stmt - - - - - drop_schema_stmt - - - - - drop_type_stmt - - - - - -

referenced by: -

-


drop_role_stmt:

- - - - - - - DROP - - - - role_or_group_or_user - - - - IF - - - EXISTS - - - - string_or_placeholder_list - - - - - -

referenced by: -

-


drop_schedule_stmt:

- - - - - - - DROP - - - SCHEDULE - - - - a_expr - - - - SCHEDULES - - - - select_stmt - - - - - -

referenced by: -

-


explainable_stmt:

- - - - - - - - preparable_stmt - - - - - execute_stmt - - - - - -

referenced by: -

-


explain_option_list:

- - - - - - - - explain_option_name - - - - , - - - - -

referenced by: -

-


import_format:

- - - - - - - - name - - - - - -

referenced by: -

-


string_or_placeholder:

- - - - - - - - non_reserved_word_or_sconst - - - - PLACEHOLDER - - - - -

referenced by: -

-


opt_with_options:

- - - - - - - WITH - - - - kv_option_list - - - - OPTIONS - - - ( - - - - kv_option_list - - - - ) - - - - -

referenced by: -

-


string_or_placeholder_list:

- - - - - - - - string_or_placeholder - - - - , - - - - -

referenced by: -

-


table_elem_list:

- - - - - - - - table_elem - - - - , - - - - -

referenced by: -

-


insert_column_list:

- - - - - - - - insert_column_item - - - - , - - - - -

referenced by: -

-


insert_target:

- - - - - - - - table_name - - - - AS - - - - table_alias_name - - - - - -

referenced by: -

-


insert_rest:

- - - - - - - ( - - - - insert_column_list - - - - ) - - - - select_stmt - - - - DEFAULT - - - VALUES - - - - -

referenced by: -

-


on_conflict:

- - - - - - - ON - - - CONFLICT - - - DO - - - NOTHING - - - ( - - - - name_list - - - - ) - - - - opt_where_clause - - - - DO - - - NOTHING - - - UPDATE - - - SET - - - - set_clause_list - - - - - opt_where_clause - - - - - -

referenced by: -

-


pause_jobs_stmt:

- - - - - - - PAUSE - - - JOB - - - - a_expr - - - - JOBS - - - - select_stmt - - - - - for_schedules_clause - - - - - -

referenced by: -

-


pause_schedules_stmt:

- - - - - - - PAUSE - - - SCHEDULE - - - - a_expr - - - - SCHEDULES - - - - select_stmt - - - - - -

referenced by: -

-


reset_session_stmt:

- - - - - - - RESET - - - SESSION - - - - session_var - - - - - -

referenced by: -

-


reset_csetting_stmt:

- - - - - - - RESET - - - CLUSTER - - - SETTING - - - - var_name - - - - - -

referenced by: -

-


list_of_string_or_placeholder_opt_list:

- - - - - - - - string_or_placeholder_opt_list - - - - , - - - - -

referenced by: -

-


opt_with_restore_options:

- - - - - - - WITH - - - - restore_options_list - - - - OPTIONS - - - ( - - - - restore_options_list - - - - ) - - - - -

referenced by: -

-


resume_jobs_stmt:

- - - - - - - RESUME - - - JOB - - - - a_expr - - - - JOBS - - - - select_stmt - - - - - for_schedules_clause - - - - - -

referenced by: -

-


resume_schedules_stmt:

- - - - - - - RESUME - - - SCHEDULE - - - - a_expr - - - - SCHEDULES - - - - select_stmt - - - - - -

referenced by: -

-


scrub_table_stmt:

- - - - - - - EXPERIMENTAL - - - SCRUB - - - TABLE - - - - table_name - - - - - opt_as_of_clause - - - - - opt_scrub_options_clause - - - - - -

referenced by: -

-


scrub_database_stmt:

- - - - - - - EXPERIMENTAL - - - SCRUB - - - DATABASE - - - - database_name - - - - - opt_as_of_clause - - - - - -

referenced by: -

-


select_no_parens:

- - - - - - - - simple_select - - - - - select_clause - - - - - sort_clause - - - - - opt_sort_clause - - - - - for_locking_clause - - - - - opt_select_limit - - - - - select_limit - - - - - opt_for_locking_clause - - - - - with_clause - - - - - select_clause - - - - - sort_clause - - - - - opt_sort_clause - - - - - for_locking_clause - - - - - opt_select_limit - - - - - select_limit - - - - - opt_for_locking_clause - - - - - -

referenced by: -

-


select_with_parens:

- - - - - - - ( - - - - select_no_parens - - - - - select_with_parens - - - - ) - - - - -

referenced by: -

-


set_session_stmt:

- - - - - - - SET - - - SESSION - - - - set_rest_more - - - - CHARACTERISTICS - - - AS - - - TRANSACTION - - - - transaction_mode_list - - - - - set_rest_more - - - - - -

referenced by: -

-


set_csetting_stmt:

- - - - - - - SET - - - CLUSTER - - - SETTING - - - - var_name - - - - - to_or_eq - - - - - var_value - - - - - -

referenced by: -

-


use_stmt:

- - - - - - - USE - - - - var_value - - - - - -

referenced by: -

-


show_backup_stmt:

- - - - - - - SHOW - - - BACKUPS - - - IN - - - - string_or_placeholder - - - - BACKUP - - - - string_or_placeholder - - - - IN - - - SCHEMAS - - - - string_or_placeholder - - - - - opt_with_options - - - - - -

referenced by: -

-


show_columns_stmt:

- - - - - - - SHOW - - - COLUMNS - - - FROM - - - - table_name - - - - - with_comment - - - - - -

referenced by: -

-


show_constraints_stmt:

- - - - - - - SHOW - - - CONSTRAINT - - - CONSTRAINTS - - - FROM - - - - table_name - - - - - -

referenced by: -

-


show_create_stmt:

- - - - - - - SHOW - - - CREATE - - - - table_name - - - - ALL - - - TABLES - - - - -

referenced by: -

-


show_csettings_stmt:

- - - - - - - SHOW - - - CLUSTER - - - SETTING - - - - var_name - - - - ALL - - - SETTINGS - - - ALL - - - PUBLIC - - - CLUSTER - - - SETTINGS - - - - -

referenced by: -

-


show_databases_stmt:

- - - - - - - SHOW - - - DATABASES - - - - with_comment - - - - - -

referenced by: -

-


show_enums_stmt:

- - - - - - - SHOW - - - ENUMS - - - FROM - - - - name - - - - . - - - - name - - - - - -

referenced by: -

-


show_types_stmt:

- - - - - - - SHOW - - - TYPES - - - - -

referenced by: -

-


show_grants_stmt:

- - - - - - - SHOW - - - GRANTS - - - - opt_on_targets_roles - - - - - for_grantee_clause - - - - - -

referenced by: -

-


show_indexes_stmt:

- - - - - - - SHOW - - - INDEX - - - INDEXES - - - KEYS - - - FROM - - - - table_name - - - - DATABASE - - - - database_name - - - - - with_comment - - - - - -

referenced by: -

-


show_partitions_stmt:

- - - - - - - SHOW - - - PARTITIONS - - - FROM - - - TABLE - - - - table_name - - - - DATABASE - - - - database_name - - - - INDEX - - - - table_index_name - - - - - table_name - - - - @ - - - * - - - - -

referenced by: -

-


show_jobs_stmt:

- - - - - - - SHOW - - - AUTOMATIC - - - JOBS - - - JOBS - - - WHEN - - - COMPLETE - - - - select_stmt - - - - - for_schedules_clause - - - - JOB - - - WHEN - - - COMPLETE - - - - a_expr - - - - - -

referenced by: -

-


show_locality_stmt:

- - - - - - - SHOW - - - LOCALITY - - - - -

referenced by: -

-


show_schedules_stmt:

- - - - - - - SHOW - - - - schedule_state - - - - SCHEDULES - - - - opt_schedule_executor_type - - - - SCHEDULE - - - - a_expr - - - - - -

referenced by: -

-


show_statements_stmt:

- - - - - - - SHOW - - - ALL - - - - opt_cluster - - - - - statements_or_queries - - - - - -

referenced by: -

-


show_ranges_stmt:

- - - - - - - SHOW - - - RANGES - - - FROM - - - TABLE - - - - table_name - - - - INDEX - - - - table_index_name - - - - DATABASE - - - - database_name - - - - - -

referenced by: -

-


show_range_for_row_stmt:

- - - - - - - SHOW - - - RANGE - - - FROM - - - TABLE - - - - table_name - - - - INDEX - - - - table_index_name - - - - FOR - - - ROW - - - ( - - - - expr_list - - - - ) - - - - -

referenced by: -

-


show_regions_stmt:

- - - - - - - SHOW - - - REGIONS - - - FROM - - - CLUSTER - - - DATABASE - - - - database_name - - - - ALL - - - DATABASES - - - - -

referenced by: -

-


show_survival_goal_stmt:

- - - - - - - SHOW - - - SURVIVAL - - - GOAL - - - FROM - - - DATABASE - - - - database_name - - - - - -

referenced by: -

-


show_roles_stmt:

- - - - - - - SHOW - - - ROLES - - - - -

referenced by: -

-


show_savepoint_stmt:

- - - - - - - SHOW - - - SAVEPOINT - - - STATUS - - - - -

referenced by: -

-


show_schemas_stmt:

- - - - - - - SHOW - - - SCHEMAS - - - FROM - - - - name - - - - - -

referenced by: -

-


show_sequences_stmt:

- - - - - - - SHOW - - - SEQUENCES - - - FROM - - - - name - - - - - -

referenced by: -

-


show_session_stmt:

- - - - - - - SHOW - - - SESSION - - - - session_var - - - - - -

referenced by: -

-


show_sessions_stmt:

- - - - - - - SHOW - - - ALL - - - - opt_cluster - - - - SESSIONS - - - - -

referenced by: -

-


show_stats_stmt:

- - - - - - - SHOW - - - STATISTICS - - - FOR - - - TABLE - - - - table_name - - - - - -

referenced by: -

-


show_tables_stmt:

- - - - - - - SHOW - - - TABLES - - - FROM - - - - name - - - - . - - - - name - - - - - with_comment - - - - - -

referenced by: -

-


show_trace_stmt:

- - - - - - - SHOW - - - - opt_compact - - - - KV - - - TRACE - - - FOR - - - SESSION - - - - -

referenced by: -

-


show_transactions_stmt:

- - - - - - - SHOW - - - ALL - - - - opt_cluster - - - - TRANSACTIONS - - - - -

referenced by: -

-


show_users_stmt:

- - - - - - - SHOW - - - USERS - - - - -

referenced by: -

-


show_zone_stmt:

- - - - - - - SHOW - - - ZONE - - - CONFIGURATION - - - FROM - - - RANGE - - - - zone_name - - - - DATABASE - - - - database_name - - - - TABLE - - - - table_name - - - - INDEX - - - - table_index_name - - - - - opt_partition - - - - PARTITION - - - - partition_name - - - - OF - - - TABLE - - - - table_name - - - - INDEX - - - - table_index_name - - - - CONFIGURATIONS - - - ALL - - - ZONE - - - CONFIGURATIONS - - - - -

referenced by: -

-


opt_table:

- - - - - - - TABLE - - - - -

referenced by: -

-


relation_expr_list:

- - - - - - - - relation_expr - - - - , - - - - -

referenced by: -

-


set_clause_list:

- - - - - - - - set_clause - - - - , - - - - -

referenced by: -

-


opt_from_list:

- - - - - - - FROM - - - - from_list - - - - - -

referenced by: -

-


db_object_name:

- - - - - - - - simple_db_object_name - - - - - complex_db_object_name - - - - - -

referenced by: -

-


opt_with:

- - - - - - - WITH - - - - -

referenced by: -

-


copy_options_list:

- - - - - - - - copy_options - - - - - -

referenced by: -

-


where_clause:

- - - - - - - WHERE - - - - a_expr - - - - - -

referenced by: -

-


prefixed_column_path:

- - - - - - - - db_object_name_component - - - - . - - - - unrestricted_name - - - - . - - - - unrestricted_name - - - - . - - - - unrestricted_name - - - - - -

referenced by: -

-


index_name:

- - - - - - - - unrestricted_name - - - - - -

referenced by: -

-


standalone_index_name:

- - - - - - - - db_object_name - - - - - -

referenced by: -

-


expr_list:

- - - - - - - - a_expr - - - - , - - - - -

referenced by: -

-


Notation: `[ x ]` marks an optional item, `x | y` marks alternatives, `( x )*` marks zero or more repetitions, and quoted symbols are literal tokens.

unreserved_keyword:

  ABORT | ACTION | ACCESS | ADD | ADMIN | AFTER | AGGREGATE | ALTER | ALWAYS | AT
| ATTRIBUTE | AUTOMATIC | AVAILABILITY | BACKUP | BACKUPS | BEFORE | BEGIN | BINARY
| BUCKET_COUNT | BUNDLE | BY | CACHE | CANCEL | CANCELQUERY | CASCADE | CHANGEFEED
| CLOSE | CLUSTER | COLUMNS | COMMENT | COMMENTS | COMMIT | COMMITTED | COMPACT
| COMPLETE | CONFLICT | CONFIGURATION | CONFIGURATIONS | CONFIGURE | CONNECTION
| CONSTRAINTS | CONTROLCHANGEFEED | CONTROLJOB | CONVERSION | CONVERT | COPY
| COVERING | CREATEDB | CREATELOGIN | CREATEROLE | CSV | CUBE | CURRENT | CURSOR
| CYCLE | DATA | DATABASE | DATABASES | DAY | DEALLOCATE | DECLARE | DELETE
| DEFAULTS | DEFERRED | DELIMITER | DESTINATION | DETACHED | DISCARD | DOMAIN
| DOUBLE | DROP | ENCODING | ENCRYPTION_PASSPHRASE | ENUM | ENUMS | ESCAPE
| EXCLUDE | EXCLUDING | EXECUTE | EXECUTION | EXPERIMENTAL | EXPERIMENTAL_AUDIT
| EXPERIMENTAL_FINGERPRINTS | EXPERIMENTAL_RELOCATE | EXPERIMENTAL_REPLICA
| EXPIRATION | EXPLAIN | EXPORT | EXTENSION | FAILURE | FILES | FILTER | FIRST
| FOLLOWING | FORCE | FORCE_INDEX | FUNCTION | GENERATED | GEOMETRYM | GEOMETRYZ
| GEOMETRYZM | GEOMETRYCOLLECTION | GEOMETRYCOLLECTIONM | GEOMETRYCOLLECTIONZ
| GEOMETRYCOLLECTIONZM | GLOBAL | GOAL | GRANTS | GROUPS | HASH | HIGH | HISTOGRAM
| HOUR | IDENTITY | IMMEDIATE | IMPORT | INCLUDE | INCLUDE_DEPRECATED_INTERLEAVES
| INCLUDING | INCREMENT | INCREMENTAL | INDEXES | INHERITS | INJECT | INSERT
| INTERLEAVE | INTO_DB | INVERTED | ISOLATION | JOB | JOBS | JSON | KEY | KEYS
| KMS | KV | LANGUAGE | LAST | LATEST | LC_COLLATE | LC_CTYPE | LEASE | LESS
| LEVEL | LINESTRING | LIST | LOCAL | LOCKED | LOGIN | LOCALITY | LOOKUP | LOW
| MATCH | MATERIALIZED | MAXVALUE | MERGE | METHOD | MINUTE | MINVALUE
| MODIFYCLUSTERSETTING | MULTILINESTRING | MULTILINESTRINGM | MULTILINESTRINGZ
| MULTILINESTRINGZM | MULTIPOINT | MULTIPOINTM | MULTIPOINTZ | MULTIPOINTZM
| MULTIPOLYGON | MULTIPOLYGONM | MULTIPOLYGONZ | MULTIPOLYGONZM | MONTH | NAMES
| NAN | NEVER | NEXT | NO | NORMAL | NO_INDEX_JOIN | NOCREATEDB | NOCREATELOGIN
| NOCANCELQUERY | NOCREATEROLE | NOCONTROLCHANGEFEED | NOCONTROLJOB | NOLOGIN
| NOMODIFYCLUSTERSETTING | NON_VOTERS | NOVIEWACTIVITY | NOWAIT | NULLS
| IGNORE_FOREIGN_KEYS | OF | OFF | OIDS | OPERATOR | OPT | OPTION | OPTIONS
| ORDINALITY | OTHERS | OVER | OWNED | OWNER | PARENT | PARTIAL | PARTITION
| PARTITIONS | PASSWORD | PAUSE | PAUSED | PHYSICAL | PLAN | PLANS | POINTM
| POINTZ | POINTZM | POLYGONM | POLYGONZ | POLYGONZM | PRECEDING | PREPARE
| PRESERVE | PRIORITY | PRIVILEGES | PUBLIC | PUBLICATION | QUERIES | QUERY
| RANGE | RANGES | READ | REASSIGN | RECURRING | RECURSIVE | REF | REFRESH
| REGION | REGIONAL | REGIONS | REINDEX | RELEASE | RENAME | REPEATABLE | REPLACE
| REPLICATION | RESET | RESTORE | RESTRICT | RESUME | RETRY | REVISION_HISTORY
| REVOKE | ROLE | ROLES | ROLLBACK | ROLLUP | ROWS | RULE | RUNNING | SCHEDULE
| SCHEDULES | SETTING | SETTINGS | STATUS | SAVEPOINT | SCANS | SCATTER | SCHEMA
| SCHEMAS | SCRUB | SEARCH | SECOND | SERIALIZABLE | SEQUENCE | SEQUENCES
| SERVER | SESSION | SESSIONS | SET | SETS | SHARE | SHOW | SIMPLE | SKIP
| SKIP_MISSING_FOREIGN_KEYS | SKIP_MISSING_SEQUENCES | SKIP_MISSING_SEQUENCE_OWNERS
| SKIP_MISSING_VIEWS | SNAPSHOT | SPLIT | SQL | START | STATEMENTS | STATISTICS
| STDIN | STORAGE | STORE | STORED | STORING | STREAM | STRICT | SUBSCRIPTION
| SURVIVE | SURVIVAL | SYNTAX | SYSTEM | TABLES | TABLESPACE | TEMP | TEMPLATE
| TEMPORARY | TENANT | TESTING_RELOCATE | TEXT | TIES | TRACE | TRANSACTION
| TRANSACTIONS | TRIGGER | TRUNCATE | TRUSTED | TYPE | TYPES | THROTTLING
| UNBOUNDED | UNCOMMITTED | UNKNOWN | UNLOGGED | UNSPLIT | UNTIL | UPDATE
| UPSERT | USE | USERS | VALID | VALIDATE | VALUE | VARYING | VIEW | VIEWACTIVITY
| VOTERS | WITHIN | WITHOUT | WRITE | YEAR | ZONE


col_name_keyword:

  ANNOTATE_TYPE | BETWEEN | BIGINT | BIT | BOOLEAN | BOX2D | CHAR | CHARACTER
| CHARACTERISTICS | COALESCE | DEC | DECIMAL | EXISTS | EXTRACT | EXTRACT_DURATION
| FLOAT | GEOGRAPHY | GEOMETRY | GREATEST | GROUPING | IF | IFERROR | IFNULL
| INT | INTEGER | INTERVAL | ISERROR | LEAST | NULLIF | NUMERIC | OUT | OVERLAY
| POINT | POLYGON | POSITION | PRECISION | REAL | ROW | SMALLINT | STRING
| SUBSTRING | TIME | TIMETZ | TIMESTAMP | TIMESTAMPTZ | TREAT | TRIM | VALUES
| VARBIT | VARCHAR | VIRTUAL | WORK


opt_privileges_clause:

  [ PRIVILEGES ]


complex_table_pattern:

  complex_db_object_name
| db_object_name_component '.' unrestricted_name '.' '*'
| db_object_name_component '.' '*'


table_pattern:

  simple_db_object_name
| complex_table_pattern


table_pattern_list:

  table_pattern ( ',' table_pattern )*


privilege:

  name | CREATE | GRANT | SELECT


type_name_list:

  type_name ( ',' type_name )*


qualifiable_schema_name:

  name
| name '.' name


type_list:

  typename ( ',' typename )*


username_or_sconst:

  non_reserved_word
| SCONST


transaction_mode_list:

  transaction_mode ( opt_comma transaction_mode )*


opt_transaction:

  [ TRANSACTION ]


begin_transaction:

  [ transaction_mode_list ]


opt_abort_mod:

  [ TRANSACTION | WORK ]


alter_table_stmt:

  alter_onetable_stmt
| alter_split_stmt
| alter_unsplit_stmt
| alter_scatter_stmt
| alter_zone_table_stmt
| alter_rename_table_stmt
| alter_table_set_schema_stmt
| alter_table_locality_stmt
| alter_table_owner_stmt


alter_index_stmt:

  alter_oneindex_stmt
| alter_split_index_stmt
| alter_unsplit_index_stmt
| alter_scatter_index_stmt
| alter_rename_index_stmt
| alter_zone_index_stmt


alter_view_stmt:

  alter_rename_view_stmt
| alter_view_set_schema_stmt
| alter_view_owner_stmt


alter_sequence_stmt:

  alter_rename_sequence_stmt
| alter_sequence_options_stmt
| alter_sequence_set_schema_stmt
| alter_sequence_owner_stmt


alter_database_stmt:

  alter_rename_database_stmt
| alter_zone_database_stmt
| alter_database_owner
| alter_database_to_schema_stmt
| alter_database_add_region_stmt
| alter_database_drop_region_stmt
| alter_database_survival_goal_stmt
| alter_database_primary_region_stmt


alter_range_stmt:

  alter_zone_range_stmt


alter_partition_stmt:

  alter_zone_partition_stmt


alter_schema_stmt:

  ALTER SCHEMA qualifiable_schema_name RENAME TO schema_name
| ALTER SCHEMA qualifiable_schema_name OWNER TO role_spec
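For illustration, statements this production accepts (the schema and role names are hypothetical):

```sql
-- Rename a user-defined schema.
ALTER SCHEMA app RENAME TO app_v2;

-- Transfer ownership of the schema to another role.
ALTER SCHEMA app_v2 OWNER TO maxroach;
```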


alter_type_stmt:

  ALTER TYPE type_name ADD VALUE [ IF NOT EXISTS ] SCONST opt_add_val_placement
| ALTER TYPE type_name DROP VALUE SCONST
| ALTER TYPE type_name RENAME VALUE SCONST TO SCONST
| ALTER TYPE type_name RENAME TO name
| ALTER TYPE type_name SET SCHEMA schema_name
| ALTER TYPE type_name OWNER TO role_spec
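As an example, for a hypothetical enum type `status`:

```sql
-- Add a new value after an existing one.
ALTER TYPE status ADD VALUE IF NOT EXISTS 'archived' AFTER 'closed';

-- Rename an existing value.
ALTER TYPE status RENAME VALUE 'open' TO 'active';
```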


role_or_group_or_user:

  ROLE | USER


opt_role_options:

  [ opt_with role_options ]


as_of_clause:

  AS OF SYSTEM TIME a_expr
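For example, a historical read against a hypothetical `accounts` table:

```sql
-- Read the table's contents as of roughly 10 seconds ago.
SELECT * FROM accounts AS OF SYSTEM TIME '-10s';
```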


backup_options_list:

  backup_options ( ',' backup_options )*


a_expr:

  c_expr
| '+' a_expr
| '-' a_expr
| '~' a_expr
| SQRT a_expr
| CBRT a_expr
| NOT a_expr
| DEFAULT
| a_expr TYPECAST cast_target
| a_expr TYPEANNOTATE typename
| a_expr COLLATE collation_name
| a_expr AT TIME ZONE a_expr
| a_expr '+' a_expr
| a_expr '-' a_expr
| a_expr '*' a_expr
| a_expr '/' a_expr
| a_expr FLOORDIV a_expr
| a_expr '%' a_expr
| a_expr '^' a_expr
| a_expr '#' a_expr
| a_expr '&' a_expr
| a_expr '|' a_expr
| a_expr '<' a_expr
| a_expr '>' a_expr
| a_expr '?' a_expr
| a_expr JSON_SOME_EXISTS a_expr
| a_expr JSON_ALL_EXISTS a_expr
| a_expr CONTAINS a_expr
| a_expr CONTAINED_BY a_expr
| a_expr '=' a_expr
| a_expr CONCAT a_expr
| a_expr LSHIFT a_expr
| a_expr RSHIFT a_expr
| a_expr FETCHVAL a_expr
| a_expr FETCHTEXT a_expr
| a_expr FETCHVAL_PATH a_expr
| a_expr FETCHTEXT_PATH a_expr
| a_expr REMOVE_PATH a_expr
| a_expr INET_CONTAINED_BY_OR_EQUALS a_expr
| a_expr AND_AND a_expr
| a_expr INET_CONTAINS_OR_EQUALS a_expr
| a_expr LESS_EQUALS a_expr
| a_expr GREATER_EQUALS a_expr
| a_expr NOT_EQUALS a_expr
| a_expr AND a_expr
| a_expr OR a_expr
| a_expr [ NOT ] LIKE a_expr [ ESCAPE a_expr ]
| a_expr [ NOT ] ILIKE a_expr [ ESCAPE a_expr ]
| a_expr [ NOT ] SIMILAR TO a_expr [ ESCAPE a_expr ]
| a_expr '~' a_expr
| a_expr NOT_REGMATCH a_expr
| a_expr REGIMATCH a_expr
| a_expr NOT_REGIMATCH a_expr
| a_expr IS [ NOT ] NAN
| a_expr IS [ NOT ] NULL
| a_expr ISNULL
| a_expr NOTNULL
| a_expr IS [ NOT ] TRUE
| a_expr IS [ NOT ] FALSE
| a_expr IS [ NOT ] UNKNOWN
| a_expr IS [ NOT ] DISTINCT FROM a_expr
| a_expr IS [ NOT ] OF '(' type_list ')'
| a_expr [ NOT ] BETWEEN opt_asymmetric b_expr AND a_expr
| a_expr [ NOT ] BETWEEN SYMMETRIC b_expr AND a_expr
| a_expr [ NOT ] IN in_expr
| a_expr subquery_op sub_type a_expr


for_schedules_clause:

  FOR SCHEDULES select_stmt
| FOR SCHEDULE a_expr


create_database_stmt:

  CREATE DATABASE [ IF NOT EXISTS ] database_name opt_with opt_template_clause
      opt_encoding_clause opt_lc_collate_clause opt_lc_ctype_clause
      opt_connection_limit opt_primary_region_clause opt_regions_list
      opt_survival_goal_clause


create_index_stmt:

  CREATE opt_unique INDEX opt_concurrently
      ( opt_index_name | IF NOT EXISTS index_name )
      ON table_name opt_index_access_method '(' index_params ')'
      opt_hash_sharded opt_storing opt_interleave opt_partition_by_index
      opt_with_storage_parameter_list opt_where_clause
| CREATE opt_unique INVERTED INDEX opt_concurrently
      ( opt_index_name | IF NOT EXISTS index_name )
      ON table_name '(' index_params ')'
      opt_storing opt_interleave opt_partition_by_index
      opt_with_storage_parameter_list opt_where_clause
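Two illustrative forms, using hypothetical table and column names:

```sql
-- A covering secondary index (opt_storing).
CREATE INDEX ON users (city) STORING (first_name, last_name);

-- A partial index (opt_where_clause).
CREATE INDEX active_idx ON users (id) WHERE active = true;
```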


create_schema_stmt:

  CREATE SCHEMA [ IF NOT EXISTS ] qualifiable_schema_name
| CREATE SCHEMA [ IF NOT EXISTS ] opt_schema_name AUTHORIZATION role_spec


create_table_stmt:

  CREATE opt_persistence_temp_table TABLE [ IF NOT EXISTS ] table_name
      '(' opt_table_elem_list ')' opt_interleave opt_partition_by_table
      opt_table_with opt_create_table_on_commit opt_locality
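A minimal example of this production (hypothetical table definition):

```sql
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    city STRING,
    name STRING,
    INDEX name_idx (name)
);
```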


create_table_as_stmt:

  CREATE opt_persistence_temp_table TABLE [ IF NOT EXISTS ] table_name
      create_as_opt_col_list opt_table_with AS select_stmt
      opt_create_table_on_commit


create_type_stmt:

  CREATE TYPE [ IF NOT EXISTS ] type_name AS ENUM '(' opt_enum_val_list ')'


create_view_stmt:

  CREATE opt_temp [ MATERIALIZED ] VIEW [ IF NOT EXISTS ] view_name
      opt_column_list AS select_stmt
| CREATE OR REPLACE opt_temp VIEW view_name opt_column_list AS select_stmt
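Both view kinds, sketched with hypothetical names:

```sql
-- A standard (virtual) view.
CREATE VIEW user_emails (id, email) AS SELECT id, email FROM users;

-- A materialized view, which stores its results.
CREATE MATERIALIZED VIEW overdue AS SELECT * FROM invoices WHERE due_date < now();
```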


create_sequence_stmt:

  CREATE opt_temp SEQUENCE [ IF NOT EXISTS ] sequence_name opt_sequence_option_list


statistics_name:

  name


opt_stats_columns:

  [ ON name_list ]


create_stats_target:

  table_name


opt_create_stats_options:

  [ as_of_clause ]


opt_description:

  [ string_or_placeholder ]


cron_expr:

  RECURRING sconst_or_placeholder


opt_full_backup_clause:

  [ FULL BACKUP ( sconst_or_placeholder | ALWAYS ) ]


opt_with_schedule_options:

  [ WITH SCHEDULE OPTIONS ( kv_option_list | '(' kv_option_list ')' ) ]


changefeed_targets:

  [ TABLE ] single_table_pattern_list


opt_changefeed_sink:

  [ INTO string_or_placeholder ]


opt_with_replication_options:

  [ WITH replication_options_list | WITH OPTIONS '(' replication_options_list ')' ]


with_clause:

  WITH [ RECURSIVE ] cte_list


table_name_opt_idx:

  opt_only table_name opt_index_flags opt_descendant


sort_clause:

  ORDER BY sortby_list


limit_clause:

  LIMIT ( ALL | a_expr )
| FETCH first_or_next [ select_fetch_first_value ] row_or_rows ONLY
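The two branches are equivalent ways to cap a result set; `FETCH` is the standard-SQL spelling. With a hypothetical `accounts` table:

```sql
SELECT * FROM accounts ORDER BY balance DESC LIMIT 5;
SELECT * FROM accounts ORDER BY balance DESC FETCH FIRST 5 ROWS ONLY;
```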


target_list:

  target_elem ( ',' target_elem )*


drop_database_stmt:

  DROP DATABASE [ IF EXISTS ] database_name opt_drop_behavior


drop_index_stmt:

  DROP INDEX opt_concurrently [ IF EXISTS ] table_index_name_list opt_drop_behavior


drop_table_stmt:

  DROP TABLE [ IF EXISTS ] table_name_list opt_drop_behavior


drop_view_stmt:

  DROP [ MATERIALIZED ] VIEW [ IF EXISTS ] table_name_list opt_drop_behavior


drop_sequence_stmt:

  DROP SEQUENCE [ IF EXISTS ] table_name_list opt_drop_behavior


drop_schema_stmt:

  DROP SCHEMA [ IF EXISTS ] schema_name_list opt_drop_behavior


drop_type_stmt:

  DROP TYPE [ IF EXISTS ] type_name_list opt_drop_behavior
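In each DROP production above, `opt_drop_behavior` is the trailing `CASCADE`/`RESTRICT` choice. With hypothetical object names:

```sql
-- Drop a table and any objects that depend on it.
DROP TABLE IF EXISTS orders CASCADE;

-- Drop an enum type (fails if it is still in use).
DROP TYPE IF EXISTS status;
```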


explain_option_name:

  non_reserved_word


non_reserved_word_or_sconst:

  non_reserved_word
| SCONST


kv_option_list:

  kv_option ( ',' kv_option )*


table_elem:

  column_def
| index_def
| family_def
| table_constraint opt_validate_behavior
| LIKE table_name like_table_option_list


insert_column_item:

  column_name


session_var:

  identifier | ALL | DATABASE | NAMES | SESSION_USER | TIME ZONE


var_name:

  name [ attrs ]


restore_options_list:

  restore_options ( ',' restore_options )*


opt_scrub_options_clause:

  [ WITH OPTIONS scrub_option_list ]


simple_select:

  simple_select_clause
| values_clause
| table_clause
| set_operation


select_clause:

  simple_select
| select_with_parens


for_locking_clause:

  for_locking_items
| FOR READ ONLY


opt_select_limit:

  [ select_limit ]


select_limit:

  limit_clause [ offset_clause ]
| offset_clause [ limit_clause ]


opt_for_locking_clause:

  [ for_locking_clause ]


set_rest_more:

  generic_set


to_or_eq:

  '=' | TO


var_value:

  a_expr
| extra_var_value


with_comment:

  [ WITH COMMENT ]


opt_on_targets_roles:

  [ ON targets_roles ]


for_grantee_clause:

  [ FOR name_list ]


opt_schedule_executor_type:

  [ FOR BACKUP ]


schedule_state:

  RUNNING | PAUSED


opt_cluster:

  [ CLUSTER | LOCAL ]


statements_or_queries:

  STATEMENTS | QUERIES


opt_compact:

  [ COMPACT ]


zone_name:

  unrestricted_name


opt_partition:

  [ partition ]


partition_name:

  unrestricted_name


relation_expr:

  table_name
| table_name '*'
| ONLY table_name
| ONLY '(' table_name ')'


set_clause:

  single_set_clause
| multiple_set_clause


from_list:

  table_ref ( ',' table_ref )*


simple_db_object_name:

  db_object_name_component


complex_db_object_name:

  db_object_name_component '.' unrestricted_name
| db_object_name_component '.' unrestricted_name '.' unrestricted_name


copy_options:

  DESTINATION '=' string_or_placeholder
| DELIMITER string_or_placeholder
| NULL string_or_placeholder
| BINARY
| CSV


db_object_name_component:

  name
| FAMILY
| cockroachdb_extra_reserved_keyword


unrestricted_name:

  identifier
| unreserved_keyword
| col_name_keyword
| type_func_name_keyword
| reserved_keyword


type_name:

  db_object_name


typename:

  simple_typename opt_array_bounds
| simple_typename ARRAY


non_reserved_word:

  identifier
| unreserved_keyword
| col_name_keyword
| type_func_name_keyword


transaction_mode:

  transaction_user_priority
| transaction_read_mode
| as_of_clause
| transaction_deferrable_mode


opt_comma:

  [ ',' ]


alter_onetable_stmt:

  ALTER TABLE [ IF EXISTS ] relation_expr alter_table_cmds


alter_split_stmt:

  ALTER TABLE table_name SPLIT AT select_stmt [ WITH EXPIRATION a_expr ]
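An illustrative manual range split on a hypothetical table:

```sql
-- Force a range split at a key; the split enforcement expires after 24 hours.
ALTER TABLE users SPLIT AT VALUES ('chicago')
    WITH EXPIRATION now() + INTERVAL '24 hours';
```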


alter_unsplit_stmt:

  ALTER TABLE table_name UNSPLIT AT select_stmt
| ALTER TABLE table_name UNSPLIT ALL


alter_scatter_stmt:

  ALTER TABLE table_name SCATTER [ FROM '(' expr_list ')' TO '(' expr_list ')' ]


alter_zone_table_stmt:

  ALTER TABLE table_name set_zone_config


alter_rename_table_stmt:

  ALTER TABLE [ IF EXISTS ] relation_expr RENAME TO table_name


alter_table_set_schema_stmt:

  ALTER TABLE [ IF EXISTS ] relation_expr SET SCHEMA schema_name


alter_table_locality_stmt:

  ALTER TABLE [ IF EXISTS ] relation_expr SET locality


alter_table_owner_stmt:

  ALTER TABLE [ IF EXISTS ] relation_expr OWNER TO role_spec


alter_oneindex_stmt:

  ALTER INDEX [ IF EXISTS ] table_index_name alter_index_cmds


alter_split_index_stmt:

  ALTER INDEX table_index_name SPLIT AT select_stmt [ WITH EXPIRATION a_expr ]


alter_unsplit_index_stmt:

  ALTER INDEX table_index_name UNSPLIT AT select_stmt
| ALTER INDEX table_index_name UNSPLIT ALL


alter_scatter_index_stmt:

  ALTER INDEX table_index_name SCATTER [ FROM '(' expr_list ')' TO '(' expr_list ')' ]


alter_rename_index_stmt:

  ALTER INDEX [ IF EXISTS ] table_index_name RENAME TO index_name


alter_zone_index_stmt:

  ALTER INDEX table_index_name set_zone_config


alter_rename_view_stmt:

  ALTER [ MATERIALIZED ] VIEW [ IF EXISTS ] relation_expr RENAME TO view_name


alter_view_set_schema_stmt:

  ALTER [ MATERIALIZED ] VIEW [ IF EXISTS ] relation_expr SET SCHEMA schema_name


alter_view_owner_stmt:

  ALTER [ MATERIALIZED ] VIEW [ IF EXISTS ] relation_expr OWNER TO role_spec


alter_rename_sequence_stmt:

  ALTER SEQUENCE [ IF EXISTS ] relation_expr RENAME TO sequence_name


alter_sequence_options_stmt:

  ALTER SEQUENCE [ IF EXISTS ] sequence_name sequence_option_list


alter_sequence_set_schema_stmt:

  ALTER SEQUENCE [ IF EXISTS ] relation_expr SET SCHEMA schema_name


alter_sequence_owner_stmt:

  ALTER SEQUENCE [ IF EXISTS ] relation_expr OWNER TO role_spec


alter_rename_database_stmt:

  ALTER DATABASE database_name RENAME TO database_name


alter_zone_database_stmt:

  ALTER DATABASE database_name set_zone_config


alter_database_owner:

  ALTER DATABASE database_name OWNER TO role_spec


alter_database_to_schema_stmt:

  ALTER DATABASE database_name CONVERT TO SCHEMA WITH PARENT database_name


alter_database_add_region_stmt:

  ALTER DATABASE database_name ADD REGION region_name


alter_database_drop_region_stmt:

  ALTER DATABASE database_name DROP REGION region_name


alter_database_survival_goal_stmt:

  ALTER DATABASE database_name survival_goal_clause


alter_database_primary_region_stmt:

  ALTER DATABASE database_name primary_region_clause


alter_zone_range_stmt:

  ALTER RANGE zone_name set_zone_config


alter_zone_partition_stmt:

  ALTER PARTITION partition_name OF TABLE table_name set_zone_config
| ALTER PARTITION partition_name OF INDEX table_index_name set_zone_config
| ALTER PARTITION partition_name OF INDEX table_name '@' '*' set_zone_config
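For example, pinning one partition's replicas to a region (partition, table, and region names are hypothetical):

```sql
ALTER PARTITION north_america OF TABLE users
    CONFIGURE ZONE USING constraints = '[+region=us-east1]';
```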


schema_name:

  name


opt_add_val_placement:

  [ ( BEFORE | AFTER ) SCONST ]


role_options:

  role_option ( role_option )*


backup_options:

  ENCRYPTION_PASSPHRASE '=' string_or_placeholder
| REVISION_HISTORY
| DETACHED
| KMS '=' string_or_placeholder_opt_list
| INCLUDE_DEPRECATED_INTERLEAVES


c_expr:

  d_expr
| d_expr array_subscripts
| case_expr
| EXISTS select_with_parens


cast_target:

  typename


collation_name:

  unrestricted_name


opt_asymmetric:

  [ ASYMMETRIC ]


b_expr:

  c_expr
| '+' b_expr
| '-' b_expr
| '~' b_expr
| b_expr TYPECAST cast_target
| b_expr TYPEANNOTATE typename
| b_expr '+' b_expr
| b_expr '-' b_expr
| b_expr '*' b_expr
| b_expr '/' b_expr
| b_expr FLOORDIV b_expr
| b_expr '%' b_expr
| b_expr '^' b_expr
| b_expr '#' b_expr
| b_expr '&' b_expr
| b_expr '|' b_expr
| b_expr '<' b_expr
| b_expr '>' b_expr
| b_expr '=' b_expr
| b_expr CONCAT b_expr
| b_expr LSHIFT b_expr
| b_expr RSHIFT b_expr
| b_expr LESS_EQUALS b_expr
| b_expr GREATER_EQUALS b_expr
| b_expr NOT_EQUALS b_expr
| b_expr IS [ NOT ] DISTINCT FROM b_expr
| b_expr IS [ NOT ] OF '(' type_list ')'


in_expr:

  select_with_parens
| expr_tuple1_ambiguous


subquery_op:

  math_op
| [ NOT ] LIKE
| [ NOT ] ILIKE


sub_type:

  ANY | SOME | ALL


opt_template_clause:

  [ TEMPLATE opt_equal non_reserved_word_or_sconst ]


opt_encoding_clause:

  [ ENCODING opt_equal non_reserved_word_or_sconst ]


opt_lc_collate_clause:

  [ LC_COLLATE opt_equal non_reserved_word_or_sconst ]


opt_lc_ctype_clause:

  [ LC_CTYPE opt_equal non_reserved_word_or_sconst ]


opt_connection_limit:

  [ CONNECTION LIMIT opt_equal signed_iconst ]


opt_primary_region_clause:

  [ primary_region_clause ]


opt_regions_list:

  [ REGIONS opt_equal region_name_list ]


opt_survival_goal_clause:

  [ survival_goal_clause ]


opt_unique:

  [ UNIQUE ]


opt_index_name:

  opt_name


opt_index_access_method:

  [ USING name ]


index_params:

  index_elem ( ',' index_elem )*


opt_hash_sharded:

  [ USING HASH WITH BUCKET_COUNT '=' a_expr ]


opt_storing:

  [ storing '(' name_list ')' ]


opt_interleave:

  [ INTERLEAVE IN PARENT table_name '(' name_list ')' ]


opt_partition_by_index:

  [ partition_by ]


opt_with_storage_parameter_list:

  [ WITH '(' storage_parameter_list ')' ]


opt_schema_name:

  [ qualifiable_schema_name ]


opt_persistence_temp_table:

  opt_temp
| ( LOCAL | GLOBAL ) ( TEMPORARY | TEMP )
| UNLOGGED


opt_table_elem_list:

  [ table_elem_list ]


opt_partition_by_table:

  [ partition_by_table ]


opt_table_with:

  opt_with_storage_parameter_list


opt_create_table_on_commit:

  [ ON COMMIT PRESERVE ROWS ]


opt_locality:

- - - - - - - - locality - - - - - -

referenced by: -

-


create_as_opt_col_list:

- - - - - - - ( - - - - create_as_table_defs - - - - ) - - - - -

referenced by: -

-


opt_enum_val_list:

- - - - - - - - enum_val_list - - - - - -

referenced by: -

-


opt_temp:

- - - - - - - TEMPORARY - - - TEMP - - - - -

referenced by: -

-


sequence_name:

- - - - - - - - db_object_name - - - - - -

referenced by: -

-


opt_sequence_option_list:

- - - - - - - - sequence_option_list - - - - - -

referenced by: -

-


single_table_pattern_list:
  table_name ( ',' table_name )*

replication_options_list:
  replication_options ( ',' replication_options )*

cte_list:
  common_table_expr ( ',' common_table_expr )*

opt_only:
  [ ONLY ]

opt_index_flags:
  [ '@' ( index_name | '[' ICONST ']' | '{' index_flags_param_list '}' ) ]

opt_descendant:
  [ '*' ]

sortby_list:
  sortby ( ',' sortby )*

first_or_next:
  FIRST | NEXT

select_fetch_first_value:
  c_expr | only_signed_iconst | only_signed_fconst

row_or_rows:
  ROW | ROWS

target_elem:
  a_expr [ AS target_name | identifier ]
  | '*'

table_index_name_list:
  table_index_name ( ',' table_index_name )*

table_name_list:
  table_name ( ',' table_name )*

kv_option:
  ( name | SCONST ) [ '=' string_or_placeholder ]


column_def:
  column_name typename col_qual_list

index_def:
  ( [ UNIQUE ] INDEX opt_index_name '(' index_params ')' opt_hash_sharded opt_storing opt_interleave
    | INVERTED INDEX opt_name '(' index_params ')' )
  opt_partition_by_index opt_with_storage_parameter_list opt_where_clause

family_def:
  FAMILY opt_family_name '(' name_list ')'

table_constraint:
  [ CONSTRAINT constraint_name ] constraint_elem

opt_validate_behavior:
  [ NOT VALID ]

like_table_option_list:
  ( ( INCLUDING | EXCLUDING ) like_table_option )*

column_name:
  name

attrs:
  ( '.' unrestricted_name )+

restore_options:
  ( ENCRYPTION_PASSPHRASE | INTO_DB ) '=' string_or_placeholder
  | KMS '=' string_or_placeholder_opt_list
  | SKIP_MISSING_FOREIGN_KEYS
  | SKIP_MISSING_SEQUENCES
  | SKIP_MISSING_SEQUENCE_OWNERS
  | SKIP_MISSING_VIEWS
  | DETACHED

scrub_option_list:
  scrub_option ( ',' scrub_option )*


simple_select_clause:
  SELECT ( opt_all_clause | DISTINCT | distinct_on_clause )
  target_list from_clause opt_where_clause group_clause having_clause window_clause

values_clause:
  VALUES '(' expr_list ')' ( ',' '(' expr_list ')' )*

table_clause:
  TABLE table_ref

set_operation:
  select_clause ( UNION | INTERSECT | EXCEPT ) all_or_distinct select_clause

for_locking_items:
  for_locking_item+

offset_clause:
  OFFSET ( a_expr | select_fetch_first_value row_or_rows )

generic_set:
  var_name to_or_eq var_list

extra_var_value:
  ON | cockroachdb_extra_reserved_keyword

targets_roles:
  ROLE name_list
  | SCHEMA schema_name_list
  | TYPE type_name_list
  | targets

partition:
  PARTITION partition_name

single_set_clause:
  column_name '=' a_expr

multiple_set_clause:
  '(' insert_column_list ')' '=' in_expr

table_ref:
  ( relation_expr opt_index_flags
    | [ LATERAL ] select_with_parens
    | [ LATERAL ] func_table
    | '[' row_source_extension_stmt ']' ) opt_ordinality opt_alias_clause
  | joined_table
  | '(' joined_table ')' opt_ordinality alias_clause

cockroachdb_extra_reserved_keyword:
  INDEX | NOTHING


type_func_name_keyword:
  type_func_name_no_crdb_extra_keyword | FAMILY

reserved_keyword:
  ALL | ANALYSE | ANALYZE | AND | ANY | ARRAY | AS | ASC | ASYMMETRIC | BOTH
  | CASE | CAST | CHECK | COLLATE | COLUMN | CONCURRENTLY | CONSTRAINT | CREATE
  | CURRENT_CATALOG | CURRENT_DATE | CURRENT_ROLE | CURRENT_SCHEMA | CURRENT_TIME
  | CURRENT_TIMESTAMP | CURRENT_USER | DEFAULT | DEFERRABLE | DESC | DISTINCT | DO
  | ELSE | END | EXCEPT | FALSE | FETCH | FOR | FOREIGN | FROM | GRANT | GROUP
  | HAVING | IN | INITIALLY | INTERSECT | INTO | LATERAL | LEADING | LIMIT
  | LOCALTIME | LOCALTIMESTAMP | NOT | NULL | OFFSET | ON | ONLY | OR | ORDER
  | PLACING | PRIMARY | REFERENCES | RETURNING | SELECT | SESSION_USER | SOME
  | SYMMETRIC | TABLE | THEN | TO | TRAILING | TRUE | UNION | UNIQUE | USER
  | USING | VARIADIC | VISIBLE | WHEN | WHERE | WINDOW | WITH
  | cockroachdb_extra_reserved_keyword

simple_typename:
  general_type_name
  | '@' ICONST
  | complex_type_name
  | const_typename
  | bit_with_length
  | character_with_length
  | interval_type

opt_array_bounds:
  [ '[' ']' ]

transaction_user_priority:
  PRIORITY user_priority

transaction_read_mode:
  READ ( ONLY | WRITE )

transaction_deferrable_mode:
  [ NOT ] DEFERRABLE

alter_table_cmds:
  alter_table_cmd ( ',' alter_table_cmd )*

set_zone_config:
  CONFIGURE ZONE ( USING var_set_list | DISCARD )

locality:
  LOCALITY ( GLOBAL
             | REGIONAL [ BY ( TABLE [ IN ( region_name | PRIMARY REGION ) ]
                               | ROW [ AS name ] )
                          | IN ( region_name | PRIMARY REGION ) ] )

alter_index_cmds:
  alter_index_cmd ( ',' alter_index_cmd )*

sequence_option_list:
  sequence_option_elem+


region_name:
  name

survival_goal_clause:
  SURVIVE opt_equal ( REGION | ZONE ) FAILURE

primary_region_clause:
  PRIMARY REGION opt_equal region_name

role_option:
  CREATEROLE | NOCREATEROLE | LOGIN | NOLOGIN | CONTROLJOB | NOCONTROLJOB
  | CONTROLCHANGEFEED | NOCONTROLCHANGEFEED | CREATEDB | NOCREATEDB
  | CREATELOGIN | NOCREATELOGIN | VIEWACTIVITY | NOVIEWACTIVITY
  | CANCELQUERY | NOCANCELQUERY | MODIFYCLUSTERSETTING | NOMODIFYCLUSTERSETTING
  | password_clause
  | valid_until_clause

d_expr:
  ICONST | FCONST | SCONST | BCONST | BITCONST
  | typed_literal
  | interval_value
  | TRUE | FALSE | NULL
  | column_path_with_star
  | '@' ICONST
  | PLACEHOLDER
  | '(' a_expr ')' [ '.' ( '*' | unrestricted_name | '@' ICONST ) ]
  | func_expr
  | select_with_parens
  | labeled_row
  | ARRAY ( select_with_parens | row | array_expr )

array_subscripts:
  array_subscript+

case_expr:
  CASE case_arg when_clause_list case_default END

expr_tuple1_ambiguous:
  '(' tuple1_ambiguous_values ')'

math_op:
  '+' | '-' | '*' | '/' | FLOORDIV | '%' | '&' | '|' | '^' | '#'
  | '<' | '>' | '=' | LESS_EQUALS | GREATER_EQUALS | NOT_EQUALS

opt_equal:
  [ '=' ]


signed_iconst:
  ICONST | only_signed_iconst

region_name_list:
  name_list

opt_name:
  [ name ]

index_elem:
  ( func_expr_windowless | '(' a_expr ')' | name ) index_elem_options

storing:
  COVERING | STORING | INCLUDE

partition_by:
  PARTITION BY partition_by_inner

storage_parameter_list:
  storage_parameter ( ',' storage_parameter )*

partition_by_table:
  partition_by
  | PARTITION ALL BY partition_by_inner

create_as_table_defs:
  column_name create_as_col_qual_list
  ( ',' ( column_name create_as_col_qual_list | family_def | create_as_constraint_def ) )*

enum_val_list:
  SCONST ( ',' SCONST )*

replication_options:
  CURSOR '=' a_expr
  | DETACHED

common_table_expr:
  table_alias_name opt_column_list AS [ materialize_clause ] '(' preparable_stmt ')'


index_flags_param_list:
  index_flags_param ( ',' index_flags_param )*

sortby:
  a_expr opt_asc_desc opt_nulls_order
  | PRIMARY KEY table_name opt_asc_desc
  | INDEX table_name '@' index_name opt_asc_desc

only_signed_iconst:
  ( '+' | '-' ) ICONST

only_signed_fconst:
  ( '+' | '-' ) FCONST

target_name:
  unrestricted_name

col_qual_list:
  col_qualification*

opt_family_name:
  opt_name

constraint_name:
  name

constraint_elem:
  CHECK '(' a_expr ')'
  | UNIQUE '(' index_params ')' opt_storing opt_interleave opt_partition_by_index opt_where_clause
  | PRIMARY KEY '(' index_params ')' opt_hash_sharded opt_interleave
  | FOREIGN KEY '(' name_list ')' REFERENCES table_name opt_column_list key_match reference_actions

like_table_option:
  CONSTRAINTS | DEFAULTS | GENERATED | INDEXES | ALL

scrub_option:
  ( INDEX | CONSTRAINT ) ( ALL | '(' name_list ')' )
  | PHYSICAL


opt_all_clause:
  [ ALL ]

from_clause:
  [ FROM from_list opt_as_of_clause ]

group_clause:
  [ GROUP BY group_by_list ]

having_clause:
  [ HAVING a_expr ]

window_clause:
  [ WINDOW window_definition_list ]

distinct_on_clause:
  DISTINCT ON '(' expr_list ')'

all_or_distinct:
  [ ALL | DISTINCT ]

for_locking_item:
  for_locking_strength opt_locked_rels opt_nowait_or_skip

var_list:
  var_value ( ',' var_value )*

opt_ordinality:
  [ WITH ORDINALITY ]

opt_alias_clause:
  [ alias_clause ]

joined_table:
  '(' joined_table ')'
  | table_ref CROSS opt_join_hint JOIN table_ref
  | table_ref NATURAL [ join_type ] opt_join_hint JOIN table_ref
  | table_ref [ join_type ] opt_join_hint JOIN table_ref join_qual


alias_clause:
  [ AS ] table_alias_name opt_column_list

func_table:
  func_expr_windowless
  | ROWS FROM '(' rowsfrom_list ')'

row_source_extension_stmt:
  delete_stmt | explain_stmt | insert_stmt | select_stmt | show_stmt | update_stmt | upsert_stmt

type_func_name_no_crdb_extra_keyword:
  AUTHORIZATION | COLLATION | CROSS | FULL | INNER | ILIKE | IS | ISNULL | JOIN
  | LEFT | LIKE | NATURAL | NONE | NOTNULL | OUTER | OVERLAPS | RIGHT | SIMILAR

general_type_name:
  type_function_name_no_crdb_extra

complex_type_name:
  general_type_name '.' unrestricted_name [ '.' unrestricted_name ]

const_typename:
  numeric | bit_without_length | character_without_length | const_datetime | const_geo

bit_with_length:
  ( BIT opt_varying | VARBIT ) '(' ICONST ')'

character_with_length:
  character_base '(' ICONST ')'

interval_type:
  INTERVAL [ interval_qualifier | '(' ICONST ')' ]

user_priority:
  LOW | NORMAL | HIGH

alter_table_cmd:
  RENAME ( opt_column column_name | CONSTRAINT column_name ) TO column_name
  | ADD [ COLUMN ] [ IF NOT EXISTS ] column_def
  | ADD table_constraint opt_validate_behavior
  | ALTER opt_column column_name ( alter_column_default
                                   | alter_column_visible
                                   | DROP NOT NULL
                                   | DROP STORED
                                   | SET NOT NULL
                                   | opt_set_data TYPE typename opt_collate opt_alter_column_using )
  | ALTER PRIMARY KEY USING COLUMNS '(' index_params ')' opt_hash_sharded opt_interleave
  | DROP opt_column [ IF EXISTS ] column_name opt_drop_behavior
  | DROP CONSTRAINT [ IF EXISTS ] constraint_name opt_drop_behavior
  | VALIDATE CONSTRAINT constraint_name
  | EXPERIMENTAL_AUDIT SET audit_mode
  | partition_by_table


var_set_list:
  var_name '=' ( COPY FROM PARENT | var_value )
  ( ',' var_name '=' ( COPY FROM PARENT | var_value ) )*

alter_index_cmd:
  partition_by_index

sequence_option_elem:
  NO ( CYCLE | MINVALUE | MAXVALUE )
  | OWNED BY ( NONE | column_path )
  | ( CACHE | MINVALUE | MAXVALUE | INCREMENT [ BY ] | START [ WITH ] ) signed_iconst64
  | VIRTUAL

password_clause:
  PASSWORD ( string_or_placeholder | NULL )

valid_until_clause:
  VALID UNTIL ( string_or_placeholder | NULL )

typed_literal:
  ( func_name_no_crdb_extra | const_typename ) SCONST

interval_value:
  INTERVAL ( SCONST opt_interval_qualifier | '(' ICONST ')' SCONST )

column_path_with_star:
  column_path
  | db_object_name_component '.' [ unrestricted_name '.' [ unrestricted_name '.' ] ] '*'

func_expr:
  func_application within_group_clause filter_clause over_clause
  | func_expr_common_subexpr

labeled_row:
  row
  | '(' row AS name_list ')'

row:
  ROW '(' opt_expr_list ')'
  | expr_tuple_unambiguous

array_expr:
  '[' ( opt_expr_list | array_expr_list ) ']'

array_subscript:
  '[' a_expr ']'
  | '[' opt_slice_bound ':' opt_slice_bound ']'

case_arg:
  [ a_expr ]


when_clause_list:
  when_clause+

case_default:
  [ ELSE a_expr ]

tuple1_ambiguous_values:
  a_expr ',' [ expr_list ]

func_expr_windowless:
  func_application | func_expr_common_subexpr

index_elem_options:
  opt_class opt_asc_desc opt_nulls_order

partition_by_inner:
  LIST '(' name_list ')' '(' list_partitions ')'
  | RANGE '(' name_list ')' '(' range_partitions ')'
  | NOTHING

storage_parameter:
  ( name | SCONST ) '=' var_value

create_as_col_qual_list:
  create_as_col_qualification*

create_as_constraint_def:
  create_as_constraint_elem

materialize_clause:
  [ NOT ] MATERIALIZED

index_flags_param:
  FORCE_INDEX '=' index_name
  | NO_INDEX_JOIN

opt_asc_desc:
  [ ASC | DESC ]

opt_nulls_order:
  [ NULLS ( FIRST | LAST ) ]


col_qualification:
  [ CONSTRAINT constraint_name ] col_qualification_elem
  | COLLATE collation_name
  | FAMILY family_name
  | CREATE FAMILY [ family_name ]
  | CREATE IF NOT EXISTS FAMILY family_name

key_match:
  [ MATCH ( SIMPLE | FULL ) ]

reference_actions:
  [ reference_on_update [ reference_on_delete ]
    | reference_on_delete [ reference_on_update ] ]

group_by_list:
  group_by_item ( ',' group_by_item )*

window_definition_list:
  window_definition ( ',' window_definition )*

for_locking_strength:
  FOR ( UPDATE | NO KEY UPDATE | SHARE | KEY SHARE )

opt_locked_rels:
  [ OF table_name_list ]

opt_nowait_or_skip:
  [ SKIP LOCKED | NOWAIT ]

opt_join_hint:
  [ HASH | MERGE | LOOKUP | INVERTED ]

join_type:
  ( FULL | LEFT | RIGHT ) join_outer
  | INNER

join_qual:
  USING '(' name_list ')'
  | ON a_expr

rowsfrom_list:
  rowsfrom_item ( ',' rowsfrom_item )*


type_function_name_no_crdb_extra:
  identifier | unreserved_keyword | type_func_name_no_crdb_extra_keyword

numeric:
  INT | INTEGER | SMALLINT | BIGINT | REAL
  | FLOAT opt_float
  | DOUBLE PRECISION
  | ( DECIMAL | DEC | NUMERIC ) opt_numeric_modifiers
  | BOOLEAN

bit_without_length:
  BIT [ VARYING ] | VARBIT

character_without_length:
  character_base

const_datetime:
  DATE
  | ( TIME | TIMESTAMP ) [ '(' ICONST ')' ] opt_timezone
  | ( TIMETZ | TIMESTAMPTZ ) [ '(' ICONST ')' ]

const_geo:
  ( GEOGRAPHY | GEOMETRY ) [ '(' geo_shape_type [ ',' signed_iconst ] ')' ]
  | BOX2D

opt_varying:
  [ VARYING ]

character_base:
  char_aliases [ VARYING ] | VARCHAR | STRING

interval_qualifier:
  YEAR [ TO MONTH ]
  | MONTH
  | DAY [ TO ( HOUR | MINUTE | interval_second ) ]
  | HOUR [ TO ( MINUTE | interval_second ) ]
  | MINUTE [ TO interval_second ]
  | interval_second

opt_column:
  [ COLUMN ]

alter_column_default:
  SET DEFAULT a_expr
  | DROP DEFAULT

alter_column_visible:
  SET [ NOT ] VISIBLE

opt_set_data:
  [ SET DATA ]


opt_collate:
  [ COLLATE collation_name ]

opt_alter_column_using:
  [ USING a_expr ]

audit_mode:
  READ WRITE | OFF

partition_by_index:
  partition_by

signed_iconst64:
  signed_iconst

func_name_no_crdb_extra:
  type_function_name_no_crdb_extra | prefixed_column_path

opt_interval_qualifier:
  [ interval_qualifier ]

func_application:
  func_name '(' [ [ ALL ] expr_list opt_sort_clause | DISTINCT expr_list | '*' ] ')'

within_group_clause:
  [ WITHIN GROUP '(' single_sort_clause ')' ]

filter_clause:
  [ FILTER '(' WHERE a_expr ')' ]

over_clause:
  [ OVER ( window_specification | window_name ) ]

func_expr_common_subexpr:
  COLLATION FOR '(' a_expr ')'
  | IF '(' a_expr ',' a_expr ',' a_expr ')'
  | NULLIF '(' a_expr ',' a_expr ')'
  | IFNULL '(' a_expr ',' a_expr ')'
  | IFERROR '(' a_expr ',' a_expr [ ',' a_expr ] ')'
  | ISERROR '(' a_expr [ ',' a_expr ] ')'
  | CAST '(' a_expr AS cast_target ')'
  | ANNOTATE_TYPE '(' a_expr ',' typename ')'
  | COALESCE '(' expr_list ')'
  | CURRENT_DATE | CURRENT_SCHEMA | CURRENT_CATALOG | CURRENT_TIMESTAMP | CURRENT_TIME
  | LOCALTIMESTAMP | LOCALTIME | CURRENT_USER | CURRENT_ROLE | SESSION_USER | USER
  | special_function


opt_expr_list:
  [ expr_list ]

expr_tuple_unambiguous:
  '(' tuple1_unambiguous_values ')'

array_expr_list:
  array_expr ( ',' array_expr )*

opt_slice_bound:
  [ a_expr ]

when_clause:
  WHEN a_expr THEN a_expr

opt_class:
  [ name ]

list_partitions:
  list_partition ( ',' list_partition )*

range_partitions:
  range_partition ( ',' range_partition )*

create_as_col_qualification:
  create_as_col_qualification_elem
  | FAMILY family_name

create_as_constraint_elem:
  PRIMARY KEY '(' create_as_params ')'

col_qualification_elem:
  NOT ( NULL | VISIBLE )
  | NULL
  | UNIQUE
  | PRIMARY KEY [ USING HASH WITH BUCKET_COUNT '=' a_expr ]
  | CHECK '(' a_expr ')'
  | DEFAULT b_expr
  | REFERENCES table_name opt_name_parens key_match reference_actions
  | generated_as '(' a_expr ')' ( STORED | VIRTUAL )

family_name:
  name


reference_on_update:
  ON UPDATE reference_action

reference_on_delete:
  ON DELETE reference_action

group_by_item:
  a_expr

window_definition:
  window_name AS window_specification

join_outer:
  [ OUTER ]

rowsfrom_item:
  func_expr_windowless

opt_float:
  [ '(' ICONST ')' ]

opt_numeric_modifiers:
  [ '(' ICONST [ ',' ICONST ] ')' ]

opt_timezone:
  [ ( WITH | WITHOUT ) TIME ZONE ]

geo_shape_type:
  POINT | POINTM | POINTZ | POINTZM
  | LINESTRING | LINESTRINGM | LINESTRINGZ | LINESTRINGZM
  | POLYGON | POLYGONM | POLYGONZ | POLYGONZM
  | MULTIPOINT | MULTIPOINTM | MULTIPOINTZ | MULTIPOINTZM
  | MULTILINESTRING | MULTILINESTRINGM | MULTILINESTRINGZ | MULTILINESTRINGZM
  | MULTIPOLYGON | MULTIPOLYGONM | MULTIPOLYGONZ | MULTIPOLYGONZM
  | GEOMETRYCOLLECTION | GEOMETRYCOLLECTIONM | GEOMETRYCOLLECTIONZ | GEOMETRYCOLLECTIONZM
  | GEOMETRY | GEOMETRYM | GEOMETRYZ | GEOMETRYZM

char_aliases:
  CHAR | CHARACTER

interval_second:
  SECOND [ '(' ICONST ')' ]


func_name:
  type_function_name | prefixed_column_path

single_sort_clause:
  ORDER BY sortby [ ',' sortby_list ]

window_specification:
  '(' opt_existing_window_name opt_partition_clause opt_sort_clause opt_frame_clause ')'

window_name:
  name

special_function:
  ( CURRENT_DATE | CURRENT_SCHEMA | CURRENT_USER ) '(' ')'
  | ( CURRENT_TIMESTAMP | CURRENT_TIME | LOCALTIMESTAMP | LOCALTIME ) '(' [ a_expr ] ')'
  | ( EXTRACT | EXTRACT_DURATION ) '(' extract_list ')'
  | OVERLAY '(' overlay_list ')'
  | POSITION '(' position_list ')'
  | SUBSTRING '(' substr_list ')'
  | ( GREATEST | LEAST ) '(' expr_list ')'
  | TRIM '(' [ BOTH | LEADING | TRAILING ] trim_list ')'

tuple1_unambiguous_values:
  a_expr [ ',' expr_list ]

list_partition:
  partition VALUES IN '(' expr_list ')' opt_partition_by

range_partition:
  partition VALUES FROM '(' expr_list ')' TO '(' expr_list ')' opt_partition_by

create_as_col_qualification_elem:
  PRIMARY KEY

create_as_params:
  create_as_param ( ',' create_as_param )*

opt_name_parens:
  [ '(' name ')' ]

generated_as:
  [ GENERATED_ALWAYS ALWAYS ] AS

reference_action:
  NO ACTION | RESTRICT | CASCADE | SET ( NULL | DEFAULT )
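The window rules (`window_specification`, `window_definition`, and the frame clauses) combine in `OVER` clauses and `WINDOW` definitions. A hypothetical example (table and column names invented):

```sql
-- over_clause:           OVER ( window_specification | window_name )
-- window_specification:  '(' opt_existing_window_name opt_partition_clause
--                            opt_sort_clause opt_frame_clause ')'
SELECT
    dept,
    salary,
    rank() OVER w AS dept_rank,
    avg(salary) OVER (
        PARTITION BY dept
        ORDER BY salary
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS running_avg
FROM employees
WINDOW w AS (PARTITION BY dept ORDER BY salary DESC);
```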


type_function_name:

- - - - - - - identifier - - - - unreserved_keyword - - - - - type_func_name_keyword - - - - - -

referenced by: -

-


opt_existing_window_name:

- - - - - - - - name - - - - - -

referenced by: -

-


opt_partition_clause:

- - - - - - - PARTITION - - - BY - - - - expr_list - - - - - -

referenced by: -

-


opt_frame_clause:

  ( ( 'RANGE' | 'ROWS' | 'GROUPS' ) frame_extent opt_frame_exclusion )?


extract_list:

  extract_arg 'FROM' a_expr | expr_list


overlay_list:

  a_expr overlay_placing substr_from substr_for? | expr_list


position_list:

  b_expr 'IN' b_expr


substr_list:

  a_expr ( substr_from substr_for? | substr_for substr_from? ) | opt_expr_list


trim_list:

  a_expr 'FROM' expr_list | 'FROM' expr_list | expr_list


opt_partition_by:

  partition_by?


create_as_param:

  column_name


frame_extent:

  frame_bound | 'BETWEEN' frame_bound 'AND' frame_bound


opt_frame_exclusion:

  ( 'EXCLUDE' ( 'CURRENT' 'ROW' | 'GROUP' | 'TIES' | 'NO' 'OTHERS' ) )?


extract_arg:

  identifier | 'YEAR' | 'MONTH' | 'DAY' | 'HOUR' | 'MINUTE' | 'SECOND' | SCONST
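A quick sketch of the `extract_arg` alternatives in use, with both a keyword field and the string-constant (`SCONST`) form:

```sql
SELECT EXTRACT(YEAR FROM TIMESTAMP '2021-07-12 00:00:00');   -- keyword form
SELECT EXTRACT('month' FROM TIMESTAMP '2021-07-12 00:00:00'); -- SCONST form
```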


overlay_placing:

  'PLACING' a_expr


substr_from:

  'FROM' a_expr


substr_for:

  'FOR' a_expr
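The `substr_from` / `substr_for` / `overlay_placing` / `position_list` productions above correspond to the standard SQL string-function call forms, sketched here on literal inputs:

```sql
SELECT SUBSTRING('cockroach' FROM 5 FOR 5);               -- 'roach'
SELECT POSITION('roach' IN 'cockroach');                  -- 5
SELECT TRIM(BOTH 'x' FROM 'xxcockroachxx');               -- 'cockroach'
SELECT OVERLAY('cockroachXX' PLACING 'DB' FROM 10 FOR 2); -- 'cockroachDB'
```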


frame_bound:

  'UNBOUNDED' ( 'PRECEDING' | 'FOLLOWING' ) | a_expr ( 'PRECEDING' | 'FOLLOWING' ) | 'CURRENT' 'ROW'
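Putting `opt_frame_clause`, `frame_extent`, and `frame_bound` together, a window function with an explicit frame looks like the following sketch (the `metrics` table and its columns are hypothetical):

```sql
-- A 3-row moving average: the frame is ROWS BETWEEN 2 PRECEDING AND CURRENT ROW.
SELECT ts,
       avg(val) OVER (ORDER BY ts ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg
  FROM metrics;
```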


generated by Railroad Diagram Generator

\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/table_clause.html b/src/current/_includes/v21.1/sql/generated/diagrams/table_clause.html deleted file mode 100644 index 97691481d76..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/table_clause.html +++ /dev/null @@ -1,15 +0,0 @@ -
table_clause ::= 'TABLE' table_ref
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/table_constraint.html b/src/current/_includes/v21.1/sql/generated/diagrams/table_constraint.html deleted file mode 100644 index d71511ea614..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/table_constraint.html +++ /dev/null @@ -1,120 +0,0 @@ -
table_constraint ::= ( 'CONSTRAINT' constraint_name )? ( 'CHECK' '(' a_expr ')' | 'UNIQUE' '(' index_params ')' ( ( 'COVERING' | 'STORING' | 'INCLUDE' ) '(' name_list ')' )? opt_interleave opt_partition_by_index opt_where_clause | 'PRIMARY' 'KEY' '(' index_params ')' ( 'USING' 'HASH' 'WITH' 'BUCKET_COUNT' '=' n_buckets )? opt_interleave | 'FOREIGN' 'KEY' '(' name_list ')' 'REFERENCES' table_name opt_column_list key_match reference_actions )
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/table_ref.html b/src/current/_includes/v21.1/sql/generated/diagrams/table_ref.html deleted file mode 100644 index db27a233acc..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/table_ref.html +++ /dev/null @@ -1,72 +0,0 @@ - diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/truncate.html b/src/current/_includes/v21.1/sql/generated/diagrams/truncate.html deleted file mode 100644 index 06cb91a310c..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/truncate.html +++ /dev/null @@ -1,28 +0,0 @@ -
truncate ::= 'TRUNCATE' 'TABLE'? table_name ( ',' table_name )* ( 'CASCADE' | 'RESTRICT' )?
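A short sketch of the `truncate` production in use (the table names are hypothetical):

```sql
-- CASCADE also truncates tables with foreign keys that reference the named tables.
TRUNCATE TABLE orders, order_items CASCADE;
```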
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/unique_column_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/unique_column_level.html deleted file mode 100644 index c7c178e9351..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/unique_column_level.html +++ /dev/null @@ -1,59 +0,0 @@ -
unique_column_level ::= 'CREATE' 'TABLE' table_name '(' column_name column_type 'UNIQUE' column_constraints? ( ',' column_def )* ( ',' table_constraints )? ')'
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/unique_table_level.html b/src/current/_includes/v21.1/sql/generated/diagrams/unique_table_level.html deleted file mode 100644 index e77a972161a..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/unique_table_level.html +++ /dev/null @@ -1,63 +0,0 @@ -
unique_table_level ::= 'CREATE' 'TABLE' table_name '(' column_def ( ',' column_def )* ',' 'CONSTRAINT' name 'UNIQUE' '(' column_name ( ',' column_name )* ')' ( ',' table_constraints )? ')'
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/unsplit_index_at.html b/src/current/_includes/v21.1/sql/generated/diagrams/unsplit_index_at.html deleted file mode 100644 index f2c1e35f6fd..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/unsplit_index_at.html +++ /dev/null @@ -1,30 +0,0 @@ -
unsplit_index_at ::= 'ALTER' 'INDEX' table_name '@' index_name 'UNSPLIT' ( 'AT' select_stmt | 'ALL' )
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/unsplit_table_at.html b/src/current/_includes/v21.1/sql/generated/diagrams/unsplit_table_at.html deleted file mode 100644 index 797c0beda03..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/unsplit_table_at.html +++ /dev/null @@ -1,25 +0,0 @@ -
unsplit_table_at ::= 'ALTER' 'TABLE' table_name 'UNSPLIT' ( 'AT' select_stmt | 'ALL' )
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/update.html b/src/current/_includes/v21.1/sql/generated/diagrams/update.html deleted file mode 100644 index 49b1663edbb..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/update.html +++ /dev/null @@ -1,151 +0,0 @@ -
update ::= ( 'WITH' 'RECURSIVE'? common_table_expr ( ',' common_table_expr )* )? 'UPDATE' 'ONLY'? table_name opt_index_flags '*'? ( 'AS'? table_alias_name )? 'SET' set_clause ( ',' set_clause )* ( 'FROM' table_ref ( ',' table_ref )* )? ( 'WHERE' a_expr )? sort_clause? limit_clause? ( 'RETURNING' ( target_list | 'NOTHING' ) )?
  where set_clause ::= column_name '=' a_expr | '(' column_name ( ',' column_name )* ')' '=' ( '(' select_stmt ')' | '(' a_expr ( ',' a_expr )* ')' )
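A sketch of the `UPDATE` form with a multi-column `SET` tuple and a `RETURNING` clause, reusing the `users` columns that appear later in this diff:

```sql
UPDATE users AS u
   SET (name, address) = ('Robert Murphy', '99176 Anderson Mills')
 WHERE u.id = '00000000-0000-4000-8000-000000000000'
RETURNING u.id;
```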
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/upsert.html b/src/current/_includes/v21.1/sql/generated/diagrams/upsert.html deleted file mode 100644 index 9765d9a3b10..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/upsert.html +++ /dev/null @@ -1,57 +0,0 @@ -
upsert ::= ( 'WITH' 'RECURSIVE'? common_table_expr ( ',' common_table_expr )* )? 'UPSERT' 'INTO' table_name ( 'AS' table_alias_name )? ( '(' column_name ( ',' column_name )* ')' )? ( select_stmt | 'DEFAULT' 'VALUES' ) ( 'RETURNING' ( target_list | 'NOTHING' ) )?
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/validate_constraint.html b/src/current/_includes/v21.1/sql/generated/diagrams/validate_constraint.html deleted file mode 100644 index d470d8dd98f..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/validate_constraint.html +++ /dev/null @@ -1,36 +0,0 @@ -
validate_constraint ::= 'ALTER' 'TABLE' ( 'IF' 'EXISTS' )? table_name 'VALIDATE' 'CONSTRAINT' constraint_name
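A minimal sketch of the `validate_constraint` statement, using the `check_city` constraint that appears later in this diff:

```sql
-- Checks existing rows against a previously unvalidated constraint.
ALTER TABLE IF EXISTS users VALIDATE CONSTRAINT check_city;
```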
\ No newline at end of file diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/values_clause.html b/src/current/_includes/v21.1/sql/generated/diagrams/values_clause.html deleted file mode 100644 index 34f78e982b4..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/values_clause.html +++ /dev/null @@ -1,27 +0,0 @@ -
values_clause ::= 'VALUES' '(' a_expr ( ',' a_expr )* ')' ( ',' '(' a_expr ( ',' a_expr )* ')' )*
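In the `values_clause` production, each parenthesized tuple becomes one row; a sketch with hypothetical column aliases:

```sql
SELECT * FROM (VALUES (1, 'a'), (2, 'b')) AS t (id, letter);
```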
diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/window_definition.html b/src/current/_includes/v21.1/sql/generated/diagrams/window_definition.html deleted file mode 100644 index a5335af0ee9..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/window_definition.html +++ /dev/null @@ -1,28 +0,0 @@ - diff --git a/src/current/_includes/v21.1/sql/generated/diagrams/with_clause.html b/src/current/_includes/v21.1/sql/generated/diagrams/with_clause.html deleted file mode 100644 index a8750514b62..00000000000 --- a/src/current/_includes/v21.1/sql/generated/diagrams/with_clause.html +++ /dev/null @@ -1,80 +0,0 @@ -
with_clause ::= 'WITH' 'RECURSIVE'? cte ( ',' cte )* ( insert_stmt | update_stmt | delete_stmt | upsert_stmt | select_stmt )
  where cte ::= table_alias_name ( '(' name ( ',' name )* ')' )? 'AS' ( 'NOT'? 'MATERIALIZED' )? '(' preparable_stmt ')'
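A sketch of the `with_clause` driving a `SELECT`, assuming the MovR `rides` table referenced elsewhere in this diff:

```sql
WITH recent_rides AS (
    SELECT revenue FROM rides WHERE start_time > now() - INTERVAL '1 day'
)
SELECT sum(revenue) FROM recent_rides;
```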
diff --git a/src/current/_includes/v21.1/sql/global-table-description.md b/src/current/_includes/v21.1/sql/global-table-description.md deleted file mode 100644 index acd3b0be0c1..00000000000 --- a/src/current/_includes/v21.1/sql/global-table-description.md +++ /dev/null @@ -1,7 +0,0 @@ - _Global_ tables are optimized for low-latency reads from every region in the database. The tradeoff is that writes will incur higher latencies from any given region, since writes have to be replicated across every region to make the global low-latency reads possible. - -Use global tables when your application has a "read-mostly" table of reference data that is rarely updated, and needs to be available to all regions. - -For an example of a table that can benefit from the _global_ table locality setting in a multi-region deployment, see the `promo_codes` table from the [MovR application](movr.html). - -For instructions showing how to set a table's locality to `GLOBAL`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#global) diff --git a/src/current/_includes/v21.1/sql/import-into-regional-by-row-table.md b/src/current/_includes/v21.1/sql/import-into-regional-by-row-table.md deleted file mode 100644 index abe9e16abe2..00000000000 --- a/src/current/_includes/v21.1/sql/import-into-regional-by-row-table.md +++ /dev/null @@ -1 +0,0 @@ -`IMPORT` and `IMPORT INTO` cannot directly import data to [`REGIONAL BY ROW`](set-locality.html#regional-by-row) tables that are part of [multi-region databases](multiregion-overview.html). For more information, including a workaround for this limitation, see [Known Limitations](known-limitations.html#import-into-a-regional-by-row-table). 
diff --git a/src/current/_includes/v21.1/sql/indexes-regional-by-row.md deleted index e00964360b9..00000000000 --- a/src/current/_includes/v21.1/sql/indexes-regional-by-row.md +++ /dev/null @@ -1,3 +0,0 @@ -{% include_cached new-in.html version="v21.1" %} In [multi-region deployments](multiregion-overview.html), most users should use [`REGIONAL BY ROW` tables](multiregion-overview.html#regional-by-row-tables) instead of explicit index [partitioning](partitioning.html). When you add an index to a `REGIONAL BY ROW` table, it is automatically partitioned on the [`crdb_region` column](set-locality.html#crdb_region). Explicit index partitioning is not required. - -While CockroachDB processes an [`ADD REGION`](add-region.html) or [`DROP REGION`](drop-region.html) statement on a particular database, creating or modifying an index will throw an error. Similarly, all [`ADD REGION`](add-region.html) and [`DROP REGION`](drop-region.html) statements will be blocked while an index is being modified on a `REGIONAL BY ROW` table within the same database. diff --git a/src/current/_includes/v21.1/sql/insert-vs-upsert.md deleted index cac251a6012..00000000000 --- a/src/current/_includes/v21.1/sql/insert-vs-upsert.md +++ /dev/null @@ -1,9 +0,0 @@ -When inserting or updating all columns of a table, and the table has no secondary indexes, Cockroach Labs recommends using an `UPSERT` statement instead of the equivalent `INSERT ON CONFLICT` statement. Whereas `INSERT ON CONFLICT` always performs a read to determine the necessary writes, the `UPSERT` statement writes without reading, making it faster. This may be particularly useful if you are using a simple SQL table of two columns to [simulate direct KV access](sql-faqs.html#can-i-use-cockroachdb-as-a-key-value-store). In this case, be sure to use the `UPSERT` statement.
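The `UPSERT` vs. `INSERT ON CONFLICT` tradeoff above can be sketched as follows, assuming a hypothetical two-column table used as a simple KV store:

```sql
-- Hypothetical two-column table used as a simple KV store.
CREATE TABLE kv (k INT PRIMARY KEY, v INT);

-- Blind write: no read is needed to resolve the conflict.
UPSERT INTO kv (k, v) VALUES (1, 10);

-- Equivalent result, but reads first to determine the necessary writes.
INSERT INTO kv (k, v) VALUES (1, 10)
    ON CONFLICT (k) DO UPDATE SET v = excluded.v;
```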
- -For tables with secondary indexes, there is no performance difference between `UPSERT` and `INSERT ON CONFLICT`. diff --git a/src/current/_includes/v21.1/sql/inverted-joins.md b/src/current/_includes/v21.1/sql/inverted-joins.md deleted file mode 100644 index 1f0c09ec64b..00000000000 --- a/src/current/_includes/v21.1/sql/inverted-joins.md +++ /dev/null @@ -1,102 +0,0 @@ -To run these examples, initialize a demo cluster with the MovR workload. - -{% include {{ page.version.version }}/demo_movr.md %} - -Create a GIN index on the `vehicles` table's `ext` column. - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE INVERTED INDEX idx_vehicle_details ON vehicles(ext); -~~~ - -Check the statement plan for a `SELECT` statement that uses an inner inverted join. - -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM vehicles@primary AS v2 INNER INVERTED JOIN vehicles@idx_vehicle_details AS v1 ON v1.ext @> v2.ext; -~~~ - -~~~ - info -------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • lookup join - │ table: vehicles@primary - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 3 minutes ago) - table: vehicles@primary - spans: FULL SCAN -(16 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -You can omit the `INNER INVERTED JOIN` statement by putting `v1.ext` on the left side of a `@>` join condition in a `WHERE` clause and using an index hint for the GIN index. 
- -{% include_cached copy-clipboard.html %} -~~~ sql -EXPLAIN SELECT * FROM vehicles@idx_vehicle_details AS v1, vehicles AS v2 WHERE v1.ext @> v2.ext; -~~~ - -~~~ - info --------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • lookup join - │ table: vehicles@primary - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 12 minutes ago) - table: vehicles@primary - spans: FULL SCAN -(16 rows) - -Time: 1ms total (execution 1ms / network 0ms) -~~~ - -Use the `LEFT INVERTED JOIN` hint to perform a left inverted join. - -~~~ sql -EXPLAIN SELECT * FROM vehicles AS v2 LEFT INVERTED JOIN vehicles AS v1 ON v1.ext @> v2.ext; -~~~ - -~~~ - info --------------------------------------------------------------------------------------------- - distribution: full - vectorized: true - - • lookup join (left outer) - │ table: vehicles@primary - │ equality: (city, id) = (city,id) - │ equality cols are key - │ pred: ext @> ext - │ - └── • inverted join (left outer) - │ table: vehicles@idx_vehicle_details - │ - └── • scan - estimated row count: 3,750 (100% of the table; stats collected 16 minutes ago) - table: vehicles@primary - spans: FULL SCAN -(16 rows) - -Time: 2ms total (execution 2ms / network 0ms) -~~~ diff --git a/src/current/_includes/v21.1/sql/limit-row-size.md b/src/current/_includes/v21.1/sql/limit-row-size.md deleted file mode 100644 index 7a27b3bc979..00000000000 --- a/src/current/_includes/v21.1/sql/limit-row-size.md +++ /dev/null @@ -1,22 +0,0 @@ -## Limit the size of rows - -To help you avoid failures arising from misbehaving applications that bloat the size of rows, you can specify the behavior when a row or individual [column family](column-families.html) larger than a specified size is written to the database. 
Use the [cluster settings](cluster-settings.html) `sql.guardrails.max_row_size_log` to discover large rows and `sql.guardrails.max_row_size_err` to reject large rows. - -When you write a row that exceeds `sql.guardrails.max_row_size_log`: - -- `INSERT`, `UPSERT`, `UPDATE`, `CREATE TABLE AS`, `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, or `RESTORE` statements will log a `LargeRow` to the [`SQL_PERF`](logging.html#sql_perf) channel. -- `SELECT`, `DELETE`, `TRUNCATE`, and `DROP` are not affected. - -When you write a row that exceeds `sql.guardrails.max_row_size_err`: - -- `INSERT`, `UPSERT`, and `UPDATE` statements will fail with a code `54000 (program_limit_exceeded)` error. - -- `CREATE TABLE AS`, `CREATE INDEX`, `ALTER TABLE`, `ALTER INDEX`, `IMPORT`, and `RESTORE` statements will log a `LargeRowInternal` event to the [`SQL_INTERNAL_PERF`](logging.html#sql_internal_perf) channel. - -- `SELECT`, `DELETE`, `TRUNCATE`, and `DROP` are not affected. - -You **cannot** update existing rows that violate the limit unless the update shrinks the size of the -row below the limit. You **can** select, delete, alter, back up, and restore such rows. We -recommend using the accompanying setting `sql.guardrails.max_row_size_log` in conjunction with -`SELECT pg_column_size()` queries to detect and fix any existing large rows before lowering -`sql.guardrails.max_row_size_err`. diff --git a/src/current/_includes/v21.1/sql/locality-optimized-search.md b/src/current/_includes/v21.1/sql/locality-optimized-search.md deleted file mode 100644 index 63f8660f95a..00000000000 --- a/src/current/_includes/v21.1/sql/locality-optimized-search.md +++ /dev/null @@ -1 +0,0 @@ -Note that there is a performance benefit for queries that select a single row (e.g., `SELECT * FROM users WHERE email = 'anemailaddress@gmail.com'`). If `'anemailaddress@gmail.com'` is found in the local region, there is no need to search remote regions. 
This feature, whereby the SQL engine will avoid sending requests to nodes in other regions when it can read a value from a unique column that is stored locally, is known as _locality optimized search_. diff --git a/src/current/_includes/v21.1/sql/movr-statements-geo-partitioned-replicas.md b/src/current/_includes/v21.1/sql/movr-statements-geo-partitioned-replicas.md deleted file mode 100644 index b15c5c92aa7..00000000000 --- a/src/current/_includes/v21.1/sql/movr-statements-geo-partitioned-replicas.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) with the `--geo-partitioned-replicas` flag. This command opens an interactive SQL shell to a temporary, 9-node in-memory cluster with the `movr` database. - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --geo-partitioned-replicas -~~~ diff --git a/src/current/_includes/v21.1/sql/movr-statements-nodes.md b/src/current/_includes/v21.1/sql/movr-statements-nodes.md deleted file mode 100644 index 4b9eddf612b..00000000000 --- a/src/current/_includes/v21.1/sql/movr-statements-nodes.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) with the [`--nodes`](cockroach-demo.html#flags) and [`--demo-locality`](cockroach-demo.html#flags) flags. 
This command opens an interactive SQL shell to a temporary, multi-node in-memory cluster with the `movr` database preloaded and set as the [current database](sql-name-resolution.html#current-database). - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --nodes=6 --demo-locality=region=us-east,zone=us-east-a:region=us-east,zone=us-east-b:region=us-central,zone=us-central-a:region=us-central,zone=us-central-b:region=us-west,zone=us-west-a:region=us-west,zone=us-west-b -~~~ diff --git a/src/current/_includes/v21.1/sql/movr-statements.md b/src/current/_includes/v21.1/sql/movr-statements.md deleted file mode 100644 index f696756213a..00000000000 --- a/src/current/_includes/v21.1/sql/movr-statements.md +++ /dev/null @@ -1,10 +0,0 @@ -### Setup - -The following examples use MovR, a fictional vehicle-sharing application, to demonstrate CockroachDB SQL statements. For more information about the MovR example application and dataset, see [MovR: A Global Vehicle-sharing App](movr.html). - -To follow along, run [`cockroach demo`](cockroach-demo.html) to start a temporary, in-memory cluster with the `movr` dataset preloaded: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo -~~~ diff --git a/src/current/_includes/v21.1/sql/multiregion-example-setup.md b/src/current/_includes/v21.1/sql/multiregion-example-setup.md deleted file mode 100644 index e03db9640cd..00000000000 --- a/src/current/_includes/v21.1/sql/multiregion-example-setup.md +++ /dev/null @@ -1,26 +0,0 @@ -### Setup - -Only [cluster regions](multiregion-overview.html#cluster-regions) specified [at node startup](cockroach-start.html#locality) can be used as [database regions](multiregion-overview.html#database-regions). 
- -To follow along with the examples below, start a [demo cluster](cockroach-demo.html) with the [`--global` flag](cockroach-demo.html#general) to simulate a multi-region cluster: - -{% include_cached copy-clipboard.html %} -~~~ shell -$ cockroach demo --global --nodes 9 --no-example-database -~~~ - -To see the regions available to the databases in the cluster, use a `SHOW REGIONS FROM CLUSTER` statement: - -{% include_cached copy-clipboard.html %} -~~~ sql -SHOW REGIONS FROM CLUSTER; -~~~ - -~~~ - region | zones ----------------+---------- - europe-west1 | {b,c,d} - us-east1 | {b,c,d} - us-west1 | {a,b,c} -(3 rows) -~~~ diff --git a/src/current/_includes/v21.1/sql/physical-plan-url.md b/src/current/_includes/v21.1/sql/physical-plan-url.md deleted file mode 100644 index 0e9109a8586..00000000000 --- a/src/current/_includes/v21.1/sql/physical-plan-url.md +++ /dev/null @@ -1 +0,0 @@ -The generated physical statement plan is encoded into a byte string after the [fragment identifier (`#`)](https://en.wikipedia.org/wiki/Fragment_identifier) in the generated URL. The fragment is not sent to the web server; instead, the browser waits for the web server to return a `decode.html` resource, and then JavaScript on the web page decodes the fragment into a physical statement plan diagram. The statement plan is, therefore, not logged by a server external to the CockroachDB cluster and not exposed to the public internet. 
diff --git a/src/current/_includes/v21.1/sql/preloaded-databases.md b/src/current/_includes/v21.1/sql/preloaded-databases.md deleted file mode 100644 index 3f1478c9b38..00000000000 --- a/src/current/_includes/v21.1/sql/preloaded-databases.md +++ /dev/null @@ -1,13 +0,0 @@ -New clusters and existing clusters [upgraded](upgrade-cockroach-version.html) to {{ page.version.version }} or later will include auto-generated databases, with the following purposes: - -- The empty `defaultdb` database is used if a client does not specify a database in the [connection parameters](connection-parameters.html). -- The `movr` database contains data about users, vehicles, and rides for the vehicle-sharing app [MovR](movr.html). -- The empty `postgres` database is provided for compatibility with PostgreSQL client applications that require it. -- The `startrek` database contains quotes from episodes. -- The `system` database contains CockroachDB metadata and is read-only. - -All databases except for the `system` database can be [deleted](drop-database.html) if they are not needed. - -{{site.data.alerts.callout_danger}} -Do not query the `system` database directly. Instead, use objects within the [system catalogs](system-catalogs.html). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v21.1/sql/privileges.md b/src/current/_includes/v21.1/sql/privileges.md deleted file mode 100644 index abd929a3487..00000000000 --- a/src/current/_includes/v21.1/sql/privileges.md +++ /dev/null @@ -1,13 +0,0 @@ -Privilege | Levels -----------|------------ -`ALL` | Database, Schema, Table, Type -`CREATE` | Database, Schema, Table -`DROP` | Database, Table -`GRANT` | Database, Schema, Table, Type -`CONNECT` | Database -`SELECT` | Table, Database -`INSERT` | Table -`DELETE` | Table -`UPDATE` | Table -`USAGE` | Schema, Type -`ZONECONFIG` | Database, Table diff --git a/src/current/_includes/v21.1/sql/querying-partitions.md b/src/current/_includes/v21.1/sql/querying-partitions.md deleted file mode 100644 index 87663bb388b..00000000000 --- a/src/current/_includes/v21.1/sql/querying-partitions.md +++ /dev/null @@ -1,163 +0,0 @@ -## Querying partitions - -Similar to [indexes](indexes.html), partitions can improve query performance by limiting the numbers of rows that a query must scan. In the case of [geo-partitioned data](regional-tables.html), partitioning can limit a query scan to data in a specific region. - -### Filtering on an indexed column - -If you filter the query of a partitioned table on a [column in the index directly following the partition prefix](indexes.html), the [cost-based optimizer](cost-based-optimizer.html) creates a query plan that scans each partition in parallel, rather than performing a costly sequential scan of the entire table. - -For example, suppose that the tables in the [`movr`](movr.html) database are geo-partitioned by region, and you want to query the `users` table for information about a specific user. 
- -Here is the `CREATE TABLE` statement for the `users` table: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SHOW CREATE TABLE users; -~~~ - -~~~ - table_name | create_statement -+------------+-------------------------------------------------------------------------------------+ - users | CREATE TABLE users ( - | id UUID NOT NULL, - | city VARCHAR NOT NULL, - | name VARCHAR NULL, - | address VARCHAR NULL, - | credit_card VARCHAR NULL, - | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), - | FAMILY "primary" (id, city, name, address, credit_card) - | ) PARTITION BY LIST (city) ( - | PARTITION us_west VALUES IN (('seattle'), ('san francisco'), ('los angeles')), - | PARTITION us_east VALUES IN (('new york'), ('boston'), ('washington dc')), - | PARTITION europe_west VALUES IN (('amsterdam'), ('paris'), ('rome')) - | ); - | ALTER PARTITION europe_west OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=europe-west1]'; - | ALTER PARTITION us_east OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=us-east1]'; - | ALTER PARTITION us_west OF INDEX movr.public.users@primary CONFIGURE ZONE USING - | constraints = '[+region=us-west1]' -(1 row) -~~~ - -If you know the user's id, you can filter on the `id` column: - -{% include_cached copy-clipboard.html %} -~~~ sql -> SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000'; -~~~ - -~~~ - id | city | name | address | credit_card -+--------------------------------------+----------+---------------+----------------------+-------------+ - 00000000-0000-4000-8000-000000000000 | new york | Robert Murphy | 99176 Anderson Mills | 8885705228 -(1 row) -~~~ - -An [`EXPLAIN`](explain.html) statement shows more detail about the cost-based optimizer's plan: - -{% include_cached copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000'; -~~~ - -~~~ - tree | field | description 
-+------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-       | distributed | true
-       | vectorized  | false
-  scan |             |
-       | table       | users@primary
-       | spans       | -/"amsterdam" /"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"amsterdam\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston" /"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"boston\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles" /"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"los angeles\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york" /"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"new york\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris" /"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"paris\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome" /"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"rome\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco" /"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"san francisco\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle" /"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"seattle\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc" /"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"washington dc\x00"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-
-       | filter      | id = '00000000-0000-4000-8000-000000000000'
-(6 rows)
-~~~
-
-Because the `id` column is in the primary index, directly after the partition prefix (`city`), the optimal query is constrained by the partitioned values. This means the query scans each partition in parallel for the unique `id` value.
-
-If you know the set of all possible partitioned values, adding a check constraint to the table's create statement can also improve performance.
-For example:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE users ADD CONSTRAINT check_city CHECK (city IN ('amsterdam','boston','los angeles','new york','paris','rome','san francisco','seattle','washington dc'));
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT * FROM users WHERE id='00000000-0000-4000-8000-000000000000';
-~~~
-
-~~~
-  tree | field       | description
-+------+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-       | distributed | false
-       | vectorized  | false
-  scan |             |
-       | table       | users@primary
-       | spans       | /"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"amsterdam"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"boston"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"los angeles"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"new york"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"paris"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"rome"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"san francisco"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"seattle"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/# /"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"-/"washington dc"/"\x00\x00\x00\x00\x00\x00@\x00\x80\x00\x00\x00\x00\x00\x00\x00"/#
-       | parallel    |
-(6 rows)
-~~~
-
-
-To see the performance improvement over a query that performs a full table scan, compare these queries to a query with a filter on a column that is not in the index.
-
-### Filtering on a non-indexed column
-
-Suppose that you want to query the `users` table for information about a specific user, but you only know the user's name.
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users WHERE name='Robert Murphy';
-~~~
-
-~~~
-                   id                  |   city   |     name      |       address        | credit_card
-+--------------------------------------+----------+---------------+----------------------+-------------+
-  00000000-0000-4000-8000-000000000000 | new york | Robert Murphy | 99176 Anderson Mills | 8885705228
-(1 row)
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT * FROM users WHERE name='Robert Murphy';
-~~~
-
-~~~
-  tree | field       | description
-+------+-------------+------------------------+
-       | distributed | true
-       | vectorized  | false
-  scan |             |
-       | table       | users@primary
-       | spans       | ALL
-       | filter      | name = 'Robert Murphy'
-(6 rows)
-~~~
-
-The query returns the same result, but because `name` is not an indexed column, the query performs a full table scan that spans across all partition values.
-
-### Filtering on a partitioned column
-
-If you know which partition contains the data that you are querying, using a filter (e.g., a [`WHERE` clause](select-clause.html#filter-rows)) on the column that is used for the partition can further improve performance by limiting the scan to the specific partition(s) that contain the data that you are querying.
-
-Now suppose that you know the user's name and location. You can query the table with a filter on the user's name and city:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT * FROM users WHERE name='Robert Murphy' AND city='new york';
-~~~
-
-~~~
-  tree | field       | description
-+------+-------------+-----------------------------------+
-       | distributed | true
-       | vectorized  | false
-  scan |             |
-       | table       | users@primary
-       | spans       | /"new york"-/"new york"/PrefixEnd
-       | filter      | name = 'Robert Murphy'
-(6 rows)
-~~~
-
-The table returns the same results as before, but at a much lower cost, as the query scan now spans just the `new york` partition value.
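Combining the two filters shows the same idea end-to-end. The following is a hypothetical sketch against the same MovR-style `users` table (the exact spans shown by `EXPLAIN` depend on your data and partitioning): filtering on both the partition column (`city`) and the unique `id` lets the optimizer build the full primary index key, constraining the scan to a single span inside a single partition.

~~~ sql
-- Hypothetical example: filter on the partition prefix (city) AND the unique id.
-- The scan is limited to one span in the "new york" partition, rather than
-- one span per partition value as in the parallel scan shown above.
> EXPLAIN SELECT * FROM users
  WHERE city = 'new york'
    AND id = '00000000-0000-4000-8000-000000000000';
~~~

Because the partition column appears explicitly in the filter, this query needs neither the check constraint nor a parallel per-partition scan.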
diff --git a/src/current/_includes/v21.1/sql/regional-by-row-table-description.md b/src/current/_includes/v21.1/sql/regional-by-row-table-description.md
deleted file mode 100644
index 65c5a52e3da..00000000000
--- a/src/current/_includes/v21.1/sql/regional-by-row-table-description.md
+++ /dev/null
@@ -1,7 +0,0 @@
-In _regional by row_ tables, individual rows are optimized for access from different regions. This setting automatically divides a table and all of [its indexes](multiregion-overview.html#indexes-on-regional-by-row-tables) into [partitions](partitioning.html), with each partition optimized for access from a different region. Like [regional tables](multiregion-overview.html#regional-tables), _regional by row_ tables are optimized for access from a single region. However, that region is specified at the row level instead of applying to the whole table.
-
-Use regional by row tables when your application requires low-latency reads and writes at a row level where individual rows are primarily accessed from a single region. For example, a users table in a global application may need to keep some users' data in specific regions for better performance.
-
-For an example of a table that can benefit from the _regional by row_ setting in a multi-region deployment, see the `users` table from the [MovR application](movr.html).
-
-For instructions showing how to set a table's locality to `REGIONAL BY ROW`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#regional-by-row)
diff --git a/src/current/_includes/v21.1/sql/regional-table-description.md b/src/current/_includes/v21.1/sql/regional-table-description.md
deleted file mode 100644
index b3c508f7ea1..00000000000
--- a/src/current/_includes/v21.1/sql/regional-table-description.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Regional tables work well when your application requires low-latency reads and writes for an entire table from a single region.
-
-For _regional_ tables, access to the table will be fast in the table's "home region" and slower in other regions. In other words, CockroachDB optimizes access to data in regional tables from a single region. By default, a regional table's home region is the [database's primary region](multiregion-overview.html#database-regions), but that can be changed to use any region in the database.
-
-For instructions showing how to set a table's locality to `REGIONAL BY TABLE`, see [`ALTER TABLE ... SET LOCALITY`](set-locality.html#regional-by-table)
-
-{{site.data.alerts.callout_info}}
-By default, all tables in a multi-region database are _regional_ tables that use the database's primary region. Unless you know your application needs different performance characteristics than regional tables provide, there is no need to change this setting.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v21.1/sql/replication-zone-patterns-to-multiregion-sql-mapping.md b/src/current/_includes/v21.1/sql/replication-zone-patterns-to-multiregion-sql-mapping.md
deleted file mode 100644
index 4aa36cf2dec..00000000000
--- a/src/current/_includes/v21.1/sql/replication-zone-patterns-to-multiregion-sql-mapping.md
+++ /dev/null
@@ -1,5 +0,0 @@
-| Replication Zone Pattern | Multi-Region SQL |
-|--------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Duplicate indexes](../v20.2/topology-duplicate-indexes.html) | [`GLOBAL` tables](global-tables.html) |
-| [Geo-partitioned replicas](../v20.2/topology-geo-partitioned-replicas.html) | [`REGIONAL BY ROW` tables](regional-tables.html#regional-by-row-tables) with [`ZONE` survival goals](multiregion-overview.html#surviving-zone-failures) |
-| [Geo-partitioned leaseholders](../v20.2/topology-geo-partitioned-leaseholders.html) | [`REGIONAL BY ROW` tables](regional-tables.html#regional-by-row-tables) with [`REGION` survival goals](multiregion-overview.html#surviving-region-failures) |
diff --git a/src/current/_includes/v21.1/sql/retry-savepoints.md b/src/current/_includes/v21.1/sql/retry-savepoints.md
deleted file mode 100644
index 6b9e78209f0..00000000000
--- a/src/current/_includes/v21.1/sql/retry-savepoints.md
+++ /dev/null
@@ -1 +0,0 @@
-A savepoint defined with the name `cockroach_restart` is a "retry savepoint" and is used to implement [advanced client-side transaction retries](advanced-client-side-transaction-retries.html). For more information, see [Retry savepoints](advanced-client-side-transaction-retries.html#retry-savepoints).
diff --git a/src/current/_includes/v21.1/sql/savepoint-ddl-rollbacks.md b/src/current/_includes/v21.1/sql/savepoint-ddl-rollbacks.md
deleted file mode 100644
index 57da82ae775..00000000000
--- a/src/current/_includes/v21.1/sql/savepoint-ddl-rollbacks.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-Rollbacks to savepoints over [DDL](https://en.wikipedia.org/wiki/Data_definition_language) statements are only supported if you're rolling back to a savepoint created at the beginning of the transaction.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v21.1/sql/savepoints-and-high-priority-transactions.md b/src/current/_includes/v21.1/sql/savepoints-and-high-priority-transactions.md
deleted file mode 100644
index 4b77f2dd561..00000000000
--- a/src/current/_includes/v21.1/sql/savepoints-and-high-priority-transactions.md
+++ /dev/null
@@ -1 +0,0 @@
-[`ROLLBACK TO SAVEPOINT`](rollback-transaction.html#rollback-a-nested-transaction) (for either regular savepoints or "restart savepoints" defined with `cockroach_restart`) causes a "feature not supported" error after a DDL statement in a [`HIGH PRIORITY` transaction](transactions.html#transaction-priorities), in order to avoid a transaction deadlock. For more information, see GitHub issue [#46414](https://www.github.com/cockroachdb/cockroach/issues/46414).
diff --git a/src/current/_includes/v21.1/sql/savepoints-and-row-locks.md b/src/current/_includes/v21.1/sql/savepoints-and-row-locks.md
deleted file mode 100644
index 735c4cebbbb..00000000000
--- a/src/current/_includes/v21.1/sql/savepoints-and-row-locks.md
+++ /dev/null
@@ -1,12 +0,0 @@
-CockroachDB supports exclusive row locks.
-
-- In PostgreSQL, row locks are released/cancelled upon [`ROLLBACK TO SAVEPOINT`][rts].
-- In CockroachDB, row locks are preserved upon [`ROLLBACK TO SAVEPOINT`][rts].
-
-This is an architectural difference in v20.2 that may or may not be lifted in a later CockroachDB version.
-
-The code of client applications that rely on row locks must be reviewed and possibly modified to account for this difference. In particular, if an application is relying on [`ROLLBACK TO SAVEPOINT`][rts] to release row locks and allow a concurrent transaction touching the same rows to proceed, this behavior will not work with CockroachDB.
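A minimal sketch of the row-lock difference described above, using a hypothetical `kv` table with primary key `k` (`SELECT ... FOR UPDATE` acquires the exclusive lock; in PostgreSQL the rollback would release it, while CockroachDB holds it until the transaction ends):

~~~ sql
> BEGIN;
> SAVEPOINT before_lock;
> SELECT * FROM kv WHERE k = 1 FOR UPDATE; -- acquires an exclusive row lock
> ROLLBACK TO SAVEPOINT before_lock;       -- CockroachDB keeps the row lock here
> COMMIT;                                  -- the lock is released only when the transaction finishes
~~~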
-
-
-
-[rts]: rollback-transaction.html
diff --git a/src/current/_includes/v21.1/sql/schema-changes.md b/src/current/_includes/v21.1/sql/schema-changes.md
deleted file mode 100644
index 04c49c2fbd2..00000000000
--- a/src/current/_includes/v21.1/sql/schema-changes.md
+++ /dev/null
@@ -1 +0,0 @@
-- Schema changes through [`ALTER TABLE`](alter-table.html), [`DROP DATABASE`](drop-database.html), [`DROP TABLE`](drop-table.html), and [`TRUNCATE`](truncate.html)
\ No newline at end of file
diff --git a/src/current/_includes/v21.1/sql/schema-terms.md b/src/current/_includes/v21.1/sql/schema-terms.md
deleted file mode 100644
index d66ebd4058d..00000000000
--- a/src/current/_includes/v21.1/sql/schema-terms.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-To avoid confusion with the general term "[schema](https://en.wiktionary.org/wiki/schema)", in this guide we refer to the logical object as a *user-defined schema*, and to the relationship structure of logical objects in a cluster as a *database schema*.
-{{site.data.alerts.end}}
\ No newline at end of file
diff --git a/src/current/_includes/v21.1/sql/select-for-update-overview.md b/src/current/_includes/v21.1/sql/select-for-update-overview.md
deleted file mode 100644
index 8938ad23df0..00000000000
--- a/src/current/_includes/v21.1/sql/select-for-update-overview.md
+++ /dev/null
@@ -1,18 +0,0 @@
-The `SELECT FOR UPDATE` statement is used to order transactions by controlling concurrent access to one or more rows of a table.
-
-It works by locking the rows returned by a [selection query][selection], such that other transactions trying to access those rows are forced to wait for the transaction that locked the rows to finish. These other transactions are effectively put into a queue based on when they tried to read the value of the locked rows.
-
-Because this queueing happens during the read operation, the [thrashing](https://en.wikipedia.org/wiki/Thrashing_(computer_science)) that would otherwise occur if multiple concurrently executing transactions attempt to `SELECT` the same data and then `UPDATE` the results of that selection is prevented. By preventing thrashing, CockroachDB also prevents [transaction retries][retries] that would otherwise occur.
-
-As a result, using `SELECT FOR UPDATE` leads to increased throughput and decreased tail latency for contended operations.
-
-CockroachDB currently does not support the `FOR SHARE`/`FOR KEY SHARE` [locking strengths](select-for-update.html#locking-strengths), or the `SKIP LOCKED` [wait policy](select-for-update.html#wait-policies).
-
-{{site.data.alerts.callout_info}}
-By default, CockroachDB uses the `SELECT FOR UPDATE` locking mechanism during the initial row scan performed in [`UPDATE`](update.html) and [`UPSERT`](upsert.html) statement execution. To turn off implicit `SELECT FOR UPDATE` locking for `UPDATE` and `UPSERT` statements, set `enable_implicit_select_for_update` to `false`.
-{{site.data.alerts.end}}
-
-
-
-[retries]: transactions.html#transaction-retries
-[selection]: selection-queries.html
diff --git a/src/current/_includes/v21.1/sql/set-transaction-as-of-system-time-example.md b/src/current/_includes/v21.1/sql/set-transaction-as-of-system-time-example.md
deleted file mode 100644
index 8e758f1c303..00000000000
--- a/src/current/_includes/v21.1/sql/set-transaction-as-of-system-time-example.md
+++ /dev/null
@@ -1,24 +0,0 @@
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SET TRANSACTION AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00';
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM orders;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM products;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-> COMMIT;
-~~~
diff --git a/src/current/_includes/v21.1/sql/shell-commands.md b/src/current/_includes/v21.1/sql/shell-commands.md
deleted file mode 100644
index 158758787a0..00000000000
--- a/src/current/_includes/v21.1/sql/shell-commands.md
+++ /dev/null
@@ -1,22 +0,0 @@
-The following commands can be used within the interactive SQL shell:
-
-Command | Usage
---------|------------
-`\?`<br>`help` | View this help within the shell.
-`\q`<br>`quit`<br>`exit`<br>`ctrl-d` | Exit the shell.<br>When no text follows the prompt, `ctrl-c` exits the shell as well; otherwise, `ctrl-c` clears the line.
-`\!` | Run an external command and print its results to `stdout`. [See an example](cockroach-sql.html#run-external-commands-from-the-sql-shell).
-\| | Run the output of an external command as SQL statements. [See an example](cockroach-sql.html#run-external-commands-from-the-sql-shell).
-`\set