
Releases: takapi327/ldbc

v0.5.0

28 Dec 02:08
b6e70bb


ldbc v0.5.0 is released. 🎉
This release brings major enhancements to the ecosystem with ZIO support, advanced authentication capabilities, and significant security and performance improvements.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

🎉 Release Highlights

  • ZIO Ecosystem Integration: Complete ZIO support through the new ldbc-zio-interop module
  • Enhanced Authentication: Pure Scala 3 authentication plugins, including AWS Aurora IAM support
  • Security Improvements: Enhanced SQL parameter escaping and SSRF attack protection
  • API Enhancements: File-based query execution with the new updateRaws method
  • Performance Optimizations: Maximum packet size configuration and improved connection pool concurrency

What's Changed

🆕 New Modules

  • ldbc-zio-interop: ZIO ecosystem integration for seamless ZIO application development
  • ldbc-authentication-plugin: Pure Scala 3 MySQL authentication plugins (Clear Password)
  • ldbc-aws-authentication-plugin: AWS Aurora IAM authentication support

⚠️ Deprecated Modules

  • ldbc-hikari: Officially deprecated in favor of built-in connection pooling

🔧 Breaking Changes

  • ldbc-hikari deprecation: Package is deprecated and will be removed in future versions
  • Migration required: Applications using ldbc-hikari should migrate to built-in connection pooling (see the migration sketch after this list)
  • Binary compatibility: Not binary compatible with prior versions, but source compatibility maintained for most user code
  • API changes: Some minor API adjustments for enhanced security and performance
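For reference, a minimal migration sketch, assuming the MySQLConfig / MySQLDataSource.pooling API described in the v0.4.0 notes below; host, credentials, and pool sizes are placeholders:

import cats.effect.IO
import ldbc.connector.*
import ldbc.dsl.*

// Replace the HikariCP-backed DataSource with the built-in pool.
val pooled = MySQLDataSource.pooling[IO](
  MySQLConfig.default
    .setHost("localhost")
    .setPort(3306)
    .setUser("user")
    .setPassword("password")
    .setDatabase("mydb")
    .setMinConnections(5)   // tune to match your previous Hikari settings
    .setMaxConnections(20)
)

pooled.use { pool =>
  val connector = Connector.fromDataSource(pool)
  // Execute DBIOs with the connector
  IO.unit
}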

✨ New Features

ZIO Ecosystem Support

Complete integration with the ZIO ecosystem for functional programming enthusiasts:

libraryDependencies += "io.github.takapi327" %% "ldbc-zio-interop" % "0.5.0"

Example Usage:

import zio.*
import ldbc.zio.interop.*
import ldbc.connector.*
import ldbc.dsl.*

object Main extends ZIOAppDefault:
  private val datasource = MySQLDataSource
    .build[Task]("127.0.0.1", 3306, "ldbc")
    .setPassword("password")
    .setDatabase("world")

  override def run = 
    for
      connection <- datasource.getConnection
      connector = Connector.fromConnection(connection)
      result <- sql"SELECT 1".query[Int].to[List].readOnly(connector)
    yield result

Enhanced Authentication Plugins

Pure Scala 3 authentication plugins provide enhanced security and cross-platform compatibility.

MySQL Clear Password Authentication

import ldbc.connector.*
import ldbc.authentication.plugin.*

val datasource = MySQLDataSource
  .build[IO]("localhost", 3306, "cleartext-user")
  .setPassword("plaintext-password")
  .setDatabase("mydb")
  .setSSL(SSL.Trusted)  // Required for security
  .setDefaultAuthenticationPlugin(MysqlClearPasswordPlugin)

AWS Aurora IAM Authentication

import ldbc.amazon.plugin.AwsIamAuthenticationPlugin
import ldbc.connector.*

val hostname = "aurora-instance.cluster-xxx.region.rds.amazonaws.com"
val username = "iam-user"

val config = MySQLConfig.default
  .setHost(hostname)
  .setUser(username)
  .setDatabase("mydb")
  .setSSL(SSL.Trusted)

val plugin = AwsIamAuthenticationPlugin.default[IO]("ap-northeast-1", hostname, username)

MySQLDataSource.pooling[IO](config, plugins = List(plugin)).use { datasource =>
  val connector = Connector.fromDataSource(datasource)
  // Execute queries
}

File-Based Query Execution

Execute SQL scripts and migrations directly from files with the new updateRaws method:

import ldbc.dsl.*
import fs2.io.file.{Files, Path}
import fs2.text

private def readFile(filename: String): IO[String] =
  Files[IO]
    .readAll(Path(filename))
    .through(text.utf8.decode)
    .compile
    .string

for
  sql <- readFile("migration.sql")
  _ <- DBIO.updateRaws(sql).commit(connector)
yield ()

🔒 Security Enhancements

Enhanced SQL Parameter Escaping

Improved string parameter escaping provides stronger protection against SQL injection attacks.
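As a point of reference, a minimal sketch of the binding this hardens, assuming the sql interpolator usage shown elsewhere in these notes; interpolated values are sent as bound parameters rather than spliced into the statement text:

import ldbc.dsl.*

// The interpolated value is bound as a parameter, so quotes and other
// special characters in user input are escaped by ldbc rather than
// interpreted as SQL.
val name = "O'Brien"
val findUser = sql"SELECT id FROM `user` WHERE name = $name".query[Long].to[List]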

⚡ Performance Optimizations

Maximum Packet Size Configuration

Better compatibility with MySQL server's max_allowed_packet setting:

val datasource = MySQLDataSource
  .build[IO]("localhost", 3306, "user")
  .setPassword("password")
  .setDatabase("mydb")
  .setMaxPacketSize(16777216)  // 16MB (match MySQL server configuration)

Connection Pool Concurrency Improvements

Enhanced connection pool state management with atomic checks for improved stability in concurrent environments.
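As an illustration of the pattern only (not ldbc's actual internals), a sketch of an atomic check-and-update on pool state using cats-effect; PoolState and tryAcquire are hypothetical names:

import cats.effect.{IO, Ref}

final case class PoolState(inUse: Int, max: Int)

// Checking capacity and claiming a slot in one atomic modify avoids the
// race where two fibers both observe spare capacity and over-allocate.
def tryAcquire(state: Ref[IO, PoolState]): IO[Boolean] =
  state.modify {
    case s if s.inUse < s.max => (s.copy(inUse = s.inUse + 1), true)
    case s                    => (s, false)
  }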

🚀 Features

🔧 Refactoring

📖 Documentation

⛓️ Dependency update


v0.4.0

30 Sep 07:44
5062f7c


ldbc v0.4.0 is released. 🎉
This release includes new features, enhancements to existing features, breaking changes, and much more.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

What's Changed

This release adds high-performance built-in connection pooling to the Pure Scala MySQL Connector. This enables efficient connection management optimized for the fiber-based concurrency model of Cats Effect, without requiring external libraries such as HikariCP.

Key Changes:

  • New MySQLDataSource API: A new API replacing ConnectionProvider
  • Built-in Connection Pooling: Includes CircuitBreaker, adaptive sizing, and leak detection
  • Connector API: A new pattern for DBIO execution
  • Streaming Support: Efficiently process large data volumes using fs2.Stream
  • Enhanced OpenTelemetry Integration: MySQL-specific attributes and span names

🎯 Migration Guide

From ConnectionProvider to MySQLDataSource

Old (0.3.x):

val provider = ConnectionProvider
  .default[IO]("localhost", 3306, "root")
  .setPassword("password")
  .setDatabase("test")

New (0.4.x):

val dataSource = MySQLDataSource
  .build[IO]("localhost", 3306, "root")
  .setPassword("password")
  .setDatabase("test")

val connector = Connector.fromDataSource(dataSource)

Using Connection Pooling

val pooledDataSource = MySQLDataSource.pooling[IO](
  MySQLConfig.default
    .setHost("localhost")
    .setPort(3306)
    .setUser("root")
    .setPassword("password")
    .setDatabase("test")
    .setMinConnections(5)
    .setMaxConnections(20)
)

pooledDataSource.use { pool =>
  val connector = Connector.fromDataSource(pool)
  // Execute DBIOs
}

Stream Support

import fs2.Stream
import ldbc.dsl.*

// Stream large datasets efficiently
val stream: Stream[DBIO, City] = 
  sql"SELECT * FROM city"
    .query[City]
    .stream(fetchSize = 1000)

⚠️ Important Notes

  • Scala Native users: Connection pooling is not recommended on Scala Native due to single-threaded execution model
  • Built-in connection pooling: External connection pool libraries (e.g., HikariCP) are no longer required
  • Performance: The new connection pool is optimized for Cats Effect's fiber-based concurrency model

🚀 Features

💪 Enhancement

🪲 Bug Fixes

🔧 Refactoring

📖 Documentation

⛓️ Dependency update

Full Changelog: v0.3.3...v0.4.0

v0.3.3

19 Sep 13:16


ldbc v0.3.3 is released.

This release updates dependencies.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

What's Changed

⛓️ Dependency update

Full Changelog: v0.3.2...v0.3.3

v0.3.2

15 Jun 15:28


ldbc v0.3.2 is released.

This release fixes bugs and updates dependencies.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

What's Changed

An error occurred when NULL was returned while a non-Option type was being converted to another type.

Here, the String is returned as null, which causes the split call to throw a NullPointerException.

given Codec[List[String]] = Codec[String].imap(_.split(",").toList)(_.mkString(","))

This has been corrected so that custom type definitions no longer raise an exception.
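Where NULL is genuinely possible, decoding into an Option remains the safe pattern, since the custom conversion then only runs on non-NULL values. A sketch, assuming a nullable tags column:

given Codec[List[String]] = Codec[String].imap(_.split(",").toList)(_.mkString(","))

// NULL becomes None instead of reaching the split call.
sql"SELECT tags FROM article LIMIT 1"
  .query[Option[List[String]]]
  .to[Option]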

🪲 Bug Fixes

⛓️ Dependency update

Full Changelog: v0.3.1...v0.3.2

v0.3.1

11 Jun 14:25
a48e44b


ldbc v0.3.1 is released.

This release fixes some bugs and updates some dependencies.

In addition, test-coverage measurement with Codecov has been introduced, and missing tests have been added accordingly.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

What's Changed

💪 Enhancement

  • Enhancement/2025 05 correction of coverages testing omissions by @takapi327 in #474
  • Enhancement/2025 05 correction of coverages testing omissions by @takapi327 in #475
  • Enhancement/2025 05 correction of coverages testing omissions by @takapi327 in #477
  • Enhancement/2025 05 correction of coverages testing omissions by @takapi327 in #479
  • Enhancement/2025 05 correction of coverages testing omissions by @takapi327 in #480
  • Enhancement/2025 05 used nix by @takapi327 in #481

🪲 Bug Fixes

🔧 Refactoring

⛓️ Dependency update

Full Changelog: v0.3.0...v0.3.1

v0.3.0

17 May 15:25
69da37a


ldbc v0.3.0 is released. 🎉
This release includes new features, enhancements to existing features, breaking changes, and much more.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

What's Changed

This is the first official release of 0.3.x, which includes significant improvements over 0.2.x.
Please refer to Migration Notes (from 0.2.x to 0.3.x) for migration instructions.

ldbc is available on the JVM, Scala.js, and Scala Native.

The following modules are published, each with its own Scaladoc:

  • ldbc-core
  • ldbc-sql
  • ldbc-connector
  • jdbc-connector
  • ldbc-dsl
  • ldbc-statement
  • ldbc-query-builder
  • ldbc-schema
  • ldbc-schemaSpy
  • ldbc-codegen
  • ldbc-hikari
  • ldbc-plugin

Please refer to Tutorial and QA for more information on how to use it.

You can also use the MCP Server for learning and code generation.


Additional features that were not in the RC version

The original Enum definition has been removed in favor of the standard scala.reflect.Enum type.

This allows users to use plain Scala enums as is.

enum Status:
  case Active, InActive
given Codec[Status] = Codec.derivedEnum[Status]

sql"SELECT 'Active'".query[Status].to[Option]

The DataType definition in Schema is also simpler.

- enum Status extends ldbc.schema.model.Enum:
-  case Active, InActive
- object Status extends EnumDataType[Status]	
+ enum Status:
+  case Active, InActive

- ENUM[Status](using Status).queryString
+ ENUM[Status].queryString
// "ENUM('Active','InActive') NOT NULL"

Add support for NamedTuple, which became an official feature in Scala 3.7.

This support allows users to use NamedTuple directly:

for
  (user, order) <- sql"SELECT u.*, o.* FROM `user` AS u JOIN `order` AS o ON u.id = o.user_id".query[(user: User, order: Order)].unsafe
  users <- sql"SELECT id, name, email FROM `user`".query[(id: Long, name: String, email: String)].to[List]
yield
  println(s"Result User: $user")
  println(s"Result Order: $order")
  users.foreach { user =>
    println(s"User ID: ${user.id}, Name: ${user.name}, Email: ${user.email}")
  }

// Result User: User(1,Alice,alice@example.com,2025-05-20T03:22:09,2025-05-20T03:22:09)
// Result Order: Order(1,1,1,2025-05-20T03:22:09,1,2025-05-20T03:22:09,2025-05-20T03:22:09)
// User ID: 1, Name: Alice, Email: alice@example.com
// User ID: 2, Name: Bob, Email: bob@example.com
// User ID: 3, Name: Charlie, Email: charlie@example.com

🚀 Features

💪 Enhancement

  • Enhancement/2024 01 add sbt header plugin by @takapi327 in #120
  • Enhancement/2024 01 multi platform support for core project by @takapi327 in #123
  • Enhancement/2024 01 multi platform support for sql project by @takapi327 in #124
  • Enhancement/2024 01 multi platform support for query builder project by @takapi327 in #125
  • Enhancement/2024 01 multi platform support for codegen project by @takapi327 in #126
  • Update issue templates by @takapi327 in #154
  • Enhancement/2024 03 add database login by @takapi327 in #157
  • Enhancement/2024 05 add large update by @takapi327 in #219
  • Enhancement/2024 07 sql exception message enhancement by @takapi327 in #246
  • Enhancement/2024 07 performance improvements by @takapi327 in #256
  • Enhancement/2024 08 use scala native config brew by @takapi327 in #275
  • Enhancement/2024 07 additional benchmarks by @takapi327 in #257
  • Enhancement/2024 09 raises er...

v0.3.0-RC2

04 May 10:27


ldbc v0.3.0-RC2 is released.
This release includes new features, enhancements to existing features, breaking changes, and much more.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

SSL Extensions

Added functionality for SSL connections using certificates on various platforms.

For JVM

def fromKeyStoreResource(
  resource:      String,
  storePassword: Array[Char],
  keyPassword:   Array[Char]
): SSL

Sample code

ConnectionProvider
  .default[IO]("127.0.0.1", 3306, "user", "password", "database")
  .setSSL(SSL.fromKeyStoreResource("keystore.jks", "password".toCharArray, "password".toCharArray))
  .use { conn =>
    ???
  }

For JS

def fromSecureContext(
  secureContext: SecureContext
): SSL

Sample code

for
  ca <- Files[IO].readAll(Path("path/to/ca.pem")).through(text.utf8.decode).compile.string
  secureContext = SecureContext(
                    ca   = List(ca.asRight).some,
                    cert = None,
                    key  = None
                  )
  result <- ConnectionProvider
              .default[IO]("127.0.0.1", 3306, "user", "password", "database")
              .setSSL(SSL.fromSecureContext(secureContext))
              .use { conn => ??? }
yield result

For Native

def fromS2nConfig(
  config: S2nConfig
): SSL

Sample code

for
  ca         <- Resource.eval(Files[IO].readAll(Path("path/to/ca.pem")).through(text.utf8.decode).compile.string)
  cfg        <- S2nConfig.builder.withPemsToTrustStore(List(ca)).build[IO]
  connection <- ConnectionProvider
                  .default[IO]("127.0.0.1", 3306, "user", "password", "database")
                  .setSSL(SSL.fromS2nConfig(cfg))
                  .createConnection()
yield connection

With this, ldbc supports all TLS modes provided by fs2. Below is a list of the available SSL modes:

  • SSL.None (JVM/JS/Native): ldbc will not request SSL. This is the default.
  • SSL.Trusted (JVM/JS/Native): Connect via SSL and trust all certificates. Use this if you're running with a self-signed certificate, for instance.
  • SSL.System (JVM/JS/Native): Connect via SSL and use the system default SSLContext to verify certificates. Use this if you're running with a CA-signed certificate.
  • SSL.fromSSLContext(…) (JVM): Connect via SSL using an existing SSLContext.
  • SSL.fromKeyStoreFile(…) (JVM): Connect via SSL using a specified keystore file.
  • SSL.fromKeyStoreResource(…) (JVM): Connect via SSL using a specified keystore classpath resource.
  • SSL.fromKeyStore(…) (JVM): Connect via SSL using an existing Keystore.
  • SSL.fromSecureContext(...) (JS): Connect via SSL using an existing SecureContext.
  • SSL.fromS2nConfig(...) (Native): Connect via SSL using an existing S2nConfig.

Documentation for LLMs

Documentation for LLMs has been created and published.

Currently, we have the following root-level files...

What's Changed

💪 Enhancement

🔧 Refactoring

📖 Documentation

⛓️ Dependency update

Full Changelog: v0.3.0-RC1...v0.3.0-RC2

v0.3.0-RC1

27 Mar 16:01


ldbc v0.3.0-RC1 is released.
This release includes new features, enhancements to existing features, breaking changes, and much more.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

Significant performance improvements

The most significant feature of this release is a major improvement in performance.

Previous versions were clearly slower than jdbc for read processing.

After several rounds of improvements, performance increased substantially and the benchmark results now exceed jdbc.

[Benchmark images: Select performance, before and after]

What's Changed

Added withBeforeAfter function

Connectors can now run custom processing after the database connection is created and before it is destroyed.

This is done by using the withBeforeAfter method when building a Connection.

The second type argument of withBeforeAfter specifies the type of the Before result that is passed to After.

Connection.withBeforeAfter[IO, Unit](
  ...,
  before = _ => IO.unit,
  after = (_, _) => IO.unit
)

This feature allows users to build connections with common pre- and post-processing.

For example, it is possible to create tables beforehand and drop them afterwards:

def before(connection: Connection[IO]): IO[Int] =
  DBIO.sequence(user.schema.create).commit(connection) *>
    IO(user.schema.create.statements.length)

def after(length: Int, connection: Connection[IO]): IO[Unit] =
  DBIO.sequence(user.schema.drop).commit(connection) *>
    IO.println(s"Created $length tables and dropped them")

Connection.withBeforeAfter[IO, Int](
  ...,
  before,
  after
)

Changed implicit passing of LogHandler

Previously, a LogHandler was passed implicitly to the various DBIO functions.

However, this was redundant, because a LogHandler must always be provided at the point where a DBIO is executed.

There is rarely a need to change the LogHandler per operation; the one configured initially should suffice.

Therefore, the LogHandler is now set at connection creation time, so that a common LogHandler is used for each connection.

-given LogHandler[IO] = ???
Connection[IO](
+ logHandler = ???
)

Changed Connection creation to Provider

Connection creation has been changed to go through a Provider for both the ldbc and jdbc connectors.

ldbc

The ldbc Provider is constructed by passing mandatory properties such as host, port, and user.

ConnectionProvider.default[IO]("127.0.0.1", 13306, "ldbc")

Additional settings are configured with the setXXX methods. The following additionally sets the password and database:

ConnectionProvider
  .default[IO]("127.0.0.1", 13306, "ldbc")
  .setPassword("password")
  .setDatabase("world")

When using a Provider, ldbc connections can attach arbitrary pre- and post-processing with withBeforeAfter.

val before = ???
val after = ???

ConnectionProvider
  .default[IO]("127.0.0.1", 13306, "ldbc")
  .withBeforeAfter(before, after)

jdbc

Note that jdbc creates connections based on a DataSource, and that a DB-specific execution context must be specified when creating a connection from a DataSource.

val ds = new MysqlDataSource()
ConnectionProvider.fromDataSource(ds, ExecutionContexts.synchronous)

In addition to creation from a DataSource, methods such as fromConnection and fromDriverManager are also provided.

Usage

A Provider makes the connection available through use.

provider.use { connection =>
  ???
}

use is backed by a Resource internally and closes the connection when finished.

It is also possible to use a connection wrapped in Resource using createConnection.

provider.createConnection().use { connection =>
  ???
}

Changed DBIO to a Free Monad

DBIO has been converted to a Free Monad using Cats, which removes the effect type from DBIO.

This eliminates the need for syntax imports to provide extension methods, so users no longer have to write multiple imports to use a function.

Using dsl

- import ldbc.dsl.io.*
+ import ldbc.dsl.*

Using query builder

- import ldbc.query.builder.io.*
+ import ldbc.query.builder.*

Using schema

- import ldbc.schema.io.*
+ import ldbc.schema.*
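A small sketch of the effect of this change, using the query API shown elsewhere in these notes: DBIO programs now compose without naming an effect type, which appears only when the program is run against a connection.

import ldbc.dsl.*

// DBIO is a Free Monad: composition mentions no effect type.
val program: DBIO[Option[String]] =
  sql"SELECT name FROM city LIMIT 1".query[String].to[Option]

// The concrete effect (e.g. cats.effect.IO) appears only at execution
// time, for example: program.readOnly(conn)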

Elimination of automatic derivation

Implicit auto-derivation is now prevented, because recursive generation through auto-derivation causes compile times to explode.

Rather than being eliminated outright, automatic derivation is provided under a package whose name makes its compile-speed impact easy to spot:

import ldbc.dsl.codec.auto.generic.toSlowCompile.given

🚀 Features

💪 Enhancement

🔧 Refactoring

📖 Documentation

⛓️ Dependency update

Full Changelog: v0.3.0-beta11...v0.3.0-RC1

v0.3.0-beta11

21 Feb 12:14


ldbc v0.3.0-beta11 is released.
This release includes new features, enhancements to existing features, breaking changes, and much more.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

What's Changed

Adding DataType Column

Previously, if a column was to be given a data type or other settings, they had to be passed as arguments.
With this change, columns can be defined through methods that carry their data-type characteristics.

With this style of definition, the method name serves as the column name, so the column name no longer needs to be passed as an argument.

class EntityTable extends Table[Entity]("entity"):
-  def c1: Column[Long] = column[Long]("c1", BIGINT, AUTO_INCREMENT)
+  def c1: Column[Long] = bigint().autoIncrement

Column name formatting can be changed by implicitly providing a Naming instance.
The default is CamelCase; to change this to PascalCase, do the following:

class EntityTable extends Table[Entity]("entity"):
  given Naming = Naming.PASCAL

  def c1: Column[Long] = bigint().autoIncrement

If you want to change the format of a particular column, you can still define it by passing the column name as an argument.

class EntityTable extends Table[Entity]("entity"):
  given Naming = Naming.PASCAL

  def c1: Column[Long] = bigint().autoIncrement
  def c2: Column[Long] = bigint("c_2")

Adding DDL Schema

Add schema function to perform DDL.

class UserTable extends Table[User]("user"):
  def id:   Column[Long]        = bigint().autoIncrement.primaryKey
  def name: Column[String]      = varchar(255)
  def age:  Column[Option[Int]] = int()

  override def * : Column[User] = (id *: name *: age).to[User]

val userTable = TableQuery[UserTable]

connection
  .use { conn =>
    DBIO
      .sequence(
        userTable.schema.create,
        userTable.schema.createIfNotExists,
        userTable.schema.dropIfExists,
        userTable.schema.create
      )
      .commit(conn)
  }

Schema can also be composed with other Schemas.

userTable.schema ++ userProfileTable.schema

The SQL statements to be executed can be inspected with the statements method:

userTable.schema.create.statements.foreach(println)
userTable.schema.createIfNotExists.statements.foreach(println)
userTable.schema.drop.statements.foreach(println)
userTable.schema.dropIfExists.statements.foreach(println)
userTable.schema.truncate.statements.foreach(println)

Performance Improvement

This change was made because splitting with splitAt proved faster than using the helper functions provided by scodec.

[Benchmark images: Select performance, before and after]
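For illustration only (not ldbc's actual decoder code), a sketch of the idea: slicing a length-prefixed field directly with splitAt instead of going through scodec's decoding helpers:

import scodec.bits.ByteVector

// Read a 1-byte length prefix, then slice the field out with splitAt.
def readLengthEncoded(buf: ByteVector): (ByteVector, ByteVector) =
  val len           = (buf.head & 0xff).toLong
  val (field, rest) = buf.tail.splitAt(len)
  (field, rest)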

💪 Enhancement

🪲 Bug Fixes

  • Comparison operator compile error due to use of Opaque Type Alias by @takapi327 in #357
  • Empty VALUES in INSERT statement causes error statement to be issued by @takapi327 in #362
  • Passing an empty value in the IN clause of a WHERE statement throws an error by @takapi327 in #364
  • Fixed a bug that caused incorrect statements to be issued when keys w… by @takapi327 in #392

🔧 Refactoring

⛓️ Dependency update

New Contributors

@i10416 made their first contribution. Thanks!

Full Changelog: v0.3.0-beta10...v0.3.0-beta11

v0.3.0-beta10

06 Jan 12:59


ldbc v0.3.0-beta10 is released.
This release includes new features, enhancements to existing features, breaking changes, and much more.

Note

ldbc is pre-1.0 software and is still undergoing active development. New versions are not binary compatible with prior versions, although in most cases user code will be source compatible.
The first major (1.x) release will be the stable version.

What's Changed

Caution

This version is not compatible with the previous version, v0.3.0-beta9.

Adding Codec

If both Encoder and Decoder were needed, each had to be defined. With this modification, a new Codec has been added, allowing Encoder and Decoder to be defined together.

enum Status:
  case Active, InActive

-given Encoder[Status] = Encoder[Boolean].contramap {
-  case Status.Active   => true
-  case Status.InActive => false
-}

-given Decoder[Status] = Decoder[Boolean].map {
-  case true  => Status.Active
-  case false => Status.InActive
-}

+given Codec[Status] = Codec[Boolean].imap {
+  case true => Status.Active
+  case false => Status.InActive
+} {
+  case Status.Active   => true
+  case Status.InActive => false
+}

By building a Codec, it can also be used in place of a Decoder or an Encoder.

-given Decoder[City] = (Decoder[Int] *: Decoder[String] *: Decoder[Int]).to[City]
+given Codec[City] = (Codec[Int] *: Codec[String] *: Codec[Int]).to[City]

Decoder construction has been modified to allow the use of Either.

This makes it possible to handle values that do not match an expected pattern, as in the following:

enum Status(val code: Int):
  case InActive extends Status(0)
  case Active extends Status(1)

given Decoder[Status] = Decoder[Int].emap {
  case 0 => Right(Status.InActive)
  case 1 => Right(Status.Active)
  case unknown => Left(s"$unknown is Unknown Status code")
}

This modification allows Codec to use Either for construction.

given Codec[Status] = Codec[Int].eimap {
  case 0 => Right(Status.InActive)
  case 1 => Right(Status.Active)
  case unknown => Left(s"$unknown is Unknown Status code")
}(_.code)

Additional functions

Additional Where conditions can now be conditionally excluded:

TableQuery[City]
  .select(_.name)
  .where(_.population > 1000000)
  .and(_.name === "Tokyo", false)
// SELECT name FROM city WHERE population > ?

A function has also been added that decides, based on an Option value, whether a condition is added to the Where clause.

val opt: Option[String] = ???

TableQuery[City]
  .select(_.name)
  .whereOpt(city => opt.map(value => city.name === value))

TableQuery[City]
  .select(_.name)
  .whereOpt(opt)((city, value) => city.name === value)

💣 Breaking Change

Modification of Encoder and Decoder into a composable form using twiddles.

The method of building custom-type Decoders has changed. With this modification, Decoder can be converted to any type using the map function.

- given Decoder.Elem[Continent] = Decoder.Elem.mapping[String, Continent](str => Continent.valueOf(str.replace(" ", "_")))
+ given Decoder[Continent] = Decoder[String].map(str => Continent.valueOf(str.replace(" ", "_")))

Decoder is still constructed implicitly.

case class City(id: Int, name: String, age: Int)


sql"SELECT id, name, age FROM city LIMIT 1"
  .query[City]
  .to[Option]
  .readOnly(conn)

However, implicit searches may fail if there are many properties in the model.

[error]    |Implicit search problem too large.
[error]    |an implicit search was terminated with failure after trying 100000 expressions.
[error]    |The root candidate for the search was:
[error]    |
[error]    |  given instance given_Decoder_P in object Decoder  for  ldbc.dsl.codec.Decoder[City]}

In such cases, raising the search limit in the compilation options may resolve the problem.

scalacOptions += "-Ximplicit-search-limit:100000"

However, this may increase compilation time. In that case, the problem can also be resolved by constructing the Decoder manually, as follows.

given Decoder[City] = (Decoder[Int] *: Decoder[String] *: Decoder[Int]).to[City]

This is true not only for Decoder but also for Encoder.

Rename Executor to DBIO

The type used to represent I/O against the database was Executor, but it has been renamed because DBIO is more intuitive for users.

- trait Executor[F[_]: Temporal, T]:
+ trait DBIO[F[_]: Temporal, T]:

Migrate table name designation to derived

The way a table name is specified in the query builder has changed: instead of being passed as an argument to TableQuery, it is now passed as an argument to Table.derived.

Before

case class City(
  id:          Int,
  name:        String,
  countryCode: String,
  district:    String,
  population:  Int
) derives Table

val table = TableQuery[City]("city")

After

case class City(
  id:          Int,
  name:        String,
  countryCode: String,
  district:    String,
  population:  Int
)

object City:
  given Table[City] = Table.derived[City]("city")

Renewal of Schema project

This modification changes the way Table types are constructed using the Schema project.
Below we will look at the construction of the Table type corresponding to the User model.

case class User(
  id: Long,
  name: String,
  age: Option[Int],
)

Before

Until now, instances of Table had to be created directly: the arguments to Table had to be the columns corresponding to the properties of the User class, in the same order, and setting the data type of each column was mandatory.

The TableQuery built on this table type was implemented using Dynamic, which allowed type-safe access, but development tools could not provide code completion.

This method of construction was also somewhat slower to compile than class generation.

val userTable = Table[User]("user")(
  column("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY),
  column("name", VARCHAR(255)),
  column("age", INT.UNSIGNED.DEFAULT(None)),
)    

After

With this change, a Table type is now created by defining a class that extends Table. In addition, the data type of a column is no longer required; it can be set at the implementer's discretion.

This construction style, similar to Slick's, should feel more familiar to implementers.

class UserTable extends Table[User]("user"):
  def id: Column[Long] = column[Long]("id")
  def name: Column[String] = column[String]("name")
  def age: Column[Option[Int]] = column[Option[Int]]("age")

  override def * : Column[User] = (id *: name *: age).to[User]

The data type of the columns can still be set. This setting is used, for example, when generating a schema using this table class.

class UserTable extends Table[User]("user"):
  def id: Column[Long] = column[Long]("id", BIGINT, AUTO_INCREMENT, PRIMARY_KEY)
  def name: Column[String] = column[String]("name", VARCHAR(255))
  def age: Column[Option[Int]] = column[Option[Int]]("age", INT.UNSIGNED.DEFAULT(None))

  override def * : Column[User] = (id *: name *: age).to[User]

🚀 Features

💪 Enhancement

  • Enhancement/2024 12 added where condition by @takapi327 in #338
  • Enhancement/2024 12 encoder extensions by @takapi327 in #347
  • Enhancement/2024 12 make encoder and decoder compatible with twiddles by @takapi327 in #348
  • Enhancement/2025 01 make decoder support either by @takapi327 in #350
  • Enhancement/2025 01 update where statement by @takapi327 in #353

🪲 Bug Fixes

🔧 Refactoring

⛓️ Dependency update
