Groovy SQL transactions and connection pooling with 10+ examples. Learn withTransaction, savepoints, HikariCP integration, withBatch.
“In databases, a transaction is a promise — all or nothing.” — attributed to Jim Gray, transaction processing pioneer
Last Updated: March 2026 | Tested on: Groovy 5.x, Java 17+ | Difficulty: Intermediate to Advanced | Reading Time: 19 minutes
In the first tutorial we connected to databases and queried data. In the second tutorial we mastered CRUD operations. Now it is time to tackle the two pillars of production database programming: transactions and connection pooling.
Transactions ensure that a group of operations either all succeed or all fail together. Connection pooling ensures your application does not waste time and resources creating new database connections for every operation. Together, they transform a Groovy script into production-ready database code.
In this tutorial, we will walk through 10+ practical examples covering withTransaction, manual commit/rollback, savepoints, connection pooling with DataSource and HikariCP, withBatch performance tuning, and connection management best practices.
Understanding Transactions
A database transaction is a sequence of operations that must be treated as a single, indivisible unit. The ACID properties define what a proper transaction guarantees:
| Property | Meaning | Example |
|---|---|---|
| Atomicity | All operations succeed or all fail | A bank transfer debits and credits, or neither happens |
| Consistency | Database moves from one valid state to another | Total money in the system stays the same after a transfer |
| Isolation | Concurrent transactions do not interfere | Two transfers on the same account do not corrupt the balance |
| Durability | Committed data survives crashes | After commit, data is on disk even if power fails |
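The atomicity row is the one that trips people up most, so here is a toy, in-memory Groovy sketch of the all-or-nothing idea (no database involved; the atomically helper is purely illustrative): run every step against a working copy, and publish the result only if no step throws.

```groovy
// Toy illustration of atomicity (not real database code): steps run against
// a working copy of the state, and the changes are published only if every
// step succeeds. A thrown exception leaves the original state untouched.
def atomically(Map state, Closure steps) {
    def working = new LinkedHashMap(state)
    steps(working)            // any step may throw
    state.clear()
    state.putAll(working)     // publish all changes at once
}

def accounts = [alice: 5000, bob: 3000]

// Succeeds: the debit and the credit are published together
atomically(accounts) { s -> s.alice -= 1500; s.bob += 1500 }
assert accounts == [alice: 3500, bob: 4500]

// Fails: the debit happened on the copy, but nothing is published
try {
    atomically(accounts) { s ->
        s.alice -= 10000
        if (s.alice < 0) throw new IllegalStateException('insufficient funds')
    }
} catch (IllegalStateException ignored) { }
assert accounts == [alice: 3500, bob: 4500]   // unchanged
```

A real database does the same thing with far more machinery (write-ahead logs, locks), but the contract is identical: partial work is never visible.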
According to the official Groovy database documentation, the groovy.sql.Sql class provides withTransaction for declarative transaction management and also supports manual commit/rollback for more complex scenarios.
10 Practical Transaction and Pooling Examples
Example 1: Basic withTransaction
What we’re doing: Using withTransaction for an atomic bank transfer where both the debit and credit must succeed or fail together.
Example 1: Basic withTransaction
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
sql.execute '''CREATE TABLE accounts (id INT PRIMARY KEY, name VARCHAR(50), balance DECIMAL(10,2))'''
sql.execute "INSERT INTO accounts VALUES (1, 'Alice', 5000.00)"
sql.execute "INSERT INTO accounts VALUES (2, 'Bob', 3000.00)"
def showBalances = { label ->
println "=== ${label} ==="
sql.eachRow('SELECT * FROM accounts ORDER BY id') {
println " ${it.name}: \$${it.balance}"
}
}
showBalances('Before Transfer')
// Successful transaction
sql.withTransaction {
def amount = 1500.00
sql.executeUpdate("UPDATE accounts SET balance = balance - ${amount} WHERE id = ${1}")
sql.executeUpdate("UPDATE accounts SET balance = balance + ${amount} WHERE id = ${2}")
println "\nTransferred \$${amount} from Alice to Bob"
}
showBalances('After Successful Transfer')
// Failed transaction -- automatically rolls back
try {
sql.withTransaction {
sql.executeUpdate("UPDATE accounts SET balance = balance - ${10000.00} WHERE id = ${1}")
// Simulate a business rule violation
def aliceBalance = sql.firstRow("SELECT balance FROM accounts WHERE id = ${1}").balance
if (aliceBalance < 0) {
throw new RuntimeException("Insufficient funds! Balance: \$${aliceBalance}")
}
sql.executeUpdate("UPDATE accounts SET balance = balance + ${10000.00} WHERE id = ${2}")
}
} catch (RuntimeException e) {
println "\nTransaction failed: ${e.message}"
}
showBalances('After Failed Transfer (Rolled Back)')
sql.close()
Output
=== Before Transfer ===
  Alice: $5000.00
  Bob: $3000.00

Transferred $1500.00 from Alice to Bob
=== After Successful Transfer ===
  Alice: $3500.00
  Bob: $4500.00

Transaction failed: Insufficient funds! Balance: $-6500.00
=== After Failed Transfer (Rolled Back) ===
  Alice: $3500.00
  Bob: $4500.00
What happened here: withTransaction disables auto-commit, runs your closure, and commits on success. If any exception is thrown inside the closure, it rolls back automatically and re-throws the exception. The balances after the failed transfer are unchanged — proof that the rollback worked. This is the most common pattern for Groovy transactions.
Example 2: Manual Commit and Rollback
What we’re doing: Controlling commit and rollback manually when you need more granular control than withTransaction provides.
Example 2: Manual Commit/Rollback
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
sql.execute '''CREATE TABLE audit_log (id INT AUTO_INCREMENT, action VARCHAR(100), status VARCHAR(20), ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP)'''
sql.execute '''CREATE TABLE orders (id INT AUTO_INCREMENT, product VARCHAR(100), quantity INT, status VARCHAR(20))'''
// Disable auto-commit for manual control
def conn = sql.connection
conn.autoCommit = false
try {
// Phase 1: Create order
sql.executeInsert("INSERT INTO orders (product, quantity, status) VALUES ('Laptop', 2, 'PENDING')")
sql.executeInsert("INSERT INTO audit_log (action, status) VALUES ('Order created: 2x Laptop', 'SUCCESS')")
println "Phase 1: Order created"
// Phase 2: Validate inventory (simulate success)
def inStock = true // Pretend we checked inventory
if (!inStock) {
conn.rollback()
println "Phase 2: Out of stock -- rolled back"
} else {
println "Phase 2: Inventory validated"
}
// Phase 3: Process payment (simulate)
sql.executeUpdate("UPDATE orders SET status = 'CONFIRMED' WHERE product = 'Laptop'")
sql.executeInsert("INSERT INTO audit_log (action, status) VALUES ('Payment processed for Laptop order', 'SUCCESS')")
println "Phase 3: Payment processed"
// Everything succeeded -- commit
conn.commit()
println "\nAll phases complete -- COMMITTED"
} catch (Exception e) {
conn.rollback()
println "Error: ${e.message} -- ROLLED BACK"
} finally {
conn.autoCommit = true // Restore auto-commit
}
// Verify
println "\n=== Orders ==="
sql.eachRow('SELECT * FROM orders') { println " ${it.product} x${it.quantity} [${it.status}]" }
println "\n=== Audit Log ==="
sql.eachRow('SELECT * FROM audit_log') { println " ${it.action} [${it.status}]" }
sql.close()
Output
Phase 1: Order created
Phase 2: Inventory validated
Phase 3: Payment processed

All phases complete -- COMMITTED

=== Orders ===
  Laptop x2 [CONFIRMED]

=== Audit Log ===
  Order created: 2x Laptop [SUCCESS]
  Payment processed for Laptop order [SUCCESS]
What happened here: Manual transaction control gives you flexibility for multi-phase operations. You disable auto-commit, perform your operations, and explicitly call commit() or rollback(). Always restore auto-commit in a finally block. This pattern is useful when you need conditional commits or want to add decision points between phases.
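That choreography (disable auto-commit, commit or roll back, restore in finally) is easy to get subtly wrong, so it can be worth extracting into a helper. Below is a minimal sketch; the name withManualTx is ours, not a groovy.sql API. Because the parameter is untyped, any object exposing autoCommit, commit(), and rollback() works, which also lets us exercise it without a database.

```groovy
// Sketch of a reusable manual-transaction helper (our own name, not part of
// groovy.sql). It captures the pattern above: disable auto-commit, run the
// work, commit on success, roll back on failure, always restore the setting.
def withManualTx(conn, Closure work) {
    def previous = conn.autoCommit
    conn.autoCommit = false
    try {
        def result = work()
        conn.commit()
        return result
    } catch (Exception e) {
        conn.rollback()
        throw e
    } finally {
        conn.autoCommit = previous
    }
}

// Because `conn` is untyped, a fake Connection-like object is enough to test it
def calls = []
def fake = new Expando(autoCommit: true,
                       commit: { calls << 'commit' },
                       rollback: { calls << 'rollback' })

assert withManualTx(fake) { 42 } == 42
assert calls == ['commit'] && fake.autoCommit

try {
    withManualTx(fake) { throw new IllegalStateException('boom') }
} catch (IllegalStateException ignored) { }
assert calls == ['commit', 'rollback'] && fake.autoCommit
```

With a real groovy.sql.Sql you would pass sql.connection as `conn`, which is essentially what withTransaction does for you internally.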
Example 3: Savepoints for Partial Rollback
What we’re doing: Using savepoints to roll back part of a transaction while keeping earlier changes.
Example 3: Savepoints
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
sql.execute '''CREATE TABLE inventory (item VARCHAR(50) PRIMARY KEY, quantity INT)'''
sql.execute "INSERT INTO inventory VALUES ('Apples', 100)"
sql.execute "INSERT INTO inventory VALUES ('Bananas', 50)"
sql.execute "INSERT INTO inventory VALUES ('Cherries', 200)"
def conn = sql.connection
conn.autoCommit = false
try {
// Step 1: Update apples
sql.executeUpdate("UPDATE inventory SET quantity = quantity - 10 WHERE item = 'Apples'")
println "Step 1: Sold 10 Apples"
// Create a savepoint after step 1
def savepoint1 = conn.setSavepoint('after_apples')
println "Savepoint 'after_apples' created"
// Step 2: Update bananas
sql.executeUpdate("UPDATE inventory SET quantity = quantity - 5 WHERE item = 'Bananas'")
println "Step 2: Sold 5 Bananas"
// Create another savepoint
def savepoint2 = conn.setSavepoint('after_bananas')
println "Savepoint 'after_bananas' created"
// Step 3: Try to update cherries -- simulate a problem
sql.executeUpdate("UPDATE inventory SET quantity = quantity - 300 WHERE item = 'Cherries'")
def cherryQty = sql.firstRow("SELECT quantity FROM inventory WHERE item = 'Cherries'").quantity
if (cherryQty < 0) {
println "Step 3: Cherry sale would go negative (${cherryQty}). Rolling back to savepoint."
conn.rollback(savepoint2) // Roll back step 3 only
println "Rolled back to 'after_bananas'"
}
// Commit everything that survived
conn.commit()
println "\nTransaction committed with partial rollback"
} catch (Exception e) {
conn.rollback()
println "Full rollback: ${e.message}"
} finally {
conn.autoCommit = true
}
// Verify
println "\n=== Final Inventory ==="
sql.eachRow('SELECT * FROM inventory ORDER BY item') {
println " ${it.item}: ${it.quantity}"
}
sql.close()
Output
Step 1: Sold 10 Apples
Savepoint 'after_apples' created
Step 2: Sold 5 Bananas
Savepoint 'after_bananas' created
Step 3: Cherry sale would go negative (-100). Rolling back to savepoint.
Rolled back to 'after_bananas'

Transaction committed with partial rollback

=== Final Inventory ===
  Apples: 90
  Bananas: 45
  Cherries: 200
What happened here: Savepoints let you mark points within a transaction and roll back to those points without losing everything. In this example, the Apple and Banana sales were committed, but the Cherry sale was rolled back because it would have created negative inventory. This is powerful for complex business workflows where some steps are optional or conditional.
Example 4: Transaction with withBatch
What we’re doing: Combining withTransaction and withBatch for maximum performance on bulk inserts with all-or-nothing guarantees.
Example 4: Transaction + Batch
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
sql.execute '''CREATE TABLE events (
id INT AUTO_INCREMENT,
event_type VARCHAR(30),
user_id INT,
payload VARCHAR(500),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)'''
// Generate test data
def events = (1..500).collect { i ->
[
['LOGIN', 'LOGOUT', 'PAGE_VIEW', 'PURCHASE', 'ERROR'][i % 5],
(i % 50) + 1,
"Event payload #${i}"
]
}
// Method 1: Batch without transaction
def start1 = System.currentTimeMillis()
sql.withBatch(100, 'INSERT INTO events (event_type, user_id, payload) VALUES (?, ?, ?)') { ps ->
events[0..249].each { ps.addBatch(it) }
}
def time1 = System.currentTimeMillis() - start1
// Method 2: Batch WITH transaction (faster -- no per-flush commit)
sql.execute 'DELETE FROM events'
def start2 = System.currentTimeMillis()
sql.withTransaction {
sql.withBatch(100, 'INSERT INTO events (event_type, user_id, payload) VALUES (?, ?, ?)') { ps ->
events.each { ps.addBatch(it) }
}
}
def time2 = System.currentTimeMillis() - start2
println "Batch only (250 rows): ${time1}ms"
println "Batch + Transaction (500 rows): ${time2}ms"
println "Transaction mode was ${time2 < time1 * 2 ? 'faster' : 'slower'} per row"
// Verify counts
def total = sql.firstRow('SELECT COUNT(*) AS cnt FROM events').cnt
println "\nTotal events: ${total}"
// Summary by type
sql.eachRow('SELECT event_type, COUNT(*) AS cnt FROM events GROUP BY event_type ORDER BY cnt DESC') {
println " ${it.event_type}: ${it.cnt}"
}
sql.close()
Output
Batch only (250 rows): 32ms
Batch + Transaction (500 rows): 18ms
Transaction mode was faster per row

Total events: 500
  ERROR: 100
  LOGIN: 100
  LOGOUT: 100
  PAGE_VIEW: 100
  PURCHASE: 100
What happened here: Without a transaction, withBatch still auto-commits after each batch flush (every 100 rows). Wrapping it in withTransaction defers the commit until the end, reducing disk I/O. The transaction also gives you atomicity — if the 400th insert fails, all 500 are rolled back instead of leaving 300 committed rows. This is the gold standard for bulk data loading.
Example 5: Connection Pooling with DataSource
What we’re doing: Using a javax.sql.DataSource to manage connections instead of creating new ones for each operation.
Example 5: DataSource Connection Pooling
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
import org.h2.jdbcx.JdbcDataSource
// Create a DataSource (note: H2's JdbcDataSource is a plain DataSource, not a pool -- it opens a new connection per request)
def ds = new JdbcDataSource()
ds.url = 'jdbc:h2:mem:pooldb'
ds.user = 'sa'
ds.password = ''
// Create Sql from DataSource instead of newInstance
def sql = new Sql(ds)
sql.execute '''CREATE TABLE users (id INT AUTO_INCREMENT, name VARCHAR(100), email VARCHAR(100))'''
sql.execute "INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com')"
sql.execute "INSERT INTO users (name, email) VALUES ('Bob', 'bob@example.com')"
println "Connected via DataSource: ${sql.connection.metaData.databaseProductName}"
println "AutoCommit: ${sql.connection.autoCommit}"
// Query works exactly the same
sql.eachRow('SELECT * FROM users ORDER BY id') {
println " #${it.id} ${it.name} (${it.email})"
}
// With a DataSource, each Sql instance obtains its own connection on demand
// A pooling DataSource (like HikariCP, next example) makes this cheap for multi-threaded applications
println "\nDataSource class: ${ds.getClass().name}"
// Simulate multiple "requests" sharing the same DataSource
3.times { request ->
def requestSql = new Sql(ds)
def count = requestSql.firstRow('SELECT COUNT(*) AS cnt FROM users').cnt
println "Request ${request + 1}: found ${count} users"
requestSql.close() // Releases the connection (a pooling DataSource would return it to the pool)
}
sql.close()
Output
Connected via DataSource: H2
AutoCommit: true
  #1 Alice (alice@example.com)
  #2 Bob (bob@example.com)

DataSource class: org.h2.jdbcx.JdbcDataSource
Request 1: found 2 users
Request 2: found 2 users
Request 3: found 2 users
What happened here: Instead of Sql.newInstance(url, user, pass, driver), we create a DataSource and pass it to new Sql(ds). The DataSource abstraction owns connection creation, and multiple Sql instances can share the same DataSource — essential for web applications serving concurrent requests. Be aware that H2's JdbcDataSource opens a brand-new connection for every request; for actual reuse, wrap the URL in a pooling implementation such as H2's org.h2.jdbcx.JdbcConnectionPool or HikariCP, shown next.
Example 6: HikariCP Connection Pool
What we’re doing: Integrating HikariCP — the fastest JDBC connection pool — with Groovy SQL for production-grade connection management.
Example 6: HikariCP Connection Pool
@Grab('com.h2database:h2:2.2.224')
@Grab('com.zaxxer:HikariCP:5.1.0')
import groovy.sql.Sql
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
// Configure HikariCP
def config = new HikariConfig()
config.jdbcUrl = 'jdbc:h2:mem:hikaridb'
config.username = 'sa'
config.password = ''
config.maximumPoolSize = 10
config.minimumIdle = 2
config.connectionTimeout = 30000 // 30 seconds
config.idleTimeout = 600000 // 10 minutes
config.maxLifetime = 1800000 // 30 minutes
config.poolName = 'GroovyHikariPool'
// Create the pool
def dataSource = new HikariDataSource(config)
// Use with Groovy SQL
def sql = new Sql(dataSource)
sql.execute '''CREATE TABLE products (id INT AUTO_INCREMENT, name VARCHAR(100), price DECIMAL(10,2))'''
sql.execute "INSERT INTO products (name, price) VALUES ('Widget', 19.99)"
sql.execute "INSERT INTO products (name, price) VALUES ('Gadget', 49.99)"
sql.execute "INSERT INTO products (name, price) VALUES ('Gizmo', 29.99)"
// Pool statistics
println "Pool name: ${dataSource.poolName}"
println "Max pool size: ${config.maximumPoolSize}"
def poolBean = dataSource.hikariPoolMXBean
println "Active connections: ${poolBean != null ? poolBean.activeConnections : 'N/A'}" // explicit null check: ?: would print N/A for a count of 0, since Groovy treats 0 as false
// Query using the pooled connection
println "\n=== Products ==="
sql.eachRow('SELECT * FROM products ORDER BY price') {
println " ${it.name}: \$${it.price}"
}
// Simulate concurrent access
println "\n=== Simulating 5 Concurrent Requests ==="
def threads = (1..5).collect { i ->
Thread.start {
def threadSql = new Sql(dataSource)
def count = threadSql.firstRow('SELECT COUNT(*) AS cnt FROM products').cnt
println " Thread ${i}: found ${count} products"
threadSql.close()
}
}
threads.each { it.join() }
// Cleanup
sql.close()
dataSource.close() // Shuts down the pool
println "\nPool closed. All connections released."
Output
Pool name: GroovyHikariPool
Max pool size: 10
Active connections: 0

=== Products ===
  Widget: $19.99
  Gizmo: $29.99
  Gadget: $49.99

=== Simulating 5 Concurrent Requests ===
  Thread 1: found 3 products
  Thread 3: found 3 products
  Thread 2: found 3 products
  Thread 4: found 3 products
  Thread 5: found 3 products

Pool closed. All connections released.
What happened here: HikariCP is the de facto standard connection pool for Java applications. It pre-creates connections (minimumIdle=2) and scales up to 10 (maximumPoolSize). Each new Sql(dataSource) borrows a connection from the pool, and close() returns it. The five threads all share the same pool without creating new connections. Always call dataSource.close() on application shutdown to release all connections cleanly.
Example 7: Nested Transactions and Error Handling
What we’re doing: Handling complex transaction scenarios with try-catch blocks and understanding how nested withTransaction calls behave.
Example 7: Nested Transactions
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
sql.execute '''CREATE TABLE wallet (user_id INT PRIMARY KEY, balance DECIMAL(10,2))'''
sql.execute '''CREATE TABLE transactions (id INT AUTO_INCREMENT, from_user INT, to_user INT, amount DECIMAL(10,2), status VARCHAR(20))'''
sql.execute "INSERT INTO wallet VALUES (1, 1000.00)"
sql.execute "INSERT INTO wallet VALUES (2, 500.00)"
sql.execute "INSERT INTO wallet VALUES (3, 750.00)"
def transfer = { Sql db, int from, int to, BigDecimal amount ->
def fromBalance = db.firstRow("SELECT balance FROM wallet WHERE user_id = ${from}")?.balance
if (!fromBalance || fromBalance < amount) {
throw new RuntimeException("User ${from}: insufficient funds (\$${fromBalance ?: 0} < \$${amount})")
}
db.executeUpdate("UPDATE wallet SET balance = balance - ${amount} WHERE user_id = ${from}")
db.executeUpdate("UPDATE wallet SET balance = balance + ${amount} WHERE user_id = ${to}")
db.executeInsert("INSERT INTO transactions (from_user, to_user, amount, status) VALUES (${from}, ${to}, ${amount}, 'SUCCESS')")
println " Transferred \$${amount}: User ${from} -> User ${to}"
}
// Process multiple transfers in a single transaction
println "=== Processing Batch of Transfers ==="
sql.withTransaction {
transfer(sql, 1, 2, 200.00) // Alice -> Bob: $200
transfer(sql, 2, 3, 150.00) // Bob -> Charlie: $150
transfer(sql, 3, 1, 100.00) // Charlie -> Alice: $100
}
println "All transfers committed.\n"
// Now try a batch where one fails -- all should rollback
println "=== Batch with One Bad Transfer ==="
try {
sql.withTransaction {
transfer(sql, 1, 2, 300.00) // OK
transfer(sql, 2, 3, 200.00) // OK
transfer(sql, 3, 1, 9999.00) // FAIL -- insufficient funds
}
} catch (Exception e) {
println " Batch failed: ${e.message}"
println " All transfers in this batch rolled back."
}
// Show final state
println "\n=== Final Balances ==="
sql.eachRow('SELECT * FROM wallet ORDER BY user_id') {
println " User ${it.user_id}: \$${it.balance}"
}
println "\n=== Transaction History ==="
sql.eachRow('SELECT * FROM transactions ORDER BY id') {
println " User ${it.from_user} -> User ${it.to_user}: \$${it.amount} [${it.status}]"
}
sql.close()
Output
=== Processing Batch of Transfers ===
  Transferred $200.00: User 1 -> User 2
  Transferred $150.00: User 2 -> User 3
  Transferred $100.00: User 3 -> User 1
All transfers committed.

=== Batch with One Bad Transfer ===
  Transferred $300.00: User 1 -> User 2
  Transferred $200.00: User 2 -> User 3
  Batch failed: User 3: insufficient funds ($1000.00 < $9999.00)
  All transfers in this batch rolled back.

=== Final Balances ===
  User 1: $900.00
  User 2: $550.00
  User 3: $800.00

=== Transaction History ===
  User 1 -> User 2: $200.00 [SUCCESS]
  User 2 -> User 3: $150.00 [SUCCESS]
  User 3 -> User 1: $100.00 [SUCCESS]
What happened here: The first batch of three transfers all succeeded and committed together. In the second batch, two transfers executed successfully before the third failed. Because all three were inside withTransaction, the entire batch rolled back — including the two successful transfers. The balances reflect only the first batch. This is exactly the atomicity guarantee transactions provide.
Example 8: Transaction Isolation Levels
What we’re doing: Understanding and setting transaction isolation levels to control how concurrent transactions interact.
Example 8: Isolation Levels
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
import java.sql.Connection
def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
// Display available isolation levels
def levels = [
(Connection.TRANSACTION_READ_UNCOMMITTED): 'READ_UNCOMMITTED',
(Connection.TRANSACTION_READ_COMMITTED): 'READ_COMMITTED',
(Connection.TRANSACTION_REPEATABLE_READ): 'REPEATABLE_READ',
(Connection.TRANSACTION_SERIALIZABLE): 'SERIALIZABLE'
]
println "=== Transaction Isolation Levels ==="
levels.each { value, name ->
println " ${name}: ${value}"
}
// Check current level
def conn = sql.connection
println "\nDefault isolation: ${levels[conn.transactionIsolation]}"
// Set isolation level before starting a transaction
conn.transactionIsolation = Connection.TRANSACTION_SERIALIZABLE
println "Set to: ${levels[conn.transactionIsolation]}"
sql.execute '''CREATE TABLE counters (name VARCHAR(50) PRIMARY KEY, value INT)'''
sql.execute "INSERT INTO counters VALUES ('page_views', 0)"
// Use SERIALIZABLE for accurate counting
sql.withTransaction {
def current = sql.firstRow("SELECT value FROM counters WHERE name = 'page_views'").value
sql.executeUpdate("UPDATE counters SET value = ${current + 1} WHERE name = 'page_views'")
}
def result = sql.firstRow("SELECT value FROM counters WHERE name = 'page_views'")
println "\nCounter value: ${result.value}"
// Reset isolation
conn.transactionIsolation = Connection.TRANSACTION_READ_COMMITTED
println "Reset to: ${levels[conn.transactionIsolation]}"
// Summary table
println "\n=== Isolation Level Comparison ==="
println "Level               | Dirty Read | Non-Repeatable Read | Phantom Read"
println "--------------------+------------+---------------------+-------------"
println "READ_UNCOMMITTED    | Possible   | Possible            | Possible"
println "READ_COMMITTED      | Prevented  | Possible            | Possible"
println "REPEATABLE_READ     | Prevented  | Prevented           | Possible"
println "SERIALIZABLE        | Prevented  | Prevented           | Prevented"
sql.close()
Output
=== Transaction Isolation Levels ===
  READ_UNCOMMITTED: 1
  READ_COMMITTED: 2
  REPEATABLE_READ: 4
  SERIALIZABLE: 8

Default isolation: READ_COMMITTED
Set to: SERIALIZABLE

Counter value: 1
Reset to: READ_COMMITTED

=== Isolation Level Comparison ===
Level               | Dirty Read | Non-Repeatable Read | Phantom Read
--------------------+------------+---------------------+-------------
READ_UNCOMMITTED    | Possible   | Possible            | Possible
READ_COMMITTED      | Prevented  | Possible            | Possible
REPEATABLE_READ     | Prevented  | Prevented           | Possible
SERIALIZABLE        | Prevented  | Prevented           | Prevented
What happened here: Isolation levels control what a transaction can see from other concurrent transactions. READ_COMMITTED (the default for most databases) prevents dirty reads. SERIALIZABLE prevents all anomalies but is the slowest. Choose based on your consistency requirements — most applications work fine with READ_COMMITTED.
Example 9: withBatch for Bulk Updates and Deletes
What we’re doing: Using withBatch for bulk updates and deletes, not just inserts.
Example 9: Batch Updates and Deletes
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
sql.execute '''CREATE TABLE products (id INT PRIMARY KEY, name VARCHAR(100), price DECIMAL(10,2), active BOOLEAN DEFAULT TRUE)'''
// Batch insert 20 products
sql.withBatch(10, 'INSERT INTO products (id, name, price) VALUES (?, ?, ?)') { ps ->
(1..20).each { i -> ps.addBatch([i, "Product ${i}", 10.00 * i]) }
}
println "Inserted 20 products"
// Batch update: apply different discounts based on price range
println "\n=== Batch Price Updates ==="
def priceUpdates = [
[0.90, 50.00], // 10% off for items <= $50
[0.85, 100.00], // 15% off for items <= $100
[0.80, 200.00], // 20% off for items <= $200
]
sql.withTransaction {
sql.withBatch(10, 'UPDATE products SET price = price * ? WHERE price <= ? AND price > ?') { ps ->
def prevThreshold = 0
priceUpdates.each { discount, threshold ->
ps.addBatch([discount, threshold, prevThreshold])
prevThreshold = threshold
}
}
}
println "Applied tiered discounts"
// Batch deactivate products below a price threshold
sql.withBatch(10, 'UPDATE products SET active = FALSE WHERE id = ?') { ps ->
def cheapProducts = sql.rows('SELECT id FROM products WHERE price < 20')
cheapProducts.each { ps.addBatch([it.id]) }
println "Deactivated ${cheapProducts.size()} cheap products"
}
// Batch delete old products
def toDelete = sql.rows('SELECT id FROM products WHERE active = FALSE')
if (toDelete) {
sql.withBatch(10, 'DELETE FROM products WHERE id = ?') { ps ->
toDelete.each { ps.addBatch([it.id]) }
}
println "Deleted ${toDelete.size()} inactive products"
}
// Show remaining
println "\n=== Remaining Products ==="
sql.eachRow('SELECT * FROM products WHERE active = TRUE ORDER BY id') {
println " #${it.id} ${it.name}: \$${it.price}"
}
sql.close()
Output
Inserted 20 products

=== Batch Price Updates ===
Applied tiered discounts
Deactivated 2 cheap products
Deleted 2 inactive products

=== Remaining Products ===
  #3 Product 3: $27.00
  #4 Product 4: $36.00
  #5 Product 5: $45.00
  #6 Product 6: $51.00
  #7 Product 7: $59.50
  #8 Product 8: $68.00
  #9 Product 9: $76.50
  #10 Product 10: $85.00
  #11 Product 11: $88.00
  #12 Product 12: $96.00
  #13 Product 13: $104.00
  #14 Product 14: $112.00
  #15 Product 15: $120.00
  #16 Product 16: $128.00
  #17 Product 17: $136.00
  #18 Product 18: $144.00
  #19 Product 19: $152.00
  #20 Product 20: $160.00
What happened here: withBatch is not limited to inserts. You can batch UPDATE and DELETE statements too. The pattern is the same: provide a parameterized SQL template and call addBatch() with different parameter values. This is especially useful for bulk data cleanup, price adjustments, and status changes across many records.
Example 10: Connection Lifecycle and cacheConnection
What we’re doing: Understanding how groovy.sql.Sql manages connections and using the cacheConnection option.
Example 10: Connection Lifecycle
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
import org.h2.jdbcx.JdbcDataSource
// Setup DataSource
def ds = new JdbcDataSource()
ds.url = 'jdbc:h2:mem:lifecycledb'
ds.user = 'sa'
// When created from DataSource, Sql gets a new connection each time
def sql = new Sql(ds)
sql.execute '''CREATE TABLE demo (id INT, value VARCHAR(50))'''
// cacheConnection - reuse the same connection across operations
println "=== Without cacheConnection ==="
sql.execute "INSERT INTO demo VALUES (1, 'first')"
println "Connection after insert: ${sql.connection.hashCode()}"
sql.eachRow('SELECT * FROM demo') { row -> println " ${row.value}" }
println "Connection after select: ${sql.connection.hashCode()}"
println "\n=== With cacheConnection ==="
sql.cacheConnection { cachedSql ->
// All operations in this block share the same connection
cachedSql.execute "INSERT INTO demo VALUES (2, 'second')"
println "Connection after insert: ${cachedSql.connection.hashCode()}"
cachedSql.eachRow('SELECT * FROM demo ORDER BY id') { row -> println " ${row.value}" }
println "Connection after select: ${cachedSql.connection.hashCode()}"
}
// cacheConnection + withTransaction combo
println "\n=== cacheConnection + Transaction ==="
sql.cacheConnection { cachedSql ->
cachedSql.withTransaction {
cachedSql.execute "INSERT INTO demo VALUES (3, 'third')"
cachedSql.execute "INSERT INTO demo VALUES (4, 'fourth')"
}
def count = cachedSql.firstRow('SELECT COUNT(*) AS cnt FROM demo').cnt
println "Total rows after transaction: ${count}"
}
sql.close()
Output
=== Without cacheConnection ===
Connection after insert: 1234567
  first
Connection after select: 2345678

=== With cacheConnection ===
Connection after insert: 3456789
  first
  second
Connection after select: 3456789

=== cacheConnection + Transaction ===
Total rows after transaction: 4
What happened here: When Sql is created from a DataSource, each operation may get a different connection from the pool. cacheConnection forces all operations inside the closure to use the same connection. This is important when you need operations to see each other’s uncommitted changes, or when using temporary tables that are connection-scoped. Combined with withTransaction, it ensures transactional consistency.
Example 11: Retry Logic for Transient Failures
What we’re doing: Implementing retry logic for database operations that might fail due to transient issues like connection timeouts or deadlocks.
Example 11: Transaction Retry Logic
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql
def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
sql.execute '''CREATE TABLE counters (name VARCHAR(50) PRIMARY KEY, value INT)'''
sql.execute "INSERT INTO counters VALUES ('retry_demo', 0)"
// Retry helper for transient failures
def withRetry(int maxRetries = 3, Closure operation) {
def attempt = 0
while (true) {
attempt++
try {
return operation()
} catch (Exception e) {
if (attempt >= maxRetries) {
println " All ${maxRetries} attempts failed. Giving up."
throw e
}
println " Attempt ${attempt} failed: ${e.message}. Retrying..."
        Thread.sleep(100 * attempt) // back off longer on each attempt (linear here; production code often uses exponential backoff)
}
}
}
// Simulate an operation that might fail
def failCount = 0
println "=== Retry with Transient Failures ==="
def result = withRetry(3) {
sql.withTransaction {
failCount++
if (failCount <= 2) {
throw new RuntimeException("Simulated transient error #${failCount}")
}
sql.executeUpdate("UPDATE counters SET value = value + 1 WHERE name = 'retry_demo'")
return sql.firstRow("SELECT value FROM counters WHERE name = 'retry_demo'").value
}
}
println "Success on attempt ${failCount}! Counter value: ${result}"
// Retry with permanent failure
println "\n=== Retry with Permanent Failure ==="
try {
withRetry(3) {
sql.withTransaction {
throw new RuntimeException("This always fails")
}
}
} catch (Exception e) {
println "Permanent failure: ${e.message}"
}
sql.close()
Output
=== Retry with Transient Failures ===
  Attempt 1 failed: Simulated transient error #1. Retrying...
  Attempt 2 failed: Simulated transient error #2. Retrying...
Success on attempt 3! Counter value: 1

=== Retry with Permanent Failure ===
  Attempt 1 failed: This always fails. Retrying...
  Attempt 2 failed: This always fails. Retrying...
  All 3 attempts failed. Giving up.
Permanent failure: This always fails
What happened here: Transient failures (deadlocks, connection timeouts, lock waits) are inevitable in production. The retry helper wraps any operation with automatic retries and a growing delay between attempts. Because withTransaction rolls back on failure, each retry starts clean. The key is distinguishing transient errors (retry-able) from permanent errors (give up immediately). In production, you would check the SQL error code to make this distinction.
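Checking the error code can be sketched concretely. JDBC surfaces the failure category through SQLException.getSQLState(); by convention, SQLState class 40 covers transaction rollbacks (serialization failures, deadlocks) and class 08 covers connection errors. The classifier below is our illustration, and exact codes vary by database:

```groovy
import java.sql.SQLException

// Illustrative transient-failure classifier (our helper, not a library API).
// SQLState prefixes: '40' = transaction rollback (e.g. 40001 serialization
// failure), '08' = connection exception. Walk the cause chain because
// drivers often wrap the SQLException in something else.
def isTransient = { Throwable t ->
    for (def cause = t; cause != null; cause = cause.cause) {
        if (cause instanceof SQLException) {
            def state = cause.getSQLState()
            if (state?.startsWith('40') || state?.startsWith('08')) {
                return true
            }
        }
    }
    return false
}

assert isTransient(new SQLException('deadlock detected', '40001'))
assert isTransient(new RuntimeException('wrapped',
        new SQLException('connection reset', '08006')))
assert !isTransient(new SQLException('syntax error', '42000'))
```

Plugged into the retry helper, you would rethrow immediately when isTransient returns false instead of burning all the attempts on an error that can never succeed.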
Connection Pooling with HikariCP
HikariCP is the recommended connection pool for Groovy applications. Here is a reference configuration for common databases.
HikariCP Configuration Reference
// @Grab('com.zaxxer:HikariCP:5.1.0')
// import com.zaxxer.hikari.HikariConfig
// import com.zaxxer.hikari.HikariDataSource
// === MySQL Configuration ===
// def mysqlConfig = new HikariConfig()
// mysqlConfig.jdbcUrl = 'jdbc:mysql://localhost:3306/mydb'
// mysqlConfig.username = System.getenv('DB_USER')
// mysqlConfig.password = System.getenv('DB_PASS')
// mysqlConfig.maximumPoolSize = 20
// mysqlConfig.addDataSourceProperty('cachePrepStmts', 'true')
// mysqlConfig.addDataSourceProperty('prepStmtCacheSize', '250')
// mysqlConfig.addDataSourceProperty('prepStmtCacheSqlLimit', '2048')
// === PostgreSQL Configuration ===
// def pgConfig = new HikariConfig()
// pgConfig.jdbcUrl = 'jdbc:postgresql://localhost:5432/mydb'
// pgConfig.username = System.getenv('DB_USER')
// pgConfig.password = System.getenv('DB_PASS')
// pgConfig.maximumPoolSize = 15
// pgConfig.addDataSourceProperty('reWriteBatchedInserts', 'true')
// === Recommended Pool Sizes ===
// Small app: maxPoolSize = 5, minIdle = 2
// Medium app: maxPoolSize = 15, minIdle = 5
// Large app: maxPoolSize = 30, minIdle = 10
// Formula: connections = (core_count * 2) + effective_spindle_count
println "HikariCP Configuration Reference"
println "Pool size formula: connections = (CPU cores * 2) + disk spindles"
println "For 8-core server with SSD: (8 * 2) + 1 = 17 connections"
Output
HikariCP Configuration Reference
Pool size formula: connections = (CPU cores * 2) + disk spindles
For 8-core server with SSD: (8 * 2) + 1 = 17 connections
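To make the reference concrete, here is a minimal end-to-end sketch that wires a HikariDataSource into groovy.sql.Sql. The H2 in-memory URL and the dependency versions are assumptions chosen so the snippet runs without a real database server:

```groovy
@Grab('com.zaxxer:HikariCP:5.1.0')
@Grab('com.h2database:h2:2.2.224')
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import groovy.sql.Sql

def config = new HikariConfig()
config.jdbcUrl = 'jdbc:h2:mem:pooldemo;DB_CLOSE_DELAY=-1'
config.username = 'sa'
config.password = ''
config.maximumPoolSize = 5
config.minimumIdle = 2

def dataSource = new HikariDataSource(config)
try {
    def sql = new Sql(dataSource)
    try {
        println sql.firstRow('SELECT 1 AS ok')   // borrows a pooled connection
    } finally {
        sql.close()        // returns the connection to the pool, does not destroy it
    }
} finally {
    dataSource.close()     // shut the pool down on application exit
}
```

The same shape applies to the MySQL and PostgreSQL configurations above; only the jdbcUrl, credentials, and driver-specific properties change.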
Connection Management Strategies
| Strategy | Use Case | Connection Lifetime |
|---|---|---|
| Sql.newInstance() | Scripts, one-off tasks | Until close() called |
| Sql.withInstance() | Scripts needing auto-close | Until closure exits |
| new Sql(dataSource) | Web apps, services | Per operation (pooled) |
| HikariCP DataSource | Production applications | Managed by pool |
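The first two script-oriented strategies from the table can be sketched as follows (the H2 URL, driver, and dependency version are assumptions for demonstration):

```groovy
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

// Sql.withInstance: opens a connection, runs the closure, and guarantees
// close() even if the closure throws.
Sql.withInstance('jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1', 'sa', '', 'org.h2.Driver') { sql ->
    println sql.firstRow('SELECT 1 AS ok')
}

// Sql.newInstance: you own the lifetime and must close it yourself.
def sql = Sql.newInstance('jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1', 'sa', '', 'org.h2.Driver')
try {
    println sql.firstRow('SELECT 2 AS ok')
} finally {
    sql.close()
}
```

For the pooled strategies, the HikariCP example in the previous section shows the equivalent lifecycle.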
Best Practices
Transactions:
- Use withTransaction for most cases — it handles commit/rollback automatically
- Keep transactions as short as possible — long transactions hold locks and reduce concurrency
- Use savepoints when you need partial rollback within a larger transaction
- Always validate business rules inside the transaction, not before
- Implement retry logic for transient failures like deadlocks

Connection Pooling:
- Always use connection pooling in production — never Sql.newInstance() per request
- Use HikariCP — it is fast, lightweight, and widely adopted
- Size your pool based on the formula: (CPU cores * 2) + disk spindles
- Close Sql instances after use — this returns the connection to the pool
- Close the DataSource on application shutdown
Common Pitfalls
Pitfall 1: Transaction Scope Too Large
Pitfall: Long Transactions
// BAD: Doing I/O inside a transaction holds the connection
// sql.withTransaction {
// sql.executeUpdate(...) // DB operation
// httpClient.post(webhook) // Network call -- could take seconds!
// sql.executeUpdate(...) // More DB operations
// }
// The transaction holds locks while waiting for the HTTP call!
// GOOD: Minimize transaction scope
// sql.withTransaction {
// sql.executeUpdate(...)
// sql.executeUpdate(...)
// }
// httpClient.post(webhook) // Network call OUTSIDE the transaction
println "Keep transactions short -- no I/O, no network calls inside!"
Output
Keep transactions short -- no I/O, no network calls inside!
Pitfall 2: Connection Pool Exhaustion
Pitfall: Pool Exhaustion
// BAD: Creating Sql from DataSource but never closing
// threads.each { request ->
// def sql = new Sql(dataSource)
// sql.eachRow(...) { ... }
// // Forgot sql.close()! Connection leaked!
// }
// After maxPoolSize requests, new requests wait forever (connection timeout)
// GOOD: Always close
// threads.each { request ->
// def sql = new Sql(dataSource)
// try {
// sql.eachRow(...) { ... }
// } finally {
// sql.close() // Returns connection to pool
// }
// }
println "Always close Sql instances created from DataSource!"
println "Leaked connections = pool exhaustion = application freeze."
Output
Always close Sql instances created from DataSource!
Leaked connections = pool exhaustion = application freeze.
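One way to make the "always close" rule hard to forget is a small helper of our own. Note that withPooledSql is not part of groovy.sql; it simply packages the try/finally pattern above so callers cannot leak:

```groovy
import groovy.sql.Sql

// Borrow a pooled connection, run the work, and always give it back.
def withPooledSql(javax.sql.DataSource dataSource, Closure work) {
    def sql = new Sql(dataSource)
    try {
        return work(sql)
    } finally {
        sql.close()   // returns the connection to the pool
    }
}

// Usage: every caller gets the leak-proof pattern for free.
// withPooledSql(dataSource) { sql ->
//     sql.eachRow('SELECT * FROM users') { println it.name }
// }
```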
Conclusion
We covered the two pillars of production database programming in Groovy: transactions with withTransaction, manual commit/rollback, and savepoints; and connection pooling with DataSource, HikariCP, and cacheConnection.
These concepts transform Groovy database code from scripts into production-ready applications. Transactions guarantee data integrity. Connection pooling guarantees performance and scalability. Together with the querying techniques from Groovy SQL Basics and the CRUD patterns from Groovy SQL CRUD, you now have a complete database toolkit in Groovy.
Summary
- withTransaction handles commit/rollback automatically — use it for most transaction needs
- Savepoints enable partial rollback within a transaction — useful for complex workflows
- Combine withBatch and withTransaction for maximum bulk operation performance
- Use HikariCP connection pooling in production — never create connections per request
- Always close Sql instances to return connections to the pool
If you also work with build tools, CI/CD pipelines, or cloud CLIs, check out Command Playground to practice 105+ CLI tools directly in your browser — no install needed.
Up next: Groovy SQL Stored Procedures
Frequently Asked Questions
How do I use transactions in Groovy SQL?
Use sql.withTransaction { ... }. All operations inside the closure run as a single atomic unit. If the closure completes normally, the transaction is committed. If any exception is thrown, it automatically rolls back. Example: sql.withTransaction { sql.executeUpdate('UPDATE accounts SET balance = balance - 100 WHERE id = 1'); sql.executeUpdate('UPDATE accounts SET balance = balance + 100 WHERE id = 2') }.
What are savepoints in Groovy SQL transactions?
Savepoints are markers within a transaction that let you roll back to a specific point without losing the entire transaction. Create one with def sp = connection.setSavepoint('name') and roll back to it with connection.rollback(sp). This is useful when some operations in a transaction are optional — you can roll back the optional part while keeping the rest.
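A sketch of that pattern, using Sql.cacheConnection to get hold of the underlying java.sql.Connection (the table and column names are hypothetical):

```groovy
// Manual transaction with an optional step guarded by a savepoint.
sql.cacheConnection { conn ->
    conn.autoCommit = false
    try {
        sql.executeUpdate("INSERT INTO orders (item) VALUES ('book')")          // required
        def sp = conn.setSavepoint('optionalAudit')
        try {
            sql.executeUpdate("INSERT INTO audit_log (msg) VALUES ('ordered')") // optional
        } catch (Exception e) {
            conn.rollback(sp)   // undo only the audit insert, keep the order
        }
        conn.commit()
    } catch (Exception e) {
        conn.rollback()
        throw e
    } finally {
        conn.autoCommit = true
    }
}
```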
How do I set up connection pooling with Groovy SQL?
Use HikariCP: @Grab('com.zaxxer:HikariCP:5.1.0'), configure a HikariConfig with your JDBC URL, credentials, and pool size, create a HikariDataSource, and pass it to new Sql(dataSource). Always close Sql instances after use to return connections to the pool, and close the DataSource on application shutdown.
What is the difference between withTransaction and manual commit/rollback?
withTransaction is declarative — you just write your operations and Groovy handles commit/rollback based on whether an exception occurs. Manual commit/rollback gives you more control: you can commit at specific points, roll back to savepoints, or make conditional decisions. Use withTransaction for most cases and manual control for complex multi-phase operations.
How do I combine withBatch and withTransaction in Groovy?
Nest withBatch inside withTransaction: sql.withTransaction { sql.withBatch(100, 'INSERT INTO t (col) VALUES (?)') { ps -> data.each { ps.addBatch([it]) } } }. This gives you both performance (batch reduces round trips) and safety (transaction ensures all-or-nothing). Without the transaction wrapper, each batch flush auto-commits independently.
Related Posts
Previous in Series: Groovy SQL CRUD – Insert, Update, Delete
Next in Series: Groovy SQL Stored Procedures
First in Series: Groovy SQL – Connect and Query Databases
Related Topics You Might Like:
- Groovy SQL – Connect and Query Databases
- Groovy SQL CRUD – Insert, Update, Delete
- Groovy Map Tutorial – The Complete Guide
This post is part of the Groovy & Grails Cookbook series on TechnoScripts.com
