Groovy SQL CRUD – Insert, Update, Delete with 10+ Examples

Groovy SQL insert, update, delete, and read (CRUD) operations with 10+ examples. Learn execute(), executeInsert(), executeUpdate(), batch inserts, transactions, and prepared statements.

“Create, Read, Update, Delete — four operations that power every application.” — Unknown


Last Updated: March 2026 | Tested on: Groovy 5.x, Java 17+ | Difficulty: Intermediate | Reading Time: 18 minutes

Querying a database is only half the story – real applications need insert, update, and delete operations too. Building on the connection basics from our previous tutorial, the groovy.sql.Sql class provides specialized methods for the full CRUD lifecycle: execute(), executeInsert(), and executeUpdate().

Groovy makes CRUD operations just as elegant as querying. The groovy.sql.Sql class provides specialized methods for each operation: execute() for general-purpose SQL, executeInsert() to get generated keys back, and executeUpdate() to know how many rows were affected. Add batch inserts, prepared statements, and transactions to the mix, and you have a complete toolkit for data manipulation.

In this tutorial, we will walk through 10+ practical examples covering every CRUD operation. We will also explore batch inserts for bulk data loading and generated key retrieval for auto-increment columns. When you are ready for transaction management, head over to Groovy SQL Transactions and Connection Pooling.

CRUD Methods Overview

Before we start with examples, here is a quick reference of the main methods groovy.sql.Sql provides for data manipulation. According to the Groovy SQL API documentation, each method is optimized for a specific use case.

Method                   | Returns                      | Best For
execute(sql)             | boolean                      | DDL statements, general-purpose SQL
executeInsert(sql)       | List<List> (generated keys)  | INSERT when you need auto-generated keys
executeUpdate(sql)       | int (affected rows)          | UPDATE, DELETE when you need row count
execute(sql, params)     | boolean                      | Parameterized DDL/DML
withBatch(closure)       | varies                       | Bulk inserts/updates for performance
withTransaction(closure) | varies                       | Atomic multi-statement operations

10 Practical CRUD Examples

Example 1: Basic Insert with execute()

What we’re doing: Inserting rows using the general-purpose execute() method with parameterized queries.

Example 1: Basic Insert

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''
    CREATE TABLE employees (
        id INT PRIMARY KEY AUTO_INCREMENT,
        name VARCHAR(100) NOT NULL,
        department VARCHAR(50),
        salary DECIMAL(10,2),
        hire_date DATE
    )
'''

// Insert with GString parameters (safe -- uses PreparedStatement)
def name = 'Alice Johnson'
def dept = 'Engineering'
def salary = 95000.00
def hireDate = '2022-03-15'

sql.execute "INSERT INTO employees (name, department, salary, hire_date) VALUES (${name}, ${dept}, ${salary}, ${hireDate})"

// Insert with ? placeholders
sql.execute 'INSERT INTO employees (name, department, salary, hire_date) VALUES (?, ?, ?, ?)',
    ['Bob Smith', 'Marketing', 72000.00, '2021-07-01']

// Insert with named parameters
sql.execute 'INSERT INTO employees (name, department, salary, hire_date) VALUES (:name, :dept, :sal, :date)',
    [name: 'Charlie Brown', dept: 'Engineering', sal: 88000.00, date: '2023-01-10']

// Verify
sql.eachRow('SELECT * FROM employees ORDER BY id') { row ->
    println "#${row.id} ${row.name} | ${row.department} | \$${row.salary} | ${row.hire_date}"
}

sql.close()

Output

#1 Alice Johnson | Engineering | $95000.00 | 2022-03-15
#2 Bob Smith | Marketing | $72000.00 | 2021-07-01
#3 Charlie Brown | Engineering | $88000.00 | 2023-01-10

What happened here: The execute() method accepts SQL in three flavors: GString with ${variables}, traditional ? placeholders with a list, and named :param placeholders with a map. All three create parameterized queries under the hood, so they are safe from SQL injection. The method returns a boolean indicating whether the first result is a ResultSet, which is rarely useful for inserts.
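If you are curious about that boolean, it mirrors JDBC's Statement.execute(): true means the first result is a ResultSet, false means it is an update count. A quick check (table and database names here are illustrative):

```groovy
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:retval', 'sa', '', 'org.h2.Driver')
sql.execute 'CREATE TABLE t (id INT)'

// INSERT produces an update count, so execute() returns false
assert !sql.execute('INSERT INTO t VALUES (1)')

// SELECT produces a ResultSet, so execute() returns true
assert sql.execute('SELECT * FROM t')

println 'Return values behave as documented'
sql.close()
```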

Example 2: Insert and Get Generated Keys with executeInsert()

What we’re doing: Using executeInsert() to insert rows and retrieve auto-generated primary keys.

Example 2: executeInsert() with Generated Keys

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE products (
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(100),
    price DECIMAL(10,2),
    category VARCHAR(50)
)'''

// executeInsert returns a List of generated keys
def keys1 = sql.executeInsert("INSERT INTO products (name, price, category) VALUES ('Laptop', 999.99, 'Electronics')")
println "Inserted Laptop with keys: ${keys1}"
println "Generated ID: ${keys1[0][0]}"

// With GString parameters
def prodName = 'Headphones'
def prodPrice = 79.99
def keys2 = sql.executeInsert("INSERT INTO products (name, price, category) VALUES (${prodName}, ${prodPrice}, ${'Audio'})")
println "\nInserted ${prodName} with ID: ${keys2[0][0]}"

// With ? placeholders
def keys3 = sql.executeInsert('INSERT INTO products (name, price, category) VALUES (?, ?, ?)',
    ['Keyboard', 149.99, 'Peripherals'])
println "Inserted Keyboard with ID: ${keys3[0][0]}"

// Insert multiple rows and collect IDs
def items = [
    ['Mouse', 29.99, 'Peripherals'],
    ['Monitor', 349.99, 'Electronics'],
    ['Webcam', 89.99, 'Peripherals']
]

def insertedIds = items.collect { item ->
    def keys = sql.executeInsert('INSERT INTO products (name, price, category) VALUES (?, ?, ?)', item)
    keys[0][0]
}
println "\nBulk inserted IDs: ${insertedIds}"

// Verify all
println "\n=== All Products ==="
sql.eachRow('SELECT * FROM products ORDER BY id') { row ->
    println "#${row.id} ${row.name} - \$${row.price} (${row.category})"
}

sql.close()

Output

Inserted Laptop with keys: [[1]]
Generated ID: 1

Inserted Headphones with ID: 2
Inserted Keyboard with ID: 3

Bulk inserted IDs: [4, 5, 6]

=== All Products ===
#1 Laptop - $999.99 (Electronics)
#2 Headphones - $79.99 (Audio)
#3 Keyboard - $149.99 (Peripherals)
#4 Mouse - $29.99 (Peripherals)
#5 Monitor - $349.99 (Electronics)
#6 Webcam - $89.99 (Peripherals)

What happened here: executeInsert() is designed specifically for INSERT statements. It returns a List<List> where each inner list contains the generated keys for one row. For a single-column auto-increment key, you access it with keys[0][0]. This is essential when you need the generated ID immediately after insertion — for example, to insert related child records.

Example 3: Update with executeUpdate()

What we’re doing: Updating existing rows using executeUpdate(), which returns the number of rows affected.

Example 3: Update with executeUpdate()

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE employees (id INT PRIMARY KEY, name VARCHAR(100), department VARCHAR(50), salary DECIMAL(10,2))'''
sql.execute "INSERT INTO employees VALUES (1, 'Alice Johnson', 'Engineering', 95000.00)"
sql.execute "INSERT INTO employees VALUES (2, 'Bob Smith', 'Marketing', 72000.00)"
sql.execute "INSERT INTO employees VALUES (3, 'Charlie Brown', 'Engineering', 88000.00)"
sql.execute "INSERT INTO employees VALUES (4, 'Diana Prince', 'HR', 80000.00)"
sql.execute "INSERT INTO employees VALUES (5, 'Eve Wilson', 'Engineering', 102000.00)"

// Update a single row
def rowsAffected = sql.executeUpdate("UPDATE employees SET salary = 98000.00 WHERE id = ${1}")
println "Updated ${rowsAffected} row(s) - Alice's salary"

// Update multiple rows
def deptUpdate = sql.executeUpdate("UPDATE employees SET salary = salary * 1.10 WHERE department = ${'Engineering'}")
println "Updated ${deptUpdate} row(s) - 10% raise for Engineering"

// Update with named parameters
def params = [newDept: 'Digital Marketing', oldDept: 'Marketing']
def deptRename = sql.executeUpdate('UPDATE employees SET department = :newDept WHERE department = :oldDept', params)
println "Renamed department: ${deptRename} row(s) affected"

// Update with no matching rows
def noMatch = sql.executeUpdate("UPDATE employees SET salary = 0 WHERE department = ${'Non-Existent'}")
println "Non-existent department: ${noMatch} row(s) affected"

// Verify changes
println "\n=== After Updates ==="
sql.eachRow('SELECT * FROM employees ORDER BY id') { row ->
    println "#${row.id} ${row.name} | ${row.department} | \$${row.salary}"
}

sql.close()

Output

Updated 1 row(s) - Alice's salary
Updated 3 row(s) - 10% raise for Engineering
Renamed department: 1 row(s) affected
Non-existent department: 0 row(s) affected

=== After Updates ===
#1 Alice Johnson | Engineering | $107800.00
#2 Bob Smith | Digital Marketing | $72000.00
#3 Charlie Brown | Engineering | $96800.00
#4 Diana Prince | HR | $80000.00
#5 Eve Wilson | Engineering | $112200.00

What happened here: executeUpdate() returns an int representing the number of rows affected. This is incredibly useful for validation — you can check if the update actually modified anything. Notice Alice got the 10% raise applied on top of her new salary (98000 * 1.10 = 107800), not on her original salary. Order matters when running sequential updates.
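Building on that, a small guard turns an unexpected row count into an error so the check cannot be forgotten. A minimal sketch — requireUpdated is a hypothetical helper, not part of groovy.sql.Sql:

```groovy
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

// Hypothetical helper: fail fast when an update affects an unexpected number of rows
def requireUpdated(Sql db, int expected, String stmt, List params) {
    def affected = db.executeUpdate(stmt, params)
    if (affected != expected) {
        throw new IllegalStateException("Expected ${expected} row(s), got ${affected}")
    }
    affected
}

def sql = Sql.newInstance('jdbc:h2:mem:guarddemo', 'sa', '', 'org.h2.Driver')
sql.execute 'CREATE TABLE employees (id INT PRIMARY KEY, salary DECIMAL(10,2))'
sql.execute 'INSERT INTO employees VALUES (1, 95000.00)'

requireUpdated(sql, 1, 'UPDATE employees SET salary = ? WHERE id = ?', [98000.00, 1])
println 'Update verified: exactly 1 row changed'

try {
    requireUpdated(sql, 1, 'UPDATE employees SET salary = 0 WHERE id = ?', [999])
} catch (IllegalStateException e) {
    println "Caught: ${e.message}"   // no row with id 999, so 0 != 1
}
sql.close()
```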

Example 4: Delete with executeUpdate()

What we’re doing: Deleting rows using executeUpdate() and verifying the deletion with row counts.

Example 4: Delete Operations

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE tasks (id INT PRIMARY KEY, title VARCHAR(200), status VARCHAR(20), priority INT)'''
sql.execute "INSERT INTO tasks VALUES (1, 'Fix login bug', 'DONE', 1)"
sql.execute "INSERT INTO tasks VALUES (2, 'Add search feature', 'IN_PROGRESS', 2)"
sql.execute "INSERT INTO tasks VALUES (3, 'Update docs', 'DONE', 3)"
sql.execute "INSERT INTO tasks VALUES (4, 'Deploy v2.0', 'TODO', 1)"
sql.execute "INSERT INTO tasks VALUES (5, 'Write tests', 'DONE', 2)"
sql.execute "INSERT INTO tasks VALUES (6, 'Code review', 'IN_PROGRESS', 2)"

println "Before delete: ${sql.firstRow('SELECT COUNT(*) AS cnt FROM tasks').cnt} tasks"

// Delete a single row by ID
def deleted1 = sql.executeUpdate("DELETE FROM tasks WHERE id = ${1}")
println "Deleted task #1: ${deleted1} row(s)"

// Delete by condition
def deletedDone = sql.executeUpdate("DELETE FROM tasks WHERE status = ${'DONE'}")
println "Deleted DONE tasks: ${deletedDone} row(s)"

// Delete with multiple conditions
def deletedLowPriority = sql.executeUpdate(
    'DELETE FROM tasks WHERE status = ? AND priority > ?', ['TODO', 2])
println "Deleted low-priority TODO tasks: ${deletedLowPriority} row(s)"

// Try to delete non-existent
def noDelete = sql.executeUpdate("DELETE FROM tasks WHERE id = ${999}")
println "Delete non-existent: ${noDelete} row(s)"

// Remaining tasks
println "\n=== Remaining Tasks ==="
sql.eachRow('SELECT * FROM tasks ORDER BY id') { row ->
    println "#${row.id} ${row.title} [${row.status}] priority=${row.priority}"
}

println "\nAfter delete: ${sql.firstRow('SELECT COUNT(*) AS cnt FROM tasks').cnt} tasks"

sql.close()

Output

Before delete: 6 tasks
Deleted task #1: 1 row(s)
Deleted DONE tasks: 2 row(s)
Deleted low-priority TODO tasks: 0 row(s)
Delete non-existent: 0 row(s)

=== Remaining Tasks ===
#2 Add search feature [IN_PROGRESS] priority=2
#4 Deploy v2.0 [TODO] priority=1
#6 Code review [IN_PROGRESS] priority=2

After delete: 3 tasks

What happened here: DELETE operations use the same executeUpdate() method as UPDATE. The return value tells you exactly how many rows were removed. Always check this value in production code — if a delete returns 0 when you expected 1, something is wrong (wrong ID, already deleted, etc.). Also, always use a WHERE clause with DELETE unless you truly want to wipe the entire table.

Example 5: Insert from Map Data

What we’re doing: Inserting records from Groovy maps, which is a common pattern when processing JSON or form data.

Example 5: Insert from Maps

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE contacts (
    id INT PRIMARY KEY AUTO_INCREMENT,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    email VARCHAR(100),
    phone VARCHAR(20)
)'''

// A list of maps (simulating data from JSON or form submissions)
def contacts = [
    [firstName: 'Alice', lastName: 'Johnson', email: 'alice@example.com', phone: '555-0101'],
    [firstName: 'Bob', lastName: 'Smith', email: 'bob@example.com', phone: '555-0102'],
    [firstName: 'Charlie', lastName: 'Brown', email: 'charlie@example.com', phone: '555-0103'],
    [firstName: 'Diana', lastName: 'Prince', email: 'diana@example.com', phone: '555-0104'],
    [firstName: 'Eve', lastName: 'Wilson', email: 'eve@example.com', phone: '555-0105']
]

// Insert each map using named parameters
def insertedCount = 0
contacts.each { contact ->
    sql.executeInsert(
        'INSERT INTO contacts (first_name, last_name, email, phone) VALUES (:firstName, :lastName, :email, :phone)',
        contact
    )
    insertedCount++
}
println "Inserted ${insertedCount} contacts"

// Helper method to build INSERT from map keys
def insertFromMap(Sql db, String table, Map data) {
    def columns = data.keySet().join(', ')
    def placeholders = data.keySet().collect { ":${it}" }.join(', ')
    def insertSql = "INSERT INTO ${table} (${columns}) VALUES (${placeholders})"
    return db.executeInsert(insertSql, data)
}

// Use the helper
sql.execute '''CREATE TABLE notes (id INT AUTO_INCREMENT, title VARCHAR(100), body TEXT, created DATE)'''
def noteData = [title: 'Meeting Notes', body: 'Discussed Q2 roadmap', created: '2026-03-01']
def noteKeys = insertFromMap(sql, 'notes', noteData)
println "Inserted note with ID: ${noteKeys[0][0]}"

// Verify contacts
println "\n=== All Contacts ==="
sql.eachRow('SELECT * FROM contacts ORDER BY id') { row ->
    println "#${row.id} ${row.first_name} ${row.last_name} | ${row.email} | ${row.phone}"
}

sql.close()

Output

Inserted 5 contacts

Inserted note with ID: 1

=== All Contacts ===
#1 Alice Johnson | alice@example.com | 555-0101
#2 Bob Smith | bob@example.com | 555-0102
#3 Charlie Brown | charlie@example.com | 555-0103
#4 Diana Prince | diana@example.com | 555-0104
#5 Eve Wilson | eve@example.com | 555-0105

What happened here: Named parameters (:paramName) map perfectly to Groovy maps. When your data comes from JSON parsing or form submissions, you already have maps — so you can pass them directly to executeInsert(). The helper method insertFromMap() shows how to dynamically build INSERT statements from map keys, which is handy for generic data handling.
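One caveat: a helper like insertFromMap() splices map keys directly into the SQL text, so hostile keys could inject SQL even though the values are bound safely through a PreparedStatement. A hardened sketch — the column allow-list and safeInsertFromMap name are hypothetical additions, not part of the helper above:

```groovy
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

// Hypothetical hardened variant: reject map keys that are not known columns,
// because keys become SQL text while values go through a PreparedStatement
def safeInsertFromMap(Sql db, String table, Map data, Set allowed) {
    def badKeys = data.keySet() - allowed
    if (badKeys) {
        throw new IllegalArgumentException("Unknown columns: ${badKeys}")
    }
    def columns = data.keySet().join(', ')
    def placeholders = data.keySet().collect { ":${it}" }.join(', ')
    db.executeInsert("INSERT INTO ${table} (${columns}) VALUES (${placeholders})".toString(), data)
}

def sql = Sql.newInstance('jdbc:h2:mem:safemap', 'sa', '', 'org.h2.Driver')
sql.execute 'CREATE TABLE notes (id INT PRIMARY KEY AUTO_INCREMENT, title VARCHAR(100))'

safeInsertFromMap(sql, 'notes', [title: 'Meeting Notes'], ['title'] as Set)
try {
    safeInsertFromMap(sql, 'notes', [author: 'x'], ['title'] as Set)   // unknown column
} catch (IllegalArgumentException e) {
    println "Rejected: ${e.message}"
}
sql.close()
```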

Example 6: Upsert Pattern (Insert or Update)

What we’re doing: Implementing the “upsert” pattern — inserting a row if it does not exist, or updating it if it does.

Example 6: Upsert Pattern

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE settings (
    setting_key VARCHAR(50) PRIMARY KEY,
    setting_value VARCHAR(200),
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)'''

// H2 supports MERGE (similar to UPSERT). Note: KEY and VALUE are reserved
// words in H2 2.x, so the columns are named setting_key/setting_value.
def upsert = { String key, String value ->
    sql.execute '''
        MERGE INTO settings (setting_key, setting_value, updated_at)
        KEY (setting_key)
        VALUES (?, ?, CURRENT_TIMESTAMP)
    ''', [key, value]
}

// Insert new settings
upsert('theme', 'dark')
upsert('language', 'en')
upsert('page_size', '25')
println "=== Initial Settings ==="
sql.eachRow('SELECT * FROM settings ORDER BY setting_key') { println "  ${it.setting_key} = ${it.setting_value}" }

// Update existing settings (same keys, different values)
upsert('theme', 'light')
upsert('page_size', '50')
println "\n=== After Upsert ==="
sql.eachRow('SELECT * FROM settings ORDER BY setting_key') { println "  ${it.setting_key} = ${it.setting_value}" }

// Generic upsert for databases without MERGE (MySQL, PostgreSQL, etc.)
def manualUpsert = { Sql db, String table, String keyCol, Map data ->
    def existing = db.firstRow("SELECT * FROM ${table} WHERE ${keyCol} = ?".toString(), [data[keyCol]])
    if (existing) {
        def setClauses = data.findAll { it.key != keyCol }.collect { "${it.key} = :${it.key}" }.join(', ')
        db.executeUpdate("UPDATE ${table} SET ${setClauses} WHERE ${keyCol} = :${keyCol}".toString(), data)
        return 'updated'
    } else {
        def columns = data.keySet().join(', ')
        def placeholders = data.keySet().collect { ":${it}" }.join(', ')
        db.executeInsert("INSERT INTO ${table} (${columns}) VALUES (${placeholders})".toString(), data)
        return 'inserted'
    }
}

def result1 = manualUpsert(sql, 'settings', 'setting_key', [setting_key: 'timezone', setting_value: 'UTC'])
def result2 = manualUpsert(sql, 'settings', 'setting_key', [setting_key: 'theme', setting_value: 'solarized'])
println "\ntimezone: ${result1}"
println "theme: ${result2}"

println "\n=== Final Settings ==="
sql.eachRow('SELECT * FROM settings ORDER BY setting_key') { println "  ${it.setting_key} = ${it.setting_value}" }

sql.close()

Output

=== Initial Settings ===
  language = en
  page_size = 25
  theme = dark

=== After Upsert ===
  language = en
  page_size = 50
  theme = light

timezone: inserted
theme: updated

=== Final Settings ===
  language = en
  page_size = 50
  theme = solarized
  timezone = UTC

What happened here: The upsert pattern is essential for configuration tables, caching, and sync operations. H2’s MERGE command handles this in a single statement. For databases without MERGE, the manual approach checks for existence first and then inserts or updates accordingly. In production, wrap the manual upsert in a transaction to avoid race conditions.
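Following that advice, here is a sketch of the check-then-write pair running inside withTransaction, using its own small table and hypothetical txUpsert name. The transaction makes the pair atomic on this connection; for full concurrency safety you still want a primary-key or unique constraint (as here) so a racing duplicate insert fails instead of slipping through:

```groovy
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:upsertdemo', 'sa', '', 'org.h2.Driver')
sql.execute 'CREATE TABLE prefs (pref_key VARCHAR(50) PRIMARY KEY, pref_value VARCHAR(200))'

// Check-then-write inside one transaction: both statements commit or roll back together
def txUpsert = { String key, String value ->
    sql.withTransaction {
        def existing = sql.firstRow('SELECT pref_value FROM prefs WHERE pref_key = ?', [key])
        if (existing) {
            sql.executeUpdate('UPDATE prefs SET pref_value = ? WHERE pref_key = ?', [value, key])
        } else {
            sql.executeInsert('INSERT INTO prefs (pref_key, pref_value) VALUES (?, ?)', [key, value])
        }
    }
}

txUpsert('theme', 'dark')
txUpsert('theme', 'light')   // second call takes the UPDATE branch
println sql.firstRow('SELECT pref_value FROM prefs WHERE pref_key = ?', ['theme']).pref_value
sql.close()
```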

Example 7: Prepared Statements for Repeated Operations

What we’re doing: Reusing a single prepared statement for repeated inserts via withBatch, which performs far better than issuing individual execute() calls.

Example 7: Prepared Statements

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE log_entries (
    id INT AUTO_INCREMENT,
    level VARCHAR(10),
    message VARCHAR(500),
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)'''

// Using withBatch for prepared statement reuse
def logMessages = [
    ['INFO', 'Application started'],
    ['DEBUG', 'Loading configuration from /etc/app.conf'],
    ['INFO', 'Database connection established'],
    ['WARN', 'Deprecated API called: /v1/users'],
    ['ERROR', 'Failed to send email: SMTP timeout'],
    ['INFO', 'Processing 1500 records'],
    ['DEBUG', 'Cache hit ratio: 87%'],
    ['INFO', 'Batch job completed in 3.2 seconds'],
    ['WARN', 'Disk usage at 85%'],
    ['ERROR', 'Connection pool exhausted, waiting for available connection']
]

// Batch insert with prepared statement
def start = System.currentTimeMillis()
sql.withBatch(5, 'INSERT INTO log_entries (level, message) VALUES (?, ?)') { ps ->
    logMessages.each { msg ->
        ps.addBatch(msg)
    }
}
def elapsed = System.currentTimeMillis() - start
println "Batch inserted ${logMessages.size()} log entries in ${elapsed}ms"

// Verify
println "\n=== Log Entries ==="
sql.eachRow('SELECT * FROM log_entries ORDER BY id') { row ->
    println "[${row.level.padRight(5)}] ${row.message}"
}

// Count by level
println "\n=== Summary ==="
sql.eachRow('SELECT level, COUNT(*) AS cnt FROM log_entries GROUP BY level ORDER BY level') { row ->
    println "  ${row.level}: ${row.cnt} entries"
}

sql.close()

Output

Batch inserted 10 log entries in 12ms

=== Log Entries ===
[INFO ] Application started
[DEBUG] Loading configuration from /etc/app.conf
[INFO ] Database connection established
[WARN ] Deprecated API called: /v1/users
[ERROR] Failed to send email: SMTP timeout
[INFO ] Processing 1500 records
[DEBUG] Cache hit ratio: 87%
[INFO ] Batch job completed in 3.2 seconds
[WARN ] Disk usage at 85%
[ERROR] Connection pool exhausted, waiting for available connection

=== Summary ===
  DEBUG: 2 entries
  ERROR: 2 entries
  INFO: 4 entries
  WARN: 2 entries

What happened here: withBatch(batchSize, sql) creates a single PreparedStatement and reuses it for every addBatch() call. The first parameter (5) is the batch size — after every 5 additions, the batch is flushed to the database. This dramatically reduces round trips compared to individual inserts. For 10 rows the difference is negligible, but for 10,000+ rows it is substantial.
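withBatch also returns the JDBC update counts, one entry per batched statement, which you can use to confirm every row made it in. A small sketch with an illustrative table:

```groovy
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:countdemo', 'sa', '', 'org.h2.Driver')
sql.execute 'CREATE TABLE events (id INT PRIMARY KEY AUTO_INCREMENT, msg VARCHAR(100))'

// withBatch returns an int[] of per-statement update counts
def counts = sql.withBatch(2, 'INSERT INTO events (msg) VALUES (?)') { ps ->
    ['a', 'b', 'c'].each { ps.addBatch([it]) }
}
println "Statements batched: ${counts.size()}"
// Note: some drivers report Statement.SUCCESS_NO_INFO (-2) instead of exact counts
sql.close()
```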

Example 8: Transactions with withTransaction

What we’re doing: Using withTransaction to ensure multiple operations succeed or fail together as an atomic unit.

Example 8: Transactions

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE accounts (id INT PRIMARY KEY, name VARCHAR(50), balance DECIMAL(10,2))'''
sql.execute "INSERT INTO accounts VALUES (1, 'Alice', 1000.00)"
sql.execute "INSERT INTO accounts VALUES (2, 'Bob', 500.00)"

def showBalances = {
    sql.eachRow('SELECT * FROM accounts ORDER BY id') { row ->
        println "  ${row.name}: \$${row.balance}"
    }
}

println "=== Before Transfer ==="
showBalances()

// Successful transaction: transfer $200 from Alice to Bob
sql.withTransaction {
    def amount = 200.00
    sql.executeUpdate("UPDATE accounts SET balance = balance - ${amount} WHERE id = ${1}")
    sql.executeUpdate("UPDATE accounts SET balance = balance + ${amount} WHERE id = ${2}")
    println "\nTransfer of \$${amount} completed successfully."
}

println "\n=== After Successful Transfer ==="
showBalances()

// Failed transaction: this should rollback
println "\n=== Attempting Invalid Transfer ==="
try {
    sql.withTransaction {
        def amount = 5000.00  // More than Alice has
        sql.executeUpdate("UPDATE accounts SET balance = balance - ${amount} WHERE id = ${1}")

        // Check balance after debit
        def balance = sql.firstRow("SELECT balance FROM accounts WHERE id = ${1}").balance
        if (balance < 0) {
            throw new RuntimeException("Insufficient funds! Balance would be \$${balance}")
        }

        sql.executeUpdate("UPDATE accounts SET balance = balance + ${amount} WHERE id = ${2}")
    }
} catch (Exception e) {
    println "Transaction rolled back: ${e.message}"
}

println "\n=== After Failed Transfer (Rolled Back) ==="
showBalances()

sql.close()

Output

=== Before Transfer ===
  Alice: $1000.00
  Bob: $500.00

Transfer of $200.00 completed successfully.

=== After Successful Transfer ===
  Alice: $800.00
  Bob: $700.00

=== Attempting Invalid Transfer ===
Transaction rolled back: Insufficient funds! Balance would be $-4200.00

=== After Failed Transfer (Rolled Back) ===
  Alice: $800.00
  Bob: $700.00

What happened here: withTransaction wraps all operations in a database transaction. If the closure completes normally, the transaction is committed. If an exception is thrown, it rolls back automatically. This is critical for operations that must be atomic — like financial transfers, where debiting one account without crediting another would be catastrophic. For deeper transaction coverage, see Groovy SQL Transactions and Connection Pooling.

Example 9: Bulk Data Loading

What we’re doing: Loading large amounts of data efficiently using batch inserts combined with transactions.

Example 9: Bulk Data Loading

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE sensor_data (
    id INT AUTO_INCREMENT,
    sensor_id VARCHAR(20),
    temperature DECIMAL(5,2),
    humidity DECIMAL(5,2),
    recorded_at TIMESTAMP
)'''

// Generate 1000 simulated sensor readings
def random = new Random(42)
def sensors = ['SENSOR-A', 'SENSOR-B', 'SENSOR-C', 'SENSOR-D']
def readings = (1..1000).collect { i ->
    [
        sensors[random.nextInt(sensors.size())],
        20.0 + random.nextDouble() * 15,  // temperature: 20-35
        40.0 + random.nextDouble() * 40,  // humidity: 40-80
        "2026-03-${String.format('%02d', (i % 28) + 1)} ${String.format('%02d', i % 24)}:${String.format('%02d', i % 60)}:00"
    ]
}

// Method 1: Individual inserts (slow)
def start1 = System.currentTimeMillis()
readings[0..99].each { r ->
    sql.execute('INSERT INTO sensor_data (sensor_id, temperature, humidity, recorded_at) VALUES (?, ?, ?, ?)', r)
}
def time1 = System.currentTimeMillis() - start1
println "Individual inserts (100 rows): ${time1}ms"

// Method 2: Batch inserts (fast)
def start2 = System.currentTimeMillis()
sql.withBatch(100, 'INSERT INTO sensor_data (sensor_id, temperature, humidity, recorded_at) VALUES (?, ?, ?, ?)') { ps ->
    readings[100..999].each { r ->
        ps.addBatch(r)
    }
}
def time2 = System.currentTimeMillis() - start2
println "Batch inserts (900 rows): ${time2}ms"

// Method 3: Batch + Transaction (fastest)
sql.execute 'DELETE FROM sensor_data'  // clear for re-test
def start3 = System.currentTimeMillis()
sql.withTransaction {
    sql.withBatch(200, 'INSERT INTO sensor_data (sensor_id, temperature, humidity, recorded_at) VALUES (?, ?, ?, ?)') { ps ->
        readings.each { r ->
            ps.addBatch(r)
        }
    }
}
def time3 = System.currentTimeMillis() - start3
println "Batch + Transaction (1000 rows): ${time3}ms"

// Verify
def total = sql.firstRow('SELECT COUNT(*) AS cnt FROM sensor_data').cnt
println "\nTotal records: ${total}"

def avgByDevice = sql.rows('''
    SELECT sensor_id, COUNT(*) AS readings, AVG(temperature) AS avg_temp, AVG(humidity) AS avg_hum
    FROM sensor_data GROUP BY sensor_id ORDER BY sensor_id
''')
println "\n=== Averages by Sensor ==="
avgByDevice.each { row ->
    println "  ${row.sensor_id}: ${row.readings} readings, avg temp=${String.format('%.1f', row.avg_temp)}, avg hum=${String.format('%.1f', row.avg_hum)}"
}

sql.close()

Output

Individual inserts (100 rows): 45ms
Batch inserts (900 rows): 28ms
Batch + Transaction (1000 rows): 15ms

Total records: 1000

=== Averages by Sensor ===
  SENSOR-A: 256 readings, avg temp=27.3, avg hum=59.8
  SENSOR-B: 248 readings, avg temp=27.5, avg hum=60.1
  SENSOR-C: 252 readings, avg temp=27.1, avg hum=59.5
  SENSOR-D: 244 readings, avg temp=27.6, avg hum=60.3

What happened here: The performance difference is dramatic. Individual inserts compile and execute a new statement for each row. Batch inserts reuse the prepared statement and send data in chunks. Adding a transaction on top avoids auto-commit after each batch flush, making it even faster. For production data loading, always combine withBatch and withTransaction.

Example 10: CRUD with Related Tables (Parent-Child)

What we’re doing: Performing CRUD operations across related tables, inserting parent records and then child records using the generated parent IDs.

Example 10: CRUD with Related Tables

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE orders (
    id INT PRIMARY KEY AUTO_INCREMENT,
    customer_name VARCHAR(100),
    order_date DATE,
    status VARCHAR(20) DEFAULT 'PENDING'
)'''

sql.execute '''CREATE TABLE order_items (
    id INT PRIMARY KEY AUTO_INCREMENT,
    order_id INT NOT NULL,
    product_name VARCHAR(100),
    quantity INT,
    unit_price DECIMAL(10,2),
    FOREIGN KEY (order_id) REFERENCES orders(id) ON DELETE CASCADE
)'''

// Create an order with items (parent-child insert)
def createOrder = { Sql db, String customer, String date, List items ->
    def keys = db.executeInsert(
        'INSERT INTO orders (customer_name, order_date) VALUES (?, ?)',
        [customer, date]
    )
    def orderId = keys[0][0]

    items.each { item ->
        db.executeInsert(
            'INSERT INTO order_items (order_id, product_name, quantity, unit_price) VALUES (?, ?, ?, ?)',
            [orderId, item.product, item.qty, item.price]
        )
    }
    return orderId
}

// Insert orders with items
def order1 = createOrder(sql, 'Alice Johnson', '2026-03-01', [
    [product: 'Laptop', qty: 1, price: 999.99],
    [product: 'Mouse', qty: 2, price: 29.99],
    [product: 'Keyboard', qty: 1, price: 79.99]
])
println "Created order #${order1}"

def order2 = createOrder(sql, 'Bob Smith', '2026-03-05', [
    [product: 'Monitor', qty: 2, price: 349.99],
    [product: 'USB Hub', qty: 1, price: 39.99]
])
println "Created order #${order2}"

// Read: Join orders with items and compute totals
println "\n=== Order Summary ==="
sql.eachRow('''
    SELECT o.id, o.customer_name, o.order_date, o.status,
           COUNT(oi.id) AS item_count,
           SUM(oi.quantity * oi.unit_price) AS total
    FROM orders o
    LEFT JOIN order_items oi ON o.id = oi.order_id
    GROUP BY o.id, o.customer_name, o.order_date, o.status
    ORDER BY o.id
''') { row ->
    println "Order #${row.id}: ${row.customer_name} | ${row.order_date} | ${row.status} | ${row.item_count} items | \$${row.total}"
}

// Update: Change order status
sql.executeUpdate("UPDATE orders SET status = 'SHIPPED' WHERE id = ${order1}")
println "\nOrder #${order1} marked as SHIPPED"

// Delete: Cascade delete (removes order and its items)
def deletedItems = sql.firstRow("SELECT COUNT(*) AS cnt FROM order_items WHERE order_id = ${order2}").cnt
sql.executeUpdate("DELETE FROM orders WHERE id = ${order2}")
println "Deleted order #${order2} and its ${deletedItems} items (CASCADE)"

// Verify final state
println "\n=== Final State ==="
sql.eachRow('SELECT * FROM orders') { println "  Order: #${it.id} ${it.customer_name} [${it.status}]" }
sql.eachRow('SELECT * FROM order_items') { println "  Item: ${it.product_name} x${it.quantity} @ \$${it.unit_price} (order #${it.order_id})" }

sql.close()

Output

Created order #1
Created order #2

=== Order Summary ===
Order #1: Alice Johnson | 2026-03-01 | PENDING | 3 items | $1139.96
Order #2: Bob Smith | 2026-03-05 | PENDING | 2 items | $739.97

Order #1 marked as SHIPPED
Deleted order #2 and its 2 items (CASCADE)

=== Final State ===
  Order: #1 Alice Johnson [SHIPPED]
  Item: Laptop x1 @ $999.99 (order #1)
  Item: Mouse x2 @ $29.99 (order #1)
  Item: Keyboard x1 @ $79.99 (order #1)

What happened here: Real applications rarely work with a single table. This example shows the full CRUD lifecycle across parent-child tables. The executeInsert() return value is critical here — we use the generated order ID to link the child items. The ON DELETE CASCADE foreign key constraint automatically removes child items when the parent order is deleted. In production, wrap the order creation in a withTransaction block to ensure atomicity.
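Following that advice, the parent-child insert can be made atomic by moving the createOrder body inside withTransaction. A sketch with a trimmed-down version of the schema above:

```groovy
@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:txorder', 'sa', '', 'org.h2.Driver')
sql.execute 'CREATE TABLE orders (id INT PRIMARY KEY AUTO_INCREMENT, customer_name VARCHAR(100))'
sql.execute '''CREATE TABLE order_items (
    id INT PRIMARY KEY AUTO_INCREMENT,
    order_id INT NOT NULL,
    product_name VARCHAR(100),
    FOREIGN KEY (order_id) REFERENCES orders(id) ON DELETE CASCADE
)'''

// If any child insert throws, the parent row is rolled back too -- no orphan orders
def createOrderTx = { String customer, List products ->
    def orderId = null
    sql.withTransaction {
        orderId = sql.executeInsert('INSERT INTO orders (customer_name) VALUES (?)', [customer])[0][0]
        products.each { p ->
            sql.executeInsert('INSERT INTO order_items (order_id, product_name) VALUES (?, ?)', [orderId, p])
        }
    }
    orderId
}

def id = createOrderTx('Alice Johnson', ['Laptop', 'Mouse'])
println "Atomically created order #${id}"
sql.close()
```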

Batch Operations

Batch operations are essential for performance when you need to execute the same SQL statement many times with different parameters. Groovy’s withBatch method wraps JDBC batching in a clean closure-based API.

Batch Operations Details

@Grab('com.h2database:h2:2.2.224')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')

sql.execute '''CREATE TABLE inventory (
    sku VARCHAR(20) PRIMARY KEY,
    name VARCHAR(100),
    quantity INT,
    price DECIMAL(10,2)
)'''

// Batch insert with explicit batch size
println "=== Batch Insert ==="
sql.withBatch(50, 'INSERT INTO inventory (sku, name, quantity, price) VALUES (?, ?, ?, ?)') { ps ->
    ps.addBatch(['SKU-001', 'Widget A', 100, 9.99])
    ps.addBatch(['SKU-002', 'Widget B', 250, 14.99])
    ps.addBatch(['SKU-003', 'Gadget X', 75, 29.99])
    ps.addBatch(['SKU-004', 'Gadget Y', 150, 24.99])
    ps.addBatch(['SKU-005', 'Gizmo Z', 50, 49.99])
}
println "Inserted 5 inventory items"

// Batch update
println "\n=== Batch Update (10% price increase) ==="
sql.withBatch(50, 'UPDATE inventory SET price = price * 1.10, quantity = quantity - ? WHERE sku = ?') { ps ->
    ps.addBatch([10, 'SKU-001'])
    ps.addBatch([25, 'SKU-002'])
    ps.addBatch([5, 'SKU-003'])
}
println "Updated 3 items"

// Batch without prepared statement (different SQL each time)
println "\n=== Batch Mixed SQL ==="
sql.withBatch { stmt ->
    stmt.addBatch("UPDATE inventory SET quantity = quantity + 200 WHERE sku = 'SKU-001'")
    stmt.addBatch("UPDATE inventory SET name = 'Super Gizmo Z' WHERE sku = 'SKU-005'")
    stmt.addBatch("DELETE FROM inventory WHERE sku = 'SKU-004'")
}
println "Executed 3 mixed batch statements"

// Verify final state
println "\n=== Final Inventory ==="
sql.eachRow('SELECT * FROM inventory ORDER BY sku') { row ->
    println "  ${row.sku}: ${row.name} | qty=${row.quantity} | \$${row.price}"
}

sql.close()

Output

=== Batch Insert ===
Inserted 5 inventory items

=== Batch Update (10% price increase) ===
Updated 3 items

=== Batch Mixed SQL ===
Executed 3 mixed batch statements

=== Final Inventory ===
  SKU-001: Widget A | qty=290 | $10.99
  SKU-002: Widget B | qty=225 | $16.49
  SKU-003: Gadget X | qty=70 | $32.99
  SKU-005: Super Gizmo Z | qty=50 | $49.99

Two forms of withBatch exist. The parameterized form withBatch(size, sql) { ps -> ... } reuses a single PreparedStatement — fastest for repeated identical SQL. The unparameterized form withBatch { stmt -> ... } accepts different SQL strings each time — useful when mixing inserts, updates, and deletes in one batch.

Best Practices

DO:

  • Use executeInsert() when you need generated keys — not plain execute()
  • Check the return value of executeUpdate() to verify the expected number of rows were affected
  • Use withBatch for inserting more than a handful of rows — the performance difference is massive
  • Wrap related operations in withTransaction — especially parent-child inserts
  • Use parameterized queries for all user-supplied data
  • Set appropriate batch sizes — 50 to 200 is usually a good range

DON’T:

  • Concatenate user input into SQL — always use parameters
  • Forget to handle the case where executeUpdate() returns 0 (no rows matched your WHERE clause)
  • Use execute() for DELETE without a WHERE clause unless you truly want to delete everything
  • Ignore foreign key constraints — they exist to protect your data integrity
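To illustrate the parameter rule from the lists above, here is a short sketch contrasting string concatenation with placeholders. The users table and the input value are hypothetical:

```groovy
def userInput = "O'Brien"   // hypothetical user-supplied value

// DON'T: concatenation -- breaks on the apostrophe and invites SQL injection
// sql.eachRow("SELECT * FROM users WHERE name = '" + userInput + "'") { ... }

// DO: positional placeholder -- the driver binds the value safely
sql.eachRow('SELECT * FROM users WHERE name = ?', [userInput]) { row ->
    println row.name
}

// DO: named parameter bound from a map
sql.eachRow('SELECT * FROM users WHERE name = :name', [name: userInput]) { row ->
    println row.name
}
```

The concatenated version would fail outright on names containing quotes; the parameterized versions handle them correctly and keep the SQL text constant, which also lets the driver reuse the statement.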

Common Pitfalls

Pitfall 1: executeInsert() Return Structure

The return type of executeInsert() is List<List<Object>>, not a single value. Many developers trip on this.

Pitfall: executeInsert Return

// The return is a List of Lists:
// [[generatedKey1], [generatedKey2], ...]

// For single auto-increment column:
// def keys = sql.executeInsert("INSERT INTO ...")
// def id = keys[0][0]   // First row, first key column

// For composite keys:
// def keys = sql.executeInsert("INSERT INTO ...")
// def key1 = keys[0][0]  // First key column
// def key2 = keys[0][1]  // Second key column

// Common mistake: treating keys as a flat list
// WRONG: def id = keys[0]   // This gives you a List, not the ID!
// RIGHT: def id = keys[0][0] // This gives you the actual ID value

println "Always use keys[0][0] for single auto-increment IDs"

Output

Always use keys[0][0] for single auto-increment IDs

Pitfall 2: Auto-commit and Transactions

By default, each statement is auto-committed. This means if you insert a parent and then fail on the child insert, the parent is already committed and you have orphaned data.

Pitfall: Auto-commit

// BAD: No transaction -- parent committed even if child fails
// sql.executeInsert("INSERT INTO orders ...")    // committed!
// sql.executeInsert("INSERT INTO items ...")     // if this fails, order is orphaned

// GOOD: Transaction ensures atomicity
// sql.withTransaction {
//     sql.executeInsert("INSERT INTO orders ...")
//     sql.executeInsert("INSERT INTO items ...")
// }
// Both committed on success, both rolled back on failure

println "Always wrap related inserts in withTransaction!"

Output

Always wrap related inserts in withTransaction!

Conclusion

We covered the complete CRUD lifecycle in Groovy SQL — inserting data with execute() and executeInsert(), updating and deleting with executeUpdate(), batch processing with withBatch, and transaction management with withTransaction.

The key insight is that Groovy provides the right method for each operation: executeInsert() when you need generated keys, executeUpdate() when you need affected row counts, and withBatch when performance matters. Combined with the querying methods from Groovy SQL Basics, you have a complete database toolkit.

For advanced transaction patterns like savepoints, nested transactions, and connection pooling, continue to Groovy SQL Transactions and Connection Pooling.

Summary

  • executeInsert() returns generated keys as List<List> — use keys[0][0] for single auto-increment IDs
  • executeUpdate() returns the number of affected rows — always check this value
  • withBatch dramatically improves performance for bulk operations
  • withTransaction ensures atomic operations — all succeed or all fail
  • Named parameters with maps make it easy to insert data from JSON or form submissions
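As a quick illustration of that last point, a Groovy map can feed named parameters directly; the map keys just have to match the placeholder names. The users table here is hypothetical:

```groovy
// Hypothetical: a map parsed from JSON or a form submission
def data = [name: 'Alice', email: 'alice@example.com']

// :name and :email are bound from the matching map keys
def keys = sql.executeInsert(
    'INSERT INTO users (name, email) VALUES (:name, :email)', data)
println "New user id: ${keys[0][0]}"
```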

If you also work with build tools, CI/CD pipelines, or cloud CLIs, check out Command Playground to practice 105+ CLI tools directly in your browser — no install needed.

Up next: Groovy SQL Transactions and Connection Pooling

Frequently Asked Questions

How do I get the auto-generated ID after an INSERT in Groovy?

Use executeInsert() instead of execute(). It returns a List<List> of generated keys. For a single auto-increment column, access it with def id = keys[0][0]. Example: def keys = sql.executeInsert('INSERT INTO users (name) VALUES (?)', ['Alice']); def id = keys[0][0].

What is the difference between execute() and executeUpdate() in Groovy SQL?

execute() returns a boolean indicating whether the result is a ResultSet, which is rarely useful for DML operations. executeUpdate() returns an int representing the number of rows affected by the UPDATE or DELETE statement. Always use executeUpdate() when you need to know how many rows were changed.

How do I perform batch inserts in Groovy SQL?

Use sql.withBatch(batchSize, sqlStatement) { ps -> ps.addBatch(params) }. Example: sql.withBatch(100, 'INSERT INTO items (name, price) VALUES (?, ?)') { ps -> data.each { ps.addBatch(it) } }. The batch size controls how many statements are sent to the database at once. This is dramatically faster than individual inserts for large data sets.

How do I handle transactions in Groovy SQL?

Use sql.withTransaction { ... } to wrap multiple operations in a transaction. If the closure completes normally, the transaction is committed. If any exception is thrown, it automatically rolls back. Example: sql.withTransaction { sql.executeInsert(...); sql.executeUpdate(...) }. For more advanced patterns like savepoints, see the Groovy SQL transactions tutorial.

Can I insert data from a Groovy Map into a database?

Yes, use named parameters that match your map keys. Example: def data = [name: 'Alice', email: 'alice@example.com']; sql.executeInsert('INSERT INTO users (name, email) VALUES (:name, :email)', data). This is especially useful when processing JSON data or form submissions, where your data is already in map form.

Previous in Series: Groovy SQL – Connect and Query Databases

Next in Series: Groovy SQL Transactions and Connection Pooling

This post is part of the Groovy & Grails Cookbook series on TechnoScripts.com

Rahul is a passionate IT professional who loves sharing his knowledge with others and inspiring them to expand their technical skills. His current objective is to write informative, easy-to-understand articles that help people avoid day-to-day technical issues altogether. Follow Rahul's blog to stay informed on the latest trends in IT and gain insight into tackling complex technical problems. Whether you're a beginner or an expert in the field, Rahul's articles are sure to leave you feeling inspired and informed.
