
3 posts tagged with "DB Migration"


Language-Agnostic Database Change Management with Sqitch

7 min read
Huseyin BABAL
Software Developer

Introduction

Managing database schema changes effectively is crucial in modern software development, especially when working with multiple teams or languages. Sqitch provides a powerful, language-agnostic approach to database change management that prioritizes flexibility and traceability. This article will guide you through setting up and using Sqitch for database migrations.

Getting Started

Sqitch is an open-source tool for managing database changes. Unlike many tools tied to specific programming languages or frameworks, Sqitch focuses purely on migrations and version control, making it suitable for projects in any tech stack. It integrates well with various databases, including PostgreSQL, MySQL, SQLite, and more. Key features of Sqitch include:

  • Language independence
  • Deployment planning and verification
  • Robust dependency management
  • Integration with Git for version control

Why Sqitch?

Sqitch stands out for its:

  • Language Agnosticism: Unlike ORM-based tools, Sqitch doesn't tie migrations to a specific language or framework.
  • Fine-grained Control: You can define dependencies between migrations, ensuring a logical order of execution.
  • Reversible Changes: Sqitch supports revert scripts to roll back changes, aiding in error recovery.

Prerequisites

Before diving into Sqitch, ensure the following:

  • A version control system like Git is installed.
  • A database of your choice is set up (e.g., PostgreSQL, MySQL, or SQLite).
  • Sqitch is installed. You can follow the official installation guide.

In this article, we will be using PostgreSQL as our database. You can host it on any platform you like; for a convenient option, consider a cloud-based solution like Rapidapp, which offers managed PostgreSQL databases and simplifies setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Setting Up Sqitch for Your Project

Initialize Sqitch

sqitch init my_project --target db:pg://localhost/my_database

This creates a sqitch.conf configuration file, a sqitch.plan file, and empty deploy, revert, and verify directories for your project.

If you are using Rapidapp, you can add a target as follows:

sqitch target add prod "db:pg://<user>:<pw>@pg.rapidapp.io:5433/<db>?ssl=true&application_name=sqitch"

followed by:

sqitch deploy prod

Define Your First Change

sqitch add add_users_table -n "Add users table"

This creates three scripts:

  • deploy/add_users_table.sql for applying changes
  • revert/add_users_table.sql for rolling back changes
  • verify/add_users_table.sql for verifying the applied changes

Writing and Executing Migrations

Writing Deployment Scripts

Edit deploy/add_users_table.sql to include SQL for creating a table:

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(150) UNIQUE NOT NULL
);

Defining Revert Scripts

Edit revert/add_users_table.sql to include SQL for dropping the table.

DROP TABLE IF EXISTS users;

Verification Scripts

Edit verify/add_users_table.sql so that it fails if the table is missing. A verify script passes only when it runs without error, so selecting the expected columns (with a false predicate, to avoid returning rows) is enough to catch a missing table or column:

SELECT id, name, email FROM users WHERE FALSE;

Deploying Changes

sqitch deploy

Sqitch ensures changes are applied in order, respecting dependencies.
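After deploying, you can confirm the result with Sqitch's built-in commands; both run against the default target from sqitch.conf:

# Run the verify scripts for every deployed change
sqitch verify

# Show which changes are deployed and which are still pending
sqitch status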

Version Control and Collaboration

With Sqitch, migration scripts are stored in the repository, making it easy to track changes and collaborate. The sqitch.plan file acts as a roadmap for your migrations.

Example Git workflow

  • Add your changes:
git add deploy/ revert/ verify/ sqitch.plan
  • Commit the changes:
git commit -m "Add users table migration"
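Putting the pieces together, a typical workflow for a new migration might look like the sketch below (branch and change names are illustrative):

git checkout -b add-users-table
sqitch add add_users_table -n "Add users table"
# edit the generated deploy/, revert/, and verify/ scripts ...
git add deploy/ revert/ verify/ sqitch.plan
git commit -m "Add users table migration"
sqitch deploy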

Advanced Features

Dependency Management

Specify dependencies between migrations in the sqitch.plan file:

add_users_table
add_roles_table [add_users_table]

This ensures the add_users_table migration runs before add_roles_table.
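In practice you rarely edit sqitch.plan by hand; declare the dependency when you add the change and Sqitch records it in the plan for you. A sketch, reusing the change names above:

sqitch add add_roles_table --requires add_users_table -n "Add roles table"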

Database Targets

Sqitch supports multiple deployment targets. Update the sqitch.conf file to define targets for staging, production, etc.
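For example, you might register separate targets and choose one at deploy time; sqitch target add stores the definition in sqitch.conf (the connection URIs below are placeholders):

sqitch target add staging db:pg://staging-host/my_database
sqitch target add production db:pg://prod-host/my_database

# Deploy to a specific target by name
sqitch deploy staging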

Rolling Back Changes

sqitch revert --to add_users_table

This reverts every change deployed after add_users_table, leaving add_users_table itself in place.

Understanding Sqitch’s Internal Registry Tables

When you deploy migrations with Sqitch, it creates and maintains a set of internal tracking tables (the registry) in your database to manage the state of migrations. These tables ensure migrations are executed in the correct order, record their deployment status, and prevent steps from being reapplied or skipped.
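On PostgreSQL these tables live in a dedicated registry schema (named sqitch by default), so you can inspect them with plain SQL. For example, to list applied changes (connection details are placeholders):

psql -d my_database -c "SELECT change, committed_at, committer_name FROM sqitch.changes ORDER BY committed_at;"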

projects Table

This table tracks information about different projects managed by Sqitch. Example:

project | uri | created_at | creator_name | creator_email
my_project | https://github.com/my_project | 2024-12-12 09:00:00 | Alice Developer | [email protected]
another_proj | https://gitlab.com/another_proj | 2024-11-30 14:45:00 | Bob Engineer | [email protected]
test_project | https://bitbucket.org/test_proj | 2024-10-01 08:20:00 | Charlie Tester | [email protected]

changes Table

The changes table is where Sqitch records every migration or database change applied to the project. Each row represents a specific change, with associated metadata to ensure traceability, accountability, and reproducibility.

Example:

change_id | script_hash | change | project | note | committed_at | committer_name | committer_email | planned_at | planner_name | planner_email
e29b82fa-4d65-11e3-86e0-425861b86ab6 | 4c1a7e5379f2430c22f2d4e5b65f92dd9edc800b | add_users_table | my_project | Adds a users table to the schema | 2024-12-12 10:15:00 | Alice Developer | [email protected] | 2024-12-11 08:00:00 | Bob Planner | [email protected]
fbcf9d76-4d65-11e3-86e0-425861b86ab6 | a3f9c6e5379f2430c22f2d4e5b65f92dd9edc900a | add_orders_table | my_project | Adds an orders table to the schema | 2024-12-12 11:30:00 | Charlie Engineer | [email protected] | 2024-12-10 09:30:00 | Eve Designer | [email protected]
d86e02fa-4d65-11e3-86e0-425861b86ab6 | 5d6f8e5379f2430c22f2d4e5b65f92dd9edc700c | update_users | another_proj | Updates users with a new column | 2024-12-12 14:00:00 | Dave Maintainer | [email protected] | 2024-12-09 12:45:00 | Fiona Strategist | [email protected]

dependencies Table

The dependencies table in Sqitch tracks the relationships between changes, indicating which changes must be applied before others. This ensures that database migrations are executed in the correct order and that all prerequisites are satisfied before applying a change.

Example:

change_id | type | dependency | dependency_id
e29b82fa-4d65-11e3-86e0-425861b86ab6 | require | add_users_table | d86e02fa-4d65-11e3-86e0-425861b86ab6
e29b82fa-4d65-11e3-86e0-425861b86ab6 | require | add_roles_table | fbcf9d76-4d65-11e3-86e0-425861b86ab6
fbcf9d76-4d65-11e3-86e0-425861b86ab6 | conflict | drop_roles_table | g12f03fa-4d65-11e3-86e0-425861b86ab6
d86e02fa-4d65-11e3-86e0-425861b86ab6 | require | initialize_schema | h34f84fa-4d65-11e3-86e0-425861b86ab6

events Table

The events table records significant actions or milestones in the Sqitch migration process. This includes when a change is planned, executed, or reverted, providing a detailed audit trail of all operations.

Example:

event | change_id | change | project | note | requires | conflicts | tags | committed_at | committer_name | committer_email | planned_at | planner_name | planner_email
deploy | e29b82fa-4d65-11e3-86e0-425861b86ab6 | add_users_table | my_project | Initial deployment of the users... | initialize_schema | | v1.0.0 | 2024-12-12 10:30:00 | Alice Developer | [email protected] | 2024-12-10 09:00:00 | Bob Planner | [email protected]
revert | fbcf9d76-4d65-11e3-86e0-425861b86ab6 | drop_roles_table | my_project | Reverted the roles table changes. | add_roles_table | | v1.0.1 | 2024-12-12 11:45:00 | Dave Maintainer | [email protected] | 2024-12-10 10:00:00 | Eve Planner | [email protected]
fail | d86e02fa-4d65-11e3-86e0-425861b86ab6 | add_orders_table | my_project | Deployment failed due to conflict. | add_users_table | drop_orders_table | v1.0.2 | 2024-12-12 12:15:00 | Carol Engineer | [email protected] | 2024-12-10 11:00:00 | Frank Planner | [email protected]

releases Table

The releases table tracks major version releases of a project, recording metadata about when a release was installed and who performed the installation. This provides a clear audit trail for version management.

Example:

version | installed_at | installer_name | installer_email
1.0 | 2024-12-12 14:00:00 | Alice Developer | [email protected]
1.1 | 2024-12-15 10:30:00 | Bob Engineer | [email protected]
2.0 | 2025-01-10 09:45:00 | Carol Maintainer | [email protected]

tags Table

The tags table records version tags applied to a project, along with metadata about their creation, application, and associated changes. Tags act as meaningful labels (e.g., versions or milestones) for database states.

Example:

tag_id | tag | project | change_id | note | committed_at | committer_name | committer_email | planned_at | planner_name | planner_email
abc123xyz | v1.0.0 | my_project | def456uvw | Initial release | 2024-12-12 15:00:00.123456 | Alice Developer | [email protected] | 2024-12-10 10:00:00.123456 | Bob Planner | [email protected]
ghi789lmn | v1.1.0 | my_project | xyz987qrs | Feature update | 2024-12-15 12:00:00.654321 | Carol Engineer | [email protected] | 2024-12-13 14:00:00.654321 | Dave Planner | [email protected]
opq246stu | v2.0.0 | my_project | klm123nop | Major upgrade | 2025-01-10 11:30:00.321456 | Frank Maintainer | [email protected] | 2025-01-08 16:00:00.321456 | Eve Planner | [email protected]

Conclusion

Sqitch is a robust, language-agnostic tool that simplifies database change management, making it ideal for diverse tech stacks. Its emphasis on dependencies, reversibility, and version control ensures seamless collaboration and reliable deployments. Whether you’re a seasoned developer or new to database migrations, Sqitch is worth exploring.

Database Change Management with Spring Boot Liquibase and PostgreSQL

8 min read
Huseyin BABAL
Software Developer

Introduction: Why Database Migrations Matter?

Database migrations are critical for maintaining a consistent and reliable schema across environments in modern application development. If you’re working with Spring Boot and PostgreSQL, Liquibase offers a powerful, flexible solution for managing database changes. In this guide, we’ll explore setting up Liquibase with Spring Boot and PostgreSQL to simplify schema updates, version control, and rollbacks.

Why Use Liquibase?

Liquibase is a widely used open-source tool that provides a structured way to track, manage, and apply database schema changes. It offers a variety of features to support developers in complex environments:

  • Changelog Files: Liquibase organizes migrations into changelog files that support XML, YAML, JSON, or SQL formats.
  • Rollback Support: Liquibase allows you to define rollback logic, making it easy to revert changes if needed.
  • Database Independence: With support for over 30 databases, Liquibase offers flexibility across projects.

By integrating Liquibase with Spring Boot, you can automate migrations, manage schema versioning, and improve collaboration across teams.

In this article, we will be using PostgreSQL as our database. You can host it on any platform you like; for a convenient option, consider a cloud-based solution like Rapidapp, which offers managed PostgreSQL databases and simplifies setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Use Liquibase Migrations

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a small demo application. You can initialize a Spring Boot project by using the Spring Boot CLI. Once it is installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=liquibase,data-jpa,postgresql \
--type=maven-project \
--javaVersion=21 \
liquibase-migrations-demo

Line 2: liquibase for Liquibase integration, data-jpa for database persistence, and postgresql for the PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 sets the project's Java version to 21.

Now that the project is initialized, go to the liquibase-migrations-demo folder and open it in your favourite IDE.

Application Configuration

Next, configure Liquibase in your application.properties file. At a minimum, you need to specify the database connection details:

application.properties
spring.datasource.url=jdbc:postgresql://<host>:<port>/<db>
spring.datasource.username=<user>
spring.datasource.password=<password>

spring.liquibase.enabled=true
spring.liquibase.change-log=classpath:db/changelog/db.changelog-master.yaml

The spring.liquibase.change-log property specifies where Liquibase will look for the master changelog, which references the actual schema changes.
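If the changelog directories referenced above do not exist yet, create them under the resources folder so the files added in the next section have a place to live:

mkdir -p src/main/resources/db/changelog/changes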

Create Your First Changelog File

Liquibase migrations are structured into changelog files that record each schema change as an individual "changeSet."

  1. In src/main/resources/db/changelog/, create a new file named db.changelog-master.yaml. This will be the main changelog file that references individual changes.

databaseChangeLog:
  - include:
      file: classpath:/db/changelog/changes/01-create-accounts-table.sql

  2. Now, create the referenced file under db/changelog/changes/ to define your first changeSet:

01-create-accounts-table.sql
CREATE TABLE accounts (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(50) NOT NULL,
    created_at TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);

When it comes to naming the migration files, it's a good practice to follow a consistent naming convention. A common approach is to prefix the file with a version number and a brief description, like 01-create-accounts-table.sql.

  • Version number: There is no strict convention for the version number, but it helps keep the migration files in order.
  • Description: Provide a brief description of the migration.

SQL File Features in Liquibase

When using SQL, Liquibase offers some additional features within comments, such as:

  • Changesets: Define each migration step using --changeset author:id in comments. This keeps Liquibase’s tracking in place, similar to other formats.
  • Rollback Commands: Optionally add rollback commands with --rollback <SQL command>, allowing automatic rollbacks when needed. You can see a simple example below:

--liquibase formatted sql
--changeset huseyinbabal:1
CREATE TABLE accounts (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(50) NOT NULL,
    created_at TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);

--rollback DROP TABLE accounts;

Running Application

When you run your Spring Boot application, Liquibase will automatically detect and apply any pending migrations to your PostgreSQL database. You’ll see output in the console indicating that the migrations have been successfully applied. You can run the application as follows:

./mvnw spring-boot:run

Rollback

If you need to roll back a migration, you can use the rollback command. For example, to roll back the last changeSet, run:

./mvnw liquibase:rollback \
-Dliquibase.rollbackCount=1 \
-Dliquibase.url="jdbc:postgresql://<host>:<port>/<db>" \
-Dliquibase.username="<username>" \
-Dliquibase.password="<password>" \
-Dliquibase.changeLogFile=src/main/resources/db/changelog/db.changelog-master.yaml \
-Dliquibase.driver=org.postgresql.Driver

Here we used the liquibase:rollback goal to roll back the last changeSet; note that this goal comes from the liquibase-maven-plugin, which needs to be configured in your pom.xml. You can also roll back to a specific tag by providing the liquibase.rollbackTag parameter. For more information, check the official documentation.
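Assuming the liquibase-maven-plugin is configured, you can also check which changeSets are still pending before rolling anything back; a sketch with placeholder connection details:

./mvnw liquibase:status \
  -Dliquibase.url="jdbc:postgresql://<host>:<port>/<db>" \
  -Dliquibase.username="<username>" \
  -Dliquibase.password="<password>" \
  -Dliquibase.changeLogFile=src/main/resources/db/changelog/db.changelog-master.yaml \
  -Dliquibase.driver=org.postgresql.Driver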

Best Practices for Migration Files

To ensure smooth database migrations, follow these best practices:

  • Organize Changelogs: Use a master changelog file to keep a well-organized list of all changes.

  • Use Descriptive Names: Name changelog files descriptively to make it clear what each migration does.

  • Version Control Changelogs: Commit changelog files to version control to ensure that schema changes are applied consistently across environments.

Understanding the Liquibase Metadata Table

When you run migrations with Liquibase, it automatically creates two metadata tables in your database: DATABASECHANGELOG and DATABASECHANGELOGLOCK. These tables keep track of migration status and control, ensuring changes are applied in a consistent and non-conflicting manner. Let’s look at each table in detail.

DATABASECHANGELOG Table

The DATABASECHANGELOG table records every changeSet that has been successfully applied to the database, allowing Liquibase to know which changes have already been executed. Here’s an overview of its key columns:

  • ID: A unique identifier for each changeSet, as specified in the changelog file. This ID is combined with the AUTHOR and FILENAME columns to uniquely identify a changeSet.
  • AUTHOR: The author of the changeSet, which is specified in the changelog file.
  • FILENAME: The file path of the changelog file containing the changeSet. This helps differentiate between changeSets if multiple changelog files are used.
  • DATEEXECUTED: The timestamp indicating when the changeSet was applied. This helps you track when each change was made.
  • ORDEREXECUTED: A number indicating the order in which the changeSets were applied, starting from 1. This can help troubleshoot issues by showing the order of migrations.
  • EXECTYPE: Specifies how the changeSet was executed. Common values include EXECUTED (successful), RERAN, and FAILED, which indicate the status of each migration.
  • MD5SUM: A hash value representing the changeSet’s content. Liquibase uses this hash to detect if a changeSet has been modified after it was initially applied.
  • DESCRIPTION: A brief description of the change, based on the change type (e.g., createTable, addColumn).
  • COMMENTS: Any comments associated with the changeSet, as specified in the changelog.
  • TAG: Optional; a tag that can be assigned to a particular changeSet, often used to mark significant points in the schema history (e.g., a major release).
  • LIQUIBASE: The version of Liquibase used to apply the changeSet. This can be useful for tracking compatibility.
  • CONTEXT: An optional field that can be used to specify a context for the changeSet. This can help control which changeSets are applied based on the context.
  • LABELS: Optional; a comma-separated list of labels that can be assigned to a changeSet. Labels can be used to group changeSets based on common characteristics.
  • DEPLOYMENT_ID: An identifier that can be used to track deployments. This can be useful for auditing and tracking changes across environments.

The DATABASECHANGELOG table essentially serves as a ledger for Liquibase, ensuring it applies only new changeSets, avoiding duplications, and maintaining the history of changes.
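Because DATABASECHANGELOG is an ordinary table, you can query it directly to audit what has been applied; for example (connection details are placeholders):

psql -d <db> -c "SELECT id, author, filename, dateexecuted, exectype FROM databasechangelog ORDER BY orderexecuted;"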

DATABASECHANGELOGLOCK Table

The DATABASECHANGELOGLOCK table controls the concurrency of migrations to prevent multiple processes from applying changes at the same time, which can lead to conflicts or corruption. It has fewer columns but plays a critical role in database integrity.

  • ID: A unique identifier for the lock. This is generally 1 because only one lock record is required.
  • LOCKED: A boolean value indicating if a Liquibase process currently holds the lock. When set to TRUE, it prevents other instances from running migrations simultaneously.
  • LOCKGRANTED: The timestamp when the lock was acquired. This can be useful for diagnosing if a lock has been held too long (e.g., if a migration process was interrupted).
  • LOCKEDBY: Information on the host or process that acquired the lock, often containing the hostname and IP address. This can help identify which process holds the lock if issues arise.

The DATABASECHANGELOGLOCK table ensures that only one Liquibase instance modifies the schema at a time, which is particularly useful in distributed systems or CI/CD environments where migrations might be triggered concurrently.
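If a migration is interrupted, the lock can remain set and subsequent runs will wait on it. Only when you are certain no Liquibase process is still running, you can clear it manually; a last-resort sketch with placeholder connection details:

psql -d <db> -c "UPDATE databasechangeloglock SET locked = FALSE, lockgranted = NULL, lockedby = NULL WHERE id = 1;"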

How These Tables Support Safe and Consistent Migrations

  • The DATABASECHANGELOG table ensures migrations run in a consistent sequence and that changes are not re-applied or skipped.
  • The DATABASECHANGELOGLOCK table prevents race conditions, helping avoid potential conflicts from simultaneous migrations.

Understanding these tables can help you troubleshoot migration issues and ensure Liquibase runs smoothly. Both tables are essential for Liquibase’s reliability and ability to manage complex database schemas effectively.

Conclusion

By adopting Liquibase in your Spring Boot project, you can efficiently manage database changes, reduce deployment friction, and allow your team to focus more on innovation. Whether you’re deploying new features or managing complex database versions, Liquibase streamlines the process, making database management simpler, safer, and more collaborative.

Streamlining Database Migrations with Spring Boot Flyway and PostgreSQL

5 min read
Huseyin BABAL
Software Developer

Introduction: Why Database Migrations Matter?

In the fast-paced world of software development, change is inevitable. As applications evolve, so too must the databases that support them. Whether you’re adding new features, optimizing performance, or fixing bugs, database schema changes are a critical part of the process. However, managing these changes manually can quickly become complex and error-prone, especially as your team grows or your application scales.

This is where database migrations come into play. Migrations allow you to apply consistent, repeatable changes to your database schema across all environments—development, testing, staging, and production. By automating this process, you reduce the risk of human error, ensure consistency, and maintain a clear history of how your database has evolved over time.

What is Flyway?

Flyway is an open-source database migration tool that makes it easy to manage and track schema changes. It works by applying incremental SQL or Java-based migrations in a controlled manner, ensuring that your database schema is always in sync with your application’s needs. Flyway integrates seamlessly with popular databases like PostgreSQL and frameworks like Spring Boot, making it a powerful tool for modern application development.

In this article, we will be using PostgreSQL as our database. You can host it on any platform you like; for a convenient option, consider a cloud-based solution like Rapidapp, which offers managed PostgreSQL databases and simplifies setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Use Flyway Migrations

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a small demo application. You can initialize a Spring Boot project by using the Spring Boot CLI. Once it is installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=flyway,data-jpa,postgresql \
--type=maven-project \
--javaVersion=21 \
flyway-migrations-demo

Line 2: flyway for Flyway integration, data-jpa for database persistence, and postgresql for the PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 sets the project's Java version to 21.

Now that the project is initialized, go to the flyway-migrations-demo folder and open it in your favourite IDE.

Application Configuration

Next, configure Flyway in your application.properties file. At a minimum, you need to specify the database connection details:

application.properties
spring.datasource.url=jdbc:postgresql://<host>:<port>/<db>
spring.datasource.username=<user>
spring.datasource.password=<password>
spring.flyway.locations=classpath:db/migration

The spring.flyway.locations property specifies the location where Flyway will look for migration files. By default, this is classpath:db/migration.
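If that directory does not exist yet, create it under the resources folder so Flyway can find your migration scripts:

mkdir -p src/main/resources/db/migration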

Create Migration Files

Migration files are where you define the changes to your database schema. Each migration file has a unique version number and a descriptive name. For example:

V1__Add_user_table.sql
V2__Alter_user_table_add_email.sql

  • Version number: Start with "V" followed by a version number (V1, V2, etc.). This helps Flyway determine the order in which migrations should be applied.
  • Separator: Use double underscores __ to separate the version number from the description.
  • Description: Provide a brief description of the migration.

Here’s an example of a simple migration file that creates a user table. Note that user is a reserved word in PostgreSQL, so the identifier has to be quoted (or you could simply name the table users instead):

V1__Add_user_table.sql
CREATE TABLE "user" (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    password VARCHAR(100) NOT NULL,
    email VARCHAR(100) NOT NULL
);

Running Application

When you run your Spring Boot application, Flyway will automatically detect and apply any pending migrations to your PostgreSQL database. You’ll see output in the console indicating that the migrations have been successfully applied. You can run the application as follows:

./mvnw spring-boot:run

Best Practices for Migration Files

To ensure smooth database migrations, follow these best practices:

  • Keep Migrations Small and Incremental: Break down complex changes into smaller, manageable steps. This makes it easier to troubleshoot issues and roll back changes if necessary.

  • Use Descriptive Names: The name of each migration should clearly describe its purpose. This makes it easier to understand the history of changes at a glance.

  • Test Migrations Thoroughly: Before applying migrations to production, test them in a staging environment that closely mirrors production. This helps catch any issues early.

  • Avoid Direct Modifications in Production: Always use migrations to make changes to the database schema. Direct modifications can lead to inconsistencies and make it difficult to track changes.

  • Version Control Your Migrations: Store your migration files in version control along with your application code. This ensures that schema changes are tracked and can be rolled back if needed.

Understanding the Flyway Metadata Table

Flyway maintains a metadata table in your database, typically named flyway_schema_history, to track which migrations have been applied. This table contains information such as:

  • Version: The version number of the migration.
  • Description: The description of the migration.
  • Script: The name of the migration file.
  • Execution Time: How long the migration took to apply.
  • Status: Whether the migration was successful.

This table is crucial for managing and auditing your database schema. It ensures that migrations are only applied once and provides a clear history of changes.
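Because flyway_schema_history is an ordinary table, you can query it to review the migration history; for example (connection details are placeholders):

psql -d <db> -c "SELECT installed_rank, version, description, script, execution_time, success FROM flyway_schema_history ORDER BY installed_rank;"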

Conclusion

Database migrations are a vital part of modern application development, enabling you to manage schema changes in a consistent, repeatable way. By integrating Flyway with Spring Boot and PostgreSQL, you can automate this process and reduce the risk of errors, ensuring that your database schema evolves alongside your application.