
Language-Agnostic Database Change Management with Sqitch

· 7 min read
Huseyin BABAL
Software Developer

Introduction

Managing database schema changes effectively is crucial in modern software development, especially when working with multiple teams or languages. Sqitch provides a powerful, language-agnostic approach to database change management that prioritizes flexibility and traceability. This article will guide you through setting up and using Sqitch for database migrations.

Getting Started

Sqitch is an open-source tool for managing database changes. Unlike many tools tied to specific programming languages or frameworks, Sqitch focuses purely on migrations and version control, making it suitable for projects in any tech stack. It integrates well with various databases, including PostgreSQL, MySQL, SQLite, and more. Key features of Sqitch include:

  • Language independence
  • Deployment planning and verification
  • Robust dependency management
  • Integration with Git for version control

Why Sqitch?

Sqitch stands out for its:

  • Language Agnosticism: Unlike ORM-based tools, Sqitch doesn't tie migrations to a specific language or framework.
  • Fine-grained Control: You can define dependencies between migrations, ensuring a logical order of execution.
  • Reversible Changes: Sqitch supports revert scripts to roll back changes, aiding in error recovery.

Prerequisites

Before diving into Sqitch, ensure the following:

  • A version control system like Git is installed.
  • A database of your choice is set up (e.g., PostgreSQL, MySQL, or SQLite).
  • Sqitch is installed. You can follow the official installation guide.

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Setting Up Sqitch for Your Project

Initialize Sqitch

sqitch init my_project --target db:pg://localhost/my_database

This creates a sqitch.conf configuration file, a sqitch.plan file, and the deploy, revert, and verify directories for your project.

If you are using Rapidapp, you can add a target as follows:

sqitch target add prod "db:pg://<user>:<pw>@pg.rapidapp.io:5433/<db>?ssl=true&application_name=sqitch"

Then deploy to that target:

sqitch deploy prod

Define Your First Change

sqitch add add_users_table -n "Add users table"

This creates three scripts:

  • deploy/add_users_table.sql for applying changes
  • revert/add_users_table.sql for rolling back changes
  • verify/add_users_table.sql for verifying the applied changes

Writing and Executing Migrations

Writing Deployment Scripts

Edit deploy/add_users_table.sql to include SQL for creating a table:

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(150) UNIQUE NOT NULL
);

Defining Revert Scripts

Edit revert/add_users_table.sql to include SQL for dropping the table.

DROP TABLE IF EXISTS users;

Verification Scripts

Edit verify/add_users_table.sql to ensure the table exists.

SELECT 1 FROM information_schema.tables WHERE table_name = 'users';
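Note that Sqitch only treats a verify script as failed when it raises an error; a query that merely returns zero rows still counts as success. A common alternative, shown here as a sketch against the users table defined above, is to reference the table directly so the script errors out if the table or a column is missing:

-- Fails with an error if the users table (or any listed column) does not exist
SELECT id, name, email
  FROM users
 WHERE FALSE;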

Deploying Changes

sqitch deploy

Sqitch ensures changes are applied in order, respecting dependencies.
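After deploying, you can check which changes have been applied and re-run the verify scripts against the database. Assuming the default target configured in sqitch.conf, that looks like:

sqitch status
sqitch verify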

Version Control and Collaboration

With Sqitch, migration scripts are stored in the repository, making it easy to track changes and collaborate. The sqitch.plan file acts as a roadmap for your migrations.
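As an illustration, the sqitch.plan entry for the change added above might look roughly like this (the timestamp, planner name, and email are placeholders, not values produced by this tutorial):

%syntax-version=1.0.0
%project=my_project

add_users_table 2024-12-11T08:00:00Z Bob Planner <bob@example.com> # Add users table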

Example Git workflow

  • Add your changes:
git add deploy/ revert/ verify/ sqitch.plan
  • Commit the changes:
git commit -m "Add users table migration"

Advanced Features

Dependency Management

Specify dependencies between migrations in the sqitch.plan file:

add_users_table
add_roles_table [add_users_table]

This ensures the add_users_table migration runs before add_roles_table.
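You can also declare the dependency at the moment you create the change; sqitch add accepts a --requires flag and writes the dependency into sqitch.plan for you:

sqitch add add_roles_table --requires add_users_table -n "Add roles table"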

Database Targets

Sqitch supports multiple deployment targets. Update the sqitch.conf file to define targets for staging, production, etc.
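As a sketch, the relevant part of sqitch.conf might contain one section per target (the hostnames and credentials below are placeholders):

[target "staging"]
    uri = db:pg://<user>:<pw>@staging-host:5432/<db>
[target "production"]
    uri = db:pg://<user>:<pw>@prod-host:5432/<db>

You can then deploy to a specific environment with sqitch deploy staging or sqitch deploy production.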

Rolling Back Changes

sqitch revert to:add_users_table

Understanding Sqitch’s Internal Tracking Tables

When you deploy migrations with Sqitch, it creates and uses a set of internal tracking tables in your database (by default in a schema named sqitch on PostgreSQL) to manage the state of migrations. These tables ensure migrations are executed in the correct order, track their deployment status, and prevent reapplying or skipping steps.

projects Table

This table tracks information about different projects managed by Sqitch. Example:

| project | uri | created_at | creator_name | creator_email |
| --- | --- | --- | --- | --- |
| my_project | https://github.com/my_project | 2024-12-12 09:00:00 | Alice Developer | [email protected] |
| another_proj | https://gitlab.com/another_proj | 2024-11-30 14:45:00 | Bob Engineer | [email protected] |
| test_project | https://bitbucket.org/test_proj | 2024-10-01 08:20:00 | Charlie Tester | [email protected] |

changes Table

The changes table is where Sqitch records every migration or database change applied to the project. Each row represents a specific change, with associated metadata to ensure traceability, accountability, and reproducibility.

Example:

| change_id | script_hash | change | project | note | committed_at | committer_name | committer_email | planned_at | planner_name | planner_email |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| e29b82fa-4d65-11e3-86e0-425861b86ab6 | 4c1a7e5379f2430c22f2d4e5b65f92dd9edc800b | add_users_table | my_project | Adds a users table to the schema | 2024-12-12 10:15:00 | Alice Developer | [email protected] | 2024-12-11 08:00:00 | Bob Planner | [email protected] |
| fbcf9d76-4d65-11e3-86e0-425861b86ab6 | a3f9c6e5379f2430c22f2d4e5b65f92dd9edc900a | add_orders_table | my_project | Adds an orders table to the schema | 2024-12-12 11:30:00 | Charlie Engineer | [email protected] | 2024-12-10 09:30:00 | Eve Designer | [email protected] |
| d86e02fa-4d65-11e3-86e0-425861b86ab6 | 5d6f8e5379f2430c22f2d4e5b65f92dd9edc700c | update_users | another_proj | Updates users with a new column | 2024-12-12 14:00:00 | Dave Maintainer | [email protected] | 2024-12-09 12:45:00 | Fiona Strategist | [email protected] |

dependencies Table

The dependencies table in Sqitch tracks the relationships between changes, indicating which changes must be applied before others. This ensures that database migrations are executed in the correct order and that all prerequisites are satisfied before applying a change.

Example:

| change_id | type | dependency | dependency_id |
| --- | --- | --- | --- |
| e29b82fa-4d65-11e3-86e0-425861b86ab6 | require | add_users_table | d86e02fa-4d65-11e3-86e0-425861b86ab6 |
| e29b82fa-4d65-11e3-86e0-425861b86ab6 | require | add_roles_table | fbcf9d76-4d65-11e3-86e0-425861b86ab6 |
| fbcf9d76-4d65-11e3-86e0-425861b86ab6 | conflict | drop_roles_table | g12f03fa-4d65-11e3-86e0-425861b86ab6 |
| d86e02fa-4d65-11e3-86e0-425861b86ab6 | require | initialize_schema | h34f84fa-4d65-11e3-86e0-425861b86ab6 |

events Table

The events table records significant actions or milestones in the Sqitch migration process. This includes when a change is planned, executed, or reverted, providing a detailed audit trail of all operations.

Example:

| event | change_id | change | project | note | requires | conflicts | tags | committed_at | committer_name | committer_email | planned_at | planner_name | planner_email |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| deploy | e29b82fa-4d65-11e3-86e0-425861b86ab6 | add_users_table | my_project | Initial deployment of the users... | initialize_schema | | v1.0.0 | 2024-12-12 10:30:00 | Alice Developer | [email protected] | 2024-12-10 09:00:00 | Bob Planner | [email protected] |
| revert | fbcf9d76-4d65-11e3-86e0-425861b86ab6 | drop_roles_table | my_project | Reverted the roles table changes. | add_roles_table | | v1.0.1 | 2024-12-12 11:45:00 | Dave Maintainer | [email protected] | 2024-12-10 10:00:00 | Eve Planner | [email protected] |
| fail | d86e02fa-4d65-11e3-86e0-425861b86ab6 | add_orders_table | my_project | Deployment failed due to conflict. | add_users_table | drop_orders_table | v1.0.2 | 2024-12-12 12:15:00 | Carol Engineer | [email protected] | 2024-12-10 11:00:00 | Frank Planner | [email protected] |

releases Table

The releases table tracks major version releases of a project, recording metadata about when a release was installed and who performed the installation. This provides a clear audit trail for version management.

Example:

| version | installed_at | installer_name | installer_email |
| --- | --- | --- | --- |
| 1.0 | 2024-12-12 14:00:00 | Alice Developer | [email protected] |
| 1.1 | 2024-12-15 10:30:00 | Bob Engineer | [email protected] |
| 2.0 | 2025-01-10 09:45:00 | Carol Maintainer | [email protected] |

tags Table

The tags table records version tags applied to a project, along with metadata about their creation, application, and associated changes. Tags act as meaningful labels (e.g., versions or milestones) for database states.

Example:

| tag_id | tag | project | change_id | note | committed_at | committer_name | committer_email | planned_at | planner_name | planner_email |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| abc123xyz | v1.0.0 | my_project | def456uvw | Initial release | 2024-12-12 15:00:00.123456 | Alice Developer | [email protected] | 2024-12-10 10:00:00.123456 | Bob Planner | [email protected] |
| ghi789lmn | v1.1.0 | my_project | xyz987qrs | Feature update | 2024-12-15 12:00:00.654321 | Carol Engineer | [email protected] | 2024-12-13 14:00:00.654321 | Dave Planner | [email protected] |
| opq246stu | v2.0.0 | my_project | klm123nop | Major upgrade | 2025-01-10 11:30:00.321456 | Frank Maintainer | [email protected] | 2025-01-08 16:00:00.321456 | Eve Planner | [email protected] |

Conclusion

Sqitch is a robust, language-agnostic tool that simplifies database change management, making it ideal for diverse tech stacks. Its emphasis on dependencies, reversibility, and version control ensures seamless collaboration and reliable deployments. Whether you’re a seasoned developer or new to database migrations, Sqitch is worth exploring.

Securing Your Express REST API with Passport.js

· 6 min read
Huseyin BABAL
Software Developer

As web applications grow, secure authentication becomes essential to protect sensitive data and prevent unauthorized access. In this article, we’ll explore how to secure a Node.js API using Passport.js and JSON Web Tokens (JWT) for stateless, token-based authentication. We’ll also use PostgreSQL for persistent storage of user data, providing a robust, scalable setup ideal for modern applications.

Why Use Passport.js and JWT for Node.js?

Passport.js is a powerful, flexible middleware for handling authentication in Node.js applications. When paired with JWTs, it enables scalable, stateless authentication without the need to manage session data. JWTs are particularly useful in mobile-friendly applications where maintaining server-side sessions is impractical.

Advantages

Using Passport.js, PostgreSQL and JWTs offers several key benefits:

  • Passport.js simplifies integration of various authentication strategies.
  • JWTs allow for stateless authentication, meaning no session management overhead.
  • PostgreSQL offers a reliable, ACID-compliant database for securely storing user credentials.

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Securing Your Express API

Project Initialization and Dependencies

Before diving into the code, ensure you have:

  • Node.js and npm installed.
  • PostgreSQL database ready for storing user data.
  • Basic understanding of JavaScript and Node.js.

To start, initialize a new Node.js project and install required dependencies:

mkdir express-rest-api-jwt && cd express-rest-api-jwt
npm init -y
npm install express passport passport-jwt jsonwebtoken pg bcrypt dotenv

We’re using:

  • express: Web framework for Node.js.
  • passport and passport-jwt: Middleware and JWT strategy for authentication.
  • jsonwebtoken: For generating and verifying JWTs.
  • pg: PostgreSQL client for Node.js.
  • bcrypt: For securely hashing passwords.
  • dotenv: For environment variable management.

Configuring PostgreSQL for User Data

Set up a PostgreSQL database to store user information. Connect to your PostgreSQL instance and create a new database and table. If you are using Rapidapp, the database is already created there.

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL
);

In this setup, the users table stores a unique email and a hashed password. Next, create a .env file to manage sensitive configuration:

DATABASE_URL=postgresql://username:password@host:5432/secure_app
JWT_SECRET=your_jwt_secret_key

Remember to replace username, password, and other values with your actual credentials.
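Since these values live in .env, they must be loaded into process.env before anything reads them. A minimal way to do this is to call dotenv at the very top of server.js, before the database pool or any other configuration-dependent code is created:

server.js
// Load variables from .env into process.env (must run before reading process.env)
require('dotenv').config();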

Building the User Authentication Logic

Password Hashing

To securely store user passwords, use bcrypt to hash them before saving to the database. This prevents storing plaintext passwords.

server.js
const bcrypt = require('bcrypt');
const saltRounds = 10;

// Hashing function
async function hashPassword(password) {
  return await bcrypt.hash(password, saltRounds);
}

User Registration Endpoint

Create a registration endpoint to handle new user signups. Hash the user’s password and save it in the database.

server.js
const express = require('express');
const app = express();
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.use(express.json());

app.post('/register', async (req, res) => {
  const { email, password } = req.body;
  const passwordHash = await hashPassword(password);

  try {
    await pool.query('INSERT INTO users (email, password_hash) VALUES ($1, $2)', [email, passwordHash]);
    res.status(201).json({ message: 'User registered successfully' });
  } catch (error) {
    res.status(500).json({ error: 'User registration failed' });
  }
});

User Login and JWT Generation

Create a login endpoint to validate credentials. If valid, generate a JWT for the user.

server.js
const jwt = require('jsonwebtoken');

app.post('/login', async (req, res) => {
  const { email, password } = req.body;

  const result = await pool.query('SELECT * FROM users WHERE email = $1', [email]);
  const user = result.rows[0];

  if (user && await bcrypt.compare(password, user.password_hash)) {
    const token = jwt.sign({ id: user.id, email: user.email }, process.env.JWT_SECRET, { expiresIn: '1h' });
    res.json({ token });
  } else {
    res.status(401).json({ error: 'Invalid credentials' });
  }
});

Implementing Passport.js with JWT Strategy

Configure Passport.js to use the JWT strategy. Create a Passport configuration file and define the JWT strategy using passport-jwt.

server.js
const passport = require('passport');
const { Strategy, ExtractJwt } = require('passport-jwt');

passport.use(new Strategy({
  jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
  secretOrKey: process.env.JWT_SECRET
}, async (jwtPayload, done) => {
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [jwtPayload.id]);
  const user = result.rows[0];
  return user ? done(null, user) : done(null, false);
}));

app.use(passport.initialize());

With this configuration, Passport extracts the JWT from the Authorization header and verifies it using our secret key.

Creating Protected Routes

Now that Passport is set up, you can protect specific routes by requiring authentication. Passport will verify the JWT before allowing access to these routes.

server.js
app.get('/profile', passport.authenticate('jwt', { session: false }), (req, res) => {
  res.json({ message: `Welcome ${req.user.email}` });
});

In this example, the /profile route requires a valid JWT. If authentication succeeds, the request proceeds; otherwise, it’s rejected.

Serving the API

Finally, start the Express server to serve the API:

server.js
const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
  console.log(`Server is running on http://localhost:${PORT}`);
});

Testing the Authentication Workflow

Register a New User

To create a new user, make a POST request to the /register endpoint with the user’s email and password.

curl -X POST http://localhost:3000/register \
-H "Content-Type: application/json" \
-d '{"email": "[email protected]", "password": "securepassword"}'

If successful, this will return:

{
  "message": "User registered successfully"
}

Login with the Registered User

To log in, make a POST request to the /login endpoint with the same email and password. This will return a JWT if the credentials are correct.

curl -X POST http://localhost:3000/login \
-H "Content-Type: application/json" \
-d '{"email": "[email protected]", "password": "securepassword"}'

If successful, you’ll receive a response similar to this:

{
  "token": "your_jwt_token_here"
}

Access the Protected Profile Endpoint

To access the protected /profile endpoint, you’ll need to include the JWT in the Authorization header as a Bearer token.

Replace your_jwt_token_here with the actual token you received from the login step:

curl -X GET http://localhost:3000/profile \
-H "Authorization: Bearer your_jwt_token_here"

If the JWT is valid, you should receive a response like:

{
  "message": "Welcome [email protected]"
}

If the token is missing or invalid, you’ll likely get a 401 Unauthorized response:

{
  "error": "Unauthorized"
}

Conclusion

Using Passport.js and JWT for authentication in a Node.js application provides a secure, stateless setup ideal for scaling. Combined with PostgreSQL, this setup efficiently handles user management while maintaining security best practices. With these foundations, you’re well-equipped to build secure, scalable applications.

tip

You can find the complete source code for this project on GitHub.

Database Change Management with Spring Boot Liquibase and PostgreSQL

· 8 min read
Huseyin BABAL
Software Developer

Introduction: Why Database Migrations Matter?

Database migrations are critical for maintaining a consistent and reliable schema across environments in modern application development. If you’re working with Spring Boot and PostgreSQL, Liquibase offers a powerful, flexible solution for managing database changes. In this guide, we’ll explore setting up Liquibase with Spring Boot and PostgreSQL to simplify schema updates, version control, and rollbacks.

Why Use Liquibase?

Liquibase is a widely used open-source tool that provides a structured way to track, manage, and apply database schema changes. It offers a variety of features to support developers in complex environments:

  • Changelog Files: Liquibase organizes migrations into changelog files that support XML, YAML, JSON, or SQL formats.
  • Rollback Support: Liquibase allows you to define rollback logic, making it easy to revert changes if needed.
  • Database Independence: With support for over 30 databases, Liquibase offers flexibility across projects.

By integrating Liquibase with Spring Boot, you can automate migrations, manage schema versioning, and improve collaboration across teams.

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Use Liquibase Migrations

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a small demo application. You can initialize a Spring Boot project by using the Spring Boot CLI. Once it is installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=liquibase,data-jpa,postgresql \
--type=maven-project \
--javaVersion=21 \
liquibase-migrations-demo

Line 2: liquibase for Liquibase integration, data-jpa for database persistence, and postgresql for PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 sets Java 21 as the version used to compile and run the project.

Now that we initialized the project, go to the folder liquibase-migrations-demo and open it with your favourite IDE.

Application Configuration

Next, configure Liquibase in your application.properties file. At a minimum, you need to specify the database connection details:

application.properties showLineNumbers
spring.datasource.url=jdbc:postgresql://<host>:<port>/<db>
spring.datasource.username=<user>
spring.datasource.password=<password>

spring.liquibase.enabled=true
spring.liquibase.change-log=classpath:db/changelog/db.changelog-master.yaml

The spring.liquibase.change-log property specifies the location of the master changelog, which references the actual schema changes.

Create Your First Changelog File

Liquibase migrations are structured into changelog files that record each schema change as an individual "changeSet."

  1. In src/main/resources/db/changelog/, create a new file named db.changelog-master.yaml. This file will be the main changelog file that references individual changes.

databaseChangeLog:
  - include:
      file: classpath:/db/changelog/changes/01-create-accounts-table.sql

  2. Now, create the referenced file 01-create-accounts-table.sql to define your first changeSet:

01-create-accounts-table.sql
CREATE TABLE accounts (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR (50) UNIQUE NOT NULL,
    email VARCHAR (50) UNIQUE NOT NULL,
    password VARCHAR (50) NOT NULL,
    created_at TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);

When it comes to naming the migration files, it's a good practice to follow a consistent naming convention. A common approach is to prefix the file with a version number and a brief description, like 01-create-accounts-table.sql.

  • Version number: There is no strict convention for the version number, but it helps keep the migration files ordered.
  • Description: Provide a brief description of the migration.

SQL File Features in Liquibase

When using SQL, Liquibase offers some additional features within comments, such as:

  • Changesets: Define each migration step using --changeset author:id in comments. This keeps Liquibase’s tracking in place, similar to other formats.
  • Rollback Commands: Optionally add rollback commands with --rollback <SQL command>, allowing automatic rollbacks when needed. You can see a simple example below:
--liquibase formatted sql
--changeset huseyinbabal:1
CREATE TABLE accounts (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR (50) UNIQUE NOT NULL,
    email VARCHAR (50) UNIQUE NOT NULL,
    password VARCHAR (50) NOT NULL,
    created_at TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);

--rollback DROP TABLE accounts;

Running Application

When you run your Spring Boot application, Liquibase will automatically detect and apply any pending migrations to your PostgreSQL database. You’ll see output in the console indicating that the migrations have been successfully applied. You can run the application as follows:

./mvnw spring-boot:run

Rollback

If you need to roll back a migration, you can use the rollback command. For example, to roll back the last changeSet, you can run:

./mvnw liquibase:rollback \
-Dliquibase.rollbackCount=1 \
-Dliquibase.url="jdbc:postgresql://<host>:<port>/<db>" \
-Dliquibase.username="<username>" \
-Dliquibase.password="<password>" \
-Dliquibase.changeLogFile=src/main/resources/db/changelog/db.changelog-master.yaml \
-Dliquibase.driver=org.postgresql.Driver

Here we used the liquibase:rollback goal to roll back the last changeSet. You can also roll back to a specific tag by providing the liquibase.rollbackTag parameter. For more information, you can check the official documentation.
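If you prefer tag-based rollbacks, one approach is to first mark the current database state with the liquibase:tag goal and later roll back to that tag. A sketch with the same placeholder connection values as the command above:

./mvnw liquibase:tag \
-Dliquibase.tag=v1.0 \
-Dliquibase.url="jdbc:postgresql://<host>:<port>/<db>" \
-Dliquibase.username="<username>" \
-Dliquibase.password="<password>" \
-Dliquibase.changeLogFile=src/main/resources/db/changelog/db.changelog-master.yaml \
-Dliquibase.driver=org.postgresql.Driver

./mvnw liquibase:rollback \
-Dliquibase.rollbackTag=v1.0 \
-Dliquibase.url="jdbc:postgresql://<host>:<port>/<db>" \
-Dliquibase.username="<username>" \
-Dliquibase.password="<password>" \
-Dliquibase.changeLogFile=src/main/resources/db/changelog/db.changelog-master.yaml \
-Dliquibase.driver=org.postgresql.Driver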

Best Practices for Migration Files

To ensure smooth database migrations, follow these best practices:

  • Organize Changelogs: Use a master changelog file to keep a well-organized list of all changes.

  • Use Descriptive Names: Name changelog files descriptively to make it clear what each migration does.

  • Version Control Changelogs: Commit changelog files to version control to ensure that schema changes are applied consistently across environments.

Understanding the Liquibase Metadata Table

When you run migrations with Liquibase, it automatically creates two metadata tables in your database: DATABASECHANGELOG and DATABASECHANGELOGLOCK. These tables keep track of migration status and control, ensuring changes are applied in a consistent and non-conflicting manner. Let’s look at each table in detail.

DATABASECHANGELOG Table

The DATABASECHANGELOG table records every changeSet that has been successfully applied to the database, allowing Liquibase to know which changes have already been executed. Here’s an overview of its key columns:

  • ID: A unique identifier for each changeSet, as specified in the changelog file. This ID is combined with the AUTHOR and FILENAME columns to uniquely identify a changeSet.
  • AUTHOR: The author of the changeSet, which is specified in the changelog file.
  • FILENAME: The file path of the changelog file containing the changeSet. This helps differentiate between changeSets if multiple changelog files are used.
  • DATEEXECUTED: The timestamp indicating when the changeSet was applied. This helps you track when each change was made.
  • ORDEREXECUTED: A number indicating the order in which the changeSets were applied, starting from 1. This can help troubleshoot issues by showing the order of migrations.
  • EXECTYPE: Specifies how the changeSet was executed. Common values include EXECUTED (successful), RERAN, and FAILED, which indicate the status of each migration.
  • MD5SUM: A hash value representing the changeSet’s content. Liquibase uses this hash to detect if a changeSet has been modified after it was initially applied.
  • DESCRIPTION: A brief description of the change, based on the change type (e.g., createTable, addColumn).
  • COMMENTS: Any comments associated with the changeSet, as specified in the changelog.
  • TAG: Optional; a tag that can be assigned to a particular changeSet, often used to mark significant points in the schema history (e.g., a major release).
  • LIQUIBASE: The version of Liquibase used to apply the changeSet. This can be useful for tracking compatibility.
  • CONTEXT: An optional field that can be used to specify a context for the changeSet. This can help control which changeSets are applied based on the context.
  • LABELS: Optional; a comma-separated list of labels that can be assigned to a changeSet. Labels can be used to group changeSets based on common characteristics.
  • DEPLOYMENT_ID: An identifier that can be used to track deployments. This can be useful for auditing and tracking changes across environments.

The DATABASECHANGELOG table essentially serves as a ledger for Liquibase, ensuring it applies only new changeSets, avoiding duplications, and maintaining the history of changes.

DATABASECHANGELOGLOCK Table

The DATABASECHANGELOGLOCK table controls the concurrency of migrations to prevent multiple processes from applying changes at the same time, which can lead to conflicts or corruption. It has fewer columns but plays a critical role in database integrity.

  • ID: A unique identifier for the lock. This is generally 1 because only one lock record is required.
  • LOCKED: A boolean value indicating if a Liquibase process currently holds the lock. When set to TRUE, it prevents other instances from running migrations simultaneously.
  • LOCKGRANTED: The timestamp when the lock was acquired. This can be useful for diagnosing if a lock has been held too long (e.g., if a migration process was interrupted).
  • LOCKEDBY: Information on the host or process that acquired the lock, often containing the hostname and IP address. This can help identify which process holds the lock if issues arise.

The DATABASECHANGELOGLOCK table ensures that only one Liquibase instance modifies the schema at a time, which is particularly useful in distributed systems or CI/CD environments where migrations might be triggered concurrently.

How These Tables Support Safe and Consistent Migrations

  • The DATABASECHANGELOG table ensures migrations run in a consistent sequence and that changes are not re-applied or skipped.
  • The DATABASECHANGELOGLOCK table prevents race conditions, helping avoid potential conflicts from simultaneous migrations.

Understanding these tables can help you troubleshoot migration issues and ensure Liquibase runs smoothly. Both tables are essential for Liquibase’s reliability and ability to manage complex database schemas effectively.
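For example, to see which changeSets have been applied and in which order, you can query the tracking table directly (PostgreSQL folds the unquoted name databasechangelog to lowercase):

SELECT id, author, filename, dateexecuted, orderexecuted, exectype
  FROM databasechangelog
 ORDER BY orderexecuted;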

Conclusion

By adopting Liquibase in your Spring Boot project, you can efficiently manage database changes, reduce deployment friction, and allow your team to focus more on innovation. Whether you’re deploying new features or managing complex database versions, Liquibase streamlines the process, making database management simpler, safer, and more collaborative.

Building RESTful API with Express, Sequelize, and PostgreSQL

· 5 min read
Huseyin BABAL
Software Developer

Introduction

RESTful APIs are essential for enabling smooth communication between client applications and databases, allowing you to perform actions like creating, reading, updating, and deleting records. Using Node.js and Express for API development offers flexibility and scalability. Sequelize, an ORM, simplifies database interactions and also provides migration tools to ensure your PostgreSQL schema stays in sync with the API’s needs as the database evolves.

What is Sequelize?

Sequelize is a Node.js ORM that simplifies working with SQL databases like PostgreSQL. It supports migrations, making schema changes manageable, and integrates well with RESTful APIs.

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide: Sequelize Migrations and REST API

Project Setup and Dependencies

To start, create a Node.js project and install the required packages:

mkdir node-movies-api
cd node-movies-api
npm init -y
npm install express sequelize pg pg-hstore sequelize-cli body-parser --save

Initialize Sequelize with:

npx sequelize-cli init

Now that we initialized the project, go to the folder node-movies-api and open it with your favourite IDE.

Configuring the Database

Edit config/config.json to include your database connection settings:

config/config.json
{
  "development": {
    "username": "your_db_user",
    "password": "your_db_password",
    "database": "database_name",
    "host": "pg_host",
    "port": "port",
    "dialect": "postgres",
    "dialectOptions": {
      "ssl": {
        "require": true,
        "rejectUnauthorized": false
      }
    }
  }
}

Creating a Migration for Movies

Generate a migration file for the Movies table:

npx sequelize-cli migration:generate --name create-movies-table

Edit the generated migration file in migrations folder:

module.exports = {
  up: async (queryInterface, Sequelize) => {
    await queryInterface.createTable('Movies', {
      id: {
        type: Sequelize.INTEGER,
        autoIncrement: true,
        primaryKey: true,
      },
      title: {
        type: Sequelize.STRING,
        allowNull: false,
      },
      genre: {
        type: Sequelize.STRING,
        allowNull: false,
      },
      releaseDate: {
        type: Sequelize.DATE,
        allowNull: false,
      },
      rating: {
        type: Sequelize.FLOAT,
      },
      createdAt: Sequelize.DATE,
      updatedAt: Sequelize.DATE,
    });
  },
  down: async (queryInterface) => {
    await queryInterface.dropTable('Movies');
  }
};

As you can see, we have up and down functions in the migration file. The up function is used to create the table, and the down function is used to drop the table.

Run the migration to create the table:

npx sequelize-cli db:migrate

Defining the Movie Model

Create a models/movie.js file to define the Movie model:

module.exports = (sequelize, DataTypes) => {
  const Movie = sequelize.define('Movie', {
    title: {
      type: DataTypes.STRING,
      allowNull: false,
    },
    genre: {
      type: DataTypes.STRING,
      allowNull: false,
    },
    releaseDate: {
      type: DataTypes.DATE,
      allowNull: false,
    },
    rating: {
      type: DataTypes.FLOAT,
    },
  });
  return Movie;
};

Setting Up Express and Routes

Create an index.js file to set up the Express server:

const express = require('express');
const bodyParser = require('body-parser');
const { Sequelize } = require('sequelize');
const db = require('./models');

const app = express();
const PORT = process.env.PORT || 3000;

app.use(bodyParser.json());

db.sequelize.sync().then(() => {
  console.log('Database connected');
});

// Define routes here

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

Implementing the /movies Endpoint

Add a routes/movies.js file to define the CRUD operations for the /movies endpoint:

const express = require('express');
const router = express.Router();
const { Movie } = require('../models');

// Create a new movie
router.post('/', async (req, res) => {
  try {
    const movie = await Movie.create(req.body);
    res.status(201).json(movie);
  } catch (error) {
    res.status(400).json({ error: error.message });
  }
});

// Get all movies
router.get('/', async (req, res) => {
  try {
    const movies = await Movie.findAll();
    res.json(movies);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Get a movie by ID
router.get('/:id', async (req, res) => {
  try {
    const movie = await Movie.findByPk(req.params.id);
    if (movie) {
      res.json(movie);
    } else {
      res.status(404).json({ error: 'Movie not found' });
    }
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Update a movie
router.put('/:id', async (req, res) => {
  try {
    const movie = await Movie.findByPk(req.params.id);
    if (movie) {
      await movie.update(req.body);
      res.json(movie);
    } else {
      res.status(404).json({ error: 'Movie not found' });
    }
  } catch (error) {
    res.status(400).json({ error: error.message });
  }
});

// Delete a movie
router.delete('/:id', async (req, res) => {
  try {
    const movie = await Movie.findByPk(req.params.id);
    if (movie) {
      await movie.destroy();
      res.status(204).end();
    } else {
      res.status(404).json({ error: 'Movie not found' });
    }
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

module.exports = router;

In index.js, include this new route:

const movieRoutes = require('./routes/movies');
app.use('/movies', movieRoutes);

Now you can run the application with:

node index.js

Demo

Create a new movie

curl -X POST http://localhost:3000/movies \
-H "Content-Type: application/json" \
-d '{
"title": "Inception",
"genre": "Sci-Fi",
"releaseDate": "2010-07-16",
"rating": 8.8
}'

Retrieve all movies

curl -X GET http://localhost:3000/movies

Get a movie by ID

curl -X GET http://localhost:3000/movies/1

Update a movie

curl -X PUT http://localhost:3000/movies/1 \
-H "Content-Type: application/json" \
-d '{
"rating": 9.0
}'

Delete a movie

curl -X DELETE http://localhost:3000/movies/1

Best Practices for REST API with Sequelize

  • Validation: Use Sequelize’s validation features for better data integrity (see the sketch below).
  • Error Handling: Include error messages to make debugging easier.
  • Environment Configuration: Use environment variables for sensitive data like database credentials.
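As a quick illustration of the validation point above, attribute-level validators can be added directly to the model definition. A minimal sketch based on the Movie model from earlier (the 0 to 10 rating bounds are an assumption, not part of the original schema):

module.exports = (sequelize, DataTypes) => {
  const Movie = sequelize.define('Movie', {
    title: {
      type: DataTypes.STRING,
      allowNull: false,
      validate: { notEmpty: true }, // reject empty titles
    },
    rating: {
      type: DataTypes.FLOAT,
      validate: { min: 0, max: 10 }, // assumed rating scale
    },
  });
  return Movie;
};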

Conclusion

Integrating Sequelize migrations with a RESTful API allows for powerful database management and user interaction. This guide provides a blueprint for creating and maintaining a Node.js API that interacts seamlessly with a PostgreSQL database.

tip

You can find the complete source code for this project on GitHub.

Building Reactive CRUD APIs with Spring Boot, R2DBC, and PostgreSQL

· 7 min read
Huseyin BABAL
Software Developer

Introduction

In the world of modern web applications, responsiveness and scalability are paramount. Enter R2DBC (Reactive Relational Database Connectivity) - a game-changing paradigm that brings the power of reactive programming to your database operations. In this article, we'll explore how to build a high-performance, non-blocking CRUD API using Spring Boot 3, R2DBC, and PostgreSQL.

What is R2DBC?

R2DBC is a specification that defines a reactive API for relational databases. Unlike traditional JDBC, which uses a blocking approach, R2DBC enables non-blocking database operations, allowing your application to handle more concurrent connections with fewer threads. This makes it an excellent choice for building reactive applications that can scale efficiently.

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Creating Project

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a reactive car API. You can initialize a Spring Boot project by using the Spring Boot CLI. Once it is installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=web,data-jpa,postgresql,data-r2dbc,webflux \
--type=maven-project \
--javaVersion=21 \
spring-reactive-api

Line 2: web for implementing REST endpoints, data-r2dbc for database persistence, webflux for reactive endpoints, and postgresql for PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 we will use Java 21 to compile and run project.

Now that we have initialized the project, go to the folder spring-reactive-api and open it with your favourite IDE.

Implementing Entity and Repository

We have only one entity here, Car, which will be used to store our cars. Let's create a new entity called Car as follows.

@Data
@Table
@AllArgsConstructor
@NoArgsConstructor
class Car {
    // With Spring Data R2DBC, @Id marks the primary key; the value is generated
    // by the database (the SERIAL column defined in schema.sql below).
    @Id
    private Long id;
    private String make;
    private String model;
    private Integer year;
    private String color;
}

In order to manage Car entity in database, we will use following repository interface.

interface CarRepository extends R2dbcRepository<Car, Long> {}

Be sure that we are using R2dbcRepository instead of JpaRepository.
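R2dbcRepository also supports derived query methods that return reactive types. For instance, the findByMake lookup used in a Flux example later in this article could be declared like this (a sketch, assuming the make column from the Car entity):

interface CarRepository extends R2dbcRepository<Car, Long> {
    // Derived query: emits all cars whose make matches the given value
    Flux<Car> findByMake(String make);
}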

Application Configuration

This section contains application level configurations such as the application name and r2dbc as shown below:

application.yaml
spring:
  application:
    name: spring-reactive-api
  r2dbc:
    url: <connection-string-from-rapidapp|or your own managed postgres url>
    username: <username>
    password: <password>
  sql:
    init:
      mode: always

Line 5: Connection URL for the PostgreSQL database. You can obtain this from Rapidapp or your own managed PostgreSQL service. Note that R2DBC uses its own URL scheme, so it should have a format like r2dbc:postgresql://<host>:<port>/<database> rather than the jdbc: form used for JDBC datasources.

Implementing Services for CRUD Operations

We will create a service class to handle CRUD operations for the Car entity. This class will interact with the repository.

@Service
class CarService {
    private final CarRepository carRepository;

    public CarService(CarRepository carRepository) {
        this.carRepository = carRepository;
    }

    public Flux<Car> getAllCars() {
        return carRepository.findAll();
    }

    public Mono<Car> getCarById(Long id) {
        return carRepository.findById(id);
    }

    public Mono<Car> createCar(Car car) {
        return carRepository.save(car);
    }

    public Mono<Car> updateCar(Long id, Car car) {
        return carRepository.findById(id)
                .flatMap(existingCar -> {
                    car.setId(id);
                    return carRepository.save(car);
                });
    }

    public Mono<Void> deleteCar(Long id) {
        return carRepository.deleteById(id);
    }
}

You are probably familiar with service classes like this, but here the methods return Mono and Flux types, so let’s explain them.

Understanding Reactive Types: Mono and Flux

When working with reactive programming in Spring, you'll encounter two fundamental types: Mono and Flux. These are implementations of the Reactive Streams specification and are crucial for handling non-blocking operations.

Mono

A Mono<T> represents a stream that emits at most one item and then completes (successfully or with an error). Think of it as an asynchronous equivalent to:

  • A single value
  • No value (empty)
  • An error
// Example of different Mono scenarios
Mono<Car> carMono = carRepository.findById(1L); // 0 or 1 car
Mono<Car> newCarMono = carRepository.save(newCar); // Created car
Mono<Void> deleteMono = carRepository.deleteById(1L); // No return value

Flux

A Flux<T> represents a stream that can emit 0 to N items and then completes (successfully or with an error). It's ideal for handling:

  • Multiple values
  • Streams of data
  • Continuous updates
// Example of different Flux scenarios
Flux<Car> allCars = carRepository.findAll(); // 0 to N cars
Flux<Car> toyotaCars = carRepository.findByMake("Toyota"); // Filtered stream

How Spring Handles Non-Blocking Requests

Traditional (Blocking) Approach:

Client Request → Thread assigned → Database Operation → Thread waits → Response → Thread released

Reactive (Non-Blocking) Approach:

Client Request → Event Loop registers callback → Thread released →
Database Operation (async) → Callback triggered → Response

Event Loop Model

Spring WebFlux uses an event loop model powered by Project Reactor:

  • Request Acceptance: When a request arrives, it's accepted by a small number of threads (typically one per CPU core).

  • Non-Blocking Processing: Instead of waiting for operations to complete, the thread registers callbacks and moves on to handle other requests.

@GetMapping("/{id}")
public Mono<Car> getCarById(@PathVariable Long id) {
    return carService.getCarById(id);
    // Thread doesn't wait here! It's free to handle other requests
}
  • Asynchronous Execution: Database operations occur asynchronously:
public Mono<Car> getCarById(Long id) {
    return carRepository.findById(id)
            .map(car -> {
                // This runs when data is available, not immediately
                log.info("Car found: {}", car);
                return car;
            })
            .defaultIfEmpty(/* handle not found case */);
}

Now that we have more insights about Mono and Flux, let's continue with our controller class.

Implementing Controller for REST Endpoints

We will create a controller class to handle REST endpoints for the Car entity. This class will interact with the service class.

@RestController
@RequestMapping("/api/cars")
public class CarController {
    private final CarService carService;

    public CarController(CarService carService) {
        this.carService = carService;
    }

    @GetMapping
    public Flux<Car> getAllCars() {
        return carService.getAllCars();
    }

    @GetMapping("/{id}")
    public Mono<Car> getCarById(@PathVariable Long id) {
        return carService.getCarById(id);
    }

    @PostMapping
    public Mono<Car> createCar(@RequestBody Car car) {
        return carService.createCar(car);
    }

    @PutMapping("/{id}")
    public Mono<Car> updateCar(@PathVariable Long id, @RequestBody Car car) {
        return carService.updateCar(id, car);
    }

    @DeleteMapping("/{id}")
    public Mono<Void> deleteCar(@PathVariable Long id) {
        return carService.deleteCar(id);
    }
}

Schema Preparation

Before running the application, you need to create a table in your PostgreSQL database to store the Car entity. You can use the following schema.sql in the resources folder to create the table.

src/main/resources/schema.sql
CREATE TABLE IF NOT EXISTS car (
    id SERIAL PRIMARY KEY,
    make VARCHAR(255) NOT NULL,
    model VARCHAR(255) NOT NULL,
    year INTEGER NOT NULL,
    color VARCHAR(255)
);

That's it! You have successfully created a Spring Reactive API with PostgreSQL as the database. You can now run the application as follows;

mvn spring-boot:run

Demo

Create a new car

curl -X POST http://localhost:8080/api/cars \
-H "Content-Type: application/json" \
-d '{"make":"Toyota","model":"Camry","year":2023,"color":"Silver"}'

Get all cars

curl http://localhost:8080/api/cars

Get a car by id

curl http://localhost:8080/api/cars/1

Update a car

curl -X PUT http://localhost:8080/api/cars/1 \
-H "Content-Type: application/json" \
-d '{"make":"Toyota","model":"Camry","year":2023,"color":"Blue"}'

Delete a car

curl -X DELETE http://localhost:8080/api/cars/1

Key Benefits of Using R2DBC

  • Non-blocking Operations: R2DBC enables fully non-blocking database interactions, improving application responsiveness.
  • Scalability: Handle more concurrent connections with fewer threads.
  • Backpressure Support: Built-in mechanisms to handle scenarios where producers are faster than consumers.
  • Integration with Reactive Streams: Seamlessly works with Spring WebFlux and other reactive components.

Performance Considerations

While R2DBC offers significant advantages for reactive applications, it's important to note that it may not always outperform traditional JDBC in terms of raw throughput. The real benefits come in scenarios with:

  • High concurrency
  • Long-running queries
  • Applications that already use reactive programming models

Conclusion

R2DBC with Spring Boot 3 provides a powerful foundation for building reactive, scalable applications. By leveraging non-blocking database operations, you can create more responsive and resource-efficient services. As the R2DBC ecosystem continues to grow, it's becoming an increasingly attractive option for modern application development.

tip

You can find the complete source code for this project on GitHub.

Effortless REST API Development with Spring Data REST and PostgreSQL

· 5 min read
Huseyin BABAL
Software Developer

Introduction

In today’s fast-paced software development world, building efficient and flexible REST APIs is essential. Spring Data REST simplifies this process significantly by allowing developers to quickly expose repository-based data models as RESTful services with minimal configuration. In this article, we’ll explore how easy it is to create a REST API using Spring Data REST and PostgreSQL, allowing you to focus on your business logic instead of boilerplate code.

Why Spring Data REST?

Spring Data REST is built on top of Spring Data repositories and automatically exposes them as RESTful endpoints. Instead of manually writing controller logic to handle CRUD operations, Spring Data REST provides out-of-the-box support for:

  • Automatic generation of REST endpoints.
  • Hypermedia as the Engine of Application State (HATEOAS) support.
  • Pagination, sorting, and filtering.
  • Integration with Spring Data repositories for database interaction.

By leveraging Spring Data REST, you can rapidly create a fully functional REST API with minimal code, and focus on business rules, rather than implementing boilerplate REST operations.

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Creating Project

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a simple product API. You can initialize a Spring Boot project by using the Spring Boot CLI. Once it is installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=web,data-jpa,data-rest,postgresql \
--type=maven-project \
--javaVersion=21 \
spring-data-rest-example

Line 2: web for implementing REST endpoints, data-jpa for database persistence, data-rest for automatically exposing your repositories as REST endpoints, and postgresql for the PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 we will use Java 21 to compile and run project.

Now that we initialized the project, go to the folder spring-data-rest-example and open it with your favourite IDE.

Implementing Entity and Repository

We have only one entity here, Product, which will be used to store our products. Let's create a new entity called Product as follows.

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Data
class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String title;
    private BigDecimal price;
}

In order to manage Product entity in database, we will use following repository interface.


interface ProductRepository extends CrudRepository<Product, Integer>{}

Application Configuration

This section contains application level configurations such as the application name, datasource, and jpa as shown below:

application.yaml
spring:
  application:
    name: spring-data-rest-example
  datasource:
    url: <connection-string-from-rapidapp|or your own managed postgres url>
    username: <username>
    password: <password>
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update

Line 5: Connection URL for the PostgreSQL database. You can obtain this from Rapidapp or your own managed PostgreSQL service. It should have a format like jdbc:postgresql://<host>:<port>/<database>?sslmode=require.

That's it! You have successfully created a Spring Data REST API with PostgreSQL as the database. Wait, we don't have controllers, services, etc. Yes, Spring Data REST takes care of all of these for you: it analyzes your Spring Data repository interfaces (which you can further customize with the @RepositoryRestResource annotation, as shown below) and exposes the entities they manage as REST endpoints.

You can now run the application as follows;

mvn spring-boot:run

If everything goes well, you can verify app by visiting http://localhost:8080/products in your browser.

{
  "_embedded" : {
    "products" : [ ]
  },
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/products"
    },
    "profile" : {
      "href" : "http://localhost:8080/profile/products"
    }
  }
}

Rest Endpoints

Once the repository is created, Spring Data REST will automatically expose CRUD endpoints for the Product entity. These include:

  • GET /products: List all products.
  • GET /products/{id}: Retrieve a specific product.
  • POST /products: Create a new product.
  • PUT /products/{id}: Update an existing product.
  • DELETE /products/{id}: Delete a product.

There’s no need to write additional controller classes; the endpoints are provided automatically based on your repository definition.
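When the repository exposes paging (for example, by extending PagingAndSortingRepository instead of CrudRepository), Spring Data REST also accepts pagination and sorting parameters on the collection endpoint. A hedged example request under that assumption:

curl "http://localhost:8080/products?page=0&size=5&sort=price,desc"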

Customizing the REST Behavior

Although Spring Data REST provides default behavior, you can customize the endpoints to suit your needs. For instance, you can modify the base path, limit the exposed operations, or add custom validation.

  • Customizing base paths: You can change the default endpoint path by adding the @RepositoryRestResource annotation to the repository:
@RepositoryRestResource(path = "product")
public interface ProductRepository extends JpaRepository<Product, Long> {
}

Now, the base URL for product-related endpoints will be /product instead of /products.

  • Exposing specific fields: You can expose or hide fields by using Spring Data REST’s projection feature:
@Projection(name = "customProduct", types = { Product.class })
interface CustomProduct {
    String getTitle();
}

This will limit the fields exposed in the API response for products and show only the title field.
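Clients can then request the projection by name via the projection query parameter, for example:

curl "http://localhost:8080/products/1?projection=customProduct"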

Demo

Create Product

curl -XPOST -H "Content-Type: application/json" \
http://localhost:8080/products -d \
'{"title": "Laptop", "price": 1000}'

List Products

curl -XGET http://localhost:8080/products

Update Product

curl -XPUT -H "Content-Type: application/json" \
http://localhost:8080/products/1 -d \
'{"title": "Laptop", "price": 1200}'

Delete Product

curl -XDELETE http://localhost:8080/products/1

Get Product

curl -XGET http://localhost:8080/products/1

Conclusion

Spring Data REST, combined with PostgreSQL, provides a fast and simple way to create fully functional REST APIs without the need for writing repetitive boilerplate code. With automatic CRUD operations, pagination, and HATEOAS support, you can focus on the unique aspects of your application and leave the rest to Spring Data REST. By adopting this approach, you can significantly reduce development time and quickly deliver high-quality APIs.

tip

You can find the complete source code for this project on GitHub.

Efficient Database Migrations with .NET Core, Entity Framework, and PostgreSQL

· 5 min read
Huseyin BABAL
Software Developer

Introduction

In modern software development, managing database schema changes effectively is crucial. A smooth database migration process helps maintain consistency across environments (development, staging, production) and enables teams to iterate quickly. In this article, we’ll explore how to streamline database migrations in a .NET Core application using Entity Framework Core (EF Core) with PostgreSQL.

Why Migrations Matter?

Like Flyway in the Java ecosystem, EF Core migrations provide a way to evolve the database schema without losing data. They allow you to apply incremental changes to your schema while keeping track of the historical state of the database.

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Implementation

We’ll be working with a simple .NET Core project using EF Core to interact with a PostgreSQL database. Below are the steps to get started.

Project Initialization

We will use .NET CLI to create a new .NET Core project. Let's start by creating a new project:

dotnet new webapi -n net-migration

The above command initializes a new .NET Core Web API project named net-migration.

Installing Dependencies

We need to install the following packages:

  • Npgsql.EntityFrameworkCore.PostgreSQL: The PostgreSQL provider for Entity Framework Core.
  • Microsoft.EntityFrameworkCore.Design: The EF Core design package for migrations.
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add package Microsoft.EntityFrameworkCore.Design

Datasource Configuration

Configure PostgreSQL Connection in appsettings.json, add the PostgreSQL connection string:

{
  "ConnectionStrings": {
    "PostgresConnection": "Host=<host>;Port=<port>;Database=<database>;Username=<user>;Password=<password>;SSL Mode=require;Trust Server Certificate=true"
  }
}

If you are using Rapidapp for PostgreSQL, you can find the connection string in the Rapidapp console and it supports SSL by default.

If you are using a local PostgreSQL instance, you can replace the placeholders with your PostgreSQL credentials and you may need to disable SSL if you haven't configured it in your database server.

Creating the Database Context

Create a ApplicationDbContext class that inherits from DbContext, configure it to use PostgreSQL.

ApplicationDbContext.cs
using Microsoft.EntityFrameworkCore;

namespace net_migration;

public class ApplicationDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
    }
}

This is a generic database context configuration and the connection parameters will be passed in Program.cs soon.

Creating the Product Model

Create a Product class to represent the product entity in the database:

Product.cs
namespace net_migration;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public int Stock { get; set; }
}

Registering the Database Context

In the Program.cs file, register the ApplicationDbContext with the PostgreSQL connection string as follows.

Program.cs
...
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<ApplicationDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("PostgresConnection")));
...

Program.cs contains some more boilerplate for controllers, Swagger, and so on. To keep it simple, we are only showing the relevant part: registering the database context. The connection string is fetched from appsettings.json and passed to the UseNpgsql method.

Running Migrations

The basic skeleton of the project is ready. Now, let's create the database schema by running the migrations:

dotnet ef migrations add InitialCreate
dotnet ef database update

The InitialCreate migration will create the Products table in the database, and the database update command will apply the migration to the database. If the ef subcommand is not available in your .NET CLI, you can install the dotnet-ef tool as follows.

dotnet tool install --global dotnet-ef

The migration history is stored in the __EFMigrationsHistory table, which keeps track of the applied migrations. Assume that you add a new field called Description to your Product entity. You can create a new migration and update the database as follows:

Product.cs
namespace net_migration;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public int Stock { get; set; }

    // New column
    public string Description { get; set; } // This is the new column you're adding.
}

dotnet ef migrations add AddDescriptionColumn
dotnet ef database update

You can see that this adds a new column to the Products table; the change is also recorded in the __EFMigrationsHistory table.
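For reference, the scaffolded AddDescriptionColumn migration will contain something along these lines (a sketch; the exact generated code may differ depending on your EF Core and Npgsql versions):

public partial class AddDescriptionColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Adds the new Description column to the existing Products table
        migrationBuilder.AddColumn<string>(
            name: "Description",
            table: "Products",
            type: "text",
            nullable: false,
            defaultValue: "");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Reverts the change by dropping the column
        migrationBuilder.DropColumn(
            name: "Description",
            table: "Products");
    }
}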

Automate Migration on Startup

You can configure your application to apply pending migrations automatically at startup by adding the following to Program.cs after the app has been built.

using (var scope = app.Services.CreateScope())
{
    // Resolve the DbContext from a scoped service provider and apply any pending migrations
    var dbContext = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
    dbContext.Database.Migrate();
}
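Applying migrations at startup is convenient for development and small deployments; for production environments you may prefer to run dotnet ef database update (or generate SQL scripts with dotnet ef migrations script) as an explicit deployment step so schema changes stay under your control.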

Conclusion

Entity Framework Core makes database migrations straightforward in a .NET Core project. By managing schema changes through migrations, you ensure that your database remains in sync with your application's models across all environments. PostgreSQL, when combined with EF Core, provides a robust, scalable solution for modern web applications.

tip

You can find the complete source code for this project on GitHub.

Building Simple Product API with Apollo GraphQL and PostgreSQL

· 7 min read
Huseyin BABAL
Software Developer

Introduction

GraphQL has transformed how we build APIs by offering a flexible and powerful alternative to REST. It enables clients to request exactly the data they need, making it a great fit for modern applications. In this article, we'll explore GraphQL, its use cases, and key concepts like queries, mutations, and resolvers. We'll also walk you through building a simple product API using Apollo GraphQL with RapidApp PostgreSQL as your datasource.

What is GraphQL?

GraphQL is a query language for your API that allows clients to request specific data, avoiding over-fetching or under-fetching. Unlike REST, where multiple endpoints might be needed, GraphQL provides a single endpoint through which clients can access multiple data sources and types with one request.

Use Cases

GraphQL is widely used for:

  • Applications with complex data relationships
  • APIs that need to return different data to different clients
  • Projects aiming to optimize network usage
  • Microservices architecture integration

For example, you can use GraphQL to retrieve only the fields you actually need from a product database, improving efficiency in e-commerce applications.
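For instance, a client that only needs product names and prices can ask for exactly those fields, and nothing else is returned:

query {
  products {
    name
    price
  }
}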

What Are Queries, Mutations, and Resolvers?

  • Query: Used to read or fetch data. In our case, we will create a query to retrieve products.
  • Mutation: Used to create, update, or delete data. We will create mutations to add new products to our datasource.
  • Resolvers: Functions that handle the execution of a query or mutation and interact with the datasource, such as a PostgreSQL database.

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here
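The API below assumes a products table already exists in your PostgreSQL database. A minimal sketch of the DDL could look like this (the column names match the GraphQL schema defined later; the exact column types are an assumption):

CREATE TABLE IF NOT EXISTS products (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  price NUMERIC(10, 2) NOT NULL,
  description TEXT
);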

Step-by-Step Implementation

Project Initialization

We will use Node.js, which ships with the npm package manager. Let's start by creating a new project:

mkdir pg-graphql
cd pg-graphql
npm init -y

Installing Dependencies

We need to install the following packages:

  • apollo-server: A GraphQL server library for Node.js.
  • graphql: The JavaScript reference implementation for GraphQL.
  • pg: A PostgreSQL client for Node.js.
  • datasource-sql: A SQL datasource for Apollo Server.

npm install apollo-server graphql pg datasource-sql

Creating Schema

Create a new file named schema.js in the root directory of your project and define the schema for your product API:

schema.js
const { gql } = require('apollo-server');

const typeDefs = gql`
  type Product {
    id: ID!
    name: String!
    price: Float!
    description: String
  }

  type Query {
    products: [Product]
    product(id: ID!): Product
  }

  type Mutation {
    createProduct(name: String!, price: Float!, description: String): Product
    updateProduct(id: ID!, name: String, price: Float, description: String): Product
  }
`;

module.exports = typeDefs;

The Product type defines the fields id, name, price, and description.

The Query type exposes two queries: products to fetch all products and product to fetch a single product by ID.

The Mutation type defines two mutations: createProduct to add a new product and updateProduct to update an existing one.

This schema acts as the contract of our API, and we will use it in the server definition shortly.

Implementing Product API

Create a new file named product_api.js in the root directory of your project and implement the product API:

product_api.js
const { SQLDataSource } = require('datasource-sql');

class ProductAPI extends SQLDataSource {
  getProducts() {
    return this.knex.select('*').from('products');
  }

  getProductById(id) {
    return this.knex('products').where({ id }).first();
  }

  updateProduct(id, product) {
    return this.knex('products')
      .where({ id })
      .update(product)
      .returning('*')
      .then(rows => rows[0]);
  }

  createProduct({ name, price, description }) {
    return this.knex('products')
      .insert({ name, price, description })
      .returning('*')
      .then(rows => rows[0]); // returning('*') yields an array; unwrap the single inserted row
  }
}

module.exports = ProductAPI;

Knex is a SQL query builder for Node.js that we will use to interact with our PostgreSQL database.

getProducts retrieves all products from the products table, getProductById fetches a single product by ID, updateProduct updates an existing product by ID, and createProduct inserts a new product into the products table.

Now that we have our API, let's use it in resolvers to handle queries and mutations.

Implementing Resolvers

Create a new file named resolvers.js in the root directory of your project and implement the resolvers:

resolvers.js
const resolvers = {
  Query: {
    products: async (_, __, { dataSources }) => {
      return dataSources.productAPI.getProducts();
    },
    product: async (_, { id }, { dataSources }) => {
      return dataSources.productAPI.getProductById(id);
    },
  },
  Mutation: {
    createProduct: async (_, { name, price, description }, { dataSources }) => {
      return dataSources.productAPI.createProduct({ name, price, description });
    },
    updateProduct: async (_, { id, name, price, description }, { dataSources }) => {
      const updatedProduct = { name, price, description };
      return dataSources.productAPI.updateProduct(id, updatedProduct);
    },
  },
};

module.exports = resolvers;

Resolvers are the functions that execute queries and mutations. They interact with the datasource to fetch or update data; in our case they delegate to the Product API for better abstraction. Through these resolvers, the queries and mutations defined in the schema become available in our GraphQL API.

Setting Up Apollo Server

Create a new file named server.js in the root directory of your project and set up the Apollo Server:

server.js
const { ApolloServer } = require('apollo-server');
const typeDefs = require('./schema');
const resolvers = require('./resolvers');
const ProductAPI = require('./product_api');

// PostgreSQL connection setup
const knexConfig = {
  client: 'pg',
  connection: {
    host: '<host>',
    user: '<user>',
    password: '<password>',
    database: '<database>',
    ssl: {
      rejectUnauthorized: false
    },
    application_name: 'apollo'
  },
  pool: { min: 0, max: 3 },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  dataSources: () => ({
    productAPI: new ProductAPI(knexConfig),
  }),
  introspection: true,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});

ApolloServer is initialized with the schema, resolvers, and data sources. With the introspection option set to true, you can explore the schema using tools like GraphQL Playground.

Running Application

Run the following command to start your Apollo Server:

node server.js

Testing the API

Once the server is ready, navigate to http://localhost:4000 in your browser to access the GraphQL Playground. You will see a button to go to the Apollo Studio page where you can explore your schema, run queries, and test mutations.

Creating a New Product

To create a new product, run the following mutation in the Playground:

mutation {
  createProduct(name: "New Product", price: 99.99, description: "description") {
    id
    name
    price
    description
  }
}

Fetching All Products

To fetch all products, run the following query:

query {
  products {
    id
    name
    price
    description
  }
}

Getting a Single Product

query {
  product(id: 1) {
    id
    name
    price
    description
  }
}

Updating a Product

mutation {
  updateProduct(id: 1, name: "Updated Product", price: 199.99, description: "New Description") {
    id
    name
    price
    description
  }
}

Conclusion

In this article, we covered how to set up a simple product API using Apollo GraphQL and PostgreSQL as a service. GraphQL’s flexibility with queries and mutations, combined with a powerful PostgreSQL datasource, makes it a great solution for building efficient APIs. If you're looking for a managed PostgreSQL solution, consider using RapidApp PostgreSQL to get started quickly.

tip

You can find the complete source code for this project on GitHub.

Simplifying DB Auditing with Spring Boot and Javers

· 8 min read
Huseyin BABAL
Software Developer

Introduction

Database auditing is essential for tracking changes to data over time, providing a clear history of who changed what and when. It helps ensure compliance with security and legal standards, allowing developers to detect anomalies, recover data, or conduct audits efficiently. In this article, we will explore how to implement automatic database auditing in a Spring Boot application using Javers, a powerful Java library that simplifies object auditing and diffing.

What is Javers?

Javers is a Java library designed for auditing object changes, often in databases. By comparing versions of objects, Javers creates an audit trail of differences between them. Unlike traditional database auditing methods that rely on triggers or manually logging changes, Javers operates at the application layer, making it easier to track object changes while maintaining flexibility.

Use Cases for Javers

  • Tracking Entity Changes: Javers can record changes to your domain objects and persist those changes, enabling easy retrieval of past object versions.

  • Data Synchronization: It can be used in distributed systems to ensure data consistency across multiple systems by comparing snapshots of the same object.

  • Data Governance and Compliance: For businesses that need to comply with regulations like GDPR, Javers provides an efficient way to track and monitor changes.

Understanding Javers' Tables

Javers uses a set of database tables to store the change history. These tables include:

  • jv_commit: Stores information about who made changes, the timestamp, and the commit ID.
  • jv_snapshot: Records object state at the time of a commit.
  • jv_global_id: Manages global identifiers for entities.
  • jv_commit_property: Stores metadata related to the commit, like environment details.

These tables are created automatically when Javers is integrated with your Spring Boot project, allowing you to persist audit information without any manual setup.

In Javers, the type column is used to indicate the type of change captured in a snapshot. This column helps distinguish between various operations on an entity. Here's an explanation of the most common type values:

  • INITIAL: Indicates the first time an object is added to the database or tracked by Javers. This value represents the creation of the entity, meaning that there was no prior state to compare against.

  • UPDATE: Represents changes made to an existing object. When any field within the entity is modified, Javers captures the updated state and marks the change with this type. It indicates that this is not a new entity but an updated one.

  • TERMINAL: This value signifies that the entity has been deleted. When an object is removed from the system, Javers marks this event with a TERMINAL entry to signify the final state of the object before deletion.

  • INITIALIZED: This type appears in some cases to signify that a value was initialized or restored from a default state.

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Implementation

We will implement REST endpoints to create and update products and then inspect the audit history recorded by Javers. This involves implementing an entity, a repository interface, and a controller for creating and updating products.

Dependencies

Make sure the following dependencies are declared using your favourite dependency management tool, e.g. Maven or Gradle.

pom.xml
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.3</version>
  </dependency>
  <dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.32</version>
  </dependency>
  <!-- This enables automatic DB auditing with Spring Security integration -->
  <dependency>
    <groupId>org.javers</groupId>
    <artifactId>javers-spring-boot-starter-sql</artifactId>
    <version>7.6.1</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
  </dependency>
</dependencies>

Spring Web Security Configuration

To enable Spring Web Security and allow requests to the REST endpoints, configure it in your SecurityConfig.java file as shown below. By default, Spring expects a CSRF token in every request; with the following configuration, you can disable CSRF protection.

SecurityConfig.java
@Configuration
@EnableWebSecurity
class WebSecurityConfiguration {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.csrf(AbstractHttpConfigurer::disable)
            .authorizeRequests(authorize -> authorize
                .anyRequest().authenticated()
            )
            .httpBasic(withDefaults())
            .sessionManagement(session -> session.sessionCreationPolicy(SessionCreationPolicy.STATELESS));
        return http.build();
    }
}

The @EnableWebSecurity annotation protects the API endpoints, csrf(AbstractHttpConfigurer::disable) turns off CSRF protection, and the session creation policy is set to STATELESS so that no HTTP sessions are maintained.

Add the Product Entity

Create a simple Product entity that will be audited by Javers.

Product.java
@Entity
@NoArgsConstructor
@AllArgsConstructor
@Data
class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
}

Implement the Product Repository

Create a simple ProductRepository interface that will be audited by Javers.

ProductRepository.java
@JaversSpringDataAuditable
interface ProductRepository extends CrudRepository<Product, Long> {}

The @JaversSpringDataAuditable annotation on the ProductRepository interface enables Javers auditing for the Product entity automatically.

Implement the Product Controller

Create a simple ProductController class for creating and updating products. We use the POST method for creating a product and the PATCH method for updating a single field of an existing Product.

ProductController.java
@RestController
@RequestMapping("/products")
@RequiredArgsConstructor
class ProductController {

    private final ProductRepository productRepository;

    @PostMapping
    public Product createProduct() {
        Product product = new Product();
        product.setName("Product " + System.currentTimeMillis());
        return productRepository.save(product);
    }

    @PatchMapping
    public void updateProduct() {
        productRepository.findById(1L).ifPresent(product -> {
            product.setName("Product " + System.currentTimeMillis());
            productRepository.save(product);
        });
    }
}
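Besides inspecting the jv_* tables directly, you can also read the audit trail through the Javers API. The sketch below assumes the auto-configured Javers bean is injected into the controller (add private final Javers javers; next to the repository) and exposes a hypothetical /products/audit endpoint:

@GetMapping("/audit")
public String getProductAudit() {
    // Build a JQL query for all snapshots of the Product with id 1
    List<CdoSnapshot> snapshots = javers.findSnapshots(
            QueryBuilder.byInstanceId(1L, Product.class).build());
    // Render the snapshots as JSON for easy inspection
    return javers.getJsonConverter().toJson(snapshots);
}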

Application Configuration

Next, configure database access in your application.yaml file. At a minimum, you need to specify the database connection details:

application.yaml
spring:
  application:
    name: spring-db-audit
  datasource:
    driver-class-name: org.postgresql.Driver
    url: jdbc:postgresql://<host>:<port>/<database>?sslmode=require&application_name=spring-db-audit&user=<user>&password=<password>
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
  security:
    user:
      name: developer
      password: s3cr3t

The security.user properties set the username to developer and the password to s3cr3t. We will use those credentials in the curl requests to test the application.

Demo

Create Product

curl -XPOST \
-u developer:s3cr3t -H \
"Content-Type: application/json" \
http://localhost:8080/products -v

Javers detects the newly created Product entity and records it in the database as follows.

> select * from jv_commit;
commit_pk | author | commit_date | commit_date_instant | commit_id
-----------+--------+-------------------------+-----------------------------+-----------
100 | user | 2024-09-08 01:58:49.728 | 2024-09-07T22:58:49.728964Z | 1.00
> select * from jv_snapshot;
snapshot_pk | type | version | state | changed_properties | managed_type | global_id_fk | commit_fk
-------------+---------+---------+------------------------------------+--------------------+------------------------------------+--------------+-----------
100 | INITIAL | 1 | { +| [ +| com.rapidapp.springdbaudit.Product | 100 | 100
| | | "name": "Product 1725749929156",+| "name", +| | |
| | | "id": 1 +| "id" +| | |
| | | } | ] | | |

Since this is a create operation, it is saved with type INITIAL. Let's update the product entity to see how Javers tracks its changes.

Update Product

curl -XPATCH \
-u developer:s3cr3t -H \
"Content-Type: application/json" \
http://localhost:8080/products -v

Javers detects the change to the Product entity and records it in the database as follows.

> select * from jv_snapshot WHERE type='UPDATE';
snapshot_pk | type | version | state | changed_properties | managed_type | global_id_fk | commit_fk
-------------+--------+---------+------------------------------------+--------------------+------------------------------------+--------------+-----------
500 | UPDATE | 2 | { +| [ +| com.rapidapp.springdbaudit.Product | 100 | 400
| | | "name": "Product 1725777539405",+| "name" +| | |
| | | "id": 1 +| ] | | |
| | | } | | | |

As you can see:

  • The type field is UPDATE.
  • changed_properties lists name, so you can tell which field(s) changed.
  • You can follow the history through the version field.

So, who made this change? Let's take a look at the jv_commit table.

> select * from jv_commit order by commit_id desc limit 1;
commit_pk | author | commit_date | commit_date_instant | commit_id
-----------+--------+-------------------------+-----------------------------+-----------
400 | john | 2024-09-08 09:38:59.639 | 2024-09-08T06:38:59.639471Z | 5.00

Check the author field to see who made the change; in this case, john is the owner of this update.

Conclusion

By using Javers with Spring Boot, database auditing becomes simple and flexible. Javers' ability to track object changes over time and store audit data in a structured way makes it a great choice for any application that requires a detailed history of data changes. With this implementation, you can now easily integrate automatic auditing in your Spring Boot applications.

tip

You can find the complete source code for this project on GitHub.

Building Simple Todo REST API with PHP, Laravel, and PostgreSQL

· 4 min read
Huseyin BABAL
Software Developer

Introduction

In this tutorial, we’ll walk through building a basic Todo REST API using Laravel and PostgreSQL. Whether you’re looking to enhance your understanding of Laravel, or you're interested in integrating a PostgreSQL database into your project, this guide will provide you with the necessary steps to get up and running quickly.

Prerequisites

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Implementation

Setting Up Your Laravel Project

First, let's create a new Laravel project. You can do this using Composer:

composer create-project --prefer-dist laravel/laravel laravel-todo-api

Next, navigate to the project directory:

cd laravel-todo-api

Configuring PostgreSQL Database

Open the .env file in the root directory of your Laravel project and update the database configuration to use PostgreSQL:

DB_CONNECTION=pgsql
DB_HOST=<host>
DB_PORT=<port>
DB_DATABASE=<database>
DB_USERNAME=<user>
DB_PASSWORD=<password>

Creating the Task Model and Migration

Run the following command to generate a model and migration for your tasks:

php artisan make:model Task -m

This command creates a Task model and a migration file located in the database/migrations directory.

Defining the Database Schema

Open the generated migration file in database/migrations and define the schema for the tasks table:

public function up()
{
    Schema::create('tasks', function (Blueprint $table) {
        $table->id();
        $table->string('title');
        $table->text('description')->nullable();
        $table->boolean('completed')->default(false);
        $table->timestamps();
    });
}
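The scaffolded migration also contains a down method that reverses the change; it typically looks like this:

public function down()
{
    Schema::dropIfExists('tasks');
}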

Run the migration to create the table:

php artisan migrate

Implementing the Task Model

To allow mass assignment, update the Task model (app/Models/Task.php) by adding the title, description, and completed attributes to the fillable array:

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Task extends Model
{
    use HasFactory;

    protected $fillable = [
        'title',
        'description',
        'completed',
    ];
}

Setting Up the API Routes

Define the routes for your API in the routes/web.php file. Laravel provides a convenient way to create RESTful routes using Route::apiResource. Since these routes live in web.php, prefix them with api so they match the /api/tasks URLs used later in this guide:

use App\Http\Controllers\TaskController;

Route::prefix('api')->group(function () {
    Route::apiResource('tasks', TaskController::class);
});

Creating the TaskController

Generate a controller for handling API requests:

php artisan make:controller TaskController --api

In the generated TaskController (app/Http/Controllers/TaskController.php), implement the methods to handle CRUD operations:

namespace App\Http\Controllers;

use App\Models\Task;
use Illuminate\Http\Request;

class TaskController extends Controller
{
    public function index()
    {
        return Task::all();
    }

    public function store(Request $request)
    {
        $task = Task::create($request->all());
        return response()->json($task, 201);
    }

    public function show($id)
    {
        return Task::findOrFail($id);
    }

    public function update(Request $request, $id)
    {
        $task = Task::findOrFail($id);
        $task->update($request->all());
        return response()->json($task, 200);
    }

    public function destroy($id)
    {
        Task::findOrFail($id)->delete();
        return response()->json(null, 204);
    }
}

Running the Laravel Application

To start your Laravel application, use the built-in development server:

php artisan serve

Your API will be accessible at http://127.0.0.1:8000/api/tasks.

Disabling CSRF for API Routes

Since this is a REST API, you should disable CSRF protection for these routes. Routes registered through the api middleware group are already exempt; because our routes live in web.php, exclude the api/* paths from CSRF protection in bootstrap/app.php:

...
->withMiddleware(function (Middleware $middleware) {
    $middleware->validateCsrfTokens(
        except: ['api/*']
    );
})
...

Testing the API

List All Tasks

curl -X GET http://127.0.0.1:8000/api/tasks

Create a New Task

curl -X POST http://127.0.0.1:8000/api/tasks \
-H "Content-Type: application/json" \
-d '{
"title": "Buy groceries",
"description": "Milk, Bread, Cheese",
"completed": false
}'

Get a Specific Task

curl -X GET http://127.0.0.1:8000/api/tasks/1

Update a Task

curl -X PUT http://127.0.0.1:8000/api/tasks/1 \
-H "Content-Type: application/json" \
-d '{
"title": "Buy groceries and snacks",
"description": "Milk, Bread, Cheese, Chips",
"completed": false
}'

Delete a Task

curl -X DELETE http://127.0.0.1:8000/api/tasks/1

Conclusion

Congratulations! You've successfully built a simple Todo List REST API using Laravel and PostgreSQL. This foundation can be expanded with additional features, authentication, and more to create a fully functional web application.

Whether you're learning Laravel or looking to integrate a PostgreSQL database with your projects, this guide serves as a practical starting point.

tip

You can find the complete source code for this project on GitHub.