
11 posts tagged with "Spring Boot"


Database Change Management with Spring Boot Liquibase and PostgreSQL

· 8 min read
Huseyin BABAL
Software Developer

Introduction: Why Database Migrations Matter

Database migrations are critical for maintaining a consistent and reliable schema across environments in modern application development. If you’re working with Spring Boot and PostgreSQL, Liquibase offers a powerful, flexible solution for managing database changes. In this guide, we’ll explore setting up Liquibase with Spring Boot and PostgreSQL to simplify schema updates, version control, and rollbacks.

Why Use Liquibase?

Liquibase is a widely used open-source tool that provides a structured way to track, manage, and apply database schema changes. It offers a variety of features to support developers in complex environments:

  • Changelog Files: Liquibase organizes migrations into changelog files that support XML, YAML, JSON, or SQL formats.
  • Rollback Support: Liquibase allows you to define rollback logic, making it easy to revert changes if needed.
  • Database Independence: With support for over 30 databases, Liquibase offers flexibility across projects.

By integrating Liquibase with Spring Boot, you can automate migrations, manage schema versioning, and improve collaboration across teams.
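As a sketch of what a changelog looks like, here is a minimal YAML changeSet with rollback logic (the table name and author are illustrative, not from a specific project):

```yaml
databaseChangeLog:
  - changeSet:
      id: 1
      author: jane.doe
      changes:
        - createTable:
            tableName: example
            columns:
              - column:
                  name: id
                  type: BIGINT
                  autoIncrement: true
                  constraints:
                    primaryKey: true
      rollback:
        - dropTable:
            tableName: example
```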

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Use Liquibase Migrations

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a sample application. You can initialize a Spring Boot project by using the Spring Boot CLI. Once installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=liquibase,data-jpa,postgresql \
--type=maven-project \
--javaVersion=21 \
liquibase-migrations-demo

Line 2: liquibase for Liquibase integration, data-jpa for database persistence, and postgresql for the PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 specifies Java 21 to compile and run the project.

Now that we have initialized the project, go to the liquibase-migrations-demo folder and open it in your favourite IDE.

Application Configuration

Next, configure Liquibase in your application.properties file. At a minimum, you need to specify the database connection details:

application.properties
spring.datasource.url=jdbc:postgresql://<host>:<port>/<db>
spring.datasource.username=<user>
spring.datasource.password=<password>

spring.liquibase.enabled=true
spring.liquibase.change-log=classpath:db/changelog/db.changelog-master.yaml

The spring.liquibase.change-log property specifies the location where Liquibase will look for the changelog that contains the actual schema changes.

Create Your First Changelog File

Liquibase migrations are structured into changelog files that record each schema change as an individual "changeSet."

  1. In src/main/resources/db/changelog/, create a new file named db.changelog-master.yaml. This file will be the main changelog file that references individual changes.

databaseChangeLog:
  - include:
      file: classpath:/db/changelog/changes/01-create-accounts-table.sql

  2. Now, create the referenced file 01-create-accounts-table.sql to define your first changeSet:

01-create-accounts-table.sql
CREATE TABLE accounts (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(50) NOT NULL,
    created_at TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);

When it comes to naming the migration files, it's a good practice to follow a consistent naming convention. A common approach is to prefix the file with a version number and a brief description, like 01-create-accounts-table.sql.

  • Version number: There is no strict convention for the version number, but it helps keep the migration files in order.
  • Description: Provide a brief description of the migration.
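The effect of numeric prefixes can be seen with a plain lexicographic sort (the file names below are hypothetical; the point is that zero-padding keeps 02 before 10):

```java
import java.util.List;

public class MigrationOrder {
    // Zero-padded numeric prefixes sort lexicographically in version order,
    // so tools and humans see migrations in the intended sequence.
    public static List<String> sorted(List<String> files) {
        return files.stream().sorted().toList();
    }

    public static void main(String[] args) {
        System.out.println(sorted(List.of(
                "10-add-index.sql", "01-create-accounts-table.sql", "02-add-email-column.sql")));
    }
}
```

Without zero-padding, "10-..." would sort before "2-...", which is why two-digit prefixes are a common choice.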

SQL File Features in Liquibase

When using SQL, Liquibase offers some additional features within comments, such as:

  • Changesets: Define each migration step using --changeset author:id in comments. This keeps Liquibase’s tracking in place, similar to other formats.
  • Rollback Commands: Optionally add rollback commands with --rollback <SQL command>, allowing automatic rollbacks when needed. You can see a simple example below:
--liquibase formatted sql
--changeset huseyinbabal:1
CREATE TABLE accounts (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(50) NOT NULL,
    created_at TIMESTAMP NOT NULL,
    last_login TIMESTAMP
);

--rollback DROP TABLE accounts;

Running the Application

When you run your Spring Boot application, Liquibase will automatically detect and apply any pending migrations to your PostgreSQL database. You’ll see output in the console indicating that the migrations have been successfully applied. You can run the application as follows.

./mvnw spring-boot:run

Rollback

If you need to roll back a migration, you can use the rollback command. For example, to roll back the last changeSet, you can run:

./mvnw liquibase:rollback \
-Dliquibase.rollbackCount=1 \
-Dliquibase.url="jdbc:postgresql://<host>:<port>/<db>" \
-Dliquibase.username="<username>" \
-Dliquibase.password="<password>" \
-Dliquibase.changeLogFile=src/main/resources/db/changelog/db.changelog-master.yaml \
-Dliquibase.driver=org.postgresql.Driver

This time we used the liquibase:rollback goal to roll back the last changeSet. You can also roll back to a specific tag by providing the liquibase.rollbackTag parameter. For more information, you can check the official documentation.
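A common tag-based flow (a sketch; it assumes the same connection properties shown in the rollback command above) is to tag the database at a release point and later roll back everything applied after that tag:

```shell
# Tag the current database state, e.g. at release time
./mvnw liquibase:tag -Dliquibase.tag=v1.0

# ...apply further migrations...

# Roll back all changeSets executed after the tag
./mvnw liquibase:rollback -Dliquibase.rollbackTag=v1.0
```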

Best Practices for Migration Files

To ensure smooth database migrations, follow these best practices:

  • Organize Changelogs: Use a master changelog file to keep a well-organized list of all changes.

  • Use Descriptive Names: Name changelog files descriptively to make it clear what each migration does.

  • Version Control Changelogs: Commit changelog files to version control to ensure that schema changes are applied consistently across environments.

Understanding the Liquibase Metadata Table

When you run migrations with Liquibase, it automatically creates two metadata tables in your database: DATABASECHANGELOG and DATABASECHANGELOGLOCK. These tables keep track of migration status and control, ensuring changes are applied in a consistent and non-conflicting manner. Let’s look at each table in detail.

DATABASECHANGELOG Table

The DATABASECHANGELOG table records every changeSet that has been successfully applied to the database, allowing Liquibase to know which changes have already been executed. Here’s an overview of its key columns:

  • ID: A unique identifier for each changeSet, as specified in the changelog file. This ID is combined with the AUTHOR and FILENAME columns to uniquely identify a changeSet.
  • AUTHOR: The author of the changeSet, which is specified in the changelog file.
  • FILENAME: The file path of the changelog file containing the changeSet. This helps differentiate between changeSets if multiple changelog files are used.
  • DATEEXECUTED: The timestamp indicating when the changeSet was applied. This helps you track when each change was made.
  • ORDEREXECUTED: A number indicating the order in which the changeSets were applied, starting from 1. This can help troubleshoot issues by showing the order of migrations.
  • EXECTYPE: Specifies how the changeSet was executed. Common values include EXECUTED (successful), RERAN, and FAILED, which indicate the status of each migration.
  • MD5SUM: A hash value representing the changeSet’s content. Liquibase uses this hash to detect if a changeSet has been modified after it was initially applied.
  • DESCRIPTION: A brief description of the change, based on the change type (e.g., createTable, addColumn).
  • COMMENTS: Any comments associated with the changeSet, as specified in the changelog.
  • TAG: Optional; a tag that can be assigned to a particular changeSet, often used to mark significant points in the schema history (e.g., a major release).
  • LIQUIBASE: The version of Liquibase used to apply the changeSet. This can be useful for tracking compatibility.
  • CONTEXT: An optional field that can be used to specify a context for the changeSet. This can help control which changeSets are applied based on the context.
  • LABELS: Optional; a comma-separated list of labels that can be assigned to a changeSet. Labels can be used to group changeSets based on common characteristics.
  • DEPLOYMENT_ID: An identifier that can be used to track deployments. This can be useful for auditing and tracking changes across environments.

The DATABASECHANGELOG table essentially serves as a ledger for Liquibase, ensuring it applies only new changeSets, avoiding duplications, and maintaining the history of changes.
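To inspect this ledger directly, you can query the table in PostgreSQL (a read-only sketch; PostgreSQL folds the column names described above to lower case):

```sql
SELECT id, author, filename, dateexecuted, orderexecuted, exectype, md5sum
FROM databasechangelog
ORDER BY orderexecuted;
```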

DATABASECHANGELOGLOCK Table

The DATABASECHANGELOGLOCK table controls the concurrency of migrations to prevent multiple processes from applying changes at the same time, which can lead to conflicts or corruption. It has fewer columns but plays a critical role in database integrity.

  • ID: A unique identifier for the lock. This is generally 1 because only one lock record is required.
  • LOCKED: A boolean value indicating if a Liquibase process currently holds the lock. When set to TRUE, it prevents other instances from running migrations simultaneously.
  • LOCKGRANTED: The timestamp when the lock was acquired. This can be useful for diagnosing if a lock has been held too long (e.g., if a migration process was interrupted).
  • LOCKEDBY: Information on the host or process that acquired the lock, often containing the hostname and IP address. This can help identify which process holds the lock if issues arise.

The DATABASECHANGELOGLOCK table ensures that only one Liquibase instance modifies the schema at a time, which is particularly useful in distributed systems or CI/CD environments where migrations might be triggered concurrently.

How These Tables Support Safe and Consistent Migrations

  • The DATABASECHANGELOG table ensures migrations run in a consistent sequence and that changes are not re-applied or skipped.
  • The DATABASECHANGELOGLOCK table prevents race conditions, helping avoid potential conflicts from simultaneous migrations.

Understanding these tables can help you troubleshoot migration issues and ensure Liquibase runs smoothly. Both tables are essential for Liquibase’s reliability and its ability to manage complex database schemas effectively.

Conclusion

By adopting Liquibase in your Spring Boot project, you can efficiently manage database changes, reduce deployment friction, and allow your team to focus more on innovation. Whether you’re deploying new features or managing complex database versions, Liquibase streamlines the process, making database management simpler, safer, and more collaborative.

Building Reactive CRUD APIs with Spring Boot, R2DBC, and PostgreSQL

· 7 min read
Huseyin BABAL
Software Developer

Introduction

In the world of modern web applications, responsiveness and scalability are paramount. Enter R2DBC (Reactive Relational Database Connectivity) - a game-changing paradigm that brings the power of reactive programming to your database operations. In this article, we'll explore how to build a high-performance, non-blocking CRUD API using Spring Boot 3, R2DBC, and PostgreSQL.

What is R2DBC?

R2DBC is a specification that defines a reactive API for relational databases. Unlike traditional JDBC, which uses a blocking approach, R2DBC enables non-blocking database operations, allowing your application to handle more concurrent connections with fewer threads. This makes it an excellent choice for building reactive applications that can scale efficiently.

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Creating Project

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a reactive car API. You can initialize a Spring Boot project by using the Spring Boot CLI. Once installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=web,data-r2dbc,postgresql,webflux \
--type=maven-project \
--javaVersion=21 \
spring-reactive-api

Line 2: web for implementing REST endpoints, data-r2dbc for reactive database persistence, webflux for reactive endpoints, and postgresql for the PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 specifies Java 21 to compile and run the project.

Now that we have initialized the project, go to the spring-reactive-api folder and open it in your favourite IDE.

Implementing Entity and Repository

We have only one entity here, Car, which will be used to store our cars. Let's create a new entity called Car as follows.

@Data
@Table
@AllArgsConstructor
@NoArgsConstructor
class Car {

    // @Id comes from org.springframework.data.annotation; Spring Data R2DBC
    // has no @GeneratedValue, the database generates the key (SERIAL column).
    @Id
    private Long id;
    private String make;
    private String model;
    private Integer year;
    private String color;
}

In order to manage Car entity in database, we will use following repository interface.

interface CarRepository extends R2dbcRepository<Car, Long> {}

Be sure that we are using R2dbcRepository instead of JpaRepository.

Application Configuration

This section contains application level configurations such as the application name and r2dbc as shown below:

application.yaml
spring:
  application:
    name: spring-reactive-api
  r2dbc:
    url: <connection-string-from-rapidapp|or your own managed postgres url>
    username: <username>
    password: <password>
  sql:
    init:
      mode: always

Line 5: Connection URL for the PostgreSQL database. You can obtain this from Rapidapp or your own managed PostgreSQL service. Note that R2DBC uses its own URL scheme, in the format r2dbc:postgresql://<host>:<port>/<database>.

Implementing Services for CRUD Operations

We will create a service class to handle CRUD operations for the Car entity. This class will interact with the repository.

@Service
class CarService {

    private final CarRepository carRepository;

    public CarService(CarRepository carRepository) {
        this.carRepository = carRepository;
    }

    public Flux<Car> getAllCars() {
        return carRepository.findAll();
    }

    public Mono<Car> getCarById(Long id) {
        return carRepository.findById(id);
    }

    public Mono<Car> createCar(Car car) {
        return carRepository.save(car);
    }

    public Mono<Car> updateCar(Long id, Car car) {
        // Saves only if a car with the given id exists
        return carRepository.findById(id)
                .flatMap(existingCar -> {
                    car.setId(id);
                    return carRepository.save(car);
                });
    }

    public Mono<Void> deleteCar(Long id) {
        return carRepository.deleteById(id);
    }
}

You are probably familiar with business service classes, but in this case the methods return Mono and Flux types; let's explain them.

Understanding Reactive Types: Mono and Flux

When working with reactive programming in Spring, you'll encounter two fundamental types: Mono and Flux. These are implementations of the Reactive Streams specification and are crucial for handling non-blocking operations.

Mono

A Mono<T> represents a stream that emits at most one item and then completes (successfully or with an error). Think of it as an asynchronous equivalent to:

  • A single value
  • No value (empty)
  • An error

// Example of different Mono scenarios
Mono<Car> carMono = carRepository.findById(1L); // 0 or 1 car
Mono<Car> newCarMono = carRepository.save(newCar); // Created car
Mono<Void> deleteMono = carRepository.deleteById(1L); // No return value

Flux

A Flux<T> represents a stream that can emit 0 to N items and then completes (successfully or with an error). It's ideal for handling:

  • Multiple values
  • Streams of data
  • Continuous updates

// Example of different Flux scenarios
Flux<Car> allCars = carRepository.findAll(); // 0 to N cars
Flux<Car> toyotaCars = carRepository.findByMake("Toyota"); // Filtered stream

How Spring Handles Non-Blocking Requests

Traditional (Blocking) Approach:

Client Request → Thread assigned → Database Operation → Thread waits → Response → Thread released

Reactive (Non-Blocking) Approach:

Client Request → Event Loop registers callback → Thread released →
Database Operation (async) → Callback triggered → Response
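The difference can be sketched with plain JDK classes (no Spring involved; findCarAsync is a hypothetical stand-in for an asynchronous database call):

```java
import java.util.concurrent.CompletableFuture;

public class NonBlockingSketch {
    // Stand-in for an async database call: returns immediately,
    // completes later on a worker thread.
    static CompletableFuture<String> findCarAsync(long id) {
        return CompletableFuture.supplyAsync(() -> "Car#" + id);
    }

    public static void main(String[] args) {
        // The calling thread registers a callback and is immediately free,
        // mirroring how an event loop handles other requests meanwhile.
        CompletableFuture<String> response =
                findCarAsync(1L).thenApply(car -> "Found " + car);
        System.out.println(response.join()); // join() only for the demo
    }
}
```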

Event Loop Model

Spring WebFlux uses an event loop model powered by Project Reactor:

  • Request Acceptance: When a request arrives, it's accepted by a small number of threads (typically one per CPU core).

  • Non-Blocking Processing: Instead of waiting for operations to complete, the thread registers callbacks and moves on to handle other requests.

@GetMapping("/{id}")
public Mono<Car> getCarById(@PathVariable Long id) {
    return carService.getCarById(id);
    // Thread doesn't wait here! It's free to handle other requests
}

  • Asynchronous Execution: Database operations occur asynchronously:

public Mono<Car> getCarById(Long id) {
    return carRepository.findById(id)
            .map(car -> {
                // This runs when data is available, not immediately
                log.info("Car found: {}", car);
                return car;
            })
            .defaultIfEmpty(/* handle not found case */);
}

Now that we have more insights about Mono and Flux, let's continue with our controller class.

Implementing Controller for REST Endpoints

We will create a controller class to handle REST endpoints for the Car entity. This class will interact with the service class.

@RestController
@RequestMapping("/api/cars")
public class CarController {

    private final CarService carService;

    public CarController(CarService carService) {
        this.carService = carService;
    }

    @GetMapping
    public Flux<Car> getAllCars() {
        return carService.getAllCars();
    }

    @GetMapping("/{id}")
    public Mono<Car> getCarById(@PathVariable Long id) {
        return carService.getCarById(id);
    }

    @PostMapping
    public Mono<Car> createCar(@RequestBody Car car) {
        return carService.createCar(car);
    }

    @PutMapping("/{id}")
    public Mono<Car> updateCar(@PathVariable Long id, @RequestBody Car car) {
        return carService.updateCar(id, car);
    }

    @DeleteMapping("/{id}")
    public Mono<Void> deleteCar(@PathVariable Long id) {
        return carService.deleteCar(id);
    }
}

Schema Preparation

Before running the application, you need to create a table in your PostgreSQL database to store the Car entity. You can use the following schema.sql in the resources folder to create the table.

src/main/resources/schema.sql
CREATE TABLE IF NOT EXISTS car (
    id SERIAL PRIMARY KEY,
    make VARCHAR(255) NOT NULL,
    model VARCHAR(255) NOT NULL,
    year INTEGER NOT NULL,
    color VARCHAR(255)
);

That's it! You have successfully created a Spring Reactive API with PostgreSQL as the database. You can now run the application as follows:

mvn spring-boot:run

Demo

Create a new car

curl -X POST http://localhost:8080/api/cars \
-H "Content-Type: application/json" \
-d '{"make":"Toyota","model":"Camry","year":2023,"color":"Silver"}'

Get all cars

curl http://localhost:8080/api/cars

Get a car by id

curl http://localhost:8080/api/cars/1

Update a car

curl -X PUT http://localhost:8080/api/cars/1 \
-H "Content-Type: application/json" \
-d '{"make":"Toyota","model":"Camry","year":2023,"color":"Blue"}'

Delete a car

curl -X DELETE http://localhost:8080/api/cars/1

Key Benefits of Using R2DBC

  • Non-blocking Operations: R2DBC enables fully non-blocking database interactions, improving application responsiveness.
  • Scalability: Handle more concurrent connections with fewer threads.
  • Backpressure Support: Built-in mechanisms to handle scenarios where producers are faster than consumers.
  • Integration with Reactive Streams: Seamlessly works with Spring WebFlux and other reactive components.
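The backpressure idea can be illustrated with the JDK's own Reactive Streams interfaces (java.util.concurrent.Flow): the subscriber paces the publisher by requesting one item at a time. This is a self-contained sketch of the concept, not R2DBC code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureSketch {
    // Consumes items one at a time: the subscriber controls the pace by
    // calling request(1) after each element, which is backpressure in action.
    public static List<Integer> consume(List<Integer> items) {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) { subscription = s; s.request(1); }
                public void onNext(Integer item) { received.add(item); subscription.request(1); }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            items.forEach(publisher::submit);
        } // close() triggers onComplete once all items are delivered
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }
}
```

Reactor's Flux builds on the same request(n) contract, which is how R2DBC avoids overwhelming slow consumers.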

Performance Considerations

While R2DBC offers significant advantages for reactive applications, it's important to note that it may not always outperform traditional JDBC in terms of raw throughput. The real benefits come in scenarios with:

  • High concurrency
  • Long-running queries
  • Applications that already use reactive programming models

Conclusion

R2DBC with Spring Boot 3 provides a powerful foundation for building reactive, scalable applications. By leveraging non-blocking database operations, you can create more responsive and resource-efficient services. As the R2DBC ecosystem continues to grow, it's becoming an increasingly attractive option for modern application development.

tip

You can find the complete source code for this project on GitHub.

Effortless REST API Development with Spring Data REST and PostgreSQL

· 5 min read
Huseyin BABAL
Software Developer

Introduction

In today’s fast-paced software development world, building efficient and flexible REST APIs is essential. Spring Data REST simplifies this process significantly by allowing developers to quickly expose repository-based data models as RESTful services with minimal configuration. In this article, we’ll explore how easy it is to create a REST API using Spring Data REST and PostgreSQL, allowing you to focus on your business logic instead of boilerplate code.

Why Spring Data REST?

Spring Data REST is built on top of Spring Data repositories and automatically exposes them as RESTful endpoints. Instead of manually writing controller logic to handle CRUD operations, Spring Data REST provides out-of-the-box support for:

  • Automatic generation of REST endpoints.
  • Hypermedia as the Engine of Application State (HATEOAS) support.
  • Pagination, sorting, and filtering.
  • Integration with Spring Data repositories for database interaction.

By leveraging Spring Data REST, you can rapidly create a fully functional REST API with minimal code, and focus on business rules, rather than implementing boilerplate REST operations.

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Creating Project

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a product API. You can initialize a Spring Boot project by using the Spring Boot CLI. Once installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=web,data-jpa,data-rest,postgresql \
--type=maven-project \
--javaVersion=21 \
spring-data-rest-example

Line 2: web for implementing REST endpoints, data-jpa for database persistence, data-rest for automatically exposing your data as fully featured REST endpoints, and postgresql for the PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 specifies Java 21 to compile and run the project.

Now that we have initialized the project, go to the spring-data-rest-example folder and open it in your favourite IDE.

Implementing Entity and Repository

We have only one entity here, Product, which will be used to store our products. Let's create a new entity called Product as follows.

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Data
class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String title;
    private BigDecimal price;
}

In order to manage the Product entity in the database, we will use the following repository interface. Note that the id type parameter must match the entity's Long id field.

interface ProductRepository extends CrudRepository<Product, Long> {}

Application Configuration

This section contains application level configurations such as the application name, datasource, and jpa as shown below:

application.yaml
spring:
  application:
    name: spring-data-rest-example
  datasource:
    url: <connection-string-from-rapidapp|or your own managed postgres url>
    username: <username>
    password: <password>
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update

Line 5: Connection URL for the PostgreSQL database. You can obtain this from Rapidapp or your own managed PostgreSQL service. It should have a format like jdbc:postgresql://<host>:<port>/<database>?sslmode=require.

That's it! You have successfully created a Spring Data REST API with PostgreSQL as the database. One second, we don't have controllers, services, etc. Yes, Spring Data REST takes care of all of these for you. It analyzes your repository interfaces and automatically exposes the entities they manage as REST endpoints; you can further customize the endpoints with the @RepositoryRestResource annotation, as shown later in this article.

You can now run the application as follows;

mvn spring-boot:run

If everything goes well, you can verify app by visiting http://localhost:8080/products in your browser.

{
  "_embedded" : {
    "products" : [ ]
  },
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/products"
    },
    "profile" : {
      "href" : "http://localhost:8080/profile/products"
    }
  }
}

REST Endpoints

Once the repository is created, Spring Data REST will automatically expose CRUD endpoints for the Product entity. These include:

  • GET /products: List all products.
  • GET /products/{id}: Retrieve a specific product.
  • POST /products: Create a new product.
  • PUT /products/{id}: Update an existing product.
  • DELETE /products/{id}: Delete a product.

There’s no need to write additional controller classes; the endpoints are provided automatically based on your repository definition.
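Pagination and sorting become available when the repository extends PagingAndSortingRepository (the plain CrudRepository used above exposes CRUD only). With such a repository, the collection endpoint accepts the standard paging parameters, for example:

```shell
curl "http://localhost:8080/products?page=0&size=10&sort=title,asc"
```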

Customizing the REST Behavior

Although Spring Data REST provides default behavior, you can customize the endpoints to suit your needs. For instance, you can modify the base path, limit the exposed operations, or add custom validation.

  • Customizing base paths: You can change the default endpoint path by adding the @RepositoryRestResource annotation to the repository:

@RepositoryRestResource(path = "product")
public interface ProductRepository extends JpaRepository<Product, Long> {
}

Now, the base URL for product-related endpoints will be /product instead of /products.

  • Exposing specific fields: You can expose or hide fields by using Spring Data REST’s projection feature:

@Projection(name = "customProduct", types = { Product.class })
interface CustomProduct {
    String getTitle();
}

This will limit the fields exposed in the API response for products, showing only the title field.

Demo

Create Product

curl -XPOST -H "Content-Type: application/json" \
http://localhost:8080/products -d \
'{"title": "Laptop", "price": 1000}'

List Products

curl -XGET http://localhost:8080/products

Update Product

curl -XPUT -H "Content-Type: application/json" \
http://localhost:8080/products/1 -d \
'{"title": "Laptop", "price": 1200}'

Delete Product

curl -XDELETE http://localhost:8080/products/1

Get Product

curl -XGET http://localhost:8080/products/1

Conclusion

Spring Data REST, combined with PostgreSQL, provides a fast and simple way to create fully functional REST APIs without the need for writing repetitive boilerplate code. With automatic CRUD operations, pagination, and HATEOAS support, you can focus on the unique aspects of your application and leave the rest to Spring Data REST. By adopting this approach, you can significantly reduce development time and quickly deliver high-quality APIs.

tip

You can find the complete source code for this project on GitHub.

Simplifying DB Auditing with Spring Boot and Javers

· 8 min read
Huseyin BABAL
Software Developer

Introduction

Database auditing is essential for tracking changes to data over time, providing a clear history of who changed what and when. It helps ensure compliance with security and legal standards, allowing developers to detect anomalies, recover data, or conduct audits efficiently. In this article, we will explore how to implement automatic database auditing in a Spring Boot application using Javers, a powerful Java library that simplifies object auditing and diffing.

What is Javers?

Javers is a Java library designed for auditing object changes, often in databases. By comparing versions of objects, Javers creates an audit trail of differences between them. Unlike traditional database auditing methods that rely on triggers or manually logging changes, Javers operates at the application layer, making it easier to track object changes while maintaining flexibility.

Use Cases for Javers

  • Tracking Entity Changes: Javers can record changes to your domain objects and persist those changes, enabling easy retrieval of past object versions.

  • Data Synchronization: It can be used in distributed systems to ensure data consistency across multiple systems by comparing snapshots of the same object.

  • Data Governance and Compliance: For businesses that need to comply with regulations like GDPR, Javers provides an efficient way to track and monitor changes.

Understanding Javers' Tables

Javers uses a set of database tables to store the change history. These tables include:

  • jv_commit: Stores information about who made changes, the timestamp, and the commit ID.
  • jv_snapshot: Records object state at the time of a commit.
  • jv_global_id: Manages global identifiers for entities.
  • jv_commit_property: Stores metadata related to the commit, like environment details.

These tables are automatically created when Javers is integrated with your Spring Boot project, allowing you to persist audit information without any manual setup.

In Javers, the type column is used to indicate the type of change captured in a snapshot. This column helps distinguish between various operations on an entity. Here's an explanation of the most common type values:

  • INITIAL: Indicates the first time an object is added to the database or tracked by Javers. This value represents the creation of the entity, meaning that there was no prior state to compare against.

  • UPDATE: Represents changes made to an existing object. When any field within the entity is modified, Javers captures the updated state and marks the change with this type. It indicates that this is not a new entity but an updated one.

  • TERMINAL: This value signifies that the entity has been deleted. When an object is removed from the system, Javers marks this event with a TERMINAL entry to signify the final state of the object before deletion.

  • INITIALIZED: This type appears in some cases to signify that a value was initialized or restored from a default state.
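A quick way to see these values is to query the snapshot table directly (a sketch; the exact column set can vary between Javers versions):

```sql
SELECT s.type, s.version, s.changed_properties
FROM jv_snapshot s
ORDER BY s.version;
```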

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Implementation

We will implement REST endpoints to create and update products, and then inspect the DB audit history produced by Javers. This involves implementing an entity, a repository interface, and a controller to create and update products.

Dependencies

Be sure you have the following dependencies installed using your favourite dependency management tool, e.g. Maven or Gradle.

pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <version>42.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.32</version>
    </dependency>
    <!-- This is for automatic db auditing with the integration to spring security -->
    <dependency>
        <groupId>org.javers</groupId>
        <artifactId>javers-spring-boot-starter-sql</artifactId>
        <version>7.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

Spring Web Security Configuration

In order to enable Spring Web Security and be able to send requests to the REST endpoints, you need to configure it in your SecurityConfig.java file as shown below. By default, Spring expects a CSRF token in every state-changing request; the following configuration disables CSRF protection.

SecurityConfig.java
@Configuration
@EnableWebSecurity
class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.csrf(AbstractHttpConfigurer::disable)
                .authorizeRequests(authorize -> authorize
                        .anyRequest().authenticated()
                )
                .httpBasic(withDefaults())
                .sessionManagement(session -> session.sessionCreationPolicy(SessionCreationPolicy.STATELESS));
        return http.build();
    }
}

Line 2: Add @EnableWebSecurity to the SecurityConfig class to protect the API endpoints.

Line 7: Disable CSRF protection.

Line 12: Set the session creation policy to STATELESS to ensure sessions are not maintained.

Add the Product Entity

Create a simple Product entity that will be audited by Javers.

Product.java
@Entity
@NoArgsConstructor
@AllArgsConstructor
@Data
class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
}

Implement the Product Repository

Create a simple ProductRepository interface whose operations will be audited by Javers.

ProductRepository.java
@JaversSpringDataAuditable
interface ProductRepository extends CrudRepository<Product, Long> {}

Line 1: Add @JaversSpringDataAuditable to the ProductRepository interface to enable Javers auditing for the product entity automatically.

Implement the Product Controller

Create a simple ProductController class for creating and updating products. We use the POST method for creating a product, and the PATCH method for updating a single field of it.

ProductController.java
@RestController
@RequestMapping("/products")
@RequiredArgsConstructor
class ProductController {

    private final ProductRepository productRepository;

    @PostMapping
    public Product createProduct() {
        Product product = new Product();
        product.setName("Product " + System.currentTimeMillis());
        return productRepository.save(product);
    }

    @PatchMapping
    public void updateProduct() {
        productRepository.findById(1L).ifPresent(product -> {
            product.setName("Product " + System.currentTimeMillis());
            productRepository.save(product);
        });
    }
}

Application Configuration

Next, configure database access in your application.yaml file. At a minimum, you need to specify the database connection details:

application.yaml
spring:
  application:
    name: spring-db-audit
  datasource:
    driver-class-name: org.postgresql.Driver
    url: jdbc:postgresql://<host>:<port>/<db>?sslmode=require&application_name=spring-db-audit&user=<user>&password=<password>
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
  security:
    user:
      name: developer
      password: s3cr3t

Line 13: Set the user property to developer and password to s3cr3t. We will use those credentials in curl requests to test the application.
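
The -u developer:s3cr3t flag makes curl send an HTTP Basic Authorization header, which Spring Security checks against the credentials configured above. A minimal sketch of how that header value is built (plain Java; the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {

    // Builds the value of the Authorization header for HTTP Basic auth:
    // "Basic " + base64(user + ":" + password)
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // This is what curl -u developer:s3cr3t sends:
        System.out.println("Authorization: " + basicAuth("developer", "s3cr3t"));
        // Authorization: Basic ZGV2ZWxvcGVyOnMzY3IzdA==
    }
}
```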

Demo

Create Product

curl -XPOST \
-u developer:s3cr3t -H \
"Content-Type: application/json" \
http://localhost:8080/products -v

Javers will detect the following changes in the Product entity and save them in the database as follows.

> select * from jv_commit;
commit_pk | author | commit_date | commit_date_instant | commit_id
-----------+--------+-------------------------+-----------------------------+-----------
100 | user | 2024-09-08 01:58:49.728 | 2024-09-07T22:58:49.728964Z | 1.00
> select * from jv_snapshot;
snapshot_pk | type | version | state | changed_properties | managed_type | global_id_fk | commit_fk
-------------+---------+---------+------------------------------------+--------------------+------------------------------------+--------------+-----------
100 | INITIAL | 1 | { +| [ +| com.rapidapp.springdbaudit.Product | 100 | 100
| | | "name": "Product 1725749929156",+| "name", +| | |
| | | "id": 1 +| "id" +| | |
| | | } | ] | | |

Since this is a create operation, it is saved as INITIAL. Let's update the product entity to see how Javers tracks its changes.

Update Product

curl -XPATCH \
-u developer:s3cr3t -H \
"Content-Type: application/json" \
http://localhost:8080/products -v

Javers will detect the following changes in the Product entity and save them in the database as follows.

> select * from jv_snapshot WHERE type='UPDATE';
snapshot_pk | type | version | state | changed_properties | managed_type | global_id_fk | commit_fk
-------------+--------+---------+------------------------------------+--------------------+------------------------------------+--------------+-----------
500 | UPDATE | 2 | { +| [ +| com.rapidapp.springdbaudit.Product | 100 | 400
| | | "name": "Product 1725777539405",+| "name" +| | |
| | | "id": 1 +| ] | | |
| | | } | | | |

As you can see:

  • The type field is UPDATE.
  • changed_properties contains name, telling us which field(s) changed.
  • We can follow the history through the version field.

So, who made this change? Let's take a look at the jv_commit table.

> select * from jv_commit order by commit_id desc limit 1;
commit_pk | author | commit_date | commit_date_instant | commit_id
-----------+--------+-------------------------+-----------------------------+-----------
400 | john | 2024-09-08 09:38:59.639 | 2024-09-08T06:38:59.639471Z | 5.00

The author field shows the owner of this change: in this case, john.

Conclusion

By using Javers with Spring Boot, database auditing becomes simple and flexible. Javers' ability to track object changes over time and store audit data in a structured way makes it a great choice for any application that requires a detailed history of data changes. With this implementation, you can now easily integrate automatic auditing in your Spring Boot applications.

tip

You can find the complete source code for this project on GitHub.

Streamlining Database Migrations with Spring Boot Flyway and PostgreSQL

· 5 min read
Huseyin BABAL
Software Developer

Introduction: Why Database Migrations Matter?

In the fast-paced world of software development, change is inevitable. As applications evolve, so too must the databases that support them. Whether you’re adding new features, optimizing performance, or fixing bugs, database schema changes are a critical part of the process. However, managing these changes manually can quickly become complex and error-prone, especially as your team grows or your application scales.

This is where database migrations come into play. Migrations allow you to apply consistent, repeatable changes to your database schema across all environments—development, testing, staging, and production. By automating this process, you reduce the risk of human error, ensure consistency, and maintain a clear history of how your database has evolved over time.

What is Flyway?

Flyway is an open-source database migration tool that makes it easy to manage and track schema changes. It works by applying incremental SQL or Java-based migrations in a controlled manner, ensuring that your database schema is always in sync with your application’s needs. Flyway integrates seamlessly with popular databases like PostgreSQL and frameworks like Spring Boot, making it a powerful tool for modern application development.

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Guide to Use Flyway Migrations

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a todo application. You can initialize a Spring Boot project by using the Spring Boot CLI. Once installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=flyway,data-jpa,postgresql \
--type=maven-project \
--javaVersion=21 \
flyway-migrations-demo

Line 2: flyway for Flyway integration, data-jpa for database persistence, and postgresql for the PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 to use Java 21.

Now that we initialized the project, go to the folder flyway-migrations-demo and open it with your favourite IDE.

Application Configuration

Next, configure Flyway in your application.properties file. At a minimum, you need to specify the database connection details:

application.properties
spring.datasource.url=jdbc:postgresql://<host>:<port>/<db>
spring.datasource.username=<user>
spring.datasource.password=<password>
spring.flyway.locations=classpath:db/migration

The spring.flyway.locations property specifies the location where Flyway will look for migration files. By default, this is classpath:db/migration.

Create Migration Files

Migration files are where you define the changes to your database schema. Each migration file has a unique version number and a descriptive name. For example:

V1__Add_user_table.sql
V2__Alter_user_table_add_email.sql
  • Version number: Start with "V" followed by a version number (V1, V2, etc.). This helps Flyway determine the order in which migrations should be applied.
  • Separator: Use double underscores __ to separate the version number from the description.
  • Description: Provide a brief description of the migration.

Here’s an example of a simple migration file that creates a users table (user is a reserved word in PostgreSQL, so we name the table users):

V1__Add_user_table.sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    password VARCHAR(100) NOT NULL,
    email VARCHAR(100) NOT NULL
);
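
The naming rules above can be made concrete with a small sketch that extracts the version and description from a migration file name and sorts migrations the way Flyway orders them. This is a simplification: real Flyway also accepts dotted versions like V1.1 and configurable prefixes, which this illustration ignores.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MigrationName {

    // Matches V<version>__<description>.sql, e.g. V2__Alter_user_table_add_email.sql
    private static final Pattern NAME = Pattern.compile("^V(\\d+)__(.+)\\.sql$");

    static int version(String fileName) {
        Matcher m = NAME.matcher(fileName);
        if (!m.matches()) throw new IllegalArgumentException("Not a versioned migration: " + fileName);
        return Integer.parseInt(m.group(1));
    }

    static String description(String fileName) {
        Matcher m = NAME.matcher(fileName);
        if (!m.matches()) throw new IllegalArgumentException("Not a versioned migration: " + fileName);
        return m.group(2).replace('_', ' ');
    }

    public static void main(String[] args) {
        List<String> files = new ArrayList<>(List.of(
                "V2__Alter_user_table_add_email.sql",
                "V1__Add_user_table.sql"));
        // Apply in ascending version order, as Flyway does
        files.sort(Comparator.comparingInt(MigrationName::version));
        System.out.println(files); // [V1__Add_user_table.sql, V2__Alter_user_table_add_email.sql]
        System.out.println(description(files.get(0))); // Add user table
    }
}
```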

Running Application

When you run your Spring Boot application, Flyway will automatically detect and apply any pending migrations to your PostgreSQL database. You’ll see output in the console indicating that the migrations have been successfully applied. You can run the application as follows.

./mvnw spring-boot:run

Best Practices for Migration Files

To ensure smooth database migrations, follow these best practices:

  • Keep Migrations Small and Incremental: Break down complex changes into smaller, manageable steps. This makes it easier to troubleshoot issues and roll back changes if necessary.

  • Use Descriptive Names: The name of each migration should clearly describe its purpose. This makes it easier to understand the history of changes at a glance.

  • Test Migrations Thoroughly: Before applying migrations to production, test them in a staging environment that closely mirrors production. This helps catch any issues early.

  • Avoid Direct Modifications in Production: Always use migrations to make changes to the database schema. Direct modifications can lead to inconsistencies and make it difficult to track changes.

  • Version Control Your Migrations: Store your migration files in version control along with your application code. This ensures that schema changes are tracked and can be rolled back if needed.

Understanding the Flyway Metadata Table

Flyway maintains a metadata table in your database, typically named flyway_schema_history, to track which migrations have been applied. This table contains information such as:

  • Version: The version number of the migration.
  • Description: The description of the migration.
  • Script: The name of the migration file.
  • Execution Time: How long the migration took to apply.
  • Status: Whether the migration was successful.

This table is crucial for managing and auditing your database schema. It ensures that migrations are only applied once and provides a clear history of changes.

Conclusion

Database migrations are a vital part of modern application development, enabling you to manage schema changes in a consistent, repeatable way. By integrating Flyway with Spring Boot and PostgreSQL, you can automate this process and reduce the risk of errors, ensuring that your database schema evolves alongside your application.

Streaming PostgreSQL Changes to Kafka with Debezium

· 8 min read
Huseyin BABAL
Software Developer

Introduction: Why Send Changes to Kafka

In modern distributed systems, keeping multiple services in sync and maintaining data consistency across microservices can be challenging. In a microservices architecture, it's crucial to have an efficient way to propagate database changes to other services in real time. One effective solution is to publish database changes to a message broker like Apache Kafka. Kafka acts as an intermediary that allows various services to subscribe to these changes and react accordingly. This approach ensures real-time data synchronization, reduces the complexity of direct service-to-service communication, and enhances the overall scalability and fault tolerance of the system.

Use Cases for Publishing Database Changes to Kafka

  • Real-Time Analytics: Feeding database changes to a real-time analytics system to provide up-to-the-minute insights.
  • Event-Driven Architecture: Enabling services to react to database changes, triggering workflows or business processes.
  • Cache Invalidation: Automatically invalidating or updating cache entries based on database changes to ensure consistency.
  • Data Replication: Replicating data across different data stores or geographic regions for redundancy and high availability.
  • Audit Logging: Keeping a comprehensive audit log of all changes made to the database for compliance and debugging purposes.

What is Debezium?

Debezium is an open-source distributed platform that captures database changes and streams them to Kafka in real-time. It leverages the database's transaction log to detect changes and publish them as events in Kafka topics. Debezium supports various databases, including PostgreSQL, MySQL, and MongoDB, making it a versatile choice for change data capture (CDC) needs.

PostgreSQL Configuration: Logical WAL Replication

In this article, we will be using PostgreSQL as our database with logical WAL replication enabled. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases with built-in logical WAL replication, simplifying setup and maintenance.

tip

Create a free database with built-in logical WAL replication in Rapidapp in seconds here

If you choose to maintain your own PostgreSQL database, you can enable logical WAL replication with the following PostgreSQL configuration.

postgresql.conf
...
wal_level = logical
...

You can see more details about WAL Level in PostgreSQL Documentation.

Deploying Debezium Connect with PostgreSQL Connection

There are several ways to deploy Debezium Connect; here we will use Docker to spin up a container as follows.

docker run --rm --name debezium \
-e BOOTSTRAP_SERVERS=<bootstrap_servers> \
-e GROUP_ID=1 \
-e CONFIG_STORAGE_TOPIC=connect_configs \
-e OFFSET_STORAGE_TOPIC=connect_offsets \
-e STATUS_STORAGE_TOPIC=connect_statuses \
-e ENABLE_DEBEZIUM_SCRIPTING='true' \
-e CONNECT_SASL_MECHANISM=SCRAM-SHA-256 \
-e CONNECT_SECURITY_PROTOCOL=SASL_SSL \
-e CONNECT_SASL_JAAS_CONFIG='org.apache.kafka.common.security.scram.ScramLoginModule required username="<username>" password="<password>";' \
-p 8083:8083 debezium/connect:2.7

BOOTSTRAP_SERVERS: The Kafka bootstrap servers. You can find this on the Upstash dashboard if you are using their managed Kafka.

CONNECT_SASL_JAAS_CONFIG: This part contains security module and username/password pair. You don't need to set this if you are not using Kafka with authentication. However, if you are using Kafka from Upstash, then you can find username and password values on Kafka cluster details page.

CONFIG_STORAGE_TOPIC: This environment variable is used to specify the Kafka topic where Debezium will store the connector properties.

OFFSET_STORAGE_TOPIC: This environment variable is used to specify the Kafka topic where Debezium will store the connector offsets.

STATUS_STORAGE_TOPIC: This environment variable is used to specify the Kafka topic where Debezium will store the connector statuses.

Debezium Connect is now running, but it has no connectors yet: nothing tracks our source (PostgreSQL), and nothing is published to our sink (Kafka).

We will also leverage two SaaS solutions:

  • Rapidapp for PostgreSQL: To quickly set up and manage our PostgreSQL database.
tip

Create a free database in Rapidapp Starter in seconds here

  • Upstash for Kafka: A managed Kafka service we use as the message broker.
tip

Create a free Kafka cluster in Upstash here

Adding Debezium Connector

You can add a new connector to Debezium Connect by using its REST API as follows.

curl --location 'http://localhost:8083/connectors' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{
  "name": "postgres-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "<pg_host>",
    "database.port": "<pg_port>",
    "database.user": "<pg_user>",
    "database.password": "<pg_pass>",
    "database.dbname": "<pg_db>",
    "database.server.id": "<unique_id>",
    "table.include.list": "<schema.table_name>",
    "topic.prefix": "<pg_topic>",
    "plugin.name": "pgoutput",
    "kafka.bootstrap.servers": "<kafka_host>:<kafka_port>",
    "kafka.topic.prefix": "<kafka_topic_prefix>"
  }
}'

Line 7: The connector class tells Debezium how to connect to the source.

Line 8-12: PostgreSQL connection properties. If you have used Rapidapp, you can grab the details on the Connection Properties tab of the database details page.

Line 13: A unique database server id, which Debezium uses to differentiate the sources.

Line 14: The list of tables that will be monitored by Debezium.

Line 16: Tells Debezium which logical decoding plugin to use to read changes from the PostgreSQL WAL.

Once the connector is created, you can verify it by listing the available connectors with the following:

curl -XGET http://localhost:8083/connectors

Step-by-Step Spring Boot Application Setup

In this section, we will implement a simple Spring Boot CRUD application where every modification in the PostgreSQL database is synchronized to Kafka automatically. This is especially useful when another service is interested in those changes. In our case, we will be maintaining Product information in a PostgreSQL database. Let's get started!

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build the application. You can initialize a Spring Boot project by using the Spring Boot CLI. Once installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=web,data-jpa,postgresql,lombok \
--type=maven-project \
--javaVersion=21 \
spring-pg-debezium

Line 2: web for implementing REST endpoints, data-jpa for database persistence, postgresql for the PostgreSQL driver, and lombok for reducing boilerplate.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 to use Java 21.

Implementing Entity and Repository

We have only one entity here, Product, which will be used to store product information. Let's create a new entity called Product as follows.

@Entity
@Data
@NoArgsConstructor
@AllArgsConstructor
class Product {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    @Column(name = "price", precision = 10, scale = 2)
    private BigDecimal price;
}

Line 2: Automatically enable getter/setter methods by using Lombok

Line 3: Generate no-arg constructor

Line 4: Generate constructor with all instance variables

Line 13: Define the price column, which accepts values with at most 10 total digits, 2 of which are decimal places, e.g. 12345678.99
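
The effect of scale = 2 can be seen with plain BigDecimal: a value with more decimal places is rounded to two when coerced to the column's scale. This is a sketch of the idea; HALF_UP mirrors how PostgreSQL's numeric type rounds ties away from zero on insert.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class PriceScale {

    // Coerces a value to the 2-decimal scale of a numeric(10, 2) column.
    static BigDecimal toColumnScale(BigDecimal value) {
        return value.setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(toColumnScale(new BigDecimal("37.3213"))); // 37.32
    }
}
```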

In order to manage Product entity in database, we will use following repository interface.

interface ProductRepository extends CrudRepository<Product, Long> {}

Implementing Rest Endpoints

We have one root endpoint, /api/v1/products, inside one controller, and implement three actions for create, update, and delete as follows.

@RestController
@RequestMapping("/api/v1/products")
@RequiredArgsConstructor
class ProductController {

    private final ProductRepository productRepository;

    @PostMapping
    void create(@RequestBody CreateProductRequest request) {
        Product product = new Product();
        product.setTitle(request.getTitle());
        product.setPrice(request.getPrice());
        productRepository.save(product);
    }

    @PatchMapping("/{id}")
    void update(@RequestBody UpdateProductRequest request, @PathVariable("id") Long id) {
        Product p = productRepository.findById(id).orElseThrow(() -> new EntityNotFoundException("Product not found"));
        p.setPrice(request.getPrice());
        productRepository.save(p);
    }

    @DeleteMapping("/{id}")
    void delete(@PathVariable("id") Long id) {
        productRepository.deleteById(id);
    }
}

create method accepts a request CreateProductRequest which contains title, and price information as shown below.

@Data
@NoArgsConstructor
@AllArgsConstructor
class CreateProductRequest {

    private String title;

    private BigDecimal price;
}

update is used to update product price, and it accepts a request as follows.

@Data
@NoArgsConstructor
@AllArgsConstructor
class UpdateProductRequest {

    private BigDecimal price;
}

Now that the persistence layer and REST endpoints are ready, we can configure the application.

Application Configuration

This section contains application level configurations such as the application name, datasource, and jpa as shown below:

application.yaml
spring:
  application:
    name: spring-pg-debezium
  datasource:
    url: <connection-string-from-rapidapp|or your own managed postgres url>
    username: <username>
    password: <password>
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update

Line 5: Connection URL for the PostgreSQL database. You can obtain this from Rapidapp or your own managed PostgreSQL service. It should have a format like jdbc:postgresql://<host>:<port>/<database>?sslmode=require.

Running Application

You can run the application as follows.

./mvnw spring-boot:run

Demo

Once you perform any of the following requests, the change will be published to the Kafka cluster, where you can consume and inspect the message.

Create Product

curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/products -d '{"title": "Blue Iphone", "price": "37.3213"}'

Update Product

curl -XPATCH -H "Content-Type: application/json" http://localhost:8080/api/v1/products/1 -d '{"price": "37.1213"}'
Delete Product

curl -XDELETE  http://localhost:8080/api/v1/products/1
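
Each message Debezium publishes is a change-event envelope whose op field encodes the operation: c (create), u (update), d (delete), and r (read, emitted during an initial snapshot). A consumer-side sketch of mapping those codes; the helper class is hypothetical, not a Debezium API:

```java
// Maps the "op" field of a Debezium change-event envelope to a
// human-readable operation name (illustrative consumer-side helper).
public class DebeziumOp {

    static String describe(String op) {
        switch (op) {
            case "c": return "CREATE"; // INSERT captured from the WAL
            case "u": return "UPDATE";
            case "d": return "DELETE";
            case "r": return "READ";   // row emitted during the initial snapshot
            default:  throw new IllegalArgumentException("Unknown op: " + op);
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("c")); // CREATE
        System.out.println(describe("d")); // DELETE
    }
}
```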

Conclusion

Integrating Debezium with PostgreSQL and Kafka in a Spring Boot environment allows you to efficiently stream database changes to various services. This setup not only enhances data consistency and real-time processing capabilities but also simplifies the architecture of your microservices. By following this guide, you can leverage the power of change data capture to build responsive and scalable applications.

tip

You can find the complete source code for this project on GitHub.

Building Location Based Search Service with Spring Boot PostgreSQL and PostGIS

· 12 min read
Huseyin BABAL
Software Developer

Introduction to Geospatial Data

Geospatial data, also known as spatial data, represents the physical location and shape of objects on the Earth's surface. It includes information such as latitude, longitude, altitude, and the spatial relationships between different objects. Geospatial data is used in a wide range of applications, from mapping and navigation to environmental monitoring and urban planning.

Use Cases for Geospatial Data

Geospatial data has numerous applications across various industries. Some common use cases include:

  • Navigation and Routing: GPS systems use geospatial data to provide real-time navigation and routing information.
  • Environmental Monitoring: Track changes in land use, deforestation, and urban sprawl using satellite imagery and geospatial analysis.
  • Urban Planning: Plan infrastructure projects, analyze traffic patterns, and manage public services using geospatial data.
  • Location-Based Services: Deliver personalized content, offers, and services based on a user's location.

Using Geospatial Data in PostgreSQL with PostGIS Extension

In this article, we will be using PostgreSQL as our database with PostGIS extension. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases with built-in postgis extension, simplifying setup and maintenance.

tip

Create a free database with built-in postgis extension in Rapidapp in seconds here

If you choose to maintain your own PostgreSQL database, you can enable the PostGIS extension per database with the following command:

CREATE EXTENSION postgis;

Step-by-Step Guide to Creating the Location-Based Search Service

One practical application of geospatial data is a geolocation search application, where users can find nearby points of interest within a specified radius. In this article, we will build a Spring Boot application that searches for cities within a specified radius of a given point.

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build the application. You can initialize a Spring Boot project by using the Spring Boot CLI. Once installed, you can use the following command to initialize a project with the required dependencies.

spring init \
--dependencies=web,data-jpa,postgresql,lombok \
--type=maven-project \
--javaVersion=21 \
spring-postgres-spatial

Line 2: web for implementing REST endpoints, data-jpa for database persistence, postgresql for the PostgreSQL driver, and lombok for reducing boilerplate.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21 to use Java 21.

There is one more dependency we need to add to enable the spatial features of Hibernate: hibernate-spatial. Open pom.xml and add the following dependency to the dependencies section.

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-spatial</artifactId>
    <version>6.5.2.Final</version>
</dependency>

Now that we initialized the project, go to the folder spring-postgres-spatial and open it with your favourite IDE.

Implementing Entity and Repository

We have only one entity here, City, which will be used to store city information including its location. Let's create a new entity called City as follows.

@Entity
@Data
@NoArgsConstructor
@AllArgsConstructor
class City {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    @Column(columnDefinition = "geography(Point, 4326)")
    private Point location;
}

Line 2: Automatically enable getter/setter methods by using Lombok

Line 3: Generate no-arg constructor

Line 4: Generate constructor with all instance variables

Line 13: This is for using special PostGIS data type geography described as follows;

geography: This indicates that the column will use the PostGIS geography data type, which is designed for storing geospatial data in a way that accounts for the Earth's curvature. This type is particularly useful for global, large-scale datasets where you want accurate distance and area calculations.

Point: Specifies that the data type for this column is a geographic point. Points are used to store coordinates (latitude and longitude).

4326: This is the Spatial Reference System Identifier (SRID) for WGS 84, which is the standard coordinate system used by GPS. SRID 4326 ensures that the coordinates are stored in a globally recognized format.

In order to manage City entity in database, we will use following repository interface.

interface CityRepository extends CrudRepository<City, Long> {
    @Query("SELECT c FROM City c WHERE function('ST_DWithin', c.location, :point, :distance) = true")
    Iterable<City> findNearestCities(Point point, double distance);
}

ST_DWithin returns true if two geometries are within a given distance of each other. In our case, the query returns the cities whose location column is within :distance meters of :point.
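
Because the column uses the geography type, ST_DWithin interprets :distance in meters on the Earth's surface. PostGIS computes distances on a spheroid; the haversine formula below is a close spherical approximation, shown only to illustrate the calculation behind "within :distance meters":

```java
public class GreatCircle {

    private static final double EARTH_RADIUS_M = 6_371_000.0; // mean Earth radius

    // Haversine great-circle distance between two lat/lng points, in meters.
    static double distanceMeters(double lat1, double lng1, double lat2, double lng2) {
        double phi1 = Math.toRadians(lat1), phi2 = Math.toRadians(lat2);
        double dPhi = Math.toRadians(lat2 - lat1);
        double dLambda = Math.toRadians(lng2 - lng1);
        double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
                + Math.cos(phi1) * Math.cos(phi2) * Math.sin(dLambda / 2) * Math.sin(dLambda / 2);
        return EARTH_RADIUS_M * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    public static void main(String[] args) {
        // Ankara (39.9208, 32.8541) to Bolu (40.5760, 31.5788)
        double d = distanceMeters(39.9208, 32.8541, 40.5760, 31.5788);
        System.out.printf("%.0f m%n", d); // roughly 130 km
    }
}
```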

Implementing Rest Endpoints

We have one root endpoint, /api/v1/cities, inside one controller, and implement three actions for create, list, and find nearest cities as follows.

@RestController
@RequestMapping("/api/v1/cities")
@RequiredArgsConstructor
class CityController {

    private final CityRepository cityRepository;

    private final GeometryFactory geometryFactory;

    @PostMapping
    void create(@RequestBody CreateCityRequest request) {
        Point point = geometryFactory.createPoint(new Coordinate(request.getLng(), request.getLat()));
        City city = new City();
        city.setName(request.getName());
        city.setLocation(point);
        cityRepository.save(city);
    }

    @GetMapping
    List<CityDto> findAll() {
        List<CityDto> cities = new ArrayList<>();
        cityRepository.findAll().forEach(c -> {
            cities.add(new CityDto(c.getName(), c.getLocation().getY(), c.getLocation().getX()));
        });
        return cities;
    }

    @GetMapping("/nearest")
    List<CityDto> findNearestCities(@RequestParam("lat") float lat, @RequestParam("lng") float lng, @RequestParam("distance") int distance) {
        List<CityDto> cities = new ArrayList<>();
        Point point = geometryFactory.createPoint(new Coordinate(lng, lat));
        cityRepository.findNearestCities(point, distance).forEach(c -> {
            cities.add(new CityDto(c.getName(), c.getLocation().getY(), c.getLocation().getX()));
        });
        return cities;
    }
}

Line 8: GeometryFactory comes from the JTS library (pulled in by hibernate-spatial) and is used for basic construction of geometric shapes; note that it must be exposed as a Spring bean to be injected here. In our case, we convert a latitude-longitude pair to a Point, which is then used in repository operations.

create method accepts a request CreateCityRequest which contains name, latitude and longitude information as shown below.

@AllArgsConstructor
@NoArgsConstructor
@Data
class CreateCityRequest {

    private String name;
    private double lat;
    private double lng;
}

findAll is used to list all available cities in the database.

findNearestCities is used for finding nearby cities for a given coordinate and radius (in meters).

Now that the persistence layer and REST endpoints are ready, we can configure the application.

Application Configuration

This section contains application level configurations such as the application name, datasource, and jpa as shown below:

application.yaml
spring:
  application:
    name: spring-postgres-spatial
  datasource:
    url: <connection-string-from-rapidapp|or your own managed postgres url>
    username: <username>
    password: <password>
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update

Line 5: Connection URL for the PostgreSQL database. You can obtain this from Rapidapp or your own managed PostgreSQL service. It should have a format like jdbc:postgresql://<host>:<port>/<database>?sslmode=require.

Running Application

You can run the application as follows.

./mvnw spring-boot:run

Demo

Create City

In this section, we will be creating the cities of Turkey.

Click to see create city requests
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Adana", "lat": "37.0000", "lng": "35.3213"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Adıyaman", "lat": "37.7648", "lng": "38.2786"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Afyonkarahisar", "lat": "38.7507", "lng": "30.5567"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Ağrı", "lat": "39.7191", "lng": "43.0503"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Amasya", "lat": "40.6499", "lng": "35.8353"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Ankara", "lat": "39.9208", "lng": "32.8541"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Antalya", "lat": "36.8841", "lng": "30.7056"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Artvin", "lat": "41.1828", "lng": "41.8183"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Aydın", "lat": "37.8560", "lng": "27.8416"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Balıkesir", "lat": "39.6484", "lng": "27.8826"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Bilecik", "lat": "40.0567", "lng": "30.0665"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Bingöl", "lat": "39.0626", "lng": "40.7696"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Bitlis", "lat": "38.3938", "lng": "42.1232"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Bolu", "lat": "40.5760", "lng": "31.5788"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Burdur", "lat": "37.4613", "lng": "30.0665"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Bursa", "lat": "40.2669", "lng": "29.0634"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Çanakkale", "lat": "40.1553", "lng": "26.4142"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Çankırı", "lat": "40.6013", "lng": "33.6134"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Çorum", "lat": "40.5506", "lng": "34.9556"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Denizli", "lat": "37.7765", "lng": "29.0864"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Diyarbakır", "lat": "37.9144", "lng": "40.2306"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Edirne", "lat": "41.6818", "lng": "26.5623"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Elâzığ", "lat": "38.6810", "lng": "39.2264"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Erzincan", "lat": "39.7500", "lng": "39.5000"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Erzurum", "lat": "39.9000", "lng": "41.2700"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Eskişehir", "lat": "39.7767", "lng": "30.5206"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Gaziantep", "lat": "37.0662", "lng": "37.3833"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Giresun", "lat": "40.9128", "lng": "38.3895"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Gümüşhane", "lat": "40.4386", "lng": "39.5086"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Hakkâri", "lat": "37.5833", "lng": "43.7333"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Hatay", "lat": "36.4018", "lng": "36.3498"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Isparta", "lat": "37.7648", "lng": "30.5566"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Mersin", "lat": "36.8000", "lng": "34.6333"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "İstanbul", "lat": "41.0053", "lng": "28.9770"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "İzmir", "lat": "38.4189", "lng": "27.1287"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kars", "lat": "40.6167", "lng": "43.1000"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kastamonu", "lat": "41.3887", "lng": "33.7827"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kayseri", "lat": "38.7312", "lng": "35.4787"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kırklareli", "lat": "41.7333", "lng": "27.2167"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kırşehir", "lat": "39.1425", "lng": "34.1709"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kocaeli", "lat": "40.8533", "lng": "29.8815"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Konya", "lat": "37.8667", "lng": "32.4833"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kütahya", "lat": "39.4167", "lng": "29.9833"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Malatya", "lat": "38.3552", "lng": "38.3095"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Manisa", "lat": "38.6191", "lng": "27.4289"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kahramanmaraş", "lat": "37.5858", "lng": "36.9371"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Mardin", "lat": "37.3212", "lng": "40.7245"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Muğla", "lat": "37.2153", "lng": "28.3636"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Muş", "lat": "38.9462", "lng": "41.7539"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Nevşehir", "lat": "38.6939", "lng": "34.6857"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Niğde", "lat": "37.9667", "lng": "34.6833"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Ordu", "lat": "40.9839", "lng": "37.8764"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Rize", "lat": "41.0201", "lng": "40.5234"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Sakarya", "lat": "40.6940", "lng": "30.4358"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Samsun", "lat": "41.2928", "lng": "36.3313"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Siirt", "lat": "37.9333", "lng": "41.9500"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Sinop", "lat": "42.0231", "lng": "35.1531"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Sivas", "lat": "39.7477", "lng": "37.0179"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Tekirdağ", "lat": "40.9833", "lng": "27.5167"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Tokat", "lat": "40.3167", "lng": "36.5500"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Trabzon", "lat": "41.0015", "lng": "39.7178"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Tunceli", "lat": "39.3074", "lng": "39.4388"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Şanlıurfa", "lat": "37.1591", "lng": "38.7969"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Uşak", "lat": "38.6823", "lng": "29.4082"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Van", "lat": "38.4891", "lng": "43.4089"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Yozgat", "lat": "39.8181", "lng": "34.8147"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Zonguldak", "lat": "41.4564", "lng": "31.7987"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Aksaray", "lat": "38.3687", "lng": "34.0370"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Bayburt", "lat": "40.2552", "lng": "40.2249"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Karaman", "lat": "37.1759", "lng": "33.2287"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kırıkkale", "lat": "39.8468", "lng": "33.5153"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Batman", "lat": "37.8812", "lng": "41.1351"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Şırnak", "lat": "37.4187", "lng": "42.4918"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Bartın", "lat": "41.5811", "lng": "32.4610"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Ardahan", "lat": "41.1105", "lng": "42.7022"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Iğdır", "lat": "39.8880", "lng": "44.0048"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Yalova", "lat": "40.6500", "lng": "29.2667"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Karabük", "lat": "41.2061", "lng": "32.6204"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Kilis", "lat": "36.7184", "lng": "37.1212"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Osmaniye", "lat": "37.2130", "lng": "36.1763"}'
curl -XPOST -H "Content-Type: application/json" http://localhost:8080/api/v1/cities -d '{"name": "Düzce", "lat": "40.8438", "lng": "31.1565"}'

List Cities

curl -XGET http://localhost:8080/api/v1/cities

Find Nearest Cities

To find the nearest cities to Ankara within a 300 km radius, you can use the following:

curl -XGET http://localhost:8080/api/v1/cities/nearest\?lat\=39.9208\&lng\=32.8541\&distance\=300000

Conclusion

In this article, we explored the power of geospatial data and how to effectively utilize it within a Spring Boot application using PostgreSQL with the PostGIS extension. We covered the fundamental concepts of geospatial data, the benefits of using PostGIS for geospatial operations, and real-world use cases such as navigation, environmental monitoring, urban planning, and location-based services.

tip

You can find the complete source code for this project on GitHub.

Create and Deploy Spring Boot Todo App to Google Cloud Run

· 5 min read
Huseyin BABAL
Software Developer

Introduction

In the rapidly evolving world of software development, deploying applications in a scalable and efficient manner is critical. With the rise of cloud computing, services like Google Cloud Run have become essential for developers looking to deploy containerized applications quickly and effortlessly. In this blog post, we'll walk through deploying a simple todo app built with Spring Boot and PostgreSQL to Google Cloud Run. We'll cover setting up the project, integrating PostgreSQL, and deploying to the cloud, ensuring your app is ready to handle varying loads efficiently.

Why Connection Pooling is Essential for Serverless?

When deploying applications in a serverless environment like Google Cloud Run, managing database connections efficiently becomes crucial. Traditional connection management can lead to issues such as exhausting database connections, especially under load. This is where PgBouncer, a lightweight connection pooler for PostgreSQL, comes into play. It optimizes the usage of database connections, reducing latency and improving the performance of your serverless app. Additionally, it ensures that the application can handle sudden spikes in traffic without overwhelming the database.

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database with connection pooling support for the serverless use-cases in Rapidapp in seconds here

Step-by-Step Guide to Creating the Todo App

Project Initialization and Dependencies

We will be using Spring Boot and PostgreSQL to build a todo application. You can initialize a Spring Boot project by using the Spring Boot CLI. Once installed, use the following command to initialize a project with the required dependencies.

spring init \
  --dependencies=web,data-jpa,postgresql \
  --type=maven-project \
  --javaVersion=21 \
  cloud-run-todo

Line 2: web for implementing REST endpoints, data-jpa for database persistence, and postgresql for the PostgreSQL driver.

Line 3: --type=maven-project for creating a Maven project.

Line 4: --javaVersion=21, since we will use Java 21 in the Google Cloud Run environment.

Now that we have initialized the project, go to the cloud-run-todo folder and open it in your favourite IDE.

Implementing Entity and Repository

We have only one entity here, Todo, which stores our todo items. Create it as follows.

@Entity
class Todo {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;
    private String description;
    private Boolean completed;

    public Todo(String description, Boolean completed) {
        this.description = description;
        this.completed = completed;
    }

    public Todo() {
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    public Boolean getCompleted() {
        return completed;
    }

    public void setCompleted(Boolean completed) {
        this.completed = completed;
    }
}

In order to manage the Todo entity in the database, we will use the following repository interface.

interface TodoRepository extends CrudRepository<Todo, Integer>{}

TodoRepository will be used to perform CRUD operations on the Todo entity.

Implementing Rest Endpoints

Since we have only one entity, we will have a single root endpoint, /api/v1/todos, inside one controller, with two actions for creating and listing todo entities, as follows:

@RestController
@RequestMapping("/api/v1/todos")
class TodoController {

    private final TodoRepository todoRepository;

    TodoController(TodoRepository todoRepository) {
        this.todoRepository = todoRepository;
    }

    @PostMapping
    void create(@RequestBody CreateTodoRequest request) {
        this.todoRepository.save(new Todo(request.getDescription(), false));
    }

    @GetMapping
    Iterable<Todo> list() {
        return this.todoRepository.findAll();
    }
}

The create method accepts a CreateTodoRequest payload, as shown below.

class CreateTodoRequest {

    private String description;

    public CreateTodoRequest(String description) {
        this.description = description;
    }

    public CreateTodoRequest() {
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}

Now that the persistence layer and REST endpoints are ready, we can configure the application.

Application Configuration

In a serverless environment, it is best practice to read the port from the PORT environment variable, since it is managed by the serverless provider. We can add the following configuration to application.properties:

application.properties
server.port=${PORT:8080}

By doing this, if a PORT environment variable exists, it takes precedence over the default value of 8080. To create tables from entities automatically, we can use the following config:

application.properties
spring.jpa.hibernate.ddl-auto=update
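The semantics of the ${PORT:8080} placeholder shown earlier can be sketched as a plain environment-variable lookup with a fallback. The class below is only an illustration of that resolution logic; Spring performs it for you.

```java
import java.util.Map;

// Sketch of what Spring's ${PORT:8080} placeholder resolves to:
// use the PORT environment variable if present, otherwise fall back to 8080.
// Illustration only; Spring Boot handles this resolution itself.
public class PortResolution {
    static int resolvePort(Map<String, String> env) {
        String port = env.get("PORT");
        return (port != null) ? Integer.parseInt(port) : 8080;
    }

    public static void main(String[] args) {
        System.out.println(resolvePort(System.getenv()));
    }
}
```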

As a final step, we need to create a file called project.toml in the root of the project to tell Cloud Run to use Java 21:

project.toml
[[build.env]]
name = "GOOGLE_RUNTIME_VERSION"
value = "21"

Deploying to Google Cloud Run

We will be using the gcloud CLI to deploy our application to Google Cloud Run. Before running the deployment command, prepare the datasource URL, username, and password for PostgreSQL, which are passed as environment variables to the application. Use the following command to deploy:

gcloud run deploy \
  --source . \
  --update-env-vars SPRING_DATASOURCE_URL=jdbc:postgresql://<host>:<port>/<db>,SPRING_DATASOURCE_USERNAME=<user>,SPRING_DATASOURCE_PASSWORD=<password>

If you are using Rapidapp as your managed database, do not forget to use the Pooling Port as the port value so that connection pooling handles highly concurrent requests.

The command will prompt for the service name; press Enter to accept the default. It will also prompt for the region; select the number of the desired region. If there is no problem, it will deploy your application and print the service URL.

Demo

Create Todo

curl -XPOST -H "Content-Type: application/json" https://<your>.a.run.app/api/v1/todos -d '{"description": "buy milk"}'

List Todos

curl -XGET https://<your>.a.run.app/api/v1/todos

Conclusion

Deploying a Spring Boot application to Google Cloud Run is straightforward and efficient, allowing developers to leverage the power of serverless computing. By integrating PostgreSQL with connection pooling using PgBouncer and considering services like RapidApp, you can ensure your application is robust and scalable. With this guide, you're now equipped to deploy your todo app to the cloud, ready to handle real-world workloads with ease.

tip

You can find the complete source code for this project on GitHub.

Securing Your Spring Boot App with JWT Authentication

· 8 min read
Huseyin BABAL
Software Developer

Introduction

This article dives into securing a Spring Boot application using JSON Web Tokens (JWT) for authentication. We'll explore Spring Security, JWT fundamentals, and then implement a secure API with user registration, login, and access control. Our data will be persisted in a PostgreSQL database using Spring Data JPA.

Why Spring Security?

Spring Security is an industry-standard framework for securing Spring applications. It offers comprehensive features for authentication, authorization, and access control. By leveraging Spring Security, we can efficiently manage user access to our API endpoints.

JWT Authentication Explained

JWT is a token-based authentication mechanism. Unlike traditional session-based methods, JWT stores user information in a compact, self-contained token. This token is sent with every request, allowing the server to verify the user's identity without relying on server-side sessions.

Here's a breakdown of JWT's benefits:

  • Stateless: Removes the need for session management on the server.
  • Secure: Employs digital signatures to prevent tampering.
  • Flexible: Can be configured with various claims to store user information.
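Structurally, a JWT is three base64url-encoded segments joined by dots: header, payload, and signature; for HS256 the signature is an HMAC-SHA256 over the first two segments. The toy class below illustrates why a tampered token fails verification. It is a sketch only; in real code, use a library like jjwt as we do later in this article.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Toy illustration of the HS256 JWT structure:
// base64url(header) + "." + base64url(payload) + "." + base64url(HMAC-SHA256).
// Not a substitute for a real JWT library such as jjwt.
public class JwtSketch {

    static String b64(byte[] data) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(data);
    }

    static String sign(String headerJson, String payloadJson, byte[] secret) {
        try {
            String signingInput = b64(headerJson.getBytes(StandardCharsets.UTF_8))
                    + "." + b64(payloadJson.getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            return signingInput + "." + b64(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] secret = "demo-secret-do-not-use-in-production".getBytes(StandardCharsets.UTF_8);
        String token = sign("{\"alg\":\"HS256\",\"typ\":\"JWT\"}", "{\"sub\":\"admin\"}", secret);
        System.out.println(token); // three dot-separated base64url segments
    }
}
```

Because the signature depends on both the payload and the secret, changing either one produces a different third segment, which is exactly what the server checks during validation.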

Persistence Layer

In this article, we will be using PostgreSQL as our database. You can maintain your database in any database management system. For a convenient deployment option, consider cloud-based solutions like Rapidapp, which offers managed PostgreSQL databases, simplifying setup and maintenance.

tip

Create a free database in Rapidapp in seconds here

Step-by-Step Implementation

Dependencies

Make sure you have the following dependencies installed, using your favourite dependency management tool, e.g. Maven or Gradle.

pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <version>42.7.3</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.32</version>
    </dependency>
    <dependency>
        <groupId>io.jsonwebtoken</groupId>
        <artifactId>jjwt-api</artifactId>
        <version>0.12.5</version>
    </dependency>
    <dependency>
        <groupId>io.jsonwebtoken</groupId>
        <artifactId>jjwt-impl</artifactId>
        <version>0.12.5</version>
    </dependency>
    <dependency>
        <groupId>io.jsonwebtoken</groupId>
        <artifactId>jjwt-jackson</artifactId>
        <version>0.12.5</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

Enabling Spring Web Security

In order to enable Spring Web Security, you need to configure it in your SecurityConfig.java file as shown below.

SecurityConfig.java
@Configuration
@EnableWebSecurity
@RequiredArgsConstructor
public class SecurityConfig {

    private static final String[] AUTH_WHITELIST = {
            "/api/v1/auth/login",
            "/api/v1/auth/register"
    };

    private final JwtAuthFilter jwtAuthFilter;

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
                .csrf(AbstractHttpConfigurer::disable)
                .authorizeRequests(authorizeRequests ->
                        authorizeRequests
                                .requestMatchers(AUTH_WHITELIST).permitAll()
                                .anyRequest().authenticated()
                )
                .sessionManagement(sessionManagement ->
                        sessionManagement
                                .sessionCreationPolicy(SessionCreationPolicy.STATELESS))
                .addFilterBefore(jwtAuthFilter, UsernamePasswordAuthenticationFilter.class);
        return http.build();
    }
}

Line 2: Add @EnableWebSecurity to the SecurityConfig class to protect the API endpoints.

Line 6: Allow requests from the /api/v1/auth/login and /api/v1/auth/register endpoints without authentication.

Line 16: Disable CSRF protection, since JWT authentication is stateless.

Line 24: Set the session creation policy to STATELESS to ensure sessions are not maintained.

Line 25: Add the JwtAuthFilter to the security filter chain before the UsernamePasswordAuthenticationFilter. We will explain JwtAuthFilter class soon.

JWT Auth Filter

In order to enable JWT authentication, you need to configure it in your JwtAuthFilter.java file as shown below.

JwtAuthFilter.java
@Component
@RequiredArgsConstructor
public class JwtAuthFilter extends OncePerRequestFilter {

    private final JwtService jwtService;
    private final UserDetailsService userDetailsService;

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
        if (request.getServletPath().contains("/api/v1/auth")) {
            filterChain.doFilter(request, response);
            return;
        }

        final String authorizationHeader = request.getHeader("Authorization");
        final String jwtToken;
        final String email;

        if (authorizationHeader == null || !authorizationHeader.startsWith("Bearer ")) {
            filterChain.doFilter(request, response);
            return;
        }

        jwtToken = authorizationHeader.substring(7);
        email = jwtService.extractEmail(jwtToken);

        if (email != null && SecurityContextHolder.getContext().getAuthentication() == null) {
            UserDetails userDetails = userDetailsService.loadUserByUsername(email);
            if (jwtService.validateToken(jwtToken, userDetails)) {
                UsernamePasswordAuthenticationToken authenticationToken = new UsernamePasswordAuthenticationToken(userDetails, null, userDetails.getAuthorities());
                authenticationToken.setDetails(new WebAuthenticationDetailsSource().buildDetails(request));
                SecurityContextHolder.getContext().setAuthentication(authenticationToken);
            }
        }
        filterChain.doFilter(request, response);
    }
}

Line 10: Do not apply JWT auth filter for /api/v1/auth endpoints.

Line 24: Extract the JWT token from the Authorization header. Its format is Bearer <token>, so substring(7) drops the "Bearer " prefix.

Line 25: Extract email from the JWT token using JwtService which we will take a look at in the next section.

Line 28-32: Validate the JWT token using JwtService, load the user details from UserDetailsService, and store the authentication in SecurityContextHolder.

Implementing JWTService

This class contains all JWT related functionalities as shown below.

JwtService.java
@Service
public class JwtService {

    @Value("${jwt.secret}")
    private String secret;

    public String extractEmail(String jwtToken) {
        return extractClaim(jwtToken, Claims::getSubject);
    }

    public <T> T extractClaim(String jwtToken, Function<Claims, T> claimsResolver) {
        final Claims claims = extractAllClaims(jwtToken);
        return claimsResolver.apply(claims);
    }

    private Claims extractAllClaims(String jwtToken) {
        return Jwts.parser().verifyWith(getSigningKey()).build().parseSignedClaims(jwtToken).getPayload();
    }

    private SecretKey getSigningKey() {
        byte[] bytes = Decoders.BASE64.decode(secret);
        return Keys.hmacShaKeyFor(bytes);
    }

    public boolean validateToken(String jwtToken, UserDetails userDetails) {
        final String email = extractEmail(jwtToken);
        return email.equals(userDetails.getUsername()) && !isTokenExpired(jwtToken);
    }

    private boolean isTokenExpired(String jwtToken) {
        return extractExpiration(jwtToken).before(new Date());
    }

    private Date extractExpiration(String jwtToken) {
        return extractClaim(jwtToken, Claims::getExpiration);
    }

    public String generateToken(User u) {
        return createToken(u.getEmail());
    }

    private String createToken(String email) {
        return Jwts.builder()
                .subject(email)
                .issuedAt(new Date(System.currentTimeMillis()))
                .expiration(new Date(System.currentTimeMillis() + 1000 * 60 * 60 * 10)) // 10 hours
                .signWith(getSigningKey())
                .compact();
    }
}

Line 5: This is the secret key used to sign JWT tokens. It must be carefully protected and never shared or exposed publicly. All the other functions are self-explanatory.
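Since getSigningKey decodes the secret with Decoders.BASE64, the jwt.secret property is expected to be a Base64-encoded value. One possible way (an illustration, not the only option) to generate such a secret is to Base64-encode 32 random bytes, since HMAC-SHA256 keys should be at least 256 bits:

```java
import java.security.SecureRandom;
import java.util.Base64;

// Generates a Base64-encoded 256-bit random key, suitable as a value for
// a jwt.secret-style property. Illustration of one approach, not a mandate.
public class SecretGenerator {
    static String newSecret() {
        byte[] key = new byte[32]; // 256 bits for HMAC-SHA256
        new SecureRandom().nextBytes(key);
        return Base64.getEncoder().encodeToString(key);
    }

    public static void main(String[] args) {
        System.out.println(newSecret());
    }
}
```

Supply the generated value through an environment variable or secret manager rather than committing it to source control.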

UserDetailsService

UserDetailsService tells Spring Security how to load user details from the database. Our implementation is shown below.

UserDetailsService.java
@Service
@RequiredArgsConstructor
public class UserDetailService implements UserDetailsService {

    private final UserRepository userRepository;

    @Override
    public UserDetails loadUserByUsername(String email) throws UsernameNotFoundException {
        return userRepository.findByEmail(email)
                .map(user -> User.builder().username(user.getEmail())
                        .password(user.getPassword())
                        .build())
                .orElseThrow(() -> new UsernameNotFoundException("User not found"));
    }
}

Up to this point, we have focused on validating JWT tokens. But how do we generate them in the first place? That is what we will cover next.

Registering User

Before generating a JWT token to authenticate a user, we need to register that user. We will use AuthController to do so.

AuthController.java
@RestController
@RequestMapping(path = "api/v1/auth")
@RequiredArgsConstructor
public class AuthController {

    private final AuthService authService;

    @PostMapping(path = "/register")
    @ResponseStatus(HttpStatus.NO_CONTENT)
    public void register(@RequestBody RegisterRequest registerRequest) {
        authService.register(registerRequest);
    }

    @PostMapping(path = "/login")
    public ResponseEntity<String> login(@RequestBody LoginRequest loginRequest) {
        return ResponseEntity.ok(authService.login(loginRequest));
    }
}

In the above controller, we use AuthService to register and log in users. AuthService uses UserRepository to interact with the database for user-related operations.

AuthService.java
@Service
@RequiredArgsConstructor
public class AuthService {

    private final UserRepository userRepository;
    private final AuthenticationManager authenticationManager;
    private final JwtService jwtService;
    private final BCryptPasswordEncoder bCryptPasswordEncoder;

    public void register(RegisterRequest registerRequest) {
        User u = User.builder()
                .email(registerRequest.getEmail())
                .password(bCryptPasswordEncoder.encode(registerRequest.getPassword()))
                .firstName(registerRequest.getFirstName())
                .lastName(registerRequest.getLastName())
                .build();
        userRepository.save(u);
    }

    public String login(LoginRequest loginRequest) {
        authenticationManager.authenticate(new UsernamePasswordAuthenticationToken(loginRequest.getEmail(), loginRequest.getPassword()));
        User u = userRepository.findByEmail(loginRequest.getEmail()).orElseThrow(() -> new EntityNotFoundException("User not found"));
        return jwtService.generateToken(u);
    }
}

Line 10: Registers the user with the details provided in the request payload. The bCryptPasswordEncoder hashes the password before it is stored in the database.

Line 21: The login operation is done through authenticationManager since it knows how to validate username and password.

Restricted Access to UserController

Below is a sample implementation of a restricted endpoint for the user object.

UserController.java
@RestController
@RequestMapping(path = "api/v1")
public class UserController {

    private final UserRepository userRepository;

    public UserController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @GetMapping("/users")
    public List<User> getUsers() {
        return userRepository.findAll();
    }
}

Assume you registered a new user with email admin and password ssshhhh. To generate a JWT token, you can use the following curl request.

curl -X POST -H "Content-Type: application/json" \
-d '{"email": "admin", "password": "ssshhhh"}' http://localhost:8080/api/v1/auth/login

It will return a JWT token, which you can use to authenticate the user. Store it somewhere.

Now in order to access restricted user endpoint, you can use the following curl request.

curl -X GET -H "Authorization: Bearer <token>" http://localhost:8080/api/v1/users

Conclusion

This hands-on tutorial equipped you with the knowledge to implement JWT Authentication in your Spring Boot application. We explored user registration, login, and access control, leveraging Spring Security and JPA for data persistence. By following these steps and customizing the code examples to your specific needs, you can secure your API endpoints and ensure authorized user access. Remember to prioritize security best practices. Here are some additional points to consider:

  • Secret Key Management: Store your JWT secret key securely in environment variables or a dedicated secret management service. Never expose it in your codebase.
  • Token Expiration: Set a reasonable expiration time for JWT tokens to prevent unauthorized access due to compromised tokens.
  • Error Handling: Implement proper error handling mechanisms for invalid or expired tokens to provide informative feedback to users.
  • Advanced Features: Explore advanced JWT features like refresh tokens for longer-lived sessions and role-based access control (RBAC) for granular authorization.

With JWT authentication in place, your Spring Boot application is well on its way to becoming a secure and robust platform. Deploy it with confidence, knowing that user access is properly controlled.
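The token-expiration advice above boils down to comparing the token's exp claim (seconds since the Unix epoch) against the current time. Here is a minimal illustration with java.time; JWT libraries such as jjwt perform this check for you during parsing, so treat this as a sketch of the idea rather than production validation code.

```java
import java.time.Instant;

public class TokenExpiry {

    // Returns true when the exp claim (seconds since epoch) lies in the past.
    static boolean isExpired(long expEpochSeconds, Instant now) {
        return now.isAfter(Instant.ofEpochSecond(expEpochSeconds));
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        long expiresInOneHour = now.plusSeconds(3600).getEpochSecond();

        System.out.println(isExpired(expiresInOneHour, now)); // false: token still valid
        System.out.println(isExpired(0L, now));               // true: expired in 1970
    }
}
```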
tip

You can find the complete source code for this project on GitHub.

Boosting Spring Boot Performance, Implementing Second Level Cache with Redis

· 6 min read
Huseyin BABAL
Software Developer

Introduction

Caching is a fundamental technique to improve the performance of applications by storing frequently accessed data in memory, thereby reducing the need for repeated database queries. In Spring Boot, caching is often implemented at two levels: the first level cache and the second level cache.

First Level and Second Level Cache

First Level Cache: This is the cache associated with the Hibernate session. It is enabled by default and works only within the scope of a session, meaning that the data cached is only available within a single transaction.

Second Level Cache: This is a more sophisticated cache that works at the session factory level. It is shared among all sessions, making it possible to cache data across transactions. This type of cache can significantly reduce database load and improve application performance.
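Hibernate's real implementation is far more involved, but the core idea of a second level cache can be sketched in a few lines: every session consults one shared store before querying the database. The class below is a toy illustration of that read-through pattern, not Hibernate's API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Toy sketch of a second level cache: a single shared store consulted by
// all "sessions" before the database is hit.
public class SecondLevelCacheSketch {

    private final Map<Long, String> shared = new ConcurrentHashMap<>();
    int dbHits = 0; // counts how often we actually had to go to the database

    String load(Long id, Function<Long, String> db) {
        // computeIfAbsent only invokes the loader on a cache miss.
        return shared.computeIfAbsent(id, key -> {
            dbHits++;
            return db.apply(key);
        });
    }

    public static void main(String[] args) {
        SecondLevelCacheSketch cache = new SecondLevelCacheSketch();
        Function<Long, String> db = id -> "product-" + id; // stands in for a SQL query

        cache.load(1L, db); // first call goes to the "database"
        cache.load(1L, db); // second call is served from the shared cache
        System.out.println(cache.dbHits); // 1
    }
}
```

In the real setup below, Redis plays the role of the shared store, so the cached entries survive restarts of a single node and are visible to every application instance.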

In this blog post, we will focus on integrating the second level cache in a Spring Boot application using Redis.

Spring Starter Projects and Database Integration

For our Spring Boot application, we will use the following starter projects:

  • Spring Data JPA: For ORM and database interactions.
  • PostgreSQL: As our relational database.
  • Redis: For caching purposes.

We will also leverage two SaaS solutions:

  • Rapidapp for PostgreSQL: To quickly set up and manage our PostgreSQL database.
tip

Create a free database in Rapidapp Starter in seconds here

  • Upstash Redis: A managed Redis service optimized for low-latency data caching.
tip

Create a free Redis database in Upstash here

Spring Boot Application Setup

Application

This section contains application-level configurations such as the application name, datasource, and JPA settings, as shown below:

application.yaml
spring:
  application:
    name: sb-l2-redis
  datasource:
    driver-class-name: org.postgresql.Driver
    url: <connection-string-from-rapidapp|or your own managed postgres url>
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
    show-sql: true
    properties:
      hibernate:
        cache:
          use_second_level_cache: true

Line 5: JDBC driver class for PostgreSQL.

Line 6: Connection URL for the PostgreSQL database. You can obtain this from Rapidapp or your own managed PostgreSQL service. It should have a format like jdbc:postgresql://<host>:<port>/<database>?user=<user>&password=<password>&sslmode=require.

warning

We use connection string here for demo purposes, but it is not secure to use connection string in application.yaml. You should use environment variables (SPRING_DATASOURCE_URL) or secrets management tools to store sensitive information.

Line 10: This creates tables in the PostgreSQL database automatically from the entity definitions.

Line 15: Enables the second level cache in Hibernate. We will revisit this configuration later.

Entities

Let's create a simple Product entity that will be stored in the PostgreSQL database.

Product.java
@Builder
@Data
@AllArgsConstructor
@NoArgsConstructor
@Entity
@Cache(region = "products", usage = CacheConcurrencyStrategy.READ_WRITE)
class Product {

    @Id
    @GeneratedValue
    private Long id;
    private String name;
    private String description;
    private BigDecimal price;
}
  1. @Entity annotation is used to mark the class as a JPA entity.
  2. @Cache annotation is used to enable caching for the entity.
  • region attribute is used to specify the cache name.
  • usage attribute is used to specify the cache concurrency strategy.

The remaining annotations come from the Lombok library and automatically generate getters, setters, a builder, and constructors.

Repository

Next, we will create a ProductRepository interface that extends JpaRepository to interact with the Product entity.

ProductRepository.java
interface ProductRepository extends JpaRepository<Product, Long> {}

Controller

Finally, let's set up a simple Spring Boot application endpoint. We will create a ProductController with endpoints to create, get, and list products.

ProductController.java
@RestController
@RequestMapping("/products")
class ProductController {

    private final ProductRepository productRepository;

    public ProductController(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    @GetMapping
    public Iterable<Product> getProducts() {
        return productRepository.findAll();
    }

    @GetMapping("/{id}")
    public Product getProduct(@PathVariable Long id) {
        return productRepository.findById(id).orElseThrow();
    }

    @PostMapping
    public Product createProduct(@RequestBody CreateProductRequest request) {
        Product product = Product.builder()
                .name(request.getName())
                .description(request.getDescription())
                .price(request.getPrice())
                .build();
        return productRepository.save(product);
    }
}

Generate Data

Right after running the application, we can generate some sample data with the following curl command.

curl \
--parallel \
-XPOST \
-H "Content-Type: application/json" \
http://localhost:8080/products\?\[1-200\] \
-d '{"name": "book", "description": "desc", "price": "100.3"}'

Line 2: The --parallel flag is used to send multiple requests in parallel.

Line 5: [1-200] is curl's URL globbing syntax, used here to send the same payload 200 times.

Since we have spring.jpa.show-sql: true in our application.yaml, we can see 200 insert SQL statements in the console. In the same way, we can see select SQL statements when we call the GET /products/{id} endpoint. This is where the second level cache comes into play.

Implementing Second Level Cache with Redis

Redis Client Configuration

First, we need to configure the Redis client in our Spring Boot application. We will use redisson as the Redis client library.

redisson.yaml
singleServerConfig:
  idleConnectionTimeout: 10000
  connectTimeout: 10000
  timeout: 3000
  retryAttempts: 3
  retryInterval: 1500
  password: <redis_password>
  subscriptionsPerConnection: 5
  clientName: sb-l2-redis
  address: <redis_address>
  subscriptionConnectionMinimumIdleSize: 1
  subscriptionConnectionPoolSize: 50
  connectionMinimumIdleSize: 10
  connectionPoolSize: 64
  database: 0
  dnsMonitoringInterval: 5000

Line 7: Redis password. You can obtain this from Upstash or your own managed Redis service.

Line 10: Redis server address. You can obtain this from Upstash or your own managed Redis service.

You can see the other configuration internals here. Now that we have the Redis client configuration, and with the help of spring.jpa.properties.hibernate.cache.use_second_level_cache: true in the application.yaml file, we can enable the second level cache in Hibernate. The cache provider class is automatically injected into the application with the following dependency.

pom.xml
<dependency>
<groupId>org.redisson</groupId>
<artifactId>redisson-hibernate-53</artifactId>
<version>3.30.0</version>
</dependency>

Cache Verification

After running the application, we can see that the select SQL statements are not executed when we call the GET /products/{id} endpoint a second time. This is because the data is fetched from the Redis cache instead of the database.

Conclusion

Implementing a second level cache in your Spring Boot application using Redis can significantly improve performance by reducing database load. With the combination of Spring Data JPA, PostgreSQL, and Redis, you can achieve a scalable and efficient caching solution. Using managed services like Rapidapp for PostgreSQL and Upstash for Redis further simplifies the setup and management of your infrastructure.

By following the steps outlined in this blog post, you can easily integrate a second level cache in your Spring Boot application and enjoy the benefits of faster data access and reduced database queries.

tip

You can find the complete source code for this project on GitHub.