Logging in Spring Boot 3: Best Practices for Configuration and Implementation



Logging in a Spring Boot 3 application is essential for monitoring system behavior, diagnosing issues, and analyzing performance. Well-structured logging helps maintain a clear view of the application flow, identify anomalies, and optimize debugging. However, improper use of logging can negatively impact application performance and security. In this article, we will explore best practices for configuring and writing logs effectively, ensuring scalability, readability, and compliance with data protection regulations.

Installing Lombok for Logging

Lombok is a library that simplifies boilerplate code management in Java, including logger declarations. If you want to use @Slf4j to handle logs in a cleaner way, follow these steps to install Lombok.

Adding Lombok as a Dependency

If you use Maven, add this dependency to your pom.xml (with spring-boot-starter-parent the Lombok version is managed for you; otherwise add a <version> element):

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <scope>provided</scope>
</dependency>

If you use Gradle, add this dependency to your build.gradle:

dependencies {
    compileOnly 'org.projectlombok:lombok'
    annotationProcessor 'org.projectlombok:lombok'
}

Enabling Annotation Processing in Your IDE

For IntelliJ IDEA:

  • Go to File β†’ Settings β†’ Build, Execution, Deployment β†’ Compiler β†’ Annotation Processors.
  • Select Enable annotation processing.
  • Rebuild the project.

For Eclipse:

  • Install the Lombok plugin if it's not already installed.
  • Navigate to Window β†’ Preferences β†’ Java β†’ Compiler β†’ Annotation Processing.
  • Enable Annotation processing.
  • Rebuild the project.

Manually Installing Lombok

If Lombok is not recognized by your IDE, you can install it manually. Run the following command:

java -jar lombok.jar
  • This will open a popup that allows you to configure Lombok for your IDE.
  • Follow the instructions and restart your IDE.

Using SLF4J and Logback

Spring Boot uses SLF4J as its logging API and Logback as its default implementation. This allows for more efficient and scalable logging compared to using System.out.println() or other less optimized techniques.

Basic Configuration

Spring Boot includes Logback by default. However, if you want to explicitly configure Logback, add a logback-spring.xml file in src/main/resources/:

<configuration>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/app-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="com.example" level="DEBUG"/>

    <root level="INFO">
        <appender-ref ref="FILE"/>
    </root>

</configuration>


Declaring the Logger in Code

To write logs, use SLF4J in combination with Lombok to avoid boilerplate code.

With Lombok

import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Slf4j
@Service
public class OrderService {

    public void processOrder(Long orderId) {
        log.info("Starting order processing: {}", orderId);
        try {
            // Simulating order processing
            log.debug("Detailed processing for order {}", orderId);
            log.info("Order {} processed successfully", orderId);
        } catch (Exception e) {
            log.error("Error processing order {}", orderId, e);
        }
    }
}


Without Lombok

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    // Equivalent to the field generated by @Slf4j, but declared by hand
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);
}


When to Use INFO, DEBUG, WARN, and ERROR

Each log level has a specific purpose:

  • TRACE: Very detailed information for advanced debugging.
  • DEBUG: Useful messages for development and detailed analysis.
  • INFO: Key information about normal system operations.
  • WARN: Potential issues that do not stop execution.
  • ERROR: Critical errors that prevent an operation from completing.

Enabling Behavior

When a log level is enabled, all more severe levels are automatically enabled as well.
Example:

  • If you enable INFO, messages at the INFO, WARN, ERROR (and FATAL, if present) levels will be logged.
  • If you enable DEBUG, messages at the DEBUG, INFO, WARN, ERROR levels will be logged.
  • If you enable ERROR, only messages at the ERROR level will be logged.
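
In Spring Boot, these thresholds can also be set per package in application.properties, without touching logback-spring.xml. A minimal sketch (the com.example package name is illustrative):

logging.level.root=INFO
logging.level.com.example=DEBUG
logging.level.org.springframework.web=WARN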

Logging at the Start and End of a Method

Writing logs at the beginning and end of a method is useful for tracking execution flow, but excessive logging can slow down the system.

Good Practice

public void processPayment(Long transactionId) {
    log.info("[START] Processing payment for transaction: {}", transactionId);
    try {
        log.debug("Checking fund availability for transaction: {}", transactionId);
    } catch (Exception e) {
        log.error("Payment error for transaction: {}", transactionId, e);
    }
    log.info("[END] Payment completed for transaction: {}", transactionId);
}
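
If the same flow produces many intermediate log lines, repeating the identifier in every message gets noisy. SLF4J's MDC (Mapped Diagnostic Context) can carry it instead; a minimal sketch, assuming a %X{transactionId} token is added to the Logback pattern:

import org.slf4j.MDC;

public void processPayment(Long transactionId) {
    MDC.put("transactionId", String.valueOf(transactionId)); // attached to every log line on this thread
    try {
        log.info("[START] Processing payment");
        // ... business logic ...
        log.info("[END] Payment completed");
    } finally {
        MDC.remove("transactionId"); // always clean up so the value does not leak into other requests
    }
}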


Log Management in Production Environments

In production, it's important to avoid overly detailed logs that can impact performance. A useful approach is configuring logs so that TRACE and DEBUG levels are only enabled in development environments, while INFO, WARN, and ERROR should be the primary levels in production. You can achieve this by configuring logback-spring.xml as follows:

<configuration>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- LOG_LEVEL is read from the environment; INFO is the default when it is not set -->
    <root level="${LOG_LEVEL:-INFO}">
        <appender-ref ref="CONSOLE"/>
    </root>

</configuration>


This configuration allows you to manage the logging level via the LOG_LEVEL environment variable, which can be set during deployment (e.g., through environment variables or configuration files).
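
Alternatively, because the file is named logback-spring.xml, you can switch levels per Spring profile directly inside it. A minimal sketch, assuming dev and prod profiles:

<springProfile name="dev">
    <root level="DEBUG">
        <appender-ref ref="CONSOLE"/>
    </root>
</springProfile>

<springProfile name="prod">
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</springProfile>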

Rotating Log Files

Log rotation with Logback, shown earlier, is a good practice. It is also essential to limit how many log files are retained and how large each file can grow, to control disk space usage. You can do this with the maxHistory and maxFileSize properties:

<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>logs/app-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
    <maxHistory>30</maxHistory>
    <maxFileSize>10MB</maxFileSize>
</rollingPolicy>



This setup ensures that log files are rotated based on both date and size.
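
If total disk usage is the main concern, the same policy also accepts a totalSizeCap property, which deletes the oldest archives once the overall cap is exceeded (the 1GB value is illustrative):

<totalSizeCap>1GB</totalSizeCap>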

Structured Logging (JSON Format)

For advanced log management, especially when using analytics tools like ELK (Elasticsearch, Logstash, Kibana) or Splunk, it's recommended to adopt structured JSON logs. You can configure Logback to write logs in JSON format as follows:

<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
    <pattern>{"timestamp":"%date{yyyy-MM-dd'T'HH:mm:ss.SSSZ}","level":"%level","logger":"%logger","message":"%message","thread":"%thread"}%n</pattern>
</encoder>


This approach makes logs easily searchable and analyzable in monitoring and aggregation systems. Keep in mind that a plain pattern does not escape quotes or line breaks inside the message, so for production a dedicated JSON encoder is more robust.
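
One widely used option is logstash-logback-encoder, which emits well-formed JSON without a hand-written pattern. A minimal sketch (the version shown is illustrative; check Maven Central for the current release):

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>

and, in logback-spring.xml:

<encoder class="net.logstash.logback.encoder.LogstashEncoder"/>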

Filtering and Logging Sensitive Events

It is crucial to exclude or mask logs containing sensitive data such as user information or credentials. A first step is controlling which loggers write to which appenders and at what level, so that verbose or sensitive output never reaches shared log files; a filter can then detect and drop anything that slips through, as shown after the configuration below:

<logger name="com.example" level="INFO">
    <appender-ref ref="FILE"/>
</logger>

<logger name="org.springframework.web" level="DEBUG">
    <appender-ref ref="CONSOLE"/>
</logger>


Ensure that sensitive logs are never recorded in production to avoid security risks.
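
A minimal sketch of such a detection filter; the class name and the keyword list are illustrative choices, not part of the original configuration:

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

public class SensitiveDataFilter extends Filter<ILoggingEvent> {

    @Override
    public FilterReply decide(ILoggingEvent event) {
        String message = event.getFormattedMessage().toLowerCase();
        // Drop the whole event if it appears to carry credentials
        if (message.contains("password") || message.contains("token")) {
            return FilterReply.DENY;
        }
        return FilterReply.NEUTRAL;
    }
}

Register it inside the relevant appender with <filter class="com.example.logging.SensitiveDataFilter"/>.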

Log Monitoring with Prometheus and Grafana

Integrating log monitoring with tools like Prometheus and Grafana provides real-time visibility into your application's health. You can configure Grafana to visualize logs from an aggregator server like Elasticsearch and use Prometheus to collect logging metrics, such as errors and warnings.

Sending Logs to Prometheus and Grafana

While Prometheus does not handle logs directly, you can collect application metrics such as error counters, latency, and system status, and visualize them in Grafana. To monitor logs, you can integrate Grafana Loki, which specializes in log collection and real-time visualization.

Configuring Prometheus

Prometheus collects and stores metrics rather than logs. To expose metrics from your Spring Boot application, ensure that Prometheus can scrape them.

Add Prometheus Dependencies to Spring Boot

Add the Micrometer Prometheus registry dependency to pom.xml to integrate metrics collection into Spring Boot:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

If you're using Gradle, add:

dependencies {
    implementation 'io.micrometer:micrometer-registry-prometheus'
}
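
These metrics are served by Spring Boot Actuator, so the actuator starter must also be on the classpath (Maven shown below; the Gradle equivalent is implementation 'org.springframework.boot:spring-boot-starter-actuator'):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>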

Expose Prometheus Endpoint in Spring Boot

With Actuator and the Micrometer registry on the classpath, Spring Boot exposes a /actuator/prometheus endpoint for collecting metrics. Ensure these configurations are present in application.properties or application.yml:

management.endpoints.web.exposure.include=health,info,prometheus
management.endpoint.prometheus.enabled=true


The Prometheus endpoint will be available at:
πŸ‘‰ http://localhost:8080/actuator/prometheus

Configuring the Prometheus Server

In the Prometheus configuration file (prometheus.yml), add your Spring Boot application as a target:

scrape_configs:
  - job_name: 'spring-boot-app'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['localhost:8080']


After configuring Prometheus, start the Prometheus server to begin collecting metrics from the application.

Configuring Grafana with Prometheus

Grafana is a powerful visualization tool that integrates easily with Prometheus to display metrics.

Add Prometheus as a Data Source in Grafana

  1. Go to Configuration → Data Sources.
  2. Add a new Prometheus data source.
  3. Enter your Prometheus server URL (e.g., http://localhost:9090).
  4. Save the configuration.

Create a Grafana Dashboard

  1. Navigate to Create → Dashboard and add a Panel.
  2. Select Prometheus as the data source.
  3. Use a PromQL query to visualize collected metrics (e.g., http_server_requests_seconds_count or jvm_memory_used_bytes).

Sending Logs to Grafana with Loki

To send logs to Grafana and view them in real-time, configure Grafana Loki, which is designed to collect and index logs in a Prometheus-like fashion.

Add the Logback Dependency for Loki

Add this dependency in pom.xml to integrate logs with Loki:

<dependency>
    <groupId>com.github.loki</groupId>
    <artifactId>logback-loki-appender</artifactId>
    <version>1.3.0</version>
</dependency>

Configure Logback to Send Logs to Loki

Modify logback-spring.xml to include an appender for Loki:

<appender name="LOKI" class="com.github.loki.LogbackLokiAppender">
    <url>http://localhost:3100/loki/api/v1/push</url>
    <encoder>
        <pattern>{"timestamp":"%date{ISO8601}","level":"%level","message":"%message"}</pattern>
    </encoder>
</appender>

<root level="INFO">
    <appender-ref ref="LOKI"/>
</root>


Configuring Grafana Loki

  1. Install Grafana Loki following the official documentation.
  2. Configure Loki to collect logs (e.g., by setting up an input file pointing to the logs generated by your application).
  3. In Grafana, navigate to Configuration → Data Sources, add Loki, and enter its server URL (e.g., http://localhost:3100).
  4. Create a new Dashboard and add a Panel to visualize logs.

Best Practices

Do Not Log Sensitive Information

Avoid logging sensitive data like:

  • Passwords
  • Access tokens
  • Personal user data
  • Bank information

Bad Example:

log.info("User authenticated with password: {}", password);

Solution: Mask or exclude sensitive information before it reaches the logger, as in the sketch below.
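
A minimal sketch of code-level masking; the LogMasker helper and its truncation rule are illustrative, not an existing utility:

public final class LogMasker {

    // Keep a short prefix so the value is traceable in logs without being reusable
    public static String mask(String value) {
        if (value == null || value.length() <= 4) {
            return "****";
        }
        return value.substring(0, 4) + "****";
    }
}

// Usage
log.info("User authenticated, token: {}", LogMasker.mask(accessToken));
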
Use Lazy Logging for Performance Optimization

Avoid:

log.debug("Request received for user: " + user.getName());

Use:

log.debug("Request received for user: {}", user.getName());

With the parameterized form, the final message string is built only if the DEBUG level is enabled, instead of paying for string concatenation on every call.
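
Note that the arguments themselves (here user.getName()) are still evaluated before the call; if computing an argument is genuinely expensive, guard it explicitly. A minimal sketch, with expensiveSummary() standing in for costly work:

if (log.isDebugEnabled()) {
    // The costly computation only runs when DEBUG output will actually be written
    log.debug("Request details: {}", request.expensiveSummary());
}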

Conclusion

Following these best practices helps maintain clear, efficient, and useful logs for debugging and monitoring a Spring Boot 3 application in production. πŸš€

About Me

I am passionate about IT technologies. If you’re interested in learning more or staying updated with my latest articles, feel free to connect with me.
