
📋 Table of Contents
- 1. Introduction: Why Automated Deployment Matters
- 2. Dockerizing a Spring Boot Application
- 3. Integrating Multiple Services with Docker Compose
- 4. Separating Development and Production Compose Configurations
- 5. CI/CD Overview: The Gateway to Automation
- 6. Automating Build & Deployment with GitHub Actions
- 7. Deploying to AWS EC2 or Cloud Servers
- 8. Practical Example: Todo App Auto Deployment
- 9. Maintenance and Monitoring Strategies
- 10. Conclusion: Real Gains from Deployment Automation
1. Introduction: Why Automated Deployment Matters
In the era of agile software development, deployment is no longer a final, manual process — it is a key pillar of the development lifecycle. As applications grow more complex and teams strive for faster delivery cycles, the need for automated deployment pipelines has never been greater.
Combining Docker, Docker Compose, and a robust CI/CD pipeline enables teams to consistently ship reliable code to production, reduce human error, and ensure reproducibility across environments. For Spring Boot developers especially, built-in support for executable JARs and embedded servers makes the framework a natural fit for containerized deployment strategies.
This comprehensive guide walks you through every stage — from containerizing your Spring Boot application, to composing multi-service environments with Docker Compose, and ultimately establishing a production-ready CI/CD pipeline using GitHub Actions.
By the end of this series, you’ll be equipped to:
- Build a fully Dockerized Spring Boot application
- Compose complex service stacks (e.g., app + database + Redis)
- Automate your deployment pipeline using modern DevOps tools
- Deploy to a real-world cloud environment like AWS EC2
This isn’t just about writing code — it’s about shipping it efficiently, securely, and with confidence.
2. Dockerizing a Spring Boot Application
Before building a CI/CD pipeline or deploying to the cloud, we need to first package the application in a format that is portable and reproducible. That’s where Dockerizing comes in. Dockerizing a Spring Boot application means packaging it into a Docker image so that it can be run anywhere, whether it’s a developer’s machine, a staging server, or a production cloud environment.
Step 1: Build the Application (JAR)
First, create your Spring Boot project using Spring Initializr or your preferred method. Once ready, generate a JAR file using Gradle:
./gradlew clean build
After the build completes, the JAR file will typically be located in the `build/libs` directory.
Step 2: Create the Dockerfile
The `Dockerfile` defines how to build a Docker image of your application. Here’s a simple yet effective Dockerfile for a Spring Boot application:
FROM openjdk:17-jdk-slim
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
This Dockerfile uses a lightweight OpenJDK image and copies the built JAR into the container, defining the entrypoint to launch the app.
Step 3: Build the Docker Image
In the root directory of your project (where the Dockerfile resides), run the following command to build the Docker image:
docker build -t my-springboot-app .
Step 4: Run the Docker Container
Once the image is built, you can launch the container and expose it on port 8080:
docker run -d -p 8080:8080 --name spring-app my-springboot-app
Your application is now accessible at `http://localhost:8080`.
Pro Tip: Multi-stage Docker Build for Optimization
For production environments, it’s important to optimize the image size and remove build-time dependencies. A multi-stage Dockerfile helps you build in one stage and deploy in another:
FROM gradle:7.6-jdk17 AS builder
COPY --chown=gradle:gradle . /home/gradle/project
WORKDIR /home/gradle/project
RUN gradle build --no-daemon
FROM openjdk:17-jdk-slim
COPY --from=builder /home/gradle/project/build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
This approach separates the build environment from the runtime, resulting in a much smaller and more secure final image.
Now that we’ve successfully Dockerized the application, the next step is to orchestrate multiple services such as databases or caches using Docker Compose.
3. Integrating Multiple Services with Docker Compose
As your application grows, running it as a standalone container is rarely sufficient. Real-world deployments often require additional services such as databases, caches, messaging queues, or even frontend servers. Managing each service container manually is error-prone and inefficient. This is where Docker Compose shines — allowing you to define and run multi-container applications using a single declarative YAML file.
What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. Using a file named `docker-compose.yml`, you can configure your application’s services, networks, volumes, and environment variables, then spin everything up with a single command:
docker-compose up -d
This command reads the configuration, pulls required images (if needed), builds local images, creates networks and volumes, and runs all the services in the correct order.
Example: Spring Boot with MySQL
Let’s extend our Spring Boot application by integrating a MySQL database using Docker Compose. Here’s a sample `docker-compose.yml` file:
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/appdb
      SPRING_DATASOURCE_USERNAME: user
      SPRING_DATASOURCE_PASSWORD: pass
  db:
    image: mysql:8
    restart: always
    environment:
      MYSQL_DATABASE: appdb
      MYSQL_USER: user
      MYSQL_PASSWORD: pass
      MYSQL_ROOT_PASSWORD: rootpass
    ports:
      - "3306:3306"
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:
Key highlights:
- `app` is your Spring Boot container, which connects to the database container via its service name (`db`)
- `depends_on` ensures the database container starts before the app container (note: it controls start order only, not database readiness)
- `volumes` persists MySQL data between container restarts
Service Name = Hostname
Within the same Docker Compose network, each service name automatically becomes a DNS-resolvable hostname. In the example above, the Spring Boot application can connect to MySQL using `db:3306` without needing an IP address or external host configuration.
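As an illustration, the same connection can also be configured from the application side. A minimal sketch of `application.yml` using standard Spring Boot datasource properties (the `db` host, `appdb` schema, and credentials mirror the Compose example above):

```yaml
# src/main/resources/application.yml
spring:
  datasource:
    url: jdbc:mysql://db:3306/appdb   # "db" resolves via Compose's internal DNS
    username: user
    password: pass
```

In practice, the Compose-injected `SPRING_DATASOURCE_*` environment variables shown earlier override these values, which lets the same image run against different databases per environment.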
Adding Redis or Other Services
Need Redis for caching? Add it as another service in the same YAML file:
  redis:
    image: redis:6
    ports:
      - "6379:6379"
Your application can then access Redis at `redis:6379`. Likewise, you can add other services such as MongoDB, RabbitMQ, or Elasticsearch in the same way.
Running the Stack
Once the configuration is ready, bring up the entire application stack with:
docker-compose up -d
To monitor the logs of all services:
docker-compose logs -f
Best Practices: Volumes and Environment Files
For better separation of configuration, especially sensitive data, you can store environment variables in a `.env` file and reference them in your Compose file. This keeps your credentials out of version control and allows for different setups across environments (development, staging, production).
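A sketch of one related mechanism: the `env_file` directive loads a file’s variables directly into a container’s environment (this is distinct from Compose’s own variable substitution, which the next section covers):

```yaml
services:
  app:
    build: .
    env_file:
      - .env   # every KEY=value line becomes an environment variable inside the container
```

Note the difference: substitution happens when Compose parses the YAML, while `env_file` hands the variables to the running container.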
Now that your application and supporting services can run together seamlessly, the next step is to manage different configurations for development and production environments.
4. Separating Development and Production Compose Configurations
While Docker Compose simplifies the management of multi-service environments, the needs of development and production environments are rarely the same. Developers might require live reloading, debugging ports, and verbose logs, while production systems demand minimal overhead, hardened security, and performance optimizations.
To accommodate this, Docker Compose offers several built-in mechanisms to help you cleanly separate configurations:
- Override files: e.g., `docker-compose.override.yml`
- Environment files: using `.env` for variable injection
- Profiles: conditionally enable/disable services
Option 1: Override Files (`docker-compose.override.yml`)
By default, Docker Compose automatically reads two files:
- `docker-compose.yml`
- `docker-compose.override.yml` (if present)
This allows you to define your base configuration in `docker-compose.yml`, and then overlay development-specific settings — such as volume mounts or extra environment variables — in the override file. Example:
services:
  app:
    volumes:
      - ./src:/app/src
    environment:
      SPRING_PROFILES_ACTIVE: dev
    ports:
      - "5005:5005"
This setup mounts your source code into the container and exposes port 5005 for remote debugging — ideal for development, but unnecessary (and risky) in production.
Option 2: .env Files for Variable Injection
Docker Compose supports a `.env` file that allows you to store commonly reused values like credentials or port numbers without hardcoding them into your YAML files. For example:
APP_PORT=8080
SPRING_DATASOURCE_USERNAME=admin
SPRING_DATASOURCE_PASSWORD=secret
In your `docker-compose.yml`, you can reference these variables like so:
services:
  app:
    ports:
      - "${APP_PORT}:8080"
    environment:
      SPRING_DATASOURCE_USERNAME: ${SPRING_DATASOURCE_USERNAME}
      SPRING_DATASOURCE_PASSWORD: ${SPRING_DATASOURCE_PASSWORD}
This approach promotes configuration reuse and keeps sensitive data out of your version-controlled files.
Option 3: Using Profiles to Conditionally Enable Services
Docker Compose (version 1.28 and later) supports profiles, which enable you to include or exclude services based on context. This is useful for things like optional monitoring tools that should run only in staging or production.
services:
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    profiles:
      - monitoring
To launch Compose with only the `monitoring` profile enabled:
docker-compose --profile monitoring up
Summary: Why Split Configurations?
Separating development and production configurations provides the following benefits:
- Clarity: Each environment has its own isolated and purpose-specific setup.
- Security: Debugging tools and secrets are not leaked into production.
- Maintainability: Developers and operators can manage their respective domains independently.
Now that your environments are cleanly separated, you’re ready to integrate all this into a continuous pipeline. Next, we’ll explore how to design a robust CI/CD workflow that builds, tests, and deploys your application automatically.
5. CI/CD Overview: The Gateway to Automation
So far, we’ve Dockerized our Spring Boot application and integrated supporting services using Docker Compose. While this lays a strong foundation, deploying everything manually every time there’s a code change is both inefficient and error-prone. That’s where CI/CD pipelines come in — bringing automation, reliability, and speed to your software delivery process.
What is CI/CD?
CI/CD stands for Continuous Integration and Continuous Delivery/Deployment. It represents a modern approach to software development that focuses on automating the building, testing, and deployment of applications.
- Continuous Integration (CI): Automatically building and testing your application whenever code is pushed to a shared repository.
- Continuous Delivery (CD): Automatically preparing your application for release to production — including image builds, artifact packaging, etc.
- Continuous Deployment: Automatically deploying the application to the live environment after every successful test and build (zero human intervention).
With a well-designed CI/CD pipeline, you can move from code commit to production deployment in a matter of minutes, with minimal manual work and maximum confidence.
Why Does It Matter in a Docker + Spring Boot Setup?
Spring Boot already encourages rapid development cycles, and Docker ensures consistency across machines. Combining both with CI/CD allows you to:
- Ensure that every code change is automatically tested and built
- Package your app into a Docker image and push to a container registry
- Trigger deployment to a server (e.g., AWS EC2) instantly upon successful build
This pipeline transforms software delivery into a reliable, repeatable, and hands-off process — essential for modern DevOps culture.
Popular CI/CD Tools Comparison
| Tool | Strengths | Ideal For |
| --- | --- | --- |
| GitHub Actions | Built into GitHub; easy to configure YAML workflows; integrates well with container-based apps | Open-source projects, teams already using GitHub |
| GitLab CI/CD | Tightly integrated with GitLab; flexible runners and caching | Organizations using GitLab for version control |
| Jenkins | Highly customizable with thousands of plugins; on-prem hosting | Complex enterprise workflows, self-hosted environments |
Recommended Stack: GitHub Actions + Docker
For most Spring Boot developers already using GitHub, GitHub Actions offers a seamless and efficient way to automate builds and deployments. It requires no additional infrastructure and supports deep Docker integration out of the box.
A typical workflow may include:
- Trigger the pipeline on a push to the `main` or `develop` branch
- Build the Spring Boot application with Gradle
- Create and tag a Docker image
- Push the image to Docker Hub or GitHub Container Registry
- SSH into an EC2 instance and deploy with Docker Compose
In the next section, we will implement this exact pipeline using GitHub Actions and take the theory into a fully functional, production-ready workflow.
6. Automating Build & Deployment with GitHub Actions
GitHub Actions is a powerful CI/CD tool that allows you to automate all stages of your software lifecycle directly within your GitHub repository. With its intuitive YAML-based configuration, it’s especially effective for small-to-medium projects that already use GitHub for version control and collaboration.
In this section, we’ll build a complete GitHub Actions workflow that does the following:
- Triggers on every push to the `main` branch
- Builds the Spring Boot application using Gradle
- Builds and tags a Docker image
- Pushes the image to Docker Hub (or another registry)
- Deploys the image to a remote server using SSH and Docker Compose
Step 1: Set Up the Workflow File
Create the following directory and YAML file in your project root:
mkdir -p .github/workflows
touch .github/workflows/deploy.yml
Step 2: Define the Workflow
Here’s a sample GitHub Actions workflow definition:
name: Build and Deploy Spring Boot App

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Set up JDK 17
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'

      - name: Build with Gradle
        run: ./gradlew build

      - name: Build Docker image
        run: docker build -t ${{ secrets.DOCKER_IMAGE_NAME }} .

      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin

      - name: Push Docker image
        run: docker push ${{ secrets.DOCKER_IMAGE_NAME }}

      - name: Deploy to EC2 via SSH
        uses: appleboy/ssh-action@v0.1.10
        with:
          host: ${{ secrets.REMOTE_HOST }}
          username: ${{ secrets.REMOTE_USER }}
          key: ${{ secrets.REMOTE_KEY }}
          script: |
            cd /home/ubuntu/myapp
            docker pull ${{ secrets.DOCKER_IMAGE_NAME }}
            docker-compose down
            docker-compose up -d
Step 3: Store Secrets Securely
To keep your credentials safe, store them in your repository’s Settings → Secrets. The following secrets are used in the workflow above:
- `DOCKER_USERNAME` – Your Docker Hub username
- `DOCKER_PASSWORD` – Your Docker Hub password or access token
- `DOCKER_IMAGE_NAME` – Full image name, e.g., `username/springboot-app:latest`
- `REMOTE_HOST` – IP or domain of your EC2 instance
- `REMOTE_USER` – SSH username (typically `ubuntu`)
- `REMOTE_KEY` – Your private SSH key (the full contents of the PEM file)
Step 4: Push to Trigger Deployment
Now that everything is in place, push a change to your `main` branch to trigger the workflow.
You can monitor the status in the “Actions” tab of your GitHub repository.
Bonus: Environment-Specific Branch Logic
You can expand the pipeline to handle multiple environments using branch-based conditions:
on:
  push:
    branches:
      - main      # deploy to production
      - staging   # deploy to staging
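Inside the job, individual steps can then branch on the ref that triggered the run. A sketch using the built-in `github.ref` context (the step names and `echo` placeholders are illustrative only — substitute your real deploy scripts):

```yaml
steps:
  - name: Deploy to production
    if: github.ref == 'refs/heads/main'
    run: echo "deploying to production"   # placeholder for the production deploy script

  - name: Deploy to staging
    if: github.ref == 'refs/heads/staging'
    run: echo "deploying to staging"      # placeholder for the staging deploy script
```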
Summary
With GitHub Actions, you now have an automated system that builds, packages, and deploys your Spring Boot application to production — all with a single git push. This eliminates manual steps, reduces deployment errors, and significantly shortens the feedback loop.
In the next section, we’ll look at how to prepare a cloud server (e.g., AWS EC2) to host your Docker-based app, and how to connect it securely with your CI/CD pipeline.
7. Deploying to AWS EC2 or Cloud Servers
Once your Docker image is built and pushed to a registry via GitHub Actions, the next step is to deploy it to a live server — such as an AWS EC2 instance. This turns your automation pipeline into a full delivery solution, capable of bringing every code change to life on the web.
This section walks through the process of preparing an EC2 server and automating the deployment using SSH and Docker Compose.
Step 1: Launch and Configure an EC2 Instance
Using AWS Management Console or CLI, launch a new EC2 instance (Ubuntu recommended). During setup:
- Allow inbound traffic on ports `22` (SSH) and `80`/`443` or `8080` as needed
- Download the PEM key for SSH access
SSH into your instance from your local machine to ensure access works:
ssh -i "my-key.pem" ubuntu@your-ec2-ip
Step 2: Install Docker and Docker Compose
On your EC2 instance, install Docker and Docker Compose:
sudo apt update
sudo apt install docker.io docker-compose -y
sudo usermod -aG docker $USER
Log out and back in to apply Docker group permissions.
Step 3: Prepare the Deployment Directory
Create a directory on your server to host the Docker Compose files:
mkdir -p ~/myapp
cd ~/myapp
Place your `docker-compose.yml` and optionally a `.env` file with environment variables in this directory. Since the source code is not present on the server, the server-side Compose file should reference the pushed image with `image:` rather than `build:`.
Step 4: Register the Private Key with GitHub Secrets
To allow GitHub Actions to SSH into your EC2 server, store the contents of your PEM key as a GitHub Secret. Secrets support multi-line values, so you can paste the key file as-is; this raw format is what `appleboy/ssh-action` expects for its `key` input:
cat my-key.pem
Copy the full output (including the BEGIN and END lines) into a secret called `REMOTE_KEY`, and also register:
- `REMOTE_HOST` – your EC2 IP address
- `REMOTE_USER` – typically `ubuntu`
Step 5: Execute Remote Deployment from GitHub Actions
In your workflow file (`deploy.yml`), add a deployment step using `appleboy/ssh-action`:
- name: Deploy to EC2
  uses: appleboy/ssh-action@v0.1.10
  with:
    host: ${{ secrets.REMOTE_HOST }}
    username: ${{ secrets.REMOTE_USER }}
    key: ${{ secrets.REMOTE_KEY }}
    script: |
      cd ~/myapp
      docker pull ${{ secrets.DOCKER_IMAGE_NAME }}
      docker-compose down
      docker-compose up -d
This step performs the following:
- SSH into the EC2 instance
- Navigate to the deployment directory
- Pull the latest Docker image
- Restart the services using Docker Compose
Step 6: Verify the Deployment
After deployment, verify that your application is up and running:
docker ps
curl http://localhost:8080
You can also visit your EC2 instance’s public IP in a browser to access your app.
Summary
You now have a fully functioning deployment workflow that builds your application, pushes a Docker image, and automatically delivers it to a cloud server. From a single `git push`, your app goes live — repeatably, reliably, and without manual steps.
In the next section, we’ll bring it all together with a real-world case study: deploying a functional Todo application from development to production with full CI/CD integration.
8. Practical Example: Todo App Auto Deployment
Now that we’ve laid the technical foundation — from Dockerizing a Spring Boot app to building an automated CI/CD pipeline — let’s put everything into practice by deploying a real, functional application. In this example, we’ll use a simple RESTful Todo application to demonstrate the end-to-end process of building, containerizing, and deploying to an EC2 instance via GitHub Actions.
Overview of the Stack
- Backend: Spring Boot + Spring Data JPA
- Database: MySQL (Dockerized)
- Build Tool: Gradle
- CI/CD: GitHub Actions
- Deployment: AWS EC2 + Docker Compose
Step 1: Create the Spring Boot Todo Application
This REST API provides endpoints for managing Todo items. Here’s a basic entity and controller setup:
// Imports (jakarta.persistence, org.springframework.*) omitted for brevity
@Entity
public class Todo {
    @Id @GeneratedValue
    private Long id;
    private String title;
    private boolean completed;
    // getters and setters omitted for brevity
}

// The repository referenced by the controller below
public interface TodoRepository extends JpaRepository<Todo, Long> {
}

@RestController
@RequestMapping("/api/todos")
public class TodoController {

    private final TodoRepository repository;

    public TodoController(TodoRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public List<Todo> getAll() {
        return repository.findAll();
    }

    @PostMapping
    public Todo create(@RequestBody Todo todo) {
        return repository.save(todo);
    }
}
Step 2: Dockerize the Application
Use a multi-stage Dockerfile for optimized image size:
FROM gradle:7.6-jdk17 AS builder
COPY --chown=gradle:gradle . /home/gradle/project
WORKDIR /home/gradle/project
RUN gradle build --no-daemon
FROM openjdk:17-jdk-slim
COPY --from=builder /home/gradle/project/build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
Step 3: Define the Docker Compose File
We’ll define both the Spring Boot app and the MySQL database in `docker-compose.yml`:
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/tododb
      SPRING_DATASOURCE_USERNAME: todo_user
      SPRING_DATASOURCE_PASSWORD: secret
    depends_on:
      - db
  db:
    image: mysql:8
    restart: always
    environment:
      MYSQL_DATABASE: tododb
      MYSQL_USER: todo_user
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: rootpass
    ports:
      - "3306:3306"
Step 4: Set Up GitHub Actions
Use the previously defined `deploy.yml` GitHub Actions workflow to build the image, push to Docker Hub, and deploy to the EC2 instance using SSH. You can reuse the exact pipeline configuration from Section 6.
Step 5: Test the Application After Deployment
Once deployed, the API should be accessible via your EC2 public IP:
- `GET /api/todos` — Fetch all todos
- `POST /api/todos` — Create a new todo
You can test it using curl or Postman:
curl -X POST http://your-ec2-ip:8080/api/todos \
-H "Content-Type: application/json" \
-d '{"title":"CI/CD Success","completed":false}'
Final Result
Your Spring Boot Todo application is now:
- Fully containerized with Docker
- Running alongside MySQL via Docker Compose
- Automatically built and deployed using GitHub Actions
- Accessible via a real-world cloud server (EC2)
This demonstrates a complete, production-grade DevOps flow using modern tools and minimal infrastructure.
Next, we’ll focus on keeping this deployment healthy with monitoring, logging, and rollback strategies.
9. Maintenance and Monitoring Strategies
Automating deployment is only the beginning. Once your application is live, the focus shifts to keeping it healthy, responsive, and secure. That means you need monitoring, logging, and a strategy for rolling back changes if something goes wrong.
In this section, we’ll cover the essential strategies for maintaining and observing a Dockerized Spring Boot application running in a production environment.
1. Enable Docker Health Checks
Docker lets you define a HEALTHCHECK instruction inside your Dockerfile to verify the container’s health. Combined with Spring Boot’s `/actuator/health` endpoint (provided by the spring-boot-starter-actuator dependency), this provides automatic feedback on service availability.
# Note: curl must be installed in the image for this check to work
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD curl -f http://localhost:8080/actuator/health || exit 1
With this in place, Docker marks the container as unhealthy whenever the check fails. Note that a `restart: always` or `unless-stopped` policy in Docker Compose only restarts containers whose process exits; acting on an unhealthy status requires an orchestrator (such as Docker Swarm or Kubernetes) or a dedicated watchdog container.
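For Compose-managed deployments, the same check can be declared in the Compose file instead of the Dockerfile. A sketch (the service follows the earlier examples; `curl` must exist in the image, and `start_period` is tuned as an assumption):

```yaml
services:
  app:
    build: .
    restart: unless-stopped            # restarts the container if the process exits
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s                # give the JVM time to boot before counting failures
```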
2. View Logs in Real Time
To inspect logs from a specific container or service:
docker logs [container_name]
docker-compose logs -f
For long-term storage and structured analysis, consider a centralized logging solution such as:
- ELK Stack (Elasticsearch + Logstash + Kibana)
- Grafana + Loki
- Fluentd or Filebeat
3. Collect Metrics for Monitoring
To go beyond logs and monitor system and application metrics, use tools like:
- Prometheus: Pulls metrics from apps (e.g., via Spring Actuator)
- Grafana: Visualizes metrics in dashboards
- cAdvisor: Monitors resource usage (CPU, memory) of Docker containers
These tools help detect slowdowns, memory leaks, or spikes in traffic before they escalate into outages.
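On the application side, wiring Spring Boot up for Prometheus is mostly configuration. A sketch, assuming the `micrometer-registry-prometheus` dependency has been added to the Gradle build:

```yaml
# application.yml — exposes /actuator/prometheus for Prometheus to scrape
management:
  endpoints:
    web:
      exposure:
        include: health,info,prometheus
```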
4. Plan for Rollbacks
Despite your best efforts, bad deployments can happen. When they do, you need to be able to revert quickly.
Some rollback strategies include:
- Tagging Docker images with version numbers rather than using `latest`
- Maintaining a history of Compose files with Git versioning
- Manual fallback: SSH into the server and redeploy the previous image
Example rollback:
docker pull myapp:1.0.3
docker tag myapp:1.0.3 myapp:latest
docker-compose down
docker-compose up -d
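Version tags like `1.0.3` above are only available at rollback time if the pipeline produces them in the first place. One sketch of a tagging scheme (the `myapp` name and version format are assumptions; the `docker` commands are printed rather than executed so the snippet runs without a Docker daemon):

```shell
# Derive an immutable tag from the version plus the build date
VERSION="1.0.4"
TAG="${VERSION}-$(date +%Y%m%d)"

# The commands you would run in CI with this tag:
echo "docker build -t myapp:${TAG} ."
echo "docker tag myapp:${TAG} myapp:latest"
echo "docker push myapp:${TAG}"
```

Because every image keeps a unique tag, rolling back is just a matter of re-deploying an older one, exactly as shown above.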
5. Add Notifications for CI/CD Status
Keep your team informed with real-time alerts on build and deployment outcomes using GitHub Actions’ integrations:
- Slack – receive workflow success/failure messages
- Email – get notified on deploy errors
- Webhook – trigger custom actions on pipeline events
This ensures rapid awareness and faster incident response.
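One lightweight sketch of the Slack option (assuming you create a `SLACK_WEBHOOK_URL` secret pointing at a Slack incoming webhook) is a final workflow step that posts the job status with plain `curl`:

```yaml
- name: Notify Slack
  if: always()   # run this step whether the job succeeded or failed
  run: |
    curl -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"${{ github.repository }} deploy finished: ${{ job.status }}\"}" \
      "${{ secrets.SLACK_WEBHOOK_URL }}"
```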
Summary
Automated deployment is powerful, but it’s only half the story. A production-ready system also includes:
- Proactive health checks and restarts
- Actionable logs and monitoring dashboards
- Clear rollback strategies
- Alerting and observability integrations
With these systems in place, your application becomes more resilient, more reliable, and more professional.
Now, let’s wrap everything up and reflect on the benefits of bringing CI/CD and container orchestration together in the final section.
10. Conclusion: Real Gains from Deployment Automation
Throughout this guide, we’ve taken a comprehensive journey — from Dockerizing a Spring Boot application to composing services, establishing CI/CD pipelines, deploying to cloud servers, and setting up monitoring. Each piece contributes to a bigger picture: a modern, automated, and resilient software delivery system.
Key Takeaways
- Docker + Spring Boot enables consistent packaging and portability
- Docker Compose simplifies multi-service orchestration
- GitHub Actions automates the entire build and deployment lifecycle
- AWS EC2 serves as a scalable and accessible cloud host
- Monitoring & rollback strategies ensure system stability in production
What used to take hours or days — building, testing, deploying, configuring environments — can now be achieved in minutes with a single push. That’s not just efficiency; that’s freedom.
The DevOps Mindset
This isn’t just about tools or scripts — it’s a cultural shift. Adopting CI/CD and containerization enables your team to embrace a DevOps mindset, where development and operations collaborate seamlessly and continuously improve delivery practices.
In doing so, your team gains:
- Faster feedback loops
- More reliable releases
- Lower human error risk
- Greater confidence to ship frequently
The Final Message
Software that isn’t deployed is software that doesn’t exist. Building something great is only part of the challenge — delivering it quickly, safely, and continuously is what sets professional teams apart.
By embracing the practices outlined in this guide, you’re not just improving deployment — you’re leveling up your entire development lifecycle.
Write code. Commit. Deploy. Repeat. Welcome to the future of modern software delivery.